url
stringlengths 58
61
| repository_url
stringclasses 1
value | labels_url
stringlengths 72
75
| comments_url
stringlengths 67
70
| events_url
stringlengths 65
68
| html_url
stringlengths 46
51
| id
int64 600M
2.05B
| node_id
stringlengths 18
32
| number
int64 2
6.51k
| title
stringlengths 1
290
| user
dict | labels
listlengths 0
4
| state
stringclasses 2
values | locked
bool 1
class | assignee
dict | assignees
listlengths 0
4
| milestone
dict | comments
listlengths 0
30
| created_at
timestamp[ns, tz=UTC] | updated_at
timestamp[ns, tz=UTC] | closed_at
timestamp[ns, tz=UTC] | author_association
stringclasses 3
values | active_lock_reason
float64 | draft
float64 0
1
⌀ | pull_request
dict | body
stringlengths 0
228k
⌀ | reactions
dict | timeline_url
stringlengths 67
70
| performed_via_github_app
float64 | state_reason
stringclasses 3
values | is_pull_request
bool 2
classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4164
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4164/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4164/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4164/events
|
https://github.com/huggingface/datasets/pull/4164
| 1,203,661,346
|
PR_kwDODunzps42MfxX
| 4,164
|
Fix duplicate key in multi_news
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-13T18:48:24Z
| 2022-04-13T21:04:16Z
| 2022-04-13T20:58:02Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4164.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4164",
"merged_at": "2022-04-13T20:58:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4164.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4164"
}
|
To merge after this job succeeded: https://github.com/huggingface/datasets/runs/6012207928
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4164/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4164/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4213
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4213/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4213/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4213/events
|
https://github.com/huggingface/datasets/pull/4213
| 1,214,510,010
|
PR_kwDODunzps42uft_
| 4,213
|
ETT time series dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kashif",
"id": 8100,
"login": "kashif",
"node_id": "MDQ6VXNlcjgxMDA=",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"repos_url": "https://api.github.com/users/kashif/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kashif"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"thank you!\r\n"
] | 2022-04-25T13:26:18Z
| 2022-05-05T12:19:21Z
| 2022-05-05T12:10:35Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4213.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4213",
"merged_at": "2022-05-05T12:10:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4213.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4213"
}
|
Ready for review.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4213/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4213/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2391
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2391/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2391/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2391/events
|
https://github.com/huggingface/datasets/issues/2391
| 898,128,099
|
MDU6SXNzdWU4OTgxMjgwOTk=
| 2,391
|
Missing original answers in kilt-TriviaQA
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PaulLerner",
"id": 25532159,
"login": "PaulLerner",
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PaulLerner"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"That could be useful indeed! Feel free to open a PR on the dataset card if you already have some code that runs, otherwise we'll take care of it soon :) ",
"I can open a PR but there is 2 details to fix:\r\n- the name for the corresponding key (e.g. `original_answer`)\r\n- how to implement it: I’m not sure what happens when you map `lambda x: {'input': ...}` as it keeps the other keys (e.g. `output`) intact but here since we want to set a nested value (e.g. `x['output']['original_answer']`) I implemented it with a regular function (not lambda), see below\r\n\r\n```py\r\ndef add_original_answer(x, trivia_qa, triviaqa_map):\r\n i = triviaqa_map[x['id']]\r\n x['output']['original_answer'] = trivia_qa['validation'][i]['answer']['value']\r\n return x\r\n```"
] | 2021-05-21T14:57:07Z
| 2021-06-14T17:29:11Z
| 2021-06-14T17:29:11Z
|
CONTRIBUTOR
| null | null | null |
I previously opened an issue at https://github.com/facebookresearch/KILT/issues/42 but from the answer of @fabiopetroni it seems that the problem comes from HF-datasets
## Describe the bug
The `answer` field in kilt-TriviaQA, e.g. `kilt_tasks['train_triviaqa'][0]['output']['answer']` contains a list of alternative answer which are accepted for the question.
However it'd be nice to know the original answer to the question (the only fields in `output` are `'answer', 'meta', 'provenance'`)
## How to fix
It can be fixed by retrieving the original answer from the original TriviaQA (e.g. `trivia_qa['train'][0]['answer']['value']`), perhaps at the same place as here where one retrieves the questions https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md#loading-the-kilt-knowledge-source-and-task-data
cc @yjernite who previously answered to an issue about KILT and TriviaQA :)
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2391/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2391/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4968
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4968/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4968/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4968/events
|
https://github.com/huggingface/datasets/pull/4968
| 1,369,312,877
|
PR_kwDODunzps4-wKkw
| 4,968
|
Support streaming compguesswhat dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-12T05:42:24Z
| 2022-09-12T08:00:06Z
| 2022-09-12T07:58:06Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4968.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4968",
"merged_at": "2022-09-12T07:58:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4968.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4968"
}
|
Support streaming `compguesswhat` dataset.
Fix #3191.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4968/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4968/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4681
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4681/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4681/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4681/events
|
https://github.com/huggingface/datasets/issues/4681
| 1,304,617,484
|
I_kwDODunzps5NwuIM
| 4,681
|
IndexError when loading ImageFolder
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2843485?v=4",
"events_url": "https://api.github.com/users/johko/events{/privacy}",
"followers_url": "https://api.github.com/users/johko/followers",
"following_url": "https://api.github.com/users/johko/following{/other_user}",
"gists_url": "https://api.github.com/users/johko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/johko",
"id": 2843485,
"login": "johko",
"node_id": "MDQ6VXNlcjI4NDM0ODU=",
"organizations_url": "https://api.github.com/users/johko/orgs",
"received_events_url": "https://api.github.com/users/johko/received_events",
"repos_url": "https://api.github.com/users/johko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/johko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/johko"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
] | null |
[
"Hi, thanks for reporting! If there are no examples in ImageFolder, the `label` column is of type `ClassLabel(names=[])`, which leads to an error in [this line](https://github.com/huggingface/datasets/blob/c15b391942764152f6060b59921b09cacc5f22a6/src/datasets/arrow_writer.py#L387) as `asdict(info)` calls `Features({..., \"label\": {'num_classes': 0, 'names': [], 'id': None, '_type': 'ClassLabel'}})`, which then calls `require_decoding` [here](https://github.com/huggingface/datasets/blob/c15b391942764152f6060b59921b09cacc5f22a6/src/datasets/features/features.py#L1516) on the dict value it does not expect.\r\n\r\nI see two ways to fix this:\r\n* custom `asdict` where `dict_factory` is also applied on the `dict` object itself besides dataclasses (the built-in implementation calls `type(dict_obj)` - this means we also need to fix `Features.to_dict` btw) \r\n* implement `DatasetInfo.to_dict` (though adding `to_dict` to a data class is a bit weird IMO)\r\n\r\n@lhoestq Which one of these approaches do you like more?\r\n",
"Small pref for the first option, it feels weird to know that `Features()` can be called with a dictionary of types defined as dictionaries instead of type instances."
] | 2022-07-14T10:57:55Z
| 2022-07-25T12:37:54Z
| 2022-07-25T12:37:54Z
|
NONE
| null | null | null |
## Describe the bug
Loading an image dataset with `imagefolder` throws `IndexError: list index out of range` when the given folder contains a non-image file (like a csv).
## Steps to reproduce the bug
Put a csv file in a folder with images and load it:
```python
import datasets
datasets.load_dataset("imagefolder", data_dir=path/to/folder)
```
## Expected results
I would expect a better error message, like `Unsupported file` or even the dataset loader just ignoring every file that is not an image in that case.
## Actual results
Here is the whole traceback:
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-5.11.0-051100-generic-x86_64-with-glibc2.27
- Python version: 3.9.9
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4681/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4681/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1612
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1612/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1612/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1612/events
|
https://github.com/huggingface/datasets/pull/1612
| 771,558,160
|
MDExOlB1bGxSZXF1ZXN0NTQzMDQ3NjQ1
| 1,612
|
Adding wiki asp dataset as new PR
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7674948?v=4",
"events_url": "https://api.github.com/users/katnoria/events{/privacy}",
"followers_url": "https://api.github.com/users/katnoria/followers",
"following_url": "https://api.github.com/users/katnoria/following{/other_user}",
"gists_url": "https://api.github.com/users/katnoria/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/katnoria",
"id": 7674948,
"login": "katnoria",
"node_id": "MDQ6VXNlcjc2NzQ5NDg=",
"organizations_url": "https://api.github.com/users/katnoria/orgs",
"received_events_url": "https://api.github.com/users/katnoria/received_events",
"repos_url": "https://api.github.com/users/katnoria/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/katnoria/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/katnoria/subscriptions",
"type": "User",
"url": "https://api.github.com/users/katnoria"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-20T10:25:08Z
| 2020-12-21T14:13:33Z
| 2020-12-21T14:13:33Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1612.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1612",
"merged_at": "2020-12-21T14:13:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1612.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1612"
}
|
Hi @lhoestq, Adding wiki asp as new branch because #1539 has other commits. This version has dummy data for each domain <20/30KB.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1612/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1612/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2874
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2874/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2874/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2874/events
|
https://github.com/huggingface/datasets/pull/2874
| 989,685,328
|
MDExOlB1bGxSZXF1ZXN0NzI4Mzg2Mjg4
| 2,874
|
Support streaming datasets that use pathlib
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I've tried https://github.com/huggingface/datasets/issues/2866 again, and I get the same error.\r\n\r\n```python\r\nimport datasets as ds\r\nds.load_dataset('counter', split=\"train\", streaming=False)\r\n```",
"@severo Issue #2866 is not fully fixed yet: multiple patches need to be implemented for `pathlib`, as that dataset uses quite a lot of `pathlib` functions... 😅 ",
"No worry and no stress, I just wanted to check for that case :) I'm very happy that you're working on issues I'm interested in!"
] | 2021-09-07T07:35:49Z
| 2021-09-07T18:25:22Z
| 2021-09-07T11:41:15Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2874.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2874",
"merged_at": "2021-09-07T11:41:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2874.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2874"
}
|
This PR extends the support in streaming mode for datasets that use `pathlib.Path`.
Related to: #2866.
CC: @severo
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2874/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2874/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1175
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1175/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1175/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1175/events
|
https://github.com/huggingface/datasets/pull/1175
| 757,770,077
|
MDExOlB1bGxSZXF1ZXN0NTMzMDg0OTYy
| 1,175
|
added ReDial dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
}
|
[] |
closed
| false
| null |
[] | null |
[
"merging since the CI is fixed on master"
] | 2020-12-05T20:04:18Z
| 2020-12-07T13:21:43Z
| 2020-12-07T13:21:43Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1175.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1175",
"merged_at": "2020-12-07T13:21:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1175.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1175"
}
|
Updating README
Dataset link: https://redialdata.github.io/website/datasheet
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1175/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1175/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2221
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2221/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2221/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2221/events
|
https://github.com/huggingface/datasets/pull/2221
| 857,833,770
|
MDExOlB1bGxSZXF1ZXN0NjE1MTg4MTE5
| 2,221
|
Add SLR70 - SLR80 and SLR86 to OpenSLR dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cahya-wirawan",
"id": 7669893,
"login": "cahya-wirawan",
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cahya-wirawan"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-04-14T12:09:18Z
| 2021-04-14T13:50:19Z
| 2021-04-14T13:50:19Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2221.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2221",
"merged_at": "2021-04-14T13:50:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2221.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2221"
}
|
I would like to add SLR70, SLR71, SLR72, SLR73, SLR74, SLR75, SLR76, SLR77, SLR78, SLR79, SLR80 and SLR86 to OpenSLR dataset. The languages are:
Nigerian English, Chilean Spanish, Columbian Spanish, Peruvian Spanish, Puerto Rico Spanish, Venezuelan Spanish, Basque, Galician, Gujarati and Kannada.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2221/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2221/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3850
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3850/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3850/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3850/events
|
https://github.com/huggingface/datasets/pull/3850
| 1,162,126,030
|
PR_kwDODunzps40FBx9
| 3,850
|
[feat] Add tqdm arguments
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/28087825?v=4",
"events_url": "https://api.github.com/users/penguinwang96825/events{/privacy}",
"followers_url": "https://api.github.com/users/penguinwang96825/followers",
"following_url": "https://api.github.com/users/penguinwang96825/following{/other_user}",
"gists_url": "https://api.github.com/users/penguinwang96825/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/penguinwang96825",
"id": 28087825,
"login": "penguinwang96825",
"node_id": "MDQ6VXNlcjI4MDg3ODI1",
"organizations_url": "https://api.github.com/users/penguinwang96825/orgs",
"received_events_url": "https://api.github.com/users/penguinwang96825/received_events",
"repos_url": "https://api.github.com/users/penguinwang96825/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/penguinwang96825/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/penguinwang96825/subscriptions",
"type": "User",
"url": "https://api.github.com/users/penguinwang96825"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-03-08T01:53:25Z
| 2022-12-16T05:34:07Z
| 2022-12-16T05:34:07Z
|
NONE
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3850.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3850",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3850.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3850"
}
|
In this PR, tqdm arguments can be passed to the map() function and such, in order to be more flexible.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3850/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3850/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/169
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/169/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/169/comments
|
https://api.github.com/repos/huggingface/datasets/issues/169/events
|
https://github.com/huggingface/datasets/pull/169
| 621,099,682
|
MDExOlB1bGxSZXF1ZXN0NDIwMjE1NDkw
| 169
|
Adding Qanta (Quizbowl) Dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382460?v=4",
"events_url": "https://api.github.com/users/EntilZha/events{/privacy}",
"followers_url": "https://api.github.com/users/EntilZha/followers",
"following_url": "https://api.github.com/users/EntilZha/following{/other_user}",
"gists_url": "https://api.github.com/users/EntilZha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/EntilZha",
"id": 1382460,
"login": "EntilZha",
"node_id": "MDQ6VXNlcjEzODI0NjA=",
"organizations_url": "https://api.github.com/users/EntilZha/orgs",
"received_events_url": "https://api.github.com/users/EntilZha/received_events",
"repos_url": "https://api.github.com/users/EntilZha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/EntilZha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EntilZha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/EntilZha"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
] | null |
[
"Hi @EntilZha - sorry for waiting so long until taking action here. We created a new command and a new recipe of how to add dummy_data. Can you maybe rebase to `master` as explained in 7. of https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-contribute-to-nlp and check that your dummy data is correct following the instructions here: https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset ? \r\n\r\nIf the tests described in 5. of https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset pass we can merge the PR :-) ",
"I updated to the most recent master and followed the steps, but still having the similar error where it can't find the correct file since the path to the directory is given, rather than the individual files within them. This still something wrong about how I'm inputting the data or how the tests are reading it?",
"It's the dummy_data structure. You actually have to call the dummy data file name `dummy_data` (not .json anything). So there should not be a `dummy_data` folder but for each config only a `dummy_data` which contains your json dummy data. Can you maybe try once more - if it doesn't work I do it for you :-). ",
"Would that work if there are multiple files? In my case, I'm including something similar to squad 1.0/2.0 where we have the main dataset + an additional challenge set in different files. Would I have the zip decompress to two files in that case?",
"This dataset was actually a special case. It helped us improve the dummy data instructions :-), see #195 .Close this PR and merge #194."
] | 2020-05-19T16:03:01Z
| 2020-05-26T12:52:31Z
| 2020-05-26T12:52:31Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/169.diff",
"html_url": "https://github.com/huggingface/datasets/pull/169",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/169.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/169"
}
|
This PR adds the qanta question answering datasets from [Quizbowl: The Case for Incremental Question Answering](https://arxiv.org/abs/1904.04792) and [Trick Me If You Can: Human-in-the-loop Generation of Adversarial Question Answering Examples](https://www.aclweb.org/anthology/Q19-1029/) (adversarial fold)
This partially continues a discussion around fixing dummy data from https://github.com/huggingface/nlp/issues/161
I ran the following code to double check that it works and did some sanity checks on the output. The majority of the code itself is from our `allennlp` version of the dataset reader.
```python
import nlp
# Default is full question
data = nlp.load_dataset('./datasets/qanta')
# Four configs
# Primarily useful for training
data = nlp.load_dataset('./datasets/qanta', 'mode=sentences,char_skip=25')
# Primarily used in evaluation
data = nlp.load_dataset('./datasets/qanta', 'mode=first,char_skip=25')
data = nlp.load_dataset('./datasets/qanta', 'mode=full,char_skip=25')
# Primarily useful in evaluation and "live" play
data = nlp.load_dataset('./datasets/qanta', 'mode=runs,char_skip=25')
```
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/169/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/169/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3261
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3261/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3261/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3261/events
|
https://github.com/huggingface/datasets/issues/3261
| 1,052,346,381
|
I_kwDODunzps4-uYgN
| 3,261
|
Scifi_TV_Shows: Having trouble getting viewer to find appropriate files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/37913218?v=4",
"events_url": "https://api.github.com/users/lara-martin/events{/privacy}",
"followers_url": "https://api.github.com/users/lara-martin/followers",
"following_url": "https://api.github.com/users/lara-martin/following{/other_user}",
"gists_url": "https://api.github.com/users/lara-martin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lara-martin",
"id": 37913218,
"login": "lara-martin",
"node_id": "MDQ6VXNlcjM3OTEzMjE4",
"organizations_url": "https://api.github.com/users/lara-martin/orgs",
"received_events_url": "https://api.github.com/users/lara-martin/received_events",
"repos_url": "https://api.github.com/users/lara-martin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lara-martin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lara-martin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lara-martin"
}
|
[
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] |
closed
| false
| null |
[] | null |
[
"Hi ! I think this is because `iter_archive` doesn't support ZIP files yet. See https://github.com/huggingface/datasets/issues/3272\r\n\r\nYou can navigate into the archive this way instead:\r\n```python\r\n# in split_generators\r\ndata_dir = dl_manager.download_and_extract(url)\r\ntrain_filepath = os.path.join(data_dir, \"all-sci-fi-data-train.txt\")\r\nreturn [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN,\r\n gen_kwargs={\r\n \"filepath\": train_filepath,\r\n },\r\n ),\r\n...\r\n])\r\n\r\n# in generate_examples\r\nwith open(filepath, encoding=\"utf-8\") as f:\r\n ...\r\n```",
"It's working: https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/viewer/Scifi_TV_Shows/test\r\n\r\n<img width=\"1494\" alt=\"Capture d’écran 2021-12-21 à 11 23 51\" src=\"https://user-images.githubusercontent.com/1676121/146914068-f4b7225f-42c5-471d-9c73-2adac722162f.png\">\r\n"
] | 2021-11-12T19:25:19Z
| 2021-12-21T10:24:10Z
| 2021-12-21T10:24:10Z
|
NONE
| null | null | null |
## Dataset viewer issue for '*Science Fiction TV Show Plots Corpus (Scifi_TV_Shows)*'
**Link:** [link](https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows)
I tried adding both a script (https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/blob/main/Scifi_TV_Shows.py) and some dummy examples (https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/tree/main/dummy), but the viewer still has a 404 error ("Not found. Maybe the cache is missing, or maybe the ressource does not exist."). I'm not sure what to try next. Thanks in advance!
Am I the one who added this dataset? Yes
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3261/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3261/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4160
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4160/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4160/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4160/events
|
https://github.com/huggingface/datasets/issues/4160
| 1,202,845,874
|
I_kwDODunzps5Hsfiy
| 4,160
|
RGBA images not showing
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cceyda",
"id": 15624271,
"login": "cceyda",
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"repos_url": "https://api.github.com/users/cceyda/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cceyda"
}
|
[
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
},
{
"color": "6C5FC0",
"default": false,
"description": "",
"id": 4030246674,
"name": "dataset-viewer-rgba-images",
"node_id": "LA_kwDODunzps7wOK8S",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer-rgba-images"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
] | null |
[
"Thanks for reporting. It's a known issue, and we hope to fix it soon.",
"Fixed, thanks!"
] | 2022-04-13T06:59:23Z
| 2022-06-21T16:43:11Z
| 2022-06-21T16:43:11Z
|
CONTRIBUTOR
| null | null | null |
## Dataset viewer issue for ceyda/smithsonian_butterflies_transparent
[**Link:** *link to the dataset viewer page*](https://huggingface.co/datasets/ceyda/smithsonian_butterflies_transparent)

Am I the one who added this dataset ? Yes
👉 More of a general issue of 'RGBA' png images not being supported
(the dataset itself is just for the huggan sprint and not that important, consider it just an example)
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4160/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4160/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1258
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1258/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1258/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1258/events
|
https://github.com/huggingface/datasets/pull/1258
| 758,557,169
|
MDExOlB1bGxSZXF1ZXN0NTMzNzExOTQz
| 1,258
|
arXiv dataset added
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/33005287?v=4",
"events_url": "https://api.github.com/users/tanmoyio/events{/privacy}",
"followers_url": "https://api.github.com/users/tanmoyio/followers",
"following_url": "https://api.github.com/users/tanmoyio/following{/other_user}",
"gists_url": "https://api.github.com/users/tanmoyio/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tanmoyio",
"id": 33005287,
"login": "tanmoyio",
"node_id": "MDQ6VXNlcjMzMDA1Mjg3",
"organizations_url": "https://api.github.com/users/tanmoyio/orgs",
"received_events_url": "https://api.github.com/users/tanmoyio/received_events",
"repos_url": "https://api.github.com/users/tanmoyio/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tanmoyio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanmoyio/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tanmoyio"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Need help"
] | 2020-12-07T14:23:33Z
| 2020-12-08T14:07:15Z
| 2020-12-08T14:07:15Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1258.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1258",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1258.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1258"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1258/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1258/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/5802
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5802/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5802/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5802/events
|
https://github.com/huggingface/datasets/pull/5802
| 1,686,509,799
|
PR_kwDODunzps5PR199
| 5,802
|
Validate non-empty data_files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007818 / 0.011353 (-0.003535) | 0.005456 / 0.011008 (-0.005552) | 0.114685 / 0.038508 (0.076177) | 0.038398 / 0.023109 (0.015289) | 0.351289 / 0.275898 (0.075391) | 0.389170 / 0.323480 (0.065690) | 0.006213 / 0.007986 (-0.001773) | 0.005796 / 0.004328 (0.001467) | 0.085315 / 0.004250 (0.081065) | 0.049251 / 0.037052 (0.012198) | 0.368119 / 0.258489 (0.109630) | 0.394725 / 0.293841 (0.100884) | 0.040390 / 0.128546 (-0.088157) | 0.014076 / 0.075646 (-0.061570) | 0.393771 / 0.419271 (-0.025500) | 0.058929 / 0.043533 (0.015397) | 0.349526 / 0.255139 (0.094387) | 0.378409 / 0.283200 (0.095210) | 0.114354 / 0.141683 (-0.027329) | 1.749244 / 1.452155 (0.297089) | 1.847946 / 1.492716 (0.355229) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241648 / 0.018006 (0.223641) | 0.468419 / 0.000490 (0.467929) | 0.004311 / 0.000200 (0.004111) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029978 / 0.037411 (-0.007433) | 0.121832 / 0.014526 (0.107306) | 0.133516 / 0.176557 (-0.043041) | 0.199174 / 0.737135 (-0.537961) | 0.138181 / 0.296338 (-0.158158) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478346 / 0.215209 (0.263137) | 4.723967 / 2.077655 (2.646312) | 2.107724 / 1.504120 (0.603604) | 1.874810 / 1.541195 (0.333615) | 1.911568 / 1.468490 (0.443078) | 0.800966 / 4.584777 (-3.783811) | 4.399032 / 3.745712 (0.653320) | 2.346160 / 5.269862 (-2.923702) | 1.506673 / 4.565676 (-3.059004) | 0.099119 / 0.424275 (-0.325156) | 0.014055 / 0.007607 (0.006448) | 0.582419 / 0.226044 (0.356375) | 5.789147 / 2.268929 (3.520218) | 2.632443 / 55.444624 (-52.812182) | 2.217630 / 6.876477 (-4.658846) | 2.337709 / 2.142072 (0.195637) | 0.995345 / 4.805227 (-3.809882) | 0.200040 / 6.500664 (-6.300624) | 0.076855 / 0.075469 (0.001386) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.386104 / 1.841788 (-0.455683) | 17.109772 / 8.074308 (9.035464) | 16.147612 / 10.191392 (5.956220) | 0.162846 / 0.680424 (-0.517577) | 0.020692 / 0.534201 (-0.513509) | 0.495752 / 0.579283 (-0.083531) | 0.475715 / 0.434364 (0.041351) | 0.619826 / 0.540337 (0.079488) | 0.720745 / 1.386936 (-0.666191) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008255 / 0.011353 (-0.003098) | 0.006118 / 0.011008 (-0.004890) | 0.088004 / 0.038508 (0.049496) | 0.039225 / 0.023109 (0.016116) | 0.399290 / 0.275898 (0.123392) | 0.432272 / 0.323480 (0.108792) | 0.007382 / 0.007986 (-0.000603) | 0.004576 / 0.004328 (0.000248) | 0.086511 / 0.004250 (0.082260) | 0.050472 / 0.037052 (0.013420) | 0.404160 / 0.258489 (0.145671) | 0.445356 / 0.293841 (0.151515) | 0.041549 / 0.128546 (-0.086997) | 0.014148 / 0.075646 (-0.061498) | 0.101697 / 0.419271 (-0.317574) | 0.057474 / 0.043533 (0.013941) | 0.395093 / 0.255139 (0.139954) | 0.418613 / 0.283200 (0.135414) | 0.123217 / 0.141683 (-0.018466) | 1.726146 / 1.452155 (0.273991) | 1.852746 / 1.492716 (0.360029) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.256876 / 0.018006 (0.238870) | 0.476336 / 0.000490 (0.475846) | 0.000465 / 0.000200 (0.000265) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034304 / 0.037411 (-0.003107) | 0.132617 / 0.014526 (0.118091) | 0.141712 / 0.176557 (-0.034845) | 0.198101 / 0.737135 (-0.539034) | 0.150877 / 0.296338 (-0.145461) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.504717 / 0.215209 (0.289508) | 5.035060 / 2.077655 (2.957405) | 2.494812 / 1.504120 (0.990692) | 2.306601 / 1.541195 (0.765406) | 2.481860 / 1.468490 (1.013370) | 0.826041 / 4.584777 (-3.758736) | 4.414748 / 3.745712 (0.669036) | 2.417899 / 5.269862 (-2.851963) | 1.574548 / 4.565676 (-2.991128) | 0.101712 / 0.424275 (-0.322563) | 0.014388 / 0.007607 (0.006781) | 0.616674 / 0.226044 (0.390630) | 6.180382 / 2.268929 (3.911453) | 2.969110 / 55.444624 (-52.475514) | 2.574383 / 6.876477 (-4.302094) | 2.711008 / 2.142072 (0.568935) | 0.997679 / 4.805227 (-3.807548) | 0.201241 / 6.500664 (-6.299423) | 0.076132 / 0.075469 (0.000663) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.542704 / 1.841788 (-0.299084) | 17.610700 / 8.074308 (9.536392) | 16.152973 / 10.191392 (5.961581) | 0.166040 / 0.680424 (-0.514384) | 0.020286 / 0.534201 (-0.513915) | 0.506724 / 0.579283 (-0.072559) | 0.484348 / 0.434364 (0.049984) | 0.606524 / 0.540337 (0.066187) | 0.734997 / 1.386936 (-0.651939) |\n\n</details>\n</details>\n\n\n"
] | 2023-04-27T09:51:36Z
| 2023-04-27T14:59:47Z
| 2023-04-27T14:51:40Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5802.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5802",
"merged_at": "2023-04-27T14:51:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5802.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5802"
}
|
This PR adds validation of `data_files`, so that they are non-empty (str, list, or dict) or `None` (default).
See: https://github.com/huggingface/datasets/pull/5787#discussion_r1178862327
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5802/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5802/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4954
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4954/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4954/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4954/events
|
https://github.com/huggingface/datasets/pull/4954
| 1,366,369,682
|
PR_kwDODunzps4-mhl5
| 4,954
|
Pin TensorFlow temporarily
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-08T13:46:15Z
| 2022-09-08T14:12:33Z
| 2022-09-08T14:10:03Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4954.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4954",
"merged_at": "2022-09-08T14:10:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4954.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4954"
}
|
Temporarily fix TensorFlow until a permanent solution is found.
Related to:
- #4953
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4954/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4954/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5036
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5036/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5036/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5036/events
|
https://github.com/huggingface/datasets/pull/5036
| 1,389,094,075
|
PR_kwDODunzps4_w8Bs
| 5,036
|
Add oversampling strategy iterable datasets interleave
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ylacombe",
"id": 52246514,
"login": "ylacombe",
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ylacombe"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-28T10:10:23Z
| 2022-09-30T12:30:48Z
| 2022-09-30T12:28:23Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5036.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5036",
"merged_at": "2022-09-30T12:28:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5036.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5036"
}
|
Hello everyone,
Following the issue #4893 and the PR #4831, I propose here an oversampling strategy for a `IterableDataset` list.
The `all_exhausted` strategy stops building the new dataset as soon as all samples in each dataset have been added at least once.
It follows roughly the same logic behind #4831, namely:
- if ``probabilities`` is `None` and the strategy is `all_exhausted`, it simply performs a round robin interleaving that stops when the longest dataset is out of samples. Here the new dataset length will be $maxLengthDataset*nbDataset$.
- if ``probabilities`` is not `None` and the strategy is `all_exhausted`, it keeps trace of the datasets which were out of samples but continues to add them to the new dataset, and stops as soons as every dataset runs out of samples at least once.
In order to be consistent and also to align with the `Dataset` behavior, please note that the behavior of the default strategy (`first_exhausted`) has been changed. Namely, it really stops when a dataset is out of samples whereas it used to stop when receiving the `StopIteration` error.
To give an example of the last note, with the following snippet:
```
>>> from tests.test_iterable_dataset import *
>>> d1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {"a": i}) for i in [0, 1, 2]])), {}))
>>> d2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {"a": i}) for i in [10, 11, 12, 13]])), {}))
>>> d3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {"a": i}) for i in [20, 21, 22, 23, 24]])), {}))
>>> dataset = interleave_datasets([d1, d2, d3])
>>> [x["a"] for x in dataset]
```
The result here will then be `[10, 0, 11, 1, 2]` instead of `[10, 0, 11, 1, 2, 20, 12, 13]`.
I modified the behavior because I found it to be consistent with the under/oversampling approach and because it unified the undersampling and oversampling code, but I stay open to any suggestions.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5036/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5036/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4361
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4361/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4361/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4361/events
|
https://github.com/huggingface/datasets/issues/4361
| 1,238,671,931
|
I_kwDODunzps5J1KI7
| 4,361
|
`udhr` doesn't load, dataset checksum mismatch
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/leondz",
"id": 121934,
"login": "leondz",
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"repos_url": "https://api.github.com/users/leondz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/leondz"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[] | 2022-05-17T13:47:09Z
| 2022-06-08T19:11:21Z
| 2022-06-08T19:11:21Z
|
CONTRIBUTOR
| null | null | null |
## Describe the bug
Loading `udhr` fails due to a checksum mismatch for some source files. Looks like both of the source files on unicode.org have changed:
size + checksum in datasets repo:
```
(hfdev) leon@blade:~/datasets/datasets/udhr$ jq .default.download_checksums < dataset_infos.json
{
"https://unicode.org/udhr/assemblies/udhr_xml.zip": {
"num_bytes": 2273633,
"checksum": "0565fa62c2ff155b84123198bcc967edd8c5eb9679eadc01e6fb44a5cf730fee"
},
"https://unicode.org/udhr/assemblies/udhr_txt.zip": {
"num_bytes": 2107471,
"checksum": "087b474a070dd4096ae3028f9ee0b30dcdcb030cc85a1ca02e143be46327e5e5"
}
}
```
size + checksum regenerated from current source files:
```
(hfdev) leon@blade:~/datasets/datasets/udhr$ rm dataset_infos.json
(hfdev) leon@blade:~/datasets/datasets/udhr$ datasets-cli test --save_infos udhr.py
Using custom data configuration default
Testing builder 'default' (1/1)
Downloading and preparing dataset udhn/default (download: 4.18 MiB, generated: 6.15 MiB, post-processed: Unknown size, total: 10.33 MiB) to /home/leon/.cache/huggingface/datasets/udhn/default/0.0.0/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66...
Dataset udhn downloaded and prepared to /home/leon/.cache/huggingface/datasets/udhn/default/0.0.0/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66. Subsequent calls will reuse this data.
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 686.69it/s]
Dataset Infos file saved at dataset_infos.json
Test successful.
(hfdev) leon@blade:~/datasets/datasets/udhr$ jq .default.download_checksums < dataset_infos.json
{
"https://unicode.org/udhr/assemblies/udhr_xml.zip": {
"num_bytes": 2389690,
"checksum": "a3350912790196c6e1b26bfd1c8a50e8575f5cf185922ecd9bd15713d7d21438"
},
"https://unicode.org/udhr/assemblies/udhr_txt.zip": {
"num_bytes": 2215441,
"checksum": "cb87ecb25b56f34e4fd6f22b323000524fd9c06ae2a29f122b048789cf17e9fe"
}
}
(hfdev) leon@blade:~/datasets/datasets/udhr$
```
--- is unicode.org a sustainable hosting solution for this dataset?
## Steps to reproduce the bug
```python
from datasets import load_dataset
udhr = load_dataset("udhr")
```
## Expected results
That a Dataset object containing the UDHR data will be returned.
## Actual results
```
>>> d = load_dataset('udhr')
Using custom data configuration default
Downloading and preparing dataset udhn/default (download: 4.18 MiB, generated: 6.15 MiB, post-processed: Unknown size, total: 10.33 MiB) to /home/leon/.cache/huggingface/datasets/udhn/default/0.0.0/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/leon/.local/lib/python3.9/site-packages/datasets/load.py", line 1731, in load_dataset
builder_instance.download_and_prepare(
File "/home/leon/.local/lib/python3.9/site-packages/datasets/builder.py", line 613, in download_and_prepare
self._download_and_prepare(
File "/home/leon/.local/lib/python3.9/site-packages/datasets/builder.py", line 1117, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/home/leon/.local/lib/python3.9/site-packages/datasets/builder.py", line 684, in _download_and_prepare
verify_checksums(
File "/home/leon/.local/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://unicode.org/udhr/assemblies/udhr_xml.zip', 'https://unicode.org/udhr/assemblies/udhr_txt.zip']
>>>
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.1 commit/4110fb6034f79c5fb470cf1043ff52180e9c63b7
- Platform: Linux Ubuntu 20.04
- Python version: 3.9.12
- PyArrow version: 8.0.0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4361/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4361/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5563
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5563/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5563/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5563/events
|
https://github.com/huggingface/datasets/pull/5563
| 1,595,049,025
|
PR_kwDODunzps5KgtbL
| 5,563
|
Release: 2.10.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009437 / 0.011353 (-0.001916) | 0.004999 / 0.011008 (-0.006010) | 0.098839 / 0.038508 (0.060331) | 0.035496 / 0.023109 (0.012386) | 0.300726 / 0.275898 (0.024828) | 0.359793 / 0.323480 (0.036313) | 0.007694 / 0.007986 (-0.000292) | 0.003980 / 0.004328 (-0.000348) | 0.075240 / 0.004250 (0.070989) | 0.041149 / 0.037052 (0.004097) | 0.313185 / 0.258489 (0.054696) | 0.344111 / 0.293841 (0.050270) | 0.037775 / 0.128546 (-0.090772) | 0.011901 / 0.075646 (-0.063745) | 0.332631 / 0.419271 (-0.086641) | 0.047194 / 0.043533 (0.003661) | 0.306902 / 0.255139 (0.051763) | 0.321725 / 0.283200 (0.038525) | 0.101031 / 0.141683 (-0.040652) | 1.458778 / 1.452155 (0.006623) | 1.530196 / 1.492716 (0.037480) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203241 / 0.018006 (0.185235) | 0.447147 / 0.000490 (0.446657) | 0.004159 / 0.000200 (0.003959) | 0.000131 / 0.000054 (0.000076) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025845 / 0.037411 (-0.011566) | 0.106966 / 0.014526 (0.092440) | 0.115876 / 0.176557 (-0.060681) | 0.179052 / 0.737135 (-0.558084) | 0.123012 / 0.296338 (-0.173327) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408766 / 0.215209 (0.193557) | 4.080400 / 2.077655 (2.002745) | 1.893747 / 1.504120 (0.389627) | 1.709389 / 1.541195 (0.168194) | 1.768071 / 1.468490 (0.299581) | 0.689717 / 4.584777 (-3.895059) | 3.760897 / 3.745712 (0.015185) | 2.017050 / 5.269862 (-3.252811) | 1.333027 / 4.565676 (-3.232650) | 0.083559 / 0.424275 (-0.340716) | 0.011951 / 0.007607 (0.004344) | 0.512313 / 0.226044 (0.286268) | 5.162696 / 2.268929 (2.893767) | 2.418559 / 55.444624 (-53.026065) | 2.110178 / 6.876477 (-4.766299) | 2.113635 / 2.142072 (-0.028437) | 0.835171 / 4.805227 (-3.970056) | 0.164222 / 6.500664 (-6.336442) | 0.061955 / 0.075469 (-0.013515) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198336 / 1.841788 (-0.643452) | 14.531468 / 8.074308 (6.457160) | 13.882133 / 10.191392 (3.690741) | 0.154524 / 0.680424 (-0.525900) | 0.028782 / 0.534201 (-0.505419) | 0.441808 / 0.579283 (-0.137475) | 0.433096 / 0.434364 (-0.001268) | 0.518229 / 0.540337 (-0.022108) | 0.603201 / 1.386936 (-0.783735) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007385 / 0.011353 (-0.003967) | 0.005193 / 0.011008 (-0.005815) | 0.075517 / 0.038508 (0.037009) | 0.033192 / 0.023109 (0.010083) | 0.332299 / 0.275898 (0.056401) | 0.363043 / 0.323480 (0.039563) | 0.006368 / 0.007986 (-0.001617) | 0.004003 / 0.004328 (-0.000326) | 0.073710 / 0.004250 (0.069460) | 0.046916 / 0.037052 (0.009863) | 0.336307 / 0.258489 (0.077818) | 0.384910 / 0.293841 (0.091069) | 0.038132 / 0.128546 (-0.090414) | 0.012283 / 0.075646 (-0.063364) | 0.088036 / 0.419271 (-0.331235) | 0.049699 / 0.043533 (0.006166) | 0.333953 / 0.255139 (0.078814) | 0.352961 / 0.283200 (0.069762) | 0.101905 / 0.141683 (-0.039778) | 1.470480 / 1.452155 (0.018325) | 1.498212 / 1.492716 (0.005496) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275067 / 0.018006 (0.257061) | 0.452589 / 0.000490 (0.452099) | 0.047067 / 0.000200 (0.046867) | 0.000983 / 0.000054 (0.000929) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028649 / 0.037411 (-0.008762) | 0.108385 / 0.014526 (0.093859) | 0.121213 / 0.176557 (-0.055343) | 0.192236 / 0.737135 (-0.544899) | 0.124620 / 0.296338 (-0.171719) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428742 / 0.215209 (0.213533) | 4.264893 / 2.077655 (2.187238) | 2.061650 / 1.504120 (0.557530) | 1.873267 / 1.541195 (0.332072) | 1.961012 / 1.468490 (0.492522) | 0.708904 / 4.584777 (-3.875873) | 3.821289 / 3.745712 (0.075577) | 3.287231 / 5.269862 (-1.982631) | 1.903539 / 4.565676 (-2.662137) | 0.086474 / 0.424275 (-0.337801) | 0.012101 / 0.007607 (0.004494) | 0.531411 / 0.226044 (0.305367) | 5.216785 / 2.268929 (2.947857) | 2.575209 / 55.444624 (-52.869416) | 2.264902 / 6.876477 (-4.611574) | 2.291225 / 2.142072 (0.149153) | 0.853486 / 4.805227 (-3.951741) | 0.168550 / 6.500664 (-6.332114) | 0.064158 / 0.075469 (-0.011311) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.295830 / 1.841788 (-0.545958) | 14.419524 / 8.074308 (6.345216) | 13.397985 / 10.191392 (3.206593) | 0.181367 / 0.680424 (-0.499057) | 0.017666 / 0.534201 (-0.516535) | 0.420645 / 0.579283 (-0.158638) | 0.421025 / 0.434364 (-0.013339) | 0.527369 / 0.540337 (-0.012969) | 0.627175 / 1.386936 (-0.759761) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008717 / 0.011353 (-0.002635) | 0.004573 / 0.011008 (-0.006435) | 0.103660 / 0.038508 (0.065151) | 0.035274 / 0.023109 (0.012165) | 0.298563 / 0.275898 (0.022665) | 0.384397 / 0.323480 (0.060917) | 0.006932 / 0.007986 (-0.001053) | 0.003422 / 0.004328 (-0.000907) | 0.080193 / 0.004250 (0.075943) | 0.039767 / 0.037052 (0.002714) | 0.310296 / 0.258489 (0.051807) | 0.351361 / 0.293841 (0.057520) | 0.033532 / 0.128546 (-0.095014) | 0.011543 / 0.075646 (-0.064104) | 0.374816 / 0.419271 (-0.044456) | 0.046046 / 0.043533 (0.002513) | 0.306918 / 0.255139 (0.051779) | 0.382242 / 0.283200 (0.099042) | 0.098945 / 0.141683 (-0.042738) | 1.456929 / 1.452155 (0.004775) | 1.535763 / 1.492716 (0.043046) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011759 / 0.018006 (-0.006247) | 0.405345 / 0.000490 (0.404855) | 0.002667 / 0.000200 (0.002467) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023924 / 0.037411 (-0.013487) | 0.095537 / 0.014526 (0.081011) | 0.106959 / 0.176557 (-0.069598) | 0.170782 / 0.737135 (-0.566353) | 0.109169 / 0.296338 (-0.187170) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437521 / 0.215209 (0.222312) | 4.383556 / 2.077655 (2.305902) | 2.092055 / 1.504120 (0.587935) | 1.889316 / 1.541195 (0.348121) | 1.937436 / 1.468490 (0.468946) | 0.700175 / 4.584777 (-3.884602) | 3.358107 / 3.745712 (-0.387605) | 3.243226 / 5.269862 (-2.026636) | 1.620497 / 4.565676 (-2.945180) | 0.083063 / 0.424275 (-0.341212) | 0.012970 / 0.007607 (0.005363) | 0.544226 / 0.226044 (0.318181) | 5.483315 / 2.268929 (3.214386) | 2.555183 / 55.444624 (-52.889441) | 2.204230 / 6.876477 (-4.672247) | 2.230551 / 2.142072 (0.088478) | 0.816121 / 4.805227 (-3.989106) | 0.151356 / 6.500664 (-6.349308) | 0.068564 / 0.075469 (-0.006905) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.208420 / 1.841788 (-0.633367) | 13.652597 / 8.074308 (5.578289) | 14.096318 / 10.191392 (3.904926) | 0.154473 / 0.680424 (-0.525951) | 0.028436 / 0.534201 (-0.505765) | 0.399949 / 0.579283 (-0.179334) | 0.398961 / 0.434364 (-0.035403) | 0.488703 / 0.540337 (-0.051634) | 0.572640 / 1.386936 (-0.814296) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006373 / 0.011353 (-0.004979) | 0.004368 / 0.011008 (-0.006640) | 0.076410 / 0.038508 (0.037902) | 0.027055 / 0.023109 (0.003945) | 0.336969 / 0.275898 (0.061071) | 0.374533 / 0.323480 (0.051053) | 0.004781 / 0.007986 (-0.003204) | 0.003317 / 0.004328 (-0.001011) | 0.076099 / 0.004250 (0.071849) | 0.038414 / 0.037052 (0.001361) | 0.339578 / 0.258489 (0.081089) | 0.384138 / 0.293841 (0.090297) | 0.031581 / 0.128546 (-0.096965) | 0.011666 / 0.075646 (-0.063981) | 0.085690 / 0.419271 (-0.333582) | 0.042277 / 0.043533 (-0.001256) | 0.337931 / 0.255139 (0.082792) | 0.365827 / 0.283200 (0.082628) | 0.088713 / 0.141683 (-0.052970) | 1.519789 / 1.452155 (0.067635) | 1.583097 / 1.492716 (0.090381) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223472 / 0.018006 (0.205466) | 0.392474 / 0.000490 (0.391984) | 0.002739 / 0.000200 (0.002539) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024373 / 0.037411 (-0.013038) | 0.099822 / 0.014526 (0.085296) | 0.106128 / 0.176557 (-0.070428) | 0.174688 / 0.737135 (-0.562447) | 0.112660 / 0.296338 (-0.183678) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436317 / 0.215209 (0.221108) | 4.358277 / 2.077655 (2.280622) | 2.089746 / 1.504120 (0.585626) | 1.881040 / 1.541195 (0.339845) | 1.923653 / 1.468490 (0.455163) | 0.698176 / 4.584777 (-3.886601) | 3.346460 / 3.745712 (-0.399252) | 3.301429 / 5.269862 (-1.968433) | 1.391042 / 4.565676 (-3.174634) | 0.083025 / 0.424275 (-0.341250) | 0.012459 / 0.007607 (0.004851) | 0.533011 / 0.226044 (0.306967) | 5.334984 / 2.268929 (3.066056) | 2.534105 / 55.444624 (-52.910520) | 2.206295 / 6.876477 (-4.670181) | 2.231752 / 2.142072 (0.089680) | 0.798650 / 4.805227 (-4.006577) | 0.150070 / 6.500664 (-6.350594) | 0.066898 / 0.075469 (-0.008571) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.310527 / 1.841788 (-0.531261) | 13.920492 / 8.074308 (5.846184) | 13.359382 / 10.191392 (3.167990) | 0.154561 / 0.680424 (-0.525863) | 0.016387 / 0.534201 (-0.517814) | 0.379892 / 0.579283 (-0.199391) | 0.376746 / 0.434364 (-0.057618) | 0.462606 / 0.540337 (-0.077732) | 0.550895 / 1.386936 (-0.836041) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009373 / 0.011353 (-0.001980) | 0.005212 / 0.011008 (-0.005797) | 0.099287 / 0.038508 (0.060779) | 0.035175 / 0.023109 (0.012066) | 0.307012 / 0.275898 (0.031114) | 0.335105 / 0.323480 (0.011625) | 0.008006 / 0.007986 (0.000020) | 0.004017 / 0.004328 (-0.000311) | 0.075519 / 0.004250 (0.071269) | 0.040276 / 0.037052 (0.003223) | 0.302615 / 0.258489 (0.044126) | 0.361742 / 0.293841 (0.067901) | 0.038773 / 0.128546 (-0.089773) | 0.011892 / 0.075646 (-0.063754) | 0.334199 / 0.419271 (-0.085073) | 0.048035 / 0.043533 (0.004503) | 0.301361 / 0.255139 (0.046222) | 0.321996 / 0.283200 (0.038796) | 0.101818 / 0.141683 (-0.039865) | 1.442601 / 1.452155 (-0.009554) | 1.530669 / 1.492716 (0.037953) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201470 / 0.018006 (0.183464) | 0.496305 / 0.000490 (0.495815) | 0.003794 / 0.000200 (0.003594) | 0.000149 / 0.000054 (0.000094) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028401 / 0.037411 (-0.009010) | 0.107924 / 0.014526 (0.093398) | 0.121716 / 0.176557 (-0.054840) | 0.187407 / 0.737135 (-0.549728) | 0.124755 / 0.296338 (-0.171583) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395667 / 0.215209 (0.180457) | 3.939079 / 2.077655 (1.861424) | 1.776308 / 1.504120 (0.272188) | 1.583487 / 1.541195 (0.042292) | 1.682957 / 1.468490 (0.214467) | 0.677322 / 4.584777 (-3.907455) | 3.796987 / 3.745712 (0.051275) | 3.406199 / 5.269862 (-1.863663) | 1.905467 / 4.565676 (-2.660210) | 0.083189 / 0.424275 (-0.341086) | 0.012156 / 0.007607 (0.004549) | 0.507078 / 0.226044 (0.281033) | 5.031293 / 2.268929 (2.762365) | 2.228403 / 55.444624 (-53.216221) | 1.885760 / 6.876477 (-4.990717) | 1.962340 / 2.142072 (-0.179732) | 0.824979 / 4.805227 (-3.980248) | 0.162107 / 6.500664 (-6.338557) | 0.062324 / 0.075469 (-0.013145) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.205104 / 1.841788 (-0.636683) | 15.368896 / 8.074308 (7.294588) | 14.757540 / 10.191392 (4.566148) | 0.177544 / 0.680424 (-0.502880) | 0.029097 / 0.534201 (-0.505104) | 0.445252 / 0.579283 (-0.134031) | 0.456521 / 0.434364 (0.022157) | 0.544166 / 0.540337 (0.003829) | 0.640675 / 1.386936 (-0.746261) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007438 / 0.011353 (-0.003914) | 0.005236 / 0.011008 (-0.005772) | 0.075379 / 0.038508 (0.036871) | 0.033274 / 0.023109 (0.010165) | 0.344584 / 0.275898 (0.068686) | 0.372161 / 0.323480 (0.048681) | 0.005914 / 0.007986 (-0.002071) | 0.004176 / 0.004328 (-0.000152) | 0.073311 / 0.004250 (0.069061) | 0.050845 / 0.037052 (0.013793) | 0.338978 / 0.258489 (0.080489) | 0.391563 / 0.293841 (0.097722) | 0.037559 / 0.128546 (-0.090987) | 0.012455 / 0.075646 (-0.063192) | 0.086224 / 0.419271 (-0.333047) | 0.052956 / 0.043533 (0.009423) | 0.338529 / 0.255139 (0.083390) | 0.356752 / 0.283200 (0.073553) | 0.105864 / 0.141683 (-0.035819) | 1.467727 / 1.452155 (0.015572) | 1.588727 / 1.492716 (0.096010) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215959 / 0.018006 (0.197953) | 0.440619 / 0.000490 (0.440129) | 0.000397 / 0.000200 (0.000197) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028855 / 0.037411 (-0.008556) | 0.114239 / 0.014526 (0.099713) | 0.121726 / 0.176557 (-0.054830) | 0.190377 / 0.737135 (-0.546759) | 0.127858 / 0.296338 (-0.168480) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415399 / 0.215209 (0.200190) | 4.159012 / 2.077655 (2.081357) | 1.987593 / 1.504120 (0.483474) | 1.794785 / 1.541195 (0.253591) | 1.924819 / 1.468490 (0.456329) | 0.696082 / 4.584777 (-3.888694) | 3.820461 / 3.745712 (0.074749) | 2.139236 / 5.269862 (-3.130626) | 1.348593 / 4.565676 (-3.217084) | 0.086536 / 0.424275 (-0.337739) | 0.012510 / 0.007607 (0.004902) | 0.518804 / 0.226044 (0.292760) | 5.188659 / 2.268929 (2.919730) | 2.501303 / 55.444624 (-52.943322) | 2.138831 / 6.876477 (-4.737646) | 2.220451 / 2.142072 (0.078378) | 0.836277 / 4.805227 (-3.968950) | 0.170940 / 6.500664 (-6.329724) | 0.067326 / 0.075469 (-0.008143) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.307848 / 1.841788 (-0.533940) | 15.995785 / 8.074308 (7.921477) | 13.646285 / 10.191392 (3.454893) | 0.181120 / 0.680424 (-0.499304) | 0.017500 / 0.534201 (-0.516701) | 0.426697 / 0.579283 (-0.152586) | 0.436702 / 0.434364 (0.002338) | 0.518060 / 0.540337 (-0.022278) | 0.632577 / 1.386936 (-0.754359) |\n\n</details>\n</details>\n\n\n"
] | 2023-02-22T12:48:52Z
| 2023-02-22T13:05:55Z
| 2023-02-22T12:56:48Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5563.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5563",
"merged_at": "2023-02-22T12:56:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5563.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5563"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5563/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5563/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1117
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1117/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1117/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1117/events
|
https://github.com/huggingface/datasets/pull/1117
| 757,133,789
|
MDExOlB1bGxSZXF1ZXN0NTMyNTYwNzM4
| 1,117
|
Fix incorrect MRQA train+SQuAD URL
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6259768?v=4",
"events_url": "https://api.github.com/users/yuxiang-wu/events{/privacy}",
"followers_url": "https://api.github.com/users/yuxiang-wu/followers",
"following_url": "https://api.github.com/users/yuxiang-wu/following{/other_user}",
"gists_url": "https://api.github.com/users/yuxiang-wu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yuxiang-wu",
"id": 6259768,
"login": "yuxiang-wu",
"node_id": "MDQ6VXNlcjYyNTk3Njg=",
"organizations_url": "https://api.github.com/users/yuxiang-wu/orgs",
"received_events_url": "https://api.github.com/users/yuxiang-wu/received_events",
"repos_url": "https://api.github.com/users/yuxiang-wu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yuxiang-wu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuxiang-wu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yuxiang-wu"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks ! could you regenerate the dataset_infos.json file ?\r\n\r\n```\r\ndatasets-cli test ./datasets/mrqa --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\nalso cc @VictorSanh ",
"Oooops, good catch @jimmycode ",
"> Thanks ! could you regenerate the dataset_infos.json file ?\r\n> \r\n> ```\r\n> datasets-cli test ./datasets/mrqa --save_infos --all_configs --ignore_verifications\r\n> ```\r\n> \r\n> also cc @VictorSanh\r\n\r\nUpdated the `dataset_infos.json` file."
] | 2020-12-04T14:14:26Z
| 2020-12-06T17:14:11Z
| 2020-12-06T17:14:10Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1117.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1117",
"merged_at": "2020-12-06T17:14:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1117.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1117"
}
|
Fix issue #1115
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1117/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1117/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2721
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2721/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2721/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2721/events
|
https://github.com/huggingface/datasets/pull/2721
| 954,238,230
|
MDExOlB1bGxSZXF1ZXN0Njk4MTY0Njg3
| 2,721
|
Deal with the bad check in test_load.py
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! I did a change for this test already in #2662 :\r\n\r\nhttps://github.com/huggingface/datasets/blob/00686c46b7aaf6bfcd4102cec300a3c031284a5a/tests/test_load.py#L312-L316\r\n\r\n(though I have to change the variable name `m_combined_path` to `m_url` or something)\r\n\r\nI guess it's ok to remove this check for now :)"
] | 2021-07-27T20:23:23Z
| 2021-07-28T09:58:34Z
| 2021-07-28T08:53:18Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2721.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2721",
"merged_at": "2021-07-28T08:53:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2721.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2721"
}
|
This PR removes a check that's been added in #2684. My intention with this check was to capture an URL in the error message, but instead, it captures a substring of the previous regex match in the test function. Another option would be to replace this check with:
```python
m_paths = re.findall(r"\S*_dummy/_dummy.py\b", str(exc_info.value)) # on Linux this will match an URL as well as a local_path due to different os.sep, so take the last element (an URL always comes last in the list)
assert len(m_paths) > 0 and is_remote_url(m_paths[-1]) # is_remote_url comes from datasets.utils.file_utils
```
@lhoestq Let me know which one of these two approaches (delete or replace) do you prefer?
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2721/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2721/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5700
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5700/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5700/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5700/events
|
https://github.com/huggingface/datasets/pull/5700
| 1,652,527,530
|
PR_kwDODunzps5Ng6g_
| 5,700
|
fix: fix wrong modification of the 'cache_file_name' -related paramet…
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47528215?v=4",
"events_url": "https://api.github.com/users/FrancoisNoyez/events{/privacy}",
"followers_url": "https://api.github.com/users/FrancoisNoyez/followers",
"following_url": "https://api.github.com/users/FrancoisNoyez/following{/other_user}",
"gists_url": "https://api.github.com/users/FrancoisNoyez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/FrancoisNoyez",
"id": 47528215,
"login": "FrancoisNoyez",
"node_id": "MDQ6VXNlcjQ3NTI4MjE1",
"organizations_url": "https://api.github.com/users/FrancoisNoyez/orgs",
"received_events_url": "https://api.github.com/users/FrancoisNoyez/received_events",
"repos_url": "https://api.github.com/users/FrancoisNoyez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/FrancoisNoyez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FrancoisNoyez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/FrancoisNoyez"
}
|
[] |
open
| false
| null |
[] | null |
[
"Have you tried to set the cache file names if `keep_in_memory`is True ?\r\n\r\n```diff\r\n- if self.cache_files:\r\n+ if self.cache_files and not keep_in_memory:\r\n```\r\n\r\nThis way it doesn't change the indice cache arguments and leave them as `None`",
"@lhoestq \r\nRegarding what you suggest:\r\nThe thing is, if cached files already exist and do correspond to the split that we are currently trying to perform, then it would be a shame not to use them, would it not? So I don't think that we should necessarily bypass this step in the method (corresponding to the reading of already existing data), if 'keep_in_memory' = True. For me, 'keep_in_memory' = True is supposed to mean \"don't cache the output of this method\", but it should say nothing regarding what to do with potentially already existing cached data, should it?\r\nBesides, even if we do what you suggest, and do only that (so, not the modifs that I suggested), then, assuming that 'keep_in_memory' = False and that there exist cached files, if the following check on the existence of cached files with specific name fails, we will still have ended up modifying an input value which will be then used in the remaining of the method, potentially altering the behavior that the user intended the method's call to have. Basically, the issue with what you suggest is that we can't guaranty that we won't continue with the remaining of the method even if this condition is met. Because of that, in my opinion, the best way to not have to worry about potential, unwanted side effects in the rest of the code is to not modify those variables in place, and so, here, to use other variables.\r\nSo, I'm sorry, but for those two reasons, I don't think that what you are suggesting addresses the problems which are described in the opened issue.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5700). All of your documentation changes will be reflected on that endpoint.",
"Makes sense ! Therefore removing the ValueError messages sounds good to me, thanks for detailing.\r\n\r\nThen I think it's fine to keep using the same variables for the cache file names is enough instead of defining new ones - it doesn't alter the behavior of the function. Otherwise it would feel a bit confusing to have similar variables with slightly modified names just for that",
"Ok for the removing the ValueError exceptions, thanks.\r\n\r\nThat said, it seems to me like we should still find a way not to modify the values input by the user, insofar as they can be used elsewhere down the line in the program. Sure, here, by removing the raising of those ValueError exceptions, we have fixed one use cases were allowing this modification actually caused an issue, but maybe there are other use cases where this would also caused an issue? Also, maybe in the future we will add other functionalities which will depend on the values of those input parameters, with then new risks of such an issue occurring?\r\nThat's why, in order not to have to worry about that, and in order to make the code a bit more future -proof, I suggest that make sure those input values are not modified.\r\n\r\nOne way that I did this is to create different but similar looking variable names. If you find this confusing, we can always add a comment.\r\nAnother way would be to not store the result of the conditional definition of the values (the '\\_cache_file_name = (... if condition else ...)' in my proposition of code), and to use it every time we need. But since we use those new variables at least twice, that creates code redundancy, which is not great either.\r\nFinally, a third way that I can imagine would be to put all this logic into its own method, which would then encapsulate it, and protect the remaining of the 'train_test_split' code from all unintended side effect that this logic can currently cause. This one is probably best. Also, maybe it could be used to remove some code redundancy elsewhere in the definition of the Dataset class? I have not checked if such a code redundancy exists.",
"We're already replacing the user's input by default values automatically in other methods, it's fine to do it here as well and actually fits the library's style.\r\n\r\nNote that the case where it would reload the cache even if `keep_in_memory=True` is not implemented though, but it should be easy to add in `_select_with_indices_mapping`:\r\n- add keep_in_memory in `_new_dataset_with_indices` that uses InMemoryTable.from_file\r\n- inside `_select_with_indices_mapping` return the dataset from `_new_dataset_with_indices` if:\r\n - `keep_in_memory=True`\r\n - and `indices_cache_file_name` is not None and exists \r\n - and `is_caching_enabled()`\r\n\r\nBecause if we let it this way it would recreate the cache file unfortunately",
"> We're already replacing the user's input by default values automatically in other methods, it's fine to do it here as well and actually fits the library's style.\r\n\r\nI think the fact that it's a style of the library is not really an argument in itself; however, after thinking through it several times, I think I know see why your solution is acceptable: as soon as the user specifies that 'keep_in_memory=True', they should not care anymore about the value of the '\\_indices_cache_file_name' variables, since from their point of view those are now irrelevant. So it's \"fine\" if we allow ourselves to modify the value of those variables, if it helps the internal code being more concise.\r\nStill, I find that it's a bit unintuitive, and a risk as far as future evolution of the method / of the code is concerned; someone tasked with doing that would need to have the knowledge of a lot of, if not all, the other methods of the class, in order to understand the potentially far-reaching impact of some modifications made to this portion of the code. But I guess that's a choice which is the library's owners to make. Also, if we use your proposed solution, as I explained, we can't get the benefit of potentially reusing possibly already existing cached data.\r\nOn that note...\r\n\r\n> Note that the case where it would reload the cache even if `keep_in_memory=True` is not implemented though\r\n\r\nI'm not sure what you mean here:\r\nWithin the current code trying to load up the potentially already existing split data, there is no trace of the 'keep_in_memory' variable. So why do you say that 'the case where it would reload the cache even if keep_in_memory=True is not implemented' (I assume that you mean 'currently implemented')? Surely, currently, this bit of code works regardless of the value of the 'keep_in_memory' variable', does it not?"
] | 2023-04-03T18:05:26Z
| 2023-04-06T17:17:27Z
| null |
NONE
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5700.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5700",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5700.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5700"
}
|
…ers values in 'train_test_split' + fix bad interaction between 'keep_in_memory' and 'cache_file_name' -related parameters (#5699)
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5700/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5700/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1781
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1781/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1781/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1781/events
|
https://github.com/huggingface/datasets/issues/1781
| 793,914,556
|
MDU6SXNzdWU3OTM5MTQ1NTY=
| 1,781
|
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' during import
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/45964869?v=4",
"events_url": "https://api.github.com/users/PalaashAgrawal/events{/privacy}",
"followers_url": "https://api.github.com/users/PalaashAgrawal/followers",
"following_url": "https://api.github.com/users/PalaashAgrawal/following{/other_user}",
"gists_url": "https://api.github.com/users/PalaashAgrawal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PalaashAgrawal",
"id": 45964869,
"login": "PalaashAgrawal",
"node_id": "MDQ6VXNlcjQ1OTY0ODY5",
"organizations_url": "https://api.github.com/users/PalaashAgrawal/orgs",
"received_events_url": "https://api.github.com/users/PalaashAgrawal/received_events",
"repos_url": "https://api.github.com/users/PalaashAgrawal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PalaashAgrawal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PalaashAgrawal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PalaashAgrawal"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! I'm not able to reproduce the issue. Can you try restarting your runtime ?\r\n\r\nThe PyExtensionType is available in pyarrow starting 0.17.1 iirc. If restarting your runtime doesn't fix this, can you try updating pyarrow ?\r\n```\r\npip install pyarrow --upgrade\r\n```",
"We should bump up the version test of pyarrow maybe no?\r\n\r\nhttps://github.com/huggingface/datasets/blob/master/src/datasets/__init__.py#L60",
"Yes indeed.\r\n\r\nAlso it looks like Pyarrow 3.0.0 got released on pypi 10 hours ago. This might be related to the bug, I'll investigate\r\nEDIT: looks like the 3.0.0 release doesn't have unexpected breaking changes for us, so I don't think the issue comes from that",
"Maybe colab moved to pyarrow 0.16 by default (instead of 0.14 before)?",
"Installing datasets installs pyarrow>=0.17.1 so in theory it doesn't matter which version of pyarrow colab has by default (which is currently pyarrow 0.14.1).\r\n\r\nAlso now the colab runtime refresh the pyarrow version automatically after the update from pip (previously you needed to restart your runtime).\r\n\r\nI guess what happened is that Colab didn't refresh pyarrow for some reason, and the AttributeError was raised *before* the pyarrow version check from `datasets` at https://github.com/huggingface/datasets/blob/master/src/datasets/__init__.py#L60",
"Yes colab doesn’t reload preloaded library unless you restart the instance. Maybe we should move the check on top of the init ",
"Yes I'll do that :)",
"I updated the pyarrow version check in #1782"
] | 2021-01-26T04:18:35Z
| 2022-10-05T12:37:06Z
| 2022-10-05T12:37:06Z
|
NONE
| null | null | null |
I'm using Colab. And suddenly this morning, there is this error. Have a look below!

|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1781/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1781/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3611
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3611/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3611/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3611/events
|
https://github.com/huggingface/datasets/issues/3611
| 1,110,399,096
|
I_kwDODunzps5CL1h4
| 3,611
|
Indexing bug after dataset.select()
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17096858?v=4",
"events_url": "https://api.github.com/users/kamalkraj/events{/privacy}",
"followers_url": "https://api.github.com/users/kamalkraj/followers",
"following_url": "https://api.github.com/users/kamalkraj/following{/other_user}",
"gists_url": "https://api.github.com/users/kamalkraj/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kamalkraj",
"id": 17096858,
"login": "kamalkraj",
"node_id": "MDQ6VXNlcjE3MDk2ODU4",
"organizations_url": "https://api.github.com/users/kamalkraj/orgs",
"received_events_url": "https://api.github.com/users/kamalkraj/received_events",
"repos_url": "https://api.github.com/users/kamalkraj/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kamalkraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kamalkraj/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kamalkraj"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
] | null |
[
"Hi! Thanks for reporting! I've opened a PR with the fix."
] | 2022-01-21T12:09:30Z
| 2022-01-27T18:16:22Z
| 2022-01-27T18:16:22Z
|
NONE
| null | null | null |
## Describe the bug
A clear and concise description of what the bug is.
Dataset indexing is not working as expected after `dataset.select(range(100))`
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import datasets
task_to_keys = {
"cola": ("sentence", None),
"mnli": ("premise", "hypothesis"),
"mrpc": ("sentence1", "sentence2"),
"qnli": ("question", "sentence"),
"qqp": ("question1", "question2"),
"rte": ("sentence1", "sentence2"),
"sst2": ("sentence", None),
"stsb": ("sentence1", "sentence2"),
"wnli": ("sentence1", "sentence2"),
}
task_name = "sst2"
raw_datasets = datasets.load_dataset("glue", task_name)
train_dataset = raw_datasets["train"]
print("before select: ",train_dataset[-2:])
# before select: {'sentence': ['a patient viewer ', 'this new jangle of noise , mayhem and stupidity must be a serious contender for the title . '], 'label': [1, 0], 'idx': [67347, 67348]}
train_dataset = train_dataset.select(range(100))
print("after select: ",train_dataset[-2:])
# after select: {'sentence': [], 'label': [], 'idx': []}
```
link to colab: https://colab.research.google.com/drive/1LngeRC9f0jE7eSQ4Kh1cIeb411lRXQD-?usp=sharing
## Expected results
A clear and concise description of the expected results.
showing 98, 99 index data
## Actual results
Specify the actual results or traceback.
empty
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 3.0.0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3611/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3611/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1924
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1924/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1924/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1924/events
|
https://github.com/huggingface/datasets/issues/1924
| 813,599,733
|
MDU6SXNzdWU4MTM1OTk3MzM=
| 1,924
|
Anonymous Dataset Addition (i.e Anonymous PR?)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22492839?v=4",
"events_url": "https://api.github.com/users/PierreColombo/events{/privacy}",
"followers_url": "https://api.github.com/users/PierreColombo/followers",
"following_url": "https://api.github.com/users/PierreColombo/following{/other_user}",
"gists_url": "https://api.github.com/users/PierreColombo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PierreColombo",
"id": 22492839,
"login": "PierreColombo",
"node_id": "MDQ6VXNlcjIyNDkyODM5",
"organizations_url": "https://api.github.com/users/PierreColombo/orgs",
"received_events_url": "https://api.github.com/users/PierreColombo/received_events",
"repos_url": "https://api.github.com/users/PierreColombo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PierreColombo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PierreColombo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PierreColombo"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi !\r\nI guess you can add a dataset without the fields that must be kept anonymous, and then update those when the anonymity period is over.\r\nYou can also make the PR from an anonymous org.\r\nPinging @yjernite just to make sure it's ok",
"Hello,\r\nI would prefer to do the reverse: adding a link to an anonymous paper without the people names/institution in the PR. Would it be conceivable ?\r\nCheers\r\n",
"Sure, I think it's ok on our side",
"Yup, sounds good!"
] | 2021-02-22T15:22:30Z
| 2022-10-05T13:07:11Z
| 2022-10-05T13:07:11Z
|
CONTRIBUTOR
| null | null | null |
Hello,
Thanks a lot for your librairy.
We plan to submit a paper on OpenReview using the Anonymous setting. Is it possible to add a new dataset without breaking the anonimity, with a link to the paper ?
Cheers
@eusip
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1924/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1924/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2847
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2847/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2847/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2847/events
|
https://github.com/huggingface/datasets/pull/2847
| 981,589,693
|
MDExOlB1bGxSZXF1ZXN0NzIxNjA3OTA0
| 2,847
|
fix regex to accept negative timezone
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7156771?v=4",
"events_url": "https://api.github.com/users/jadermcs/events{/privacy}",
"followers_url": "https://api.github.com/users/jadermcs/followers",
"following_url": "https://api.github.com/users/jadermcs/following{/other_user}",
"gists_url": "https://api.github.com/users/jadermcs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jadermcs",
"id": 7156771,
"login": "jadermcs",
"node_id": "MDQ6VXNlcjcxNTY3NzE=",
"organizations_url": "https://api.github.com/users/jadermcs/orgs",
"received_events_url": "https://api.github.com/users/jadermcs/received_events",
"repos_url": "https://api.github.com/users/jadermcs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jadermcs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jadermcs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jadermcs"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-08-27T20:54:05Z
| 2021-09-13T20:39:50Z
| 2021-09-07T09:34:23Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2847.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2847",
"merged_at": "2021-09-07T09:34:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2847.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2847"
}
|
fix #2846
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2847/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2847/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5726
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5726/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5726/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5726/events
|
https://github.com/huggingface/datasets/issues/5726
| 1,660,944,807
|
I_kwDODunzps5jAAGn
| 5,726
|
Fallback JSON Dataset loading does not load all values when features specified manually
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3610788?v=4",
"events_url": "https://api.github.com/users/myluki2000/events{/privacy}",
"followers_url": "https://api.github.com/users/myluki2000/followers",
"following_url": "https://api.github.com/users/myluki2000/following{/other_user}",
"gists_url": "https://api.github.com/users/myluki2000/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/myluki2000",
"id": 3610788,
"login": "myluki2000",
"node_id": "MDQ6VXNlcjM2MTA3ODg=",
"organizations_url": "https://api.github.com/users/myluki2000/orgs",
"received_events_url": "https://api.github.com/users/myluki2000/received_events",
"repos_url": "https://api.github.com/users/myluki2000/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/myluki2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/myluki2000/subscriptions",
"type": "User",
"url": "https://api.github.com/users/myluki2000"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"Thanks for reporting, @myluki2000.\r\n\r\nI am working on a fix."
] | 2023-04-10T15:22:14Z
| 2023-04-21T06:35:28Z
| 2023-04-21T06:35:28Z
|
NONE
| null | null | null |
### Describe the bug
The fallback JSON dataset loader located here:
https://github.com/huggingface/datasets/blob/1c4ec00511868bd881e84a6f7e0333648d833b8e/src/datasets/packaged_modules/json/json.py#L130-L153
does not load the values of features correctly when features are specified manually and not all features have a value in the first entry of the dataset. I'm pretty sure this is not supposed to be expected bahavior?
To fix this you'd have to change this line:
https://github.com/huggingface/datasets/blob/1c4ec00511868bd881e84a6f7e0333648d833b8e/src/datasets/packaged_modules/json/json.py#L140
To pass a schema to pyarrow which has the same structure as the features argument passed to the load_dataset() method.
### Steps to reproduce the bug
Consider a dataset JSON like this:
```
[
{
"instruction": "Do stuff",
"output": "Answer stuff"
},
{
"instruction": "Do stuff2",
"input": "Additional Input2",
"output": "Answer stuff2"
}
]
```
Using this code to load the dataset:
```
from datasets import load_dataset, Features, Value
features = {
"instruction": Value("string"),
"input": Value("string"),
"output": Value("string")
}
features = Features(features)
ds = load_dataset("json", data_files="./ds.json", features=features)
for row in ds["train"]:
print(row)
```
we get a dataset that looks like this:
| **Instruction** | **Input** | **Output** |
|-----------------|--------------------|-----------------|
| "Do stuff" | None | "Answer Stuff" |
| "Do stuff2" | None | "Answer Stuff2" |
### Expected behavior
The input column should contain values other than None for dataset entries that have the "input" attribute set:
| **Instruction** | **Input** | **Output** |
|-----------------|--------------------|-----------------|
| "Do stuff" | None | "Answer Stuff" |
| "Do stuff2" | "Additional Input2" | "Answer Stuff2" |
### Environment info
Python 3.10.10
Datasets 2.11.0
Windows 10
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5726/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5726/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/792
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/792/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/792/comments
|
https://api.github.com/repos/huggingface/datasets/issues/792/events
|
https://github.com/huggingface/datasets/issues/792
| 734,693,652
|
MDU6SXNzdWU3MzQ2OTM2NTI=
| 792
|
KILT dataset: empty string in triviaqa input field
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PaulLerner",
"id": 25532159,
"login": "PaulLerner",
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PaulLerner"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Just found out about https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md\r\n(Not very clear in https://huggingface.co/datasets/kilt_tasks links to http://github.com/huggingface/datasets/datasets/kilt_tasks/README.md which is dead, closing the issue though :))"
] | 2020-11-02T17:33:54Z
| 2020-11-05T10:34:59Z
| 2020-11-05T10:34:59Z
|
CONTRIBUTOR
| null | null | null |
# What happened
Both train and test splits of the triviaqa dataset (part of the KILT benchmark) seem to have empty string in their input field (unlike the natural questions dataset, part of the same benchmark)
# Versions
KILT version is `1.0.0`
`datasets` version is `1.1.2`
[more here](https://gist.github.com/PaulLerner/3768c8d25f723edbac20d99b6a4056c1)
# How to reproduce
```py
In [1]: from datasets import load_dataset
In [4]: dataset = load_dataset("kilt_tasks")
# everything works fine, removed output for a better readibility
Dataset kilt_tasks downloaded and prepared to /people/lerner/.cache/huggingface/datasets/kilt_tasks/all_tasks/1.0.0/821c4295a2c35db2847585918d9c47d7f028f1a26b78825d8e77cd3aeb2621a1. Subsequent calls will reuse this data.
# empty string in triviaqa input field
In [36]: dataset['train_triviaqa'][0]
Out[36]:
{'id': 'dpql_5197',
'input': '',
'meta': {'left_context': '',
'mention': '',
'obj_surface': {'text': []},
'partial_evidence': {'end_paragraph_id': [],
'meta': [],
'section': [],
'start_paragraph_id': [],
'title': [],
'wikipedia_id': []},
'right_context': '',
'sub_surface': {'text': []},
'subj_aliases': {'text': []},
'template_questions': {'text': []}},
'output': {'answer': ['five £', '5 £', '£5', 'five £'],
'meta': [],
'provenance': [{'bleu_score': [1.0],
'end_character': [248],
'end_paragraph_id': [30],
'meta': [],
'section': ['Section::::Question of legal tender.\n'],
'start_character': [246],
'start_paragraph_id': [30],
'title': ['Banknotes of the pound sterling'],
'wikipedia_id': ['270680']}]}}
In [35]: dataset['train_triviaqa']['input'][:10]
Out[35]: ['', '', '', '', '', '', '', '', '', '']
# same with test set
In [37]: dataset['test_triviaqa']['input'][:10]
Out[37]: ['', '', '', '', '', '', '', '', '', '']
# works fine with natural questions
In [34]: dataset['train_nq']['input'][:10]
Out[34]:
['how i.met your mother who is the mother',
'who had the most wins in the nfl',
'who played mantis guardians of the galaxy 2',
'what channel is the premier league on in france',
"god's not dead a light in the darkness release date",
'who is the current president of un general assembly',
'when do the eclipse supposed to take place',
'what is the name of the sea surrounding dubai',
'who holds the nba record for most points in a career',
'when did the new maze runner movie come out']
```
Stay safe :)
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/792/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/792/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2005
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2005/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2005/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2005/events
|
https://github.com/huggingface/datasets/issues/2005
| 824,275,035
|
MDU6SXNzdWU4MjQyNzUwMzU=
| 2,005
|
Setting to torch format not working with torchvision and MNIST
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Adding to the previous information, I think `torch.utils.data.DataLoader` is doing some conversion. \r\nWhat I tried:\r\n```python\r\ntrain_dataset = load_dataset('mnist')\r\n```\r\nI don't use any `map` or `set_format` or any `transform`. I use this directly, and try to load batches using the `DataLoader` with batch size 2, I get an output like this for the `image`:\r\n\r\n```\r\n[[tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor...\r\n```\r\nFor `label`, it works fine:\r\n```\r\ntensor([7, 6])\r\n```\r\nNote that I didn't specify conversion to torch tensors anywhere.\r\n\r\nBasically, there are two problems here:\r\n1. `dataset.map` doesn't return tensor type objects, even though it uses the transforms, the grayscale conversion in transform was done, but the output was lists only.\r\n2. The `DataLoader` performs its own conversion, which may be not desired.\r\n\r\nI understand that we can't change `DataLoader` because it is a torch functionality, however, is there a way we can handle image data to allow using it with torch `DataLoader` and `torchvision` properly?\r\n\r\nI think if the `image` was a torch tensor (N,H,W,C), or a list of torch tensors (H,W,C), before it is passed to `DataLoader`, then we might not face this issue. ",
"What's the feature types of your new dataset after `.map` ?\r\n\r\nCan you try with adding `features=` in the `.map` call in order to set the \"image\" feature type to `Array2D` ?\r\nThe default feature type is lists of lists, we've not implemented shape verification to use ArrayXD instead of nested lists yet",
"Hi @lhoestq\r\n\r\nRaw feature types are like this:\r\n```\r\nImage:\r\n<class 'list'> 60000 #(type, len)\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'int'>\r\nLabel:\r\n<class 'list'> 60000\r\n<class 'int'>\r\n```\r\nInside the `prepare_feature` method with batch size 100000 , after processing, they are like this:\r\n\r\nInside Prepare Train Features\r\n```\r\nImage:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter map, the feature type are like this:\r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'float'>\r\nLabel:\r\n<class 'list'> 60000\r\n<class 'int'>\r\n```\r\n\r\nAfter dataloader with batch size 2, the batch features are like this:\r\n```\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n<hr>\r\n\r\nWhen I was setting the format of `train_dataset` to 'torch' after mapping - \r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nCorresponding DataLoader batch:\r\n```\r\nFrom DataLoader batch features\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nI will check with features and get back.\r\n\r\n\r\n\r\n",
"Hi @lhoestq\r\n\r\n# Using Array3D\r\nI tried this:\r\n```python\r\nfeatures = datasets.Features({\r\n \"image\": datasets.Array3D(shape=(1,28,28),dtype=\"float32\"),\r\n \"label\": datasets.features.ClassLabel(names=[\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"]),\r\n })\r\ntrain_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)\r\n```\r\nand it didn't fix the issue.\r\n\r\nDuring the `prepare_train_features:\r\n```\r\nImage:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter the `map`:\r\n\r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'float'>\r\nLabel:\r\n<class 'list'> 60000\r\n<class 'int'>\r\n```\r\nFrom the DataLoader batch:\r\n```\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\nIt is the same as before.\r\n\r\n---\r\n\r\nUsing `datasets.Sequence(datasets.Array2D(shape=(28,28),dtype=\"float32\"))` gave an error during `map`:\r\n\r\n```python\r\nArrowNotImplementedError Traceback (most recent call last)\r\n<ipython-input-95-d28e69289084> in <module>()\r\n 3 \"label\": datasets.features.ClassLabel(names=[\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"]),\r\n 4 })\r\n----> 5 train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)\r\n\r\n15 frames\r\n/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc)\r\n 446 num_proc=num_proc,\r\n 447 )\r\n--> 448 for k, dataset in self.items()\r\n 449 }\r\n 450 )\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py in <dictcomp>(.0)\r\n 446 num_proc=num_proc,\r\n 447 )\r\n--> 448 for k, dataset in self.items()\r\n 449 }\r\n 450 )\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)\r\n 1307 fn_kwargs=fn_kwargs,\r\n 1308 new_fingerprint=new_fingerprint,\r\n-> 1309 update_data=update_data,\r\n 1310 )\r\n 1311 else:\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)\r\n 202 }\r\n 203 # apply actual function\r\n--> 204 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 205 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 206 # re-apply format to the output\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)\r\n 335 # Call actual function\r\n 336 \r\n--> 337 out = func(self, *args, **kwargs)\r\n 338 \r\n 339 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data)\r\n 1580 if update_data:\r\n 1581 batch = cast_to_python_objects(batch)\r\n-> 1582 writer.write_batch(batch)\r\n 1583 if update_data:\r\n 1584 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)\r\n 274 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type)\r\n 275 typed_sequence_examples[col] = typed_sequence\r\n--> 276 pa_table = pa.Table.from_pydict(typed_sequence_examples)\r\n 277 self.write_table(pa_table, writer_batch_size)\r\n 278 \r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol()\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in __arrow_array__(self, type)\r\n 95 out = pa.ExtensionArray.from_storage(type, pa.array(self.data, type.storage_dtype))\r\n 96 else:\r\n---> 97 out = pa.array(self.data, type=type)\r\n 98 if trying_type and out[0].as_py() != self.data[0]:\r\n 99 raise TypeError(\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowNotImplementedError: extension\r\n```",
"# Convert raw tensors to torch format\r\nStrangely, converting to torch tensors works perfectly on `raw_dataset`:\r\n```python\r\nraw_dataset.set_format('torch',columns=['image','label'])\r\n```\r\nTypes:\r\n```\r\nImage:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nUsing this for transforms:\r\n```python\r\ndef prepare_features(examples):\r\n images = []\r\n labels = []\r\n for example_idx, example in enumerate(examples[\"image\"]):\r\n if transform is not None:\r\n images.append(transform(\r\n examples[\"image\"][example_idx].numpy()\r\n ))\r\n else:\r\n images.append(examples[\"image\"][example_idx].numpy())\r\n labels.append(examples[\"label\"][example_idx])\r\n output = {\"label\":labels, \"image\":images}\r\n return output\r\n```\r\n\r\nInside `prepare_train_features`:\r\n```\r\nImage:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter `map`:\r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\nDataLoader batch:\r\n\r\n```\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n\r\n---\r\n\r\n## Using `torch` format:\r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\nDataLoader batches:\r\n\r\n```\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n\r\n---\r\n## Using the features - `Array3D`:\r\n\r\n```\r\nImage:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter `map`:\r\n```\r\nImage:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter DataLoader `batch`:\r\n```\r\nImage:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nThe last one works perfectly.\r\n\r\n\r\n\r\nI wonder why this worked, and others didn't.\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"Concluding, the way it works right now is:\r\n\r\n1. Converting raw dataset to `torch` format.\r\n2. Use the transform and apply using `map`, ensure the returned values are tensors. \r\n3. When mapping, use `features` with `image` being `Array3D` type.",
"What the dataset returns depends on the feature type.\r\nFor a feature type that is Sequence(Sequence(Sequence(Value(\"uint8\")))), a dataset formatted as \"torch\" return lists of lists of tensors. This is because the lists lengths may vary.\r\nFor a feature type that is Array3D on the other hand it returns one tensor. This is because the size of the tensor is fixed and defined bu the Array3D type.",
"Okay, that makes sense.\r\nRaw images are list of Array2D, hence we get a single tensor when `set_format` is used. But, why should I need to convert the raw images to `torch` format when `map` does this internally?\r\n\r\nUsing `Array3D` did not work with `map` when raw images weren't `set_format`ted to torch type.",
"I understand that `map` needs to know what kind of output tensors are expected, and thus converting the raw dataset to `torch` format is necessary. Closing the issue since it is resolved."
] | 2021-03-08T07:38:11Z
| 2021-03-09T17:58:13Z
| 2021-03-09T17:58:13Z
|
CONTRIBUTOR
| null | null | null |
Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```python
def prepare_features(examples):
images = []
labels = []
for example_idx, example in enumerate(examples["image"]):
if transform is not None:
images.append(transform(
np.array(examples["image"][example_idx], dtype=np.uint8)
))
else:
images.append(torch.tensor(np.array(examples["image"][example_idx], dtype=np.uint8)))
labels.append(torch.tensor(examples["label"][example_idx]))
output = {"label":labels, "image":images}
return output
raw_dataset = load_dataset('mnist')
train_dataset = raw_dataset.map(prepare_features, batched=True, batch_size=10000)
train_dataset.set_format("torch",columns=["image","label"])
```
After this, I check the type of the following:
```python
print(type(train_dataset["train"]["label"]))
print(type(train_dataset["train"]["image"][0]))
```
This leads to the following output:
```python
<class 'torch.Tensor'>
<class 'list'>
```
I use `torch.utils.DataLoader` for batches, the type of `batch["train"]["image"]` is also `<class 'list'>`.
I don't understand why only the `label` is converted to a torch tensor, why does the image not get converted? How can I fix this issue?
Thanks,
Gunjan
EDIT:
I just checked the shapes, and the types, `batch[image]` is a actually a list of list of tensors. Shape is (1,28,2,28), where `batch_size` is 2. I don't understand why this is happening. Ideally it should be a tensor of shape (2,1,28,28).
EDIT 2:
Inside `prepare_train_features`, the shape of `images[0]` is `torch.Size([1,28,28])`, the conversion is working. However, the output of the `map` is a list of list of list of list.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2005/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2005/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/288
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/288/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/288/comments
|
https://api.github.com/repos/huggingface/datasets/issues/288/events
|
https://github.com/huggingface/datasets/issues/288
| 641,888,610
|
MDU6SXNzdWU2NDE4ODg2MTA=
| 288
|
Error at the first example in README: AttributeError: module 'dill' has no attribute '_dill'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/14964542?v=4",
"events_url": "https://api.github.com/users/wutong8023/events{/privacy}",
"followers_url": "https://api.github.com/users/wutong8023/followers",
"following_url": "https://api.github.com/users/wutong8023/following{/other_user}",
"gists_url": "https://api.github.com/users/wutong8023/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wutong8023",
"id": 14964542,
"login": "wutong8023",
"node_id": "MDQ6VXNlcjE0OTY0NTQy",
"organizations_url": "https://api.github.com/users/wutong8023/orgs",
"received_events_url": "https://api.github.com/users/wutong8023/received_events",
"repos_url": "https://api.github.com/users/wutong8023/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wutong8023/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wutong8023/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wutong8023"
}
|
[] |
closed
| false
| null |
[] | null |
[
"It looks like the bug comes from `dill`. Which version of `dill` are you using ?",
"Thank you. It is version 0.2.6, which version is better?",
"0.2.6 is three years old now, maybe try a more recent one, e.g. the current 0.3.2 if you can?",
"Thanks guys! I upgraded dill and it works.",
"Awesome"
] | 2020-06-19T11:01:22Z
| 2020-06-21T09:05:11Z
| 2020-06-21T09:05:11Z
|
NONE
| null | null | null |
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:469: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:470: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:471: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:472: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:473: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:476: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/Users/parasol_tree/anaconda3/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
return f(*args, **kwds)
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Traceback (most recent call last):
File "/Users/parasol_tree/Resource/019 - Github/AcademicEnglishToolkit /test.py", line 7, in <module>
import nlp
File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/__init__.py", line 27, in <module>
from .arrow_dataset import Dataset
File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/arrow_dataset.py", line 31, in <module>
from nlp.utils.py_utils import dumps
File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/__init__.py", line 20, in <module>
from .download_manager import DownloadManager, GenerateMode
File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/download_manager.py", line 25, in <module>
from .py_utils import flatten_nested, map_nested, size_str
File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/py_utils.py", line 244, in <module>
class Pickler(dill.Pickler):
File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/py_utils.py", line 247, in Pickler
dispatch = dill._dill.MetaCatchingDict(dill.Pickler.dispatch.copy())
AttributeError: module 'dill' has no attribute '_dill'
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/288/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/288/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6377
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6377/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6377/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6377/events
|
https://github.com/huggingface/datasets/issues/6377
| 1,973,937,612
|
I_kwDODunzps51p-XM
| 6,377
|
Support pyarrow 14.0.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[] | 2023-11-02T10:22:08Z
| 2023-11-02T15:15:45Z
| 2023-11-02T15:15:45Z
|
MEMBER
| null | null | null |
Support pyarrow 14.0.0 by fixing the root cause of:
- #6374
and revert:
- #6375
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6377/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6377/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4304
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4304/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4304/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4304/events
|
https://github.com/huggingface/datasets/issues/4304
| 1,231,047,051
|
I_kwDODunzps5JYEmL
| 4,304
|
Language code search does direct matches
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/leondz",
"id": 121934,
"login": "leondz",
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"repos_url": "https://api.github.com/users/leondz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/leondz"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] | null |
[
"Thanks for reporting ! I forwarded the issue to the front-end team :)\r\n\r\nWill keep you posted !\r\n\r\nI also changed the tagging app to suggest two letters code for now."
] | 2022-05-10T11:59:16Z
| 2022-05-10T12:38:42Z
| null |
CONTRIBUTOR
| null | null | null |
## Describe the bug
Hi. Searching for bcp47 tags that are just the language prefix (e.g. `sq` or `da`) excludes datasets that have added extra information in their language metadata (e.g. `sq-AL` or `da-bornholm`). The example codes given in the [tagging app](https://huggingface.co/spaces/huggingface/datasets-tagging) encourages addition of the additional codes ("_expected format is BCP47 tags separated for ';' e.g. 'en-US;fr-FR'_") but this would lead to those datasets being hidden in datasets search.
## Steps to reproduce the bug
1. Add a dataset using a variant tag (e.g. [`sq-AL`](https://huggingface.co/datasets?languages=languages:sq-AL))
2. Look for datasets using the full code
3. Note that they're missing when just the language is searched for (e.g. [`sq`](https://huggingface.co/datasets?languages=languages:sq))
Some datasets are already affected by this - e.g. `AmazonScience/massive` is listed under `sq-AL` but not `sq`.
One workaround is for dataset creators to add an additional root language tag to dataset YAML metadata, but it's unclear how to communicate this. It might be possible to index the search on `languagecode.split('-')[0]` but I wanted to float this issue before trying to write any code :)
## Expected results
Datasets using longer bcp47 tags also appear under searches for just the language code; e.g. Quebecois datasets (`fr-CA`) would come up when looking for French datasets with no region specification (`fr`), or US English (`en-US`) datasets would come up when searching for English datasets (`en`).
## Actual results
The language codes seem to be directly string matched, excluding datasets with specific language tags from non-specific searches.
## Environment info
(web app)
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4304/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4304/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/5867
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5867/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5867/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5867/events
|
https://github.com/huggingface/datasets/pull/5867
| 1,710,656,067
|
PR_kwDODunzps5QizOn
| 5,867
|
Add logic for hashing modules/functions optimized with `torch.compile`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006598 / 0.011353 (-0.004755) | 0.004565 / 0.011008 (-0.006443) | 0.099063 / 0.038508 (0.060555) | 0.028334 / 0.023109 (0.005225) | 0.323539 / 0.275898 (0.047641) | 0.372462 / 0.323480 (0.048982) | 0.005120 / 0.007986 (-0.002865) | 0.004797 / 0.004328 (0.000468) | 0.076862 / 0.004250 (0.072611) | 0.038021 / 0.037052 (0.000968) | 0.337801 / 0.258489 (0.079312) | 0.374601 / 0.293841 (0.080760) | 0.031158 / 0.128546 (-0.097389) | 0.011672 / 0.075646 (-0.063974) | 0.324913 / 0.419271 (-0.094359) | 0.051702 / 0.043533 (0.008169) | 0.339440 / 0.255139 (0.084301) | 0.372502 / 0.283200 (0.089303) | 0.097590 / 0.141683 (-0.044093) | 1.534238 / 1.452155 (0.082083) | 1.599701 / 1.492716 (0.106985) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204101 / 0.018006 (0.186095) | 0.416981 / 0.000490 (0.416491) | 0.003436 / 0.000200 (0.003236) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023527 / 0.037411 (-0.013885) | 0.095748 / 0.014526 (0.081222) | 0.104498 / 0.176557 (-0.072059) | 0.164000 / 0.737135 (-0.573135) | 0.109170 / 0.296338 (-0.187168) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418239 / 0.215209 (0.203030) | 4.153959 / 2.077655 (2.076305) | 1.856687 / 1.504120 (0.352567) | 1.657818 / 1.541195 (0.116623) | 1.715146 / 1.468490 (0.246656) | 0.700673 / 4.584777 (-3.884103) | 3.401060 / 3.745712 (-0.344652) | 2.891045 / 5.269862 (-2.378816) | 1.519433 / 4.565676 (-3.046243) | 0.083151 / 0.424275 (-0.341124) | 0.012352 / 0.007607 (0.004745) | 0.523901 / 0.226044 (0.297856) | 5.288871 / 2.268929 (3.019943) | 2.322806 / 55.444624 (-53.121818) | 1.982223 / 6.876477 (-4.894253) | 2.074883 / 2.142072 (-0.067189) | 0.812400 / 4.805227 (-3.992827) | 0.152183 / 6.500664 (-6.348481) | 0.066538 / 0.075469 (-0.008931) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223220 / 1.841788 (-0.618567) | 14.024391 / 8.074308 (5.950083) | 14.166657 / 10.191392 (3.975265) | 0.146017 / 0.680424 (-0.534407) | 0.016698 / 0.534201 (-0.517503) | 0.380779 / 0.579283 (-0.198504) | 0.387113 / 0.434364 (-0.047251) | 0.446329 / 0.540337 (-0.094009) | 0.523819 / 1.386936 (-0.863118) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006803 / 0.011353 (-0.004549) | 0.004554 / 0.011008 (-0.006454) | 0.077406 / 0.038508 (0.038897) | 0.028495 / 0.023109 (0.005386) | 0.358847 / 0.275898 (0.082949) | 0.393256 / 0.323480 (0.069776) | 0.005317 / 0.007986 (-0.002669) | 0.004690 / 0.004328 (0.000362) | 0.075842 / 0.004250 (0.071592) | 0.041985 / 0.037052 (0.004933) | 0.367546 / 0.258489 (0.109057) | 0.408019 / 0.293841 (0.114178) | 0.030712 / 0.128546 (-0.097834) | 0.011756 / 0.075646 (-0.063891) | 0.086002 / 0.419271 (-0.333269) | 0.038949 / 0.043533 (-0.004583) | 0.361045 / 0.255139 (0.105906) | 0.381728 / 0.283200 (0.098528) | 0.090692 / 0.141683 (-0.050991) | 1.493251 / 1.452155 (0.041097) | 1.584566 / 1.492716 (0.091850) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217470 / 0.018006 (0.199463) | 0.429955 / 0.000490 (0.429465) | 0.000394 / 0.000200 (0.000194) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026223 / 0.037411 (-0.011189) | 0.102570 / 0.014526 (0.088045) | 0.110848 / 0.176557 (-0.065709) | 0.162413 / 0.737135 (-0.574722) | 0.114579 / 0.296338 (-0.181760) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.464957 / 0.215209 (0.249748) | 4.656597 / 2.077655 (2.578942) | 2.279755 / 1.504120 (0.775636) | 2.230263 / 1.541195 (0.689068) | 2.341540 / 1.468490 (0.873050) | 0.699505 / 4.584777 (-3.885272) | 3.389003 / 3.745712 (-0.356709) | 1.867526 / 5.269862 (-3.402336) | 1.167171 / 4.565676 (-3.398506) | 0.083451 / 0.424275 (-0.340824) | 0.012348 / 0.007607 (0.004741) | 0.584205 / 0.226044 (0.358161) | 5.853623 / 2.268929 (3.584694) | 2.646650 / 55.444624 (-52.797974) | 2.286504 / 6.876477 (-4.589973) | 2.327536 / 2.142072 (0.185464) | 0.811209 / 4.805227 (-3.994018) | 0.151842 / 6.500664 (-6.348822) | 0.067783 / 0.075469 (-0.007686) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.330427 / 1.841788 (-0.511360) | 14.668981 / 8.074308 (6.594673) | 13.321154 / 10.191392 (3.129762) | 0.164383 / 0.680424 (-0.516040) | 0.016667 / 0.534201 (-0.517534) | 0.383439 / 0.579283 (-0.195844) | 0.392988 / 0.434364 (-0.041376) | 0.443318 / 0.540337 (-0.097020) | 0.537849 / 1.386936 (-0.849087) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006379 / 0.011353 (-0.004974) | 0.004691 / 0.011008 (-0.006317) | 0.098047 / 0.038508 (0.059539) | 0.028126 / 0.023109 (0.005017) | 0.327143 / 0.275898 (0.051245) | 0.362482 / 0.323480 (0.039002) | 0.004953 / 0.007986 (-0.003033) | 0.003386 / 0.004328 (-0.000943) | 0.076222 / 0.004250 (0.071971) | 0.037583 / 0.037052 (0.000531) | 0.329661 / 0.258489 (0.071172) | 0.365945 / 0.293841 (0.072104) | 0.030455 / 0.128546 (-0.098091) | 0.011397 / 0.075646 (-0.064249) | 0.323889 / 0.419271 (-0.095383) | 0.043719 / 0.043533 (0.000186) | 0.331499 / 0.255139 (0.076360) | 0.359357 / 0.283200 (0.076158) | 0.088904 / 0.141683 (-0.052779) | 1.458584 / 1.452155 (0.006429) | 1.549375 / 1.492716 (0.056658) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195808 / 0.018006 (0.177802) | 0.411148 / 0.000490 (0.410659) | 0.003602 / 0.000200 (0.003402) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023278 / 0.037411 (-0.014133) | 0.097317 / 0.014526 (0.082791) | 0.102669 / 0.176557 (-0.073888) | 0.168203 / 0.737135 (-0.568933) | 0.105205 / 0.296338 (-0.191133) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424800 / 0.215209 (0.209591) | 4.228444 / 2.077655 (2.150790) | 1.895544 / 1.504120 (0.391424) | 1.698793 / 1.541195 (0.157598) | 1.717931 / 1.468490 (0.249441) | 0.702251 / 4.584777 (-3.882526) | 3.407013 / 3.745712 (-0.338699) | 2.784634 / 5.269862 (-2.485228) | 1.491317 / 4.565676 (-3.074359) | 0.082926 / 0.424275 (-0.341350) | 0.012320 / 0.007607 (0.004713) | 0.524188 / 0.226044 (0.298143) | 5.249798 / 2.268929 (2.980870) | 2.358953 / 55.444624 (-53.085672) | 1.985922 / 6.876477 (-4.890555) | 2.034293 / 2.142072 (-0.107779) | 0.815671 / 4.805227 (-3.989556) | 0.152583 / 6.500664 (-6.348081) | 0.066687 / 0.075469 (-0.008782) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.210901 / 1.841788 (-0.630886) | 13.621765 / 8.074308 (5.547457) | 14.213215 / 10.191392 (4.021823) | 0.143346 / 0.680424 (-0.537078) | 0.016904 / 0.534201 (-0.517297) | 0.379795 / 0.579283 (-0.199489) | 0.381287 / 0.434364 (-0.053077) | 0.449086 / 0.540337 (-0.091251) | 0.538792 / 1.386936 (-0.848144) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006207 / 0.011353 (-0.005146) | 0.004404 / 0.011008 (-0.006604) | 0.076363 / 0.038508 (0.037854) | 0.027335 / 0.023109 (0.004226) | 0.370967 / 0.275898 (0.095069) | 0.401936 / 0.323480 (0.078456) | 0.004835 / 0.007986 (-0.003151) | 0.004559 / 0.004328 (0.000231) | 0.074964 / 0.004250 (0.070713) | 0.038254 / 0.037052 (0.001202) | 0.374799 / 0.258489 (0.116310) | 0.425191 / 0.293841 (0.131350) | 0.035290 / 0.128546 (-0.093256) | 0.011379 / 0.075646 (-0.064267) | 0.085911 / 0.419271 (-0.333360) | 0.043073 / 0.043533 (-0.000460) | 0.373557 / 0.255139 (0.118418) | 0.395179 / 0.283200 (0.111979) | 0.098602 / 0.141683 (-0.043081) | 1.467234 / 1.452155 (0.015079) | 1.571868 / 1.492716 (0.079152) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221848 / 0.018006 (0.203842) | 0.394943 / 0.000490 (0.394454) | 0.002983 / 0.000200 (0.002783) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024385 / 0.037411 (-0.013027) | 0.100087 / 0.014526 (0.085561) | 0.104897 / 0.176557 (-0.071660) | 0.156150 / 0.737135 (-0.580985) | 0.109113 / 0.296338 (-0.187226) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441995 / 0.215209 (0.226786) | 4.415423 / 2.077655 (2.337769) | 2.148791 / 1.504120 (0.644671) | 1.947061 / 1.541195 (0.405866) | 1.954807 / 1.468490 (0.486317) | 0.690245 / 4.584777 (-3.894532) | 3.372766 / 3.745712 (-0.372946) | 1.851073 / 5.269862 (-3.418789) | 1.155558 / 4.565676 (-3.410118) | 0.082796 / 0.424275 (-0.341479) | 0.012845 / 0.007607 (0.005238) | 0.548173 / 0.226044 (0.322129) | 5.530984 / 2.268929 (3.262056) | 2.665360 / 55.444624 (-52.779264) | 2.324266 / 6.876477 (-4.552211) | 2.329397 / 2.142072 (0.187324) | 0.801481 / 4.805227 (-4.003746) | 0.152145 / 6.500664 (-6.348519) | 0.067915 / 0.075469 (-0.007554) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291488 / 1.841788 (-0.550299) | 13.912143 / 8.074308 (5.837835) | 12.975493 / 10.191392 (2.784101) | 0.129915 / 0.680424 (-0.550509) | 0.016516 / 0.534201 (-0.517685) | 0.386979 / 0.579283 (-0.192304) | 0.389163 / 0.434364 (-0.045201) | 0.443324 / 0.540337 (-0.097014) | 0.533744 / 1.386936 (-0.853192) |\n\n</details>\n</details>\n\n\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5867). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008635 / 0.011353 (-0.002717) | 0.006014 / 0.011008 (-0.004995) | 0.116314 / 0.038508 (0.077806) | 0.041113 / 0.023109 (0.018004) | 0.358564 / 0.275898 (0.082666) | 0.397547 / 0.323480 (0.074067) | 0.007012 / 0.007986 (-0.000974) | 0.004638 / 0.004328 (0.000310) | 0.086509 / 0.004250 (0.082259) | 0.056731 / 0.037052 (0.019678) | 0.358859 / 0.258489 (0.100370) | 0.425339 / 0.293841 (0.131498) | 0.041780 / 0.128546 (-0.086767) | 0.014203 / 0.075646 (-0.061443) | 0.398240 / 0.419271 (-0.021031) | 0.060180 / 0.043533 (0.016647) | 0.352887 / 0.255139 (0.097748) | 0.381793 / 0.283200 (0.098594) | 0.148578 / 0.141683 (0.006895) | 1.749483 / 1.452155 (0.297328) | 1.869765 / 1.492716 (0.377049) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244435 / 0.018006 (0.226428) | 0.499545 / 0.000490 (0.499055) | 0.004576 / 0.000200 (0.004376) | 0.000147 / 0.000054 (0.000093) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031163 / 0.037411 (-0.006249) | 0.131082 / 0.014526 (0.116556) | 0.137442 / 0.176557 (-0.039114) | 0.203783 / 0.737135 (-0.533352) | 0.144068 / 0.296338 (-0.152270) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.503587 / 0.215209 (0.288378) | 5.011953 / 2.077655 (2.934299) | 2.366968 / 1.504120 (0.862848) | 2.130914 / 1.541195 (0.589719) | 2.243560 / 1.468490 (0.775070) | 0.856719 / 4.584777 (-3.728058) | 4.707445 / 3.745712 (0.961733) | 2.506166 / 5.269862 (-2.763696) | 1.590400 / 4.565676 (-2.975277) | 0.102075 / 0.424275 (-0.322200) | 0.014499 / 0.007607 (0.006892) | 0.624966 / 0.226044 (0.398922) | 6.197671 / 2.268929 (3.928742) | 2.898481 / 55.444624 (-52.546143) | 2.499590 / 6.876477 (-4.376886) | 2.649690 / 2.142072 (0.507617) | 1.012542 / 4.805227 (-3.792685) | 0.202833 / 6.500664 (-6.297831) | 0.078033 / 0.075469 (0.002564) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.448321 / 1.841788 (-0.393467) | 18.084909 / 8.074308 (10.010601) | 17.383027 / 10.191392 (7.191635) | 0.212167 / 0.680424 (-0.468256) | 0.020754 / 0.534201 (-0.513447) | 0.514653 / 0.579283 (-0.064630) | 0.543307 / 0.434364 (0.108944) | 0.653066 / 0.540337 (0.112728) | 0.745773 / 1.386936 (-0.641164) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008576 / 0.011353 (-0.002777) | 0.005834 / 0.011008 (-0.005174) | 0.089842 / 0.038508 (0.051334) | 0.040035 / 0.023109 (0.016926) | 0.449329 / 0.275898 (0.173431) | 0.471572 / 0.323480 (0.148092) | 0.006771 / 0.007986 (-0.001215) | 0.006129 / 0.004328 (0.001800) | 0.090370 / 0.004250 (0.086119) | 0.056924 / 0.037052 (0.019872) | 0.455134 / 0.258489 (0.196645) | 0.502670 / 0.293841 (0.208829) | 0.041689 / 0.128546 (-0.086857) | 0.014447 / 0.075646 (-0.061200) | 0.104528 / 0.419271 (-0.314744) | 0.055535 / 0.043533 (0.012003) | 0.450667 / 0.255139 (0.195528) | 0.453108 / 0.283200 (0.169908) | 0.119296 / 0.141683 (-0.022387) | 1.747359 / 1.452155 (0.295204) | 1.839421 / 1.492716 (0.346705) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.314910 / 0.018006 (0.296904) | 0.495575 / 0.000490 (0.495085) | 0.054702 / 0.000200 (0.054503) | 0.000505 / 0.000054 (0.000450) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033991 / 0.037411 (-0.003420) | 0.133268 / 0.014526 (0.118742) | 0.142286 / 0.176557 (-0.034271) | 0.200562 / 0.737135 (-0.536573) | 0.147161 / 0.296338 (-0.149178) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.520288 / 0.215209 (0.305079) | 5.227684 / 2.077655 (3.150029) | 2.553330 / 1.504120 (1.049210) | 2.324338 / 1.541195 (0.783143) | 2.406790 / 1.468490 (0.938300) | 0.850404 / 4.584777 (-3.734373) | 4.612156 / 3.745712 (0.866444) | 2.592546 / 5.269862 (-2.677316) | 1.708984 / 4.565676 (-2.856692) | 0.103751 / 0.424275 (-0.320524) | 0.014379 / 0.007607 (0.006772) | 0.634661 / 0.226044 (0.408616) | 6.344939 / 2.268929 (4.076010) | 3.179807 / 55.444624 (-52.264817) | 2.831856 / 6.876477 (-4.044621) | 2.866729 / 2.142072 (0.724656) | 0.994519 / 4.805227 (-3.810708) | 0.201566 / 6.500664 (-6.299098) | 0.078902 / 0.075469 (0.003433) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.538738 / 1.841788 (-0.303049) | 18.746367 / 8.074308 (10.672059) | 16.504763 / 10.191392 (6.313371) | 0.197898 / 0.680424 (-0.482526) | 0.020469 / 0.534201 (-0.513732) | 0.529106 / 0.579283 (-0.050177) | 0.536891 / 0.434364 (0.102527) | 0.600947 / 0.540337 (0.060610) | 0.701713 / 1.386936 (-0.685223) |\n\n</details>\n</details>\n\n\n",
"Closing in favor of #6454 "
] | 2023-05-15T19:03:35Z
| 2023-11-27T20:03:32Z
| 2023-11-27T20:03:31Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5867.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5867",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5867.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5867"
}
|
Fix https://github.com/huggingface/datasets/issues/5839
PS: The `Pickler.save` method is becoming a bit messy, so I plan to refactor the pickler a bit at some point.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5867/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5867/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/832
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/832/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/832/comments
|
https://api.github.com/repos/huggingface/datasets/issues/832/events
|
https://github.com/huggingface/datasets/issues/832
| 740,077,228
|
MDU6SXNzdWU3NDAwNzcyMjg=
| 832
|
[GEM] add WikiAuto text simplification dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
closed
| false
| null |
[] | null |
[] | 2020-11-10T16:53:23Z
| 2020-12-03T13:38:08Z
| 2020-12-03T13:38:08Z
|
MEMBER
| null | null | null |
## Adding a Dataset
- **Name:** WikiAuto
- **Description:** Sentences in English Wikipedia and their corresponding sentences in Simple English Wikipedia that are written with simpler grammar and word choices. A lot of lexical and syntactic paraphrasing.
- **Paper:** https://www.aclweb.org/anthology/2020.acl-main.709.pdf
- **Data:** https://github.com/chaojiang06/wiki-auto
- **Motivation:** Included in the GEM shared task
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/832/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/832/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2840
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2840/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2840/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2840/events
|
https://github.com/huggingface/datasets/issues/2840
| 980,489,074
|
MDU6SXNzdWU5ODA0ODkwNzQ=
| 2,840
|
How can I compute BLEU-4 score use `load_metric` ?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26213546?v=4",
"events_url": "https://api.github.com/users/Doragd/events{/privacy}",
"followers_url": "https://api.github.com/users/Doragd/followers",
"following_url": "https://api.github.com/users/Doragd/following{/other_user}",
"gists_url": "https://api.github.com/users/Doragd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Doragd",
"id": 26213546,
"login": "Doragd",
"node_id": "MDQ6VXNlcjI2MjEzNTQ2",
"organizations_url": "https://api.github.com/users/Doragd/orgs",
"received_events_url": "https://api.github.com/users/Doragd/received_events",
"repos_url": "https://api.github.com/users/Doragd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Doragd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Doragd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Doragd"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-08-26T17:36:37Z
| 2021-08-27T08:13:24Z
| 2021-08-27T08:13:24Z
|
NONE
| null | null | null |
I have found the sacrebleu metric. But, I do not know the difference between it and BLEU-4.
If I want to compute BLEU-4 score, what can i do?
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2840/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2840/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/27
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/27/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/27/comments
|
https://api.github.com/repos/huggingface/datasets/issues/27/events
|
https://github.com/huggingface/datasets/pull/27
| 610,230,476
|
MDExOlB1bGxSZXF1ZXN0NDExNzA5OTc0
| 27
|
[Cleanup] Removes all files in testing except test_dataset_common
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-04-30T16:45:21Z
| 2020-04-30T17:39:25Z
| 2020-04-30T17:39:23Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/27.diff",
"html_url": "https://github.com/huggingface/datasets/pull/27",
"merged_at": "2020-04-30T17:39:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/27.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/27"
}
|
As far as I know, all files in `tests` were old `tfds test files` so I removed them. We can still look them up on the other library.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/27/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/27/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2166
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2166/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2166/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2166/events
|
https://github.com/huggingface/datasets/issues/2166
| 849,778,545
|
MDU6SXNzdWU4NDk3Nzg1NDU=
| 2,166
|
Regarding Test Sets for the GEM datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17217068?v=4",
"events_url": "https://api.github.com/users/vyraun/events{/privacy}",
"followers_url": "https://api.github.com/users/vyraun/followers",
"following_url": "https://api.github.com/users/vyraun/following{/other_user}",
"gists_url": "https://api.github.com/users/vyraun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vyraun",
"id": 17217068,
"login": "vyraun",
"node_id": "MDQ6VXNlcjE3MjE3MDY4",
"organizations_url": "https://api.github.com/users/vyraun/orgs",
"received_events_url": "https://api.github.com/users/vyraun/received_events",
"repos_url": "https://api.github.com/users/vyraun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vyraun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vyraun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vyraun"
}
|
[
{
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets",
"id": 2067401494,
"name": "Dataset discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion"
}
] |
closed
| false
| null |
[] | null |
[
"Hi @vyraun ! The test references for CommonGen are not publicly available: you can reach out to the original dataset authors if you would like to ask for them, but we will not be releasing them as part of GEM (March 31st was the release date for the test set inputs, references are incidentally released for some of the test sets but shouldn't really be used for benchmark submissions)\r\n\r\ncc @sebastiangehrmann",
"Oh okay, thanks @yjernite ! "
] | 2021-04-04T02:02:45Z
| 2021-04-06T08:13:12Z
| 2021-04-06T08:13:12Z
|
NONE
| null | null | null |
@yjernite Hi, are the test sets for the GEM datasets scheduled to be [added soon](https://gem-benchmark.com/shared_task)?
e.g.
```
from datasets import load_dataset
DATASET_NAME="common_gen"
data = load_dataset("gem", DATASET_NAME)
```
The test set doesn't have the target or references.
```
data['test'][0]
{'concept_set_id': 0, 'concepts': ['drill', 'field', 'run', 'team'], 'gem_id': 'common_gen-test-0', 'gem_parent_id': 'common_gen-test-0', 'references': [], 'target': ''}
```
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2166/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2166/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2097
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2097/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2097/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2097/events
|
https://github.com/huggingface/datasets/pull/2097
| 838,105,289
|
MDExOlB1bGxSZXF1ZXN0NTk4MzM4MTA3
| 2,097
|
fixes issue #1110 by descending further if `obj["_type"]` is a dict
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4",
"events_url": "https://api.github.com/users/dcfidalgo/events{/privacy}",
"followers_url": "https://api.github.com/users/dcfidalgo/followers",
"following_url": "https://api.github.com/users/dcfidalgo/following{/other_user}",
"gists_url": "https://api.github.com/users/dcfidalgo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dcfidalgo",
"id": 15979778,
"login": "dcfidalgo",
"node_id": "MDQ6VXNlcjE1OTc5Nzc4",
"organizations_url": "https://api.github.com/users/dcfidalgo/orgs",
"received_events_url": "https://api.github.com/users/dcfidalgo/received_events",
"repos_url": "https://api.github.com/users/dcfidalgo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dcfidalgo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dcfidalgo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dcfidalgo"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-22T21:00:55Z
| 2021-03-22T21:01:11Z
| 2021-03-22T21:01:11Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2097.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2097",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2097.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2097"
}
|
Check metrics
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2097/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2097/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3791
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3791/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3791/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3791/events
|
https://github.com/huggingface/datasets/pull/3791
| 1,150,733,475
|
PR_kwDODunzps4zevU2
| 3,791
|
Add `data_dir` to `data_files` resolution and misc improvements to HfFileSystem
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-02-25T18:26:35Z
| 2022-03-01T13:10:43Z
| 2022-03-01T13:10:42Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3791.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3791",
"merged_at": "2022-03-01T13:10:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3791.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3791"
}
|
As discussed in https://github.com/huggingface/datasets/pull/2830#issuecomment-1048989764, this PR adds a QOL improvement to easily reference the files inside a directory in `load_dataset` using the `data_dir` param (very handy for ImageFolder because it avoids globbing, but also useful for the other loaders). Additionally, it fixes the issue with `HfFileSystem.isdir`, which would previously always return `False`, and aligns the path-handling logic in `HfFileSystem` with `fsspec.GitHubFileSystem`.
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3791/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3791/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4326
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4326/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4326/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4326/events
|
https://github.com/huggingface/datasets/pull/4326
| 1,233,818,489
|
PR_kwDODunzps43tjWy
| 4,326
|
Fix type hint and documentation for `new_fingerprint`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fxmarty",
"id": 9808326,
"login": "fxmarty",
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fxmarty"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-12T11:05:08Z
| 2022-06-01T13:04:45Z
| 2022-06-01T12:56:18Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4326.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4326",
"merged_at": "2022-06-01T12:56:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4326.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4326"
}
|
Currently, there are no type hints nor `Optional` for the argument `new_fingerprint` in several methods of `datasets.arrow_dataset.Dataset`.
There was some documentation missing as well.
Note that pylance is happy with the type hints, but pyright does not detect that `new_fingerprint` is set within the decorator.
The modifications in this PR are fine since here https://github.com/huggingface/datasets/blob/aa743886221d76afb409d263e1b136e7a71fe2b4/src/datasets/fingerprint.py#L446-L454
for the non-inplace case we make sure to auto-generate a new fingerprint (as indicated in the doc).
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4326/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4326/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5424
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5424/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5424/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5424/events
|
https://github.com/huggingface/datasets/issues/5424
| 1,534,394,756
|
I_kwDODunzps5bdQGE
| 5,424
|
When applying `ReadInstruction` to custom load it's not DatasetDict but list of Dataset?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/25720695?v=4",
"events_url": "https://api.github.com/users/macabdul9/events{/privacy}",
"followers_url": "https://api.github.com/users/macabdul9/followers",
"following_url": "https://api.github.com/users/macabdul9/following{/other_user}",
"gists_url": "https://api.github.com/users/macabdul9/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/macabdul9",
"id": 25720695,
"login": "macabdul9",
"node_id": "MDQ6VXNlcjI1NzIwNjk1",
"organizations_url": "https://api.github.com/users/macabdul9/orgs",
"received_events_url": "https://api.github.com/users/macabdul9/received_events",
"repos_url": "https://api.github.com/users/macabdul9/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/macabdul9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/macabdul9/subscriptions",
"type": "User",
"url": "https://api.github.com/users/macabdul9"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! You can get a `DatasetDict` if you pass a dictionary with read instructions as follows:\r\n```python\r\ninstructions = [\r\n ReadInstruction(split_name=\"train\", from_=0, to=10, unit='%', rounding='closest'),\r\n ReadInstruction(split_name=\"dev\", from_=0, to=10, unit='%', rounding='closest'),\r\n ReadInstruction(split_name=\"test\", from_=0, to=5, unit='%', rounding='closest')\r\n]\r\n\r\ndataset = load_dataset('csv', data_dir=\"data/\", data_files={\"train\":\"train.tsv\", \"dev\":\"dev.tsv\", \"test\":\"test.tsv\"}, delimiter=\"\\t\", split={inst.split_name: inst for inst in instructions})\r\n```\r\n"
] | 2023-01-16T06:54:28Z
| 2023-02-24T16:19:00Z
| 2023-02-24T16:19:00Z
|
NONE
| null | null | null |
### Describe the bug
I am loading datasets from custom `tsv` files stored locally and applying split instructions for each split. Although the ReadInstruction is being applied correctly and I was expecting it to be `DatasetDict` but instead it is a list of `Dataset`.
### Steps to reproduce the bug
Steps to reproduce the behaviour:
1. Import
`from datasets import load_dataset, ReadInstruction`
2. Instruction to load the dataset
```
instructions = [
ReadInstruction(split_name="train", from_=0, to=10, unit='%', rounding='closest'),
ReadInstruction(split_name="dev", from_=0, to=10, unit='%', rounding='closest'),
ReadInstruction(split_name="test", from_=0, to=5, unit='%', rounding='closest')
]
```
3. Load
`dataset = load_dataset('csv', data_dir="data/", data_files={"train":"train.tsv", "dev":"dev.tsv", "test":"test.tsv"}, delimiter="\t", split=instructions)`
### Expected behavior
**Current behaviour**

:
**Expected behaviour**

### Environment info
``datasets==2.8.0
``
`Python==3.8.5
`
`Platform - Ubuntu 20.04.4 LTS`
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5424/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5424/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1900
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1900/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1900/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1900/events
|
https://github.com/huggingface/datasets/pull/1900
| 810,512,488
|
MDExOlB1bGxSZXF1ZXN0NTc1MTkxNTc3
| 1,900
|
Issue #1895: Bugfix for string_to_arrow timestamp[ns] support
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/justin-yan",
"id": 7731709,
"login": "justin-yan",
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/justin-yan"
}
|
[] |
closed
| false
| null |
[] | null |
[
"OK! Thank you for the review - I will follow up with a separate PR for the comments here (https://github.com/huggingface/datasets/pull/1900#discussion_r578319725)!"
] | 2021-02-17T20:26:04Z
| 2021-02-19T18:27:11Z
| 2021-02-19T18:27:11Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1900.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1900",
"merged_at": "2021-02-19T18:27:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1900.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1900"
}
|
Should resolve https://github.com/huggingface/datasets/issues/1895
The main part of this PR adds additional parsing in `string_to_arrow` to convert the timestamp dtypes that result from `str(pa_type)` back into the pa.DataType TimestampType.
While adding unit-testing, I noticed that support for the double/float types also don't invert correctly, so I added them, which I believe would hypothetically make this section of `Value` redundant:
```
def __post_init__(self):
if self.dtype == "double": # fix inferred type
self.dtype = "float64"
if self.dtype == "float": # fix inferred type
self.dtype = "float32"
```
However, since I think Value.dtype is part of the public interface, removing that would result in a backward-incompatible change, so I didn't muck with that.
The rest of the PR consists of docstrings that I added while developing locally so I could keep track of which functions were supposed to be inverses of each other, and thought I'd include them initially in case you want to keep them around, but I'm happy to delete or remove any of them at your request!
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1900/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1900/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3900
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3900/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3900/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3900/events
|
https://github.com/huggingface/datasets/pull/3900
| 1,167,224,903
|
PR_kwDODunzps40VxRh
| 3,900
|
Add MetaShift dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dnaveenr",
"id": 17746528,
"login": "dnaveenr",
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dnaveenr"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq Please could you review this when you get time. Thank you.",
"Thanks a lot for your inputs @mariosasko .\r\n> Maybe we can add the generated meta-graphs to the card as images (with attributions)?\r\n\r\nYes. We can do this for the default set of classes. Will add this.\r\n\r\n> Would be cool if we could have them as additional configs. Also, maybe we could have configs that expose [image metadata](https://github.com/Weixin-Liang/MetaShift/tree/main/dataset/meta_data) from the https://nlp.stanford.edu/data/gqa/sceneGraphs.zip file (this file is downloaded in the script but not used).\r\n\r\nI'll try adding the bonus section as additional config. \r\nRegarding exposing the image metadata with a config parameter, how will we showcase/display this information ?\r\n",
"> Regarding exposing the image metadata with a config parameter, how will we showcase/display this information ?\r\n\r\nOh, I forgot to mention that. Let's add a `Dataset Usage` section to the card to document the params (similar to this: https://huggingface.co/datasets/electricity_load_diagrams#dataset-usage). Also, feel free to add the constants that can be tuned as config params (e.g. `IMAGE_SUBSET_SIZE_THRESHOLD` or the `5` in `len(subject_data) <= 5`).",
"Okay. Got it. Will add these and constants as config parameters.\r\n\r\nThe image metadata from scene graphs looks like this : \r\n```json\r\n{\r\n \"2407890\": {\r\n \"width\": 640,\r\n \"height\": 480,\r\n \"location\": \"living room\",\r\n \"weather\": none,\r\n \"objects\": {\r\n \"271881\": {\r\n \"name\": \"chair\",\r\n \"x\": 220,\r\n \"y\": 310,\r\n \"w\": 50,\r\n \"h\": 80,\r\n \"attributes\": [\"brown\", \"wooden\", \"small\"],\r\n \"relations\": {\r\n \"32452\": {\r\n \"name\": \"on\",\r\n \"object\": \"275312\"\r\n },\r\n \"32452\": {\r\n \"name\": \"near\",\r\n \"object\": \"279472\"\r\n } \r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n``load_dataset(\"metashift\", selected_classes=[\"cat\", \"dog\", ...], image_metadata=True)``\r\nHow do we showcase/display the image metadata(json) information ?\r\n",
"> How do we showcase/display the image metadata(json) information ?\r\n\r\nWe can add the JSON fields as keys to the features dict:\r\n```python\r\n if self.config.image_metadata:\r\n features.update({\"width\": Value(\"int\"), \"height\": Value(\"int\"), \"location\": Value(\"string\"), ...}) \r\n```\r\n\r\nP.S. Would rename `image_metadata` to `with_image_metadata` ",
"I have added the following : \r\n- Added the meta-graphs to the card as images under the Section \"Dataset Meta-Graphs\".\r\n- Generate the Attributes-Dataset using config parameter. [ [Link](https://github.com/Weixin-Liang/MetaShift#bonus-generate-the-metashift-attributes-dataset-subsets-defined-by-subject-attributes) ]\r\n- Expose image metadata using config parameter.\r\nFormat of the image metadata is as follows : [Link](https://cs.stanford.edu/people/dorarad/gqa/download.html)\r\nI have modified the \"Objects\" which is dict to a list of dicts with an additional parameter named object_id. \r\nI have defined the structure as follows : \r\n```\r\n{\r\n \"width\": datasets.Value(\"int64\"),\r\n \"height\": datasets.Value(\"int64\"),\r\n \"location\": datasets.Value(\"string\"),\r\n \"weather\": datasets.Value(\"string\"),\r\n \"objects\": datasets.Sequence(\r\n {\r\n \"object_id\": datasets.Value(\"string\"),\r\n \"name\": datasets.Value(\"string\"),\r\n \"x\": datasets.Value(\"int64\"),\r\n \"y\": datasets.Value(\"int64\"),\r\n \"w\": datasets.Value(\"int64\"),\r\n \"h\": datasets.Value(\"int64\"),\r\n \"attributes\": datasets.Sequence(datasets.Value(\"string\")),\r\n \"relations\": datasets.Sequence(\r\n {\r\n \"name\": datasets.Value(\"string\"),\r\n \"object\": datasets.Value(\"string\"),\r\n }\r\n ),\r\n }\r\n ),\r\n}\r\n```\r\nProblem is that objects is not being shown as list of dicts. The output looks as follows : \r\n\r\n> metashift_dataset['train'][0]\r\n\r\n```json \r\n{'image_id': '2338755', 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x281 at 0x7F066C5A49D0>, 'label': 0, 'context': 'ground', 'width': 500, 'height': 281, 'location': None, 'weather': None, 'objects': {'object_id': ['3070704', '3070705', '3070706', '2416713', '3070702', '2790660', '3063157', '2354960', '2037127', '2392939', '2912743', '2125407', '2735257', '3260906', '2351018', '3288269', '3699852', '2734378', '3421201', '2863115'], 'name': ['bicycle', 'bicycle', 'bicycle', 'boot', 'bicycle', 'motorcycle', 'pepperoni', 'head', 'building', 'wall', 'shorts', 'people', 'wheel', 'bricks', 'man', 'cat', 'boot', 'door', 'ground', 'building'], 'x': [137, 371, 458, 215, 468, 399, 368, 245, 0, 140, 260, 284, 138, 451, 339, 187, 210, 26, 0, 313], 'y': [116, 86, 94, 150, 91, 80, 107, 22, 0, 44, 109, 69, 145, 226, 69, 22, 230, 0, 119, 0], 'w': [197, 27, 15, 73, 24, 53, 9, 37, 289, 46, 43, 30, 74, 28, 35, 116, 53, 107, 500, 55], 'h': [126, 25, 38, 128, 43, 50, 16, 44, 158, 73, 51, 52, 97, 15, 73, 252, 46, 147, 162, 77], 'attributes': [[], [], [], ['white'], [], [], [], [], [], [], [], [], [], [], [], ['white'], ['white'], ['large', 'black'], ['brick'], []], 'relations': [{'name': ['to the left of'], 'object': ['3260906']}, {'name': ['to the left of', 'to the right of', 'to the right of', 'to the left of', 'to the right of', 'to the left of', 'to the right of'], 'object': ['3070706', '2351018', '2125407', '2790660', '2037127', '3070702', '3288269']}, {'name': ['to the right of', 'to the right of', 'to the left of', 'to the right of', 'to the right of'], 'object': ['2351018', '3070705', '3070702', '2790660', '3063157']}, {'name': ['to the right of'], 'object': ['2735257']}, {'name': ['to the right of', 'to the right of', 'to the right of', 'to the right of', 'to the right of'], 'object': ['2351018', '2790660', '3070706', '3070705', '3063157']}, {'name': ['to the right of', 'to the right of', 'to the left of', 'to the left of', 'to the right of', 'to the right of', 'to the right of', 'to the right of'], 'object': ['3070705', '2351018', '3070702', '3070706', '3063157', '2125407', '2037127', '3288269']}, {'name': ['to the right of', 'to the left of', 'to the left of', 'to the right of', 'to the right of', 'to the left of', 'to the right of'], 'object': ['2037127', '3070706', '3070702', '2912743', '3288269', '2790660', '2125407']}, {'name': ['to the left of', 'to the right of'], 'object': ['2863115', '2734378']}, {'name': ['to the left of', 'to the left of', 'to the left of', 'to the left of', 'to the left of', 'to the left of'], 'object': ['3070705', '2351018', '3063157', '2125407', '2790660', '2863115']}, {'name': ['to the left of', 'to the right of', 'to the left of'], 'object': ['2125407', '2734378', '3288269']}, {'name': ['to the left of', 'on', 'to the left of'], 'object': ['2351018', '3288269', '3063157']}, {'name': ['to the left of', 'to the left of', 'to the right of', 'to the left of', 'to the right of', 'to the left of'], 'object': ['3063157', '2351018', '2037127', '3070705', '2392939', '2790660']}, {'name': ['to the left of', 'to the left of'], 'object': ['2416713', '3288269']}, {'name': ['to the right of'], 'object': ['3070704']}, {'name': ['to the right of', 'to the left of', 'to the right of', 'to the left of', 'to the left of', 'to the right of', 'to the left of', 'to the right of', 'walking down'], 'object': ['2037127', '2790660', '2125407', '3070705', '3070706', '2912743', '3070702', '3288269', '3421201']}, {'name': ['to the right of', 'to the right of', 'to the left of', 'to the right of', 'to the left of', 'to the left of', 'to the left of', 'to the left of'], 'object': ['2392939', '2734378', '2790660', '2735257', '3063157', '3070705', '2351018', '2863115']}, {'name': [], 'object': []}, {'name': ['of', 'to the left of', 'to the left of', 'to the left of'], 'object': ['2037127', '2354960', '3288269', '2392939']}, {'name': [], 'object': []}, {'name': ['to the right of', 'to the right of', 'to the right of'], 'object': ['2037127', '3288269', '2354960']}]}}\r\n```\r\nExpected output of image_metadata would be : \r\n```\r\n{'height': 281,\r\n 'location': None,\r\n 'objects': [{'attributes': [],\r\n 'h': 126,\r\n 'name': 'bicycle',\r\n 'object_id': '3070704',\r\n 'relations': [{'name': 'to the left of', 'object': '3260906'}],\r\n 'w': 197,\r\n 'x': 137,\r\n 'y': 116},\r\n {'attributes': [],\r\n 'h': 25,\r\n 'name': 'bicycle',\r\n 'object_id': '3070705',\r\n 'relations': [{'name': 'to the left of', 'object': '3070706'},\r\n {'name': 'to the right of', 'object': '2351018'},\r\n {'name': 'to the right of', 'object': '2125407'},\r\n {'name': 'to the left of', 'object': '2790660'},\r\n {'name': 'to the right of', 'object': '2037127'},\r\n {'name': 'to the left of', 'object': '3070702'},\r\n {'name': 'to the right of', 'object': '3288269'}],\r\n 'w': 27,\r\n 'x': 371,\r\n 'y': 86},\r\n {'attributes': ['white'],\r\n 'h': 252,\r\n 'name': 'cat',\r\n 'object_id': '3288269',\r\n 'relations': [{'name': 'to the right of', 'object': '2392939'},\r\n {'name': 'to the right of', 'object': '2734378'},\r\n {'name': 'to the left of', 'object': '2790660'},\r\n {'name': 'to the right of', 'object': '2735257'},\r\n {'name': 'to the left of', 'object': '3063157'},\r\n {'name': 'to the left of', 'object': '3070705'},\r\n {'name': 'to the left of', 'object': '2351018'},\r\n {'name': 'to the left of', 'object': '2863115'}],\r\n 'w': 116,\r\n 'x': 187,\r\n 'y': 22},\r\n {'attributes': ['white'],\r\n 'h': 46,\r\n 'name': 'boot',\r\n 'object_id': '3699852',\r\n 'relations': [],\r\n 'w': 53,\r\n 'x': 210,\r\n 'y': 230},\r\n .\r\n .\r\n .\r\n {'attributes': ['large', 'black'],\r\n 'h': 147,\r\n 'name': 'door',\r\n 'object_id': '2734378',\r\n 'relations': [{'name': 'of', 'object': '2037127'},\r\n {'name': 'to the left of', 'object': '2354960'},\r\n {'name': 'to the left of', 'object': '3288269'},\r\n {'name': 'to the left of', 'object': '2392939'}],\r\n 'w': 107,\r\n 'x': 26,\r\n 'y': 0},\r\n {'attributes': ['brick'],\r\n 'h': 162,\r\n 'name': 'ground',\r\n 'object_id': '3421201',\r\n 'relations': [],\r\n 'w': 500,\r\n 'x': 0,\r\n 'y': 119},\r\n {'attributes': [],\r\n 'h': 77,\r\n 'name': 'building',\r\n 'object_id': '2863115',\r\n 'relations': [{'name': 'to the right of', 'object': '2037127'},\r\n {'name': 'to the right of', 'object': '3288269'},\r\n {'name': 'to the right of', 'object': '2354960'}],\r\n 'w': 55,\r\n 'x': 313,\r\n 'y': 0}],\r\n 'weather': None,\r\n 'width': 500}\r\n\r\n```\r\n\r\nMay I know how to get the list of dicts representation correctly ?\r\n\r\n---\r\nTo-Do : \r\n\r\n- [x] Generate dataset_infos.json file.\r\n- [x] Add “Dataset Usage” section in the cards and write about the config parameters. \r\n- [x] Add the constants that can be tuned as config params.\r\n",
"> Problem is that objects is not being shown as list of dicts. The output looks as follows :\r\n\r\nThat's expected. We convert a sequence of dictionaries to a dictionary of sequences to keep the formatting aligned with Tensorflow Datasets. You could disable this behavior by replacing `\"objects\": datasets.Sequence(object_fields_dict)` with `\"objects\": [object_fields_dict]`, but that's not what we usually do, so let's keep it like that. \r\n\r\nAlso, to limit the size of the dataset repo, please remove the pushed images and pass URLs to the images instead under the `src` attribute (and specify `alt` in case the URLs go down).\r\n\r\nI'll do a proper review again after you are finished with the dummy data.",
"> That's expected.\r\n\r\nOkay. Got it. Thanks. I thought I was doing something wrong.\r\n\r\n> Also, to limit the size of the dataset repo, please remove the pushed images and pass URLs to the images instead under the src attribute (and specify alt in case the URLs go down).\r\n\r\nSure. Where do we host these images ? Can I upload them to any free image hosting platform or is there any particular website you use ?\r\n\r\n> I'll do a proper review again after you are finished with the dummy data.\r\n\r\nSure. Thanks. I'm working on this part. Will update you.\r\n",
"Update : \r\n- I have generated the dataset_infos.json file.\r\n\r\n> I suggest you try to generate the dataset_infos.json file first, and then I can help with the dummy data.\r\n\r\nI am having issues creating the dummy data. I get the following which I use the command : \r\n\r\n`datasets-cli dummy_data datasets/metashift`\r\n\r\n```\r\nDataset metashift with config MetashiftConfig(name='metashift', version=1.0.0, data_dir=None, data_files=None, description=None) seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file dummy_data/full-candidate-subsets.pkl.\r\nTraceback (most recent call last):\r\n File \"datasets-cli\", line 33, in <module>\r\n sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())\r\n File \"/datasets/commands/datasets_cli.py\", line 33, in main\r\n service.run()\r\n File \"/datasets/commands/dummy_data.py\", line 324, in run\r\n dataset_builder=dataset_builder, mock_dl_manager=mock_dl_manager\r\n File \"/datasets/commands/dummy_data.py\", line 407, in _print_dummy_data_instructions\r\n for split in generator_splits:\r\nUnboundLocalError: local variable 'generator_splits' referenced before assignment\r\n```",
"> Feel free to host the images online (on imgur for example) :)\r\n\r\nSure. Will do that.\r\n\r\nThanks for the explanation regarding the dummy data zip files. I will try it out and let you know.",
"Instead of uploading the images to a hosting service, you can directly reference their GitHub URLs (open the image in the MetaShift repo -> click Download -> copy the image URL). For instance, this is the URL of one of the images:`https://raw.githubusercontent.com/Weixin-Liang/MetaShift/main/docs/figures/Cat-MetaGraph.jpg`. Also, feel free to replace `main` with the most recent commit hash in the copied URLs to make them more robust.",
"@mariosasko I've actually created metagraphs for all the default classes other than those present in the GitHub Repo and included all of them. :) The Repo has them only for two classes.\r\n\r\nIn case we want to limit the no.of meta graphs included, we can stick to the github URLs from the repo itself.\r\n",
"Update : \r\n- I could add the dummy data and get the dummy data test to work. Since we have a preprocessing step on the dataset, one of the .pkl file size is on the higher side. This was done for the tests to pass. I hope that is okay. The dummy.zip file size is about 273K.\r\n\r\nTo-Do :\r\n- [x] Update Dataset Structure in the data cards to include Data Instances when config parameters are used.\r\n\r\nPlease could you review when you get time. Thank you.",
"Thanks a lot for your suggestions, Mario. The thing I learnt from the review is that I need to make better sentence formations. I will keep this in mind. :) ",
"Thanks a lot for your support. @mariosasko and @lhoestq .\r\n\r\n> Super impressed by your work on this, congrats :)\r\n\r\nIts my first dataset contribution to the 🤗 Datasets library, I'm super excited. Thank you. :)\r\n\r\nAlso, I think we can close this request issue now, [#3813](https://github.com/huggingface/datasets/issues/3813)"
] | 2022-03-12T08:44:18Z
| 2022-04-01T16:59:48Z
| 2022-04-01T15:16:30Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3900.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3900",
"merged_at": "2022-04-01T15:16:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3900.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3900"
}
|
This PR adds the MetaShift dataset.
Dataset Request : Add MetaShift dataset [#3813](https://github.com/huggingface/datasets/issues/3813)
@lhoestq As discussed,
- I have copied the preprocessing script and modified it as required to not create new directories and folders and instead yield the images.
- I do the preprocessing in _split_generators to get the required data which is then passed to _generate_examples.
- Beyond the generated MetaShift dataset, the original preprocess script also generates the meta-graphs for each class, I have currently not included this part. [ Ref : [Link](https://github.com/Weixin-Liang/MetaShift#generate-full-metashift) ]
- There is a Bonus section, the authors share. I have currently not included this part. [ Ref : [Link](https://github.com/Weixin-Liang/MetaShift#bonus-generate-the-metashift-attributes-dataset-subsets-defined-by-subject-attributes) ]
- I had a basic test script which downloaded the dataset and tested the basic functionality. Things seems fine.
For real data, I performed the following test :
```
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_metashift
============================================== test session starts ===============================================
platform linux -- Python 3.7.11, pytest-7.0.1, pluggy-1.0.0
rootdir: ./datasets
plugins: hydra-core-1.1.1, datadir-1.3.1, forked-1.4.0, xdist-2.5.0
collected 1 item
tests/test_dataset_common.py . [100%]
========================================= 1 passed in 4821.25s (1:20:21) =========================================
```
- I couldn't get the dummy dataset. Need some inputs here.
Error as follows :
```
Using custom data configuration default
Dataset metashift with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file dummy_data/full-candidate-subsets.pkl.
for split in generator_splits:
UnboundLocalError: local variable 'generator_splits' referenced before assignment
```
To-Do :
- [x] Currently I am using the default _SELECTED_CLASSES. I need to use config option here as suggested
- [x] Complete fields in the Dataset Card.
- [x] Tagging the dataset using the Datasets Tagging app.
Need your help and suggestions for improvement. Thank you
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3900/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3900/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3360
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3360/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3360/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3360/events
|
https://github.com/huggingface/datasets/pull/3360
| 1,068,724,697
|
PR_kwDODunzps4vQ_16
| 3,360
|
Add The Pile USPTO subset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-12-01T18:08:05Z
| 2021-12-03T11:45:29Z
| 2021-12-03T11:45:28Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3360.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3360",
"merged_at": "2021-12-03T11:45:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3360.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3360"
}
|
Add:
- USPTO subset of The Pile: "uspto" config
Close bigscience-workshop/data_tooling#297.
CC: @StellaAthena
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3360/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3360/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4233
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4233/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4233/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4233/events
|
https://github.com/huggingface/datasets/pull/4233
| 1,216,665,044
|
PR_kwDODunzps421r-6
| 4,233
|
Autoeval
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3278583?v=4",
"events_url": "https://api.github.com/users/nazneenrajani/events{/privacy}",
"followers_url": "https://api.github.com/users/nazneenrajani/followers",
"following_url": "https://api.github.com/users/nazneenrajani/following{/other_user}",
"gists_url": "https://api.github.com/users/nazneenrajani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nazneenrajani",
"id": 3278583,
"login": "nazneenrajani",
"node_id": "MDQ6VXNlcjMyNzg1ODM=",
"organizations_url": "https://api.github.com/users/nazneenrajani/orgs",
"received_events_url": "https://api.github.com/users/nazneenrajani/received_events",
"repos_url": "https://api.github.com/users/nazneenrajani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nazneenrajani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nazneenrajani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nazneenrajani"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4233). All of your documentation changes will be reflected on that endpoint."
] | 2022-04-27T01:32:09Z
| 2022-04-27T05:29:30Z
| 2022-04-27T01:32:23Z
|
NONE
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4233.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4233",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4233.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4233"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4233/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4233/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1072
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1072/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1072/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1072/events
|
https://github.com/huggingface/datasets/pull/1072
| 756,454,511
|
MDExOlB1bGxSZXF1ZXN0NTMxOTk2Njky
| 1,072
|
actually uses the previously declared VERSION on the configs in the template
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-03T18:44:27Z
| 2020-12-03T19:35:46Z
| 2020-12-03T19:35:46Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1072.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1072",
"merged_at": "2020-12-03T19:35:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1072.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1072"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1072/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1072/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/1729
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1729/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1729/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1729/events
|
https://github.com/huggingface/datasets/issues/1729
| 784,565,898
|
MDU6SXNzdWU3ODQ1NjU4OTg=
| 1,729
|
Is there support for Deep learning datasets?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/28235457?v=4",
"events_url": "https://api.github.com/users/pablodz/events{/privacy}",
"followers_url": "https://api.github.com/users/pablodz/followers",
"following_url": "https://api.github.com/users/pablodz/following{/other_user}",
"gists_url": "https://api.github.com/users/pablodz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pablodz",
"id": 28235457,
"login": "pablodz",
"node_id": "MDQ6VXNlcjI4MjM1NDU3",
"organizations_url": "https://api.github.com/users/pablodz/orgs",
"received_events_url": "https://api.github.com/users/pablodz/received_events",
"repos_url": "https://api.github.com/users/pablodz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pablodz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pablodz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pablodz"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi @ZurMaD!\r\nThanks for your interest in 🤗 `datasets`. Support for image datasets is at an early stage, with CIFAR-10 added in #1617 \r\nMNIST is also on the way: #1730 \r\n\r\nIf you feel like adding another image dataset, I would advise starting by reading the [ADD_NEW_DATASET.md](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md) guide. New datasets are always very much appreciated 🚀\r\n"
] | 2021-01-12T20:22:41Z
| 2021-03-31T04:24:07Z
| 2021-03-31T04:24:07Z
|
NONE
| null | null | null |
I looked around this repository and looking the datasets I think that there's no support for images-datasets. Or am I missing something? For example to add a repo like this https://github.com/DZPeru/fish-datasets
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1729/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1729/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4526
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4526/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4526/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4526/events
|
https://github.com/huggingface/datasets/issues/4526
| 1,276,580,185
|
I_kwDODunzps5MFxFZ
| 4,526
|
split cache used when processing different split
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4",
"events_url": "https://api.github.com/users/gpucce/events{/privacy}",
"followers_url": "https://api.github.com/users/gpucce/followers",
"following_url": "https://api.github.com/users/gpucce/following{/other_user}",
"gists_url": "https://api.github.com/users/gpucce/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gpucce",
"id": 32967787,
"login": "gpucce",
"node_id": "MDQ6VXNlcjMyOTY3Nzg3",
"organizations_url": "https://api.github.com/users/gpucce/orgs",
"received_events_url": "https://api.github.com/users/gpucce/received_events",
"repos_url": "https://api.github.com/users/gpucce/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gpucce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gpucce/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gpucce"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] | null |
[
"I was not able to reproduce this behavior (I tried without using pytorch lightning though, since I don't know what code you ran in pytorch lightning to get this).\r\n\r\nIf you can provide a MWE that would be perfect ! :)",
"Hi, I think the issue happened because I was loading datasets under an `if` ... `else` statement and the condition would change the dataset I would need to load but instead the cached one was always returned. However, I believe that is expected behaviour, if so I'll close the issue.\r\n\r\nOtherwise I will try to provide a MWE"
] | 2022-06-20T08:44:58Z
| 2022-06-28T14:04:58Z
| null |
CONTRIBUTOR
| null | null | null |
## Describe the bug`
```
ds1 = load_dataset('squad', split='validation')
ds2 = load_dataset('squad', split='train')
ds1 = ds1.map(some_function)
ds2 = ds2.map(some_function)
assert ds1 == ds2
```
This happens when ds1 and ds2 are created in `pytorch_lightning.DataModule` through
```
class myDataModule:
def train_dataloader(self):
ds = load_dataset('squad', split='train')
ds = ds.map(some_function)
return [ds]
def val_dataloader(self):
ds = load_dataset('squad', split="validation")
ds = ds.map(some_function)
return [ds]
```
I don't know if it depends on `pytorch_lightning` or `datasets` but setting `ds.map(some_function, load_from_cache_file=False)` fixes the issue.
If this is not enough to replicate I will try and provide and MWE, I don't have time now so I thought I wuld open the issue first!
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4526/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4526/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4807
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4807/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4807/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4807/events
|
https://github.com/huggingface/datasets/pull/4807
| 1,332,784,110
|
PR_kwDODunzps483MSH
| 4,807
|
document fix in opus_gnome dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38291975?v=4",
"events_url": "https://api.github.com/users/gojiteji/events{/privacy}",
"followers_url": "https://api.github.com/users/gojiteji/followers",
"following_url": "https://api.github.com/users/gojiteji/following{/other_user}",
"gists_url": "https://api.github.com/users/gojiteji/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gojiteji",
"id": 38291975,
"login": "gojiteji",
"node_id": "MDQ6VXNlcjM4MjkxOTc1",
"organizations_url": "https://api.github.com/users/gojiteji/orgs",
"received_events_url": "https://api.github.com/users/gojiteji/received_events",
"repos_url": "https://api.github.com/users/gojiteji/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gojiteji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gojiteji/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gojiteji"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Duplicate:\r\n- #4806 "
] | 2022-08-09T06:38:13Z
| 2022-08-09T07:28:03Z
| 2022-08-09T07:28:03Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4807.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4807",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4807.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4807"
}
|
I fixed a issue #4805.
I changed `"gnome"` to `"opus_gnome"` in[ README.md](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary).
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4807/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4807/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2412
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2412/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2412/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2412/events
|
https://github.com/huggingface/datasets/issues/2412
| 903,769,151
|
MDU6SXNzdWU5MDM3NjkxNTE=
| 2,412
|
Docstring mistake: dataset vs. metric
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PhilipMay",
"id": 229382,
"login": "PhilipMay",
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PhilipMay"
}
|
[] |
closed
| false
| null |
[] | null |
[
"> I can provide a PR l8er...\r\n\r\nSee #2425 "
] | 2021-05-27T13:39:11Z
| 2021-06-01T08:18:04Z
| 2021-06-01T08:18:04Z
|
CONTRIBUTOR
| null | null | null |
This:
https://github.com/huggingface/datasets/blob/d95b95f8cf3cb0cff5f77a675139b584dcfcf719/src/datasets/load.py#L582
Should better be something like:
`a metric identifier on HuggingFace AWS bucket (list all available metrics and ids with ``datasets.list_metrics()``)`
I can provide a PR l8er...
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2412/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2412/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1671
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1671/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1671/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1671/events
|
https://github.com/huggingface/datasets/issues/1671
| 776,652,193
|
MDU6SXNzdWU3NzY2NTIxOTM=
| 1,671
|
connection issue
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rabeehkarimimahabadi",
"id": 73364383,
"login": "rabeehkarimimahabadi",
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rabeehkarimimahabadi"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Also, mayjor issue for me is the format issue, even if I go through changing the whole code to use load_from_disk, then if I do \r\n\r\nd = datasets.load_from_disk(\"imdb\")\r\nd = d[\"train\"][:10] => the format of this is no more in datasets format\r\nthis is different from you call load_datasets(\"train[10]\")\r\n\r\ncould you tell me how I can make the two datastes the same format @lhoestq \r\n\r\n",
"> `\r\nrequests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/glue/glue.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7ff6d6c60a20>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)'))`\r\n\r\nDo you have an internet connection on the machine ? Is there a proxy that might block requests to aws ?\r\n\r\n> I tried to do read the data, save it to a path and then set HF_HOME, which does not work and this is still not reading from the old set path, could you assist me how to save the datasets in a path, and let dataset library read from this path to avoid connection issue. thanks\r\n\r\nHF_HOME is used to specify the directory for the cache files of this library.\r\nYou can use save_to_disk and load_from_disk without changing the HF_HOME:\r\n```python\r\nimdb = datasets.load_dataset(\"imdb\")\r\nimdb.save_to_disk(\"/idiap/temp/rkarimi/hf_datasets/imdb\")\r\nimdb = datasets.load_from_disk(\"/idiap/temp/rkarimi/hf_datasets/imdb\")\r\n```\r\n\r\n> could you tell me how I can make the two datastes the same format\r\n\r\nIndeed they returns different things:\r\n- `load_dataset` returns a `Dataset` object if the split is specified, or a `DatasetDict` if no split is given. Therefore `load_datasets(\"imdb\", split=\"train[10]\")` returns a `Dataset` object containing 10 elements.\r\n- doing `d[\"train\"][:10]` on a DatasetDict \"d\" gets the train split `d[\"train\"]` as a `Dataset` object and then gets the first 10 elements as a dictionary"
] | 2020-12-30T21:56:20Z
| 2022-10-05T12:42:12Z
| 2022-10-05T12:42:12Z
|
NONE
| null | null | null |
Hi
I am getting this connection issue, resulting in large failure on cloud, @lhoestq I appreciate your help on this.
If I want to keep the codes the same, so not using save_to_disk, load_from_disk, but save the datastes in the way load_dataset reads from and copy the files in the same folder the datasets library reads from, could you assist me how this can be done, thanks
I tried to do read the data, save it to a path and then set HF_HOME, which does not work and this is still not reading from the old set path, could you assist me how to save the datasets in a path, and let dataset library read from this path to avoid connection issue. thanks
```
imdb = datasets.load_dataset("imdb")
imdb.save_to_disk("/idiap/temp/rkarimi/hf_datasets/imdb")
>>> os.environ["HF_HOME"]="/idiap/temp/rkarimi/hf_datasets/"
>>> imdb = datasets.load_dataset("imdb")
Reusing dataset imdb (/idiap/temp/rkarimi/cache_home_2/datasets/imdb/plain_text/1.0.0/90099cb476936b753383ba2ae6ab2eae419b2e87f71cd5189cb9c8e5814d12a3)
```
I tried afterwards to set HF_HOME in bash, this makes it read from it, but it cannot let dataset library load from the saved path and still downloading data. could you tell me how to fix this issue @lhoestq thanks
Also this is on cloud, so I save them in a path, copy it to "another machine" to load the data
### Error stack
```
Traceback (most recent call last):
File "./finetune_t5_trainer.py", line 344, in <module>
main()
File "./finetune_t5_trainer.py", line 232, in main
for task in data_args.eval_tasks} if training_args.do_test else None
File "./finetune_t5_trainer.py", line 232, in <dictcomp>
for task in data_args.eval_tasks} if training_args.do_test else None
File "/workdir/seq2seq/data/tasks.py", line 136, in get_dataset
split = self.get_sampled_split(split, n_obs)
File "/workdir/seq2seq/data/tasks.py", line 64, in get_sampled_split
dataset = self.load_dataset(split)
File "/workdir/seq2seq/data/tasks.py", line 454, in load_dataset
split=split, script_version="master")
File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 589, in load_dataset
path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 263, in prepare_module
head_hf_s3(path, filename=name, dataset=dataset)
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 200, in head_hf_s3
return http_head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset))
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 403, in http_head
url, proxies=proxies, headers=headers, cookies=cookies, allow_redirects=allow_redirects, timeout=timeout
File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 104, in head
return request('head', url, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/requests/adapters.py", line 504, in send
raise ConnectTimeout(e, request=request)
requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/glue/glue.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7ff6d6c60a20>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)'))
```
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1671/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1671/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1196
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1196/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1196/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1196/events
|
https://github.com/huggingface/datasets/pull/1196
| 757,894,920
|
MDExOlB1bGxSZXF1ZXN0NTMzMTc0NjU2
| 1,196
|
Add IWSLT'15 English-Vietnamese machine translation Data
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/28673745?v=4",
"events_url": "https://api.github.com/users/Nilanshrajput/events{/privacy}",
"followers_url": "https://api.github.com/users/Nilanshrajput/followers",
"following_url": "https://api.github.com/users/Nilanshrajput/following{/other_user}",
"gists_url": "https://api.github.com/users/Nilanshrajput/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Nilanshrajput",
"id": 28673745,
"login": "Nilanshrajput",
"node_id": "MDQ6VXNlcjI4NjczNzQ1",
"organizations_url": "https://api.github.com/users/Nilanshrajput/orgs",
"received_events_url": "https://api.github.com/users/Nilanshrajput/received_events",
"repos_url": "https://api.github.com/users/Nilanshrajput/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Nilanshrajput/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nilanshrajput/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Nilanshrajput"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks ! feel free to ping me once you've added the tags in the dataset card :) ",
"merging since the CI is fixed on master"
] | 2020-12-06T10:36:31Z
| 2020-12-11T18:26:51Z
| 2020-12-11T18:26:51Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1196.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1196",
"merged_at": "2020-12-11T18:26:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1196.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1196"
}
|
Preprocessed Dataset from IWSLT'15 English-Vietnamese machine translation: English-Vietnamese.
from https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1196/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1196/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2985
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2985/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2985/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2985/events
|
https://github.com/huggingface/datasets/pull/2985
| 1,010,500,433
|
PR_kwDODunzps4sbbbo
| 2,985
|
add new dataset kan_hope
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/46108405?v=4",
"events_url": "https://api.github.com/users/adeepH/events{/privacy}",
"followers_url": "https://api.github.com/users/adeepH/followers",
"following_url": "https://api.github.com/users/adeepH/following{/other_user}",
"gists_url": "https://api.github.com/users/adeepH/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/adeepH",
"id": 46108405,
"login": "adeepH",
"node_id": "MDQ6VXNlcjQ2MTA4NDA1",
"organizations_url": "https://api.github.com/users/adeepH/orgs",
"received_events_url": "https://api.github.com/users/adeepH/received_events",
"repos_url": "https://api.github.com/users/adeepH/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/adeepH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adeepH/subscriptions",
"type": "User",
"url": "https://api.github.com/users/adeepH"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-09-29T05:20:28Z
| 2021-10-01T16:55:19Z
| 2021-10-01T16:55:19Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2985.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2985",
"merged_at": "2021-10-01T16:55:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2985.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2985"
}
|
## Adding a Dataset
- **Name:** *KanHope*
- **Description:** *A code-mixed English-Kannada dataset for Hope speech detection*
- **Task:** *Binary Text Classification*
- **Paper:** *https://arxiv.org/abs/2108.04616*
- **Data:** *https://github.com/adeepH/kan_hope/tree/main/dataset*
- **Motivation:** *The dataset is amongst the very few resources available for code-mixed low-resourced Dravidian languages of India*
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2985/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2985/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1012
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1012/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1012/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1012/events
|
https://github.com/huggingface/datasets/pull/1012
| 755,485,658
|
MDExOlB1bGxSZXF1ZXN0NTMxMTg3MTI2
| 1,012
|
Adding Evidence Inference Data:
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Narsil",
"id": 204321,
"login": "Narsil",
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"repos_url": "https://api.github.com/users/Narsil/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Narsil"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-02T17:51:35Z
| 2020-12-03T15:04:46Z
| 2020-12-03T15:04:46Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1012.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1012",
"merged_at": "2020-12-03T15:04:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1012.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1012"
}
|
http://evidence-inference.ebm-nlp.com/download/
https://arxiv.org/pdf/2005.04177.pdf
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1012/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1012/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3181
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3181/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3181/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3181/events
|
https://github.com/huggingface/datasets/issues/3181
| 1,039,682,097
|
I_kwDODunzps49-Eox
| 3,181
|
`None` converted to `"None"` when loading a dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4",
"events_url": "https://api.github.com/users/eladsegal/events{/privacy}",
"followers_url": "https://api.github.com/users/eladsegal/followers",
"following_url": "https://api.github.com/users/eladsegal/following{/other_user}",
"gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/eladsegal",
"id": 13485709,
"login": "eladsegal",
"node_id": "MDQ6VXNlcjEzNDg1NzA5",
"organizations_url": "https://api.github.com/users/eladsegal/orgs",
"received_events_url": "https://api.github.com/users/eladsegal/received_events",
"repos_url": "https://api.github.com/users/eladsegal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/eladsegal"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
] | null |
[
"Hi @eladsegal, thanks for reporting.\r\n\r\n@mariosasko I saw you are already working on this, but maybe my comment will be useful to you.\r\n\r\nAll values are casted to their corresponding feature type (including `None` values). For example if the feature type is `Value(\"bool\")`, `None` is casted to `False`.\r\n\r\nIt is true that strings were an exception, but this was recently fixed by @lhoestq (see #3158).",
"Thanks for reporting.\r\n\r\nThis is actually a breaking change that I think can cause issues when users preprocess their data. String columns used to be nullable. Maybe we can correct https://github.com/huggingface/datasets/pull/3158 to keep the None values and avoid this breaking change ?\r\n\r\nEDIT: the other types (bool, int, etc) can also become nullable IMO",
"So what would be the best way to handle a feature that can have a null value in some of the instances? So far I used `None`.\r\nUsing the empty string won't be a good option, as it can be an actual value in the data and is not the same as not having a value at all.",
"Hi @eladsegal,\r\n\r\nUse `None`. As @albertvillanova correctly pointed out, this change in conversion was introduced (by mistake) in #3158. To avoid it, install the earlier revision with:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git@8107844ec0e7add005db0585c772ee20adc01a5e\r\n```\r\n\r\nI'm making all the feature types nullable as we speak, and the fix will be merged probably early next week.",
"Hi @mariosasko, is there an estimation as to when this issue will be fixed?",
"https://github.com/huggingface/datasets/pull/3195 fixed it, we'll do a new release soon :)\r\n\r\nFor now feel free to install `datasets` from the master branch",
"Thanks, but unfortunately looks like it isn't fixed yet 😢 \r\n[notebook for 1.14.0](https://colab.research.google.com/drive/1SV3sFXPJMWSQgbm4pr9Y1Q8OJ4JYKcDo?usp=sharing)\r\n[notebook for master](https://colab.research.google.com/drive/145wDpuO74MmsuI0SVLcI1IswG6aHpyhi?usp=sharing)",
"Oh, sorry. I deleted the fix by accident when I was resolving a merge conflict. Let me fix this real quick.",
"Thank you, it works! 🎊 "
] | 2021-10-29T15:23:53Z
| 2021-12-11T01:16:40Z
| 2021-12-09T14:26:57Z
|
CONTRIBUTOR
| null | null | null |
## Describe the bug
When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists")
print(qasper[60]["full_text"]["section_name"])
```
When installing version 1.1.40, the output is
`[None, 'Introduction', 'Benchmark Datasets', ...]`
When installing from the master branch, the output is
`['None', 'Introduction', 'Benchmark Datasets', ...]`
Notice how the first element was changed from `NoneType` to `str`.
## Expected results
`None` should stay as is.
## Actual results
`None` is converted to a string.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: master
- Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 4.0.1
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3181/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3181/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1638
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1638/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1638/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1638/events
|
https://github.com/huggingface/datasets/pull/1638
| 774,869,184
|
MDExOlB1bGxSZXF1ZXN0NTQ1Njg5ODQ5
| 1,638
|
Add id_puisi dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31740013?v=4",
"events_url": "https://api.github.com/users/ilhamfp/events{/privacy}",
"followers_url": "https://api.github.com/users/ilhamfp/followers",
"following_url": "https://api.github.com/users/ilhamfp/following{/other_user}",
"gists_url": "https://api.github.com/users/ilhamfp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ilhamfp",
"id": 31740013,
"login": "ilhamfp",
"node_id": "MDQ6VXNlcjMxNzQwMDEz",
"organizations_url": "https://api.github.com/users/ilhamfp/orgs",
"received_events_url": "https://api.github.com/users/ilhamfp/received_events",
"repos_url": "https://api.github.com/users/ilhamfp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ilhamfp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ilhamfp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ilhamfp"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-26T12:41:55Z
| 2020-12-30T16:34:17Z
| 2020-12-30T16:34:17Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1638.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1638",
"merged_at": "2020-12-30T16:34:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1638.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1638"
}
|
Puisi (poem) is an Indonesian poetic form. The dataset contains 7223 Indonesian puisi with its title and author. :)
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1638/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1638/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3635
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3635/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3635/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3635/events
|
https://github.com/huggingface/datasets/pull/3635
| 1,115,333,219
|
PR_kwDODunzps4xobAe
| 3,635
|
Make `ted_talks_iwslt` dataset streamable
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] |
closed
| false
| null |
[] | null |
[
"Thanks for adding this @mariosasko! It worked for me when running it with a local data file, however, when using the file on Google Drive I get the following error:\r\n```Python\r\nds = load_dataset(\"./ted_talks_iwslt\",\"eu_ca_2014\", streaming=True, split=\"train\", use_auth_token=True)\r\nnext(iter(ds))\r\n```\r\n```\r\n---------------------------------------------------------------------------\r\nClientResponseError Traceback (most recent call last)\r\nFile ~/git/bigscience-datasets/env/lib/python3.9/site-packages/fsspec/implementations/http.py:383, in HTTPFileSystem._info(self, url, **kwargs)\r\n 381 try:\r\n 382 info.update(\r\n--> 383 await _file_info(\r\n 384 url,\r\n 385 size_policy=policy,\r\n 386 session=session,\r\n 387 **self.kwargs,\r\n 388 **kwargs,\r\n 389 )\r\n 390 )\r\n 391 if info.get(\"size\") is not None:\r\n\r\nFile ~/git/bigscience-datasets/env/lib/python3.9/site-packages/fsspec/implementations/http.py:734, in _file_info(url, session, size_policy, **kwargs)\r\n 733 async with r:\r\n--> 734 r.raise_for_status()\r\n 736 # TODO:\r\n 737 # recognise lack of 'Accept-Ranges',\r\n 738 # or 'Accept-Ranges': 'none' (not 'bytes')\r\n 739 # to mean streaming only, no random access => return None\r\n\r\nFile ~/git/bigscience-datasets/env/lib/python3.9/site-packages/aiohttp/client_reqrep.py:1004, in ClientResponse.raise_for_status(self)\r\n 1003 self.release()\r\n-> 1004 raise ClientResponseError(\r\n 1005 self.request_info,\r\n 1006 self.history,\r\n 1007 status=self.status,\r\n 1008 message=self.reason,\r\n 1009 headers=self.headers,\r\n 1010 )\r\n\r\nClientResponseError: 403, message='Forbidden', url=URL('https://drive.google.com/u/0/uc?id=1Cz1Un9p8Xn9IpEMMrg2kXSDt0dnjxc4z&export=download&confirm=1RJz')\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nFileNotFoundError Traceback (most recent call last)\r\nInput In [9], in <module>\r\n 1 iterable = iter(ds)\r\n 2 for i in range(10):\r\n----> 3 item = next(iterable)\r\n 4 print(item['text'][:10], item['meta'])\r\n\r\nFile ~/git/bigscience-datasets/env/lib/python3.9/site-packages/datasets/iterable_dataset.py:341, in IterableDataset.__iter__(self)\r\n 340 def __iter__(self):\r\n--> 341 for key, example in self._iter():\r\n 342 if self.features:\r\n 343 # we encode the example for ClassLabel feature types for example\r\n 344 encoded_example = self.features.encode_example(example)\r\n\r\nFile ~/git/bigscience-datasets/env/lib/python3.9/site-packages/datasets/iterable_dataset.py:338, in IterableDataset._iter(self)\r\n 336 else:\r\n 337 ex_iterable = self._ex_iterable\r\n--> 338 yield from ex_iterable\r\n\r\nFile ~/git/bigscience-datasets/env/lib/python3.9/site-packages/datasets/iterable_dataset.py:78, in ExamplesIterable.__iter__(self)\r\n 77 def __iter__(self):\r\n---> 78 for key, example in self.generate_examples_fn(**self.kwargs):\r\n 79 yield key, example\r\n\r\nFile ~/.cache/huggingface/modules/datasets_modules/datasets/lm_en_ted_talks_iwslt/756148758e86e64a350f9b320744a2bd5ed5cff74f7df620763a2b5e1a45e6c6/lm_en_ted_talks_iwslt.py:118, in TedTalksIWSLT._generate_examples(self, files)\r\n 116 for _LANG in _LANG_CODES:\r\n 117 source_file_path = _YEAR_FOLDER[year] + \"/ted_\" + _LANG + _YEAR[year] + \".zip\"\r\n--> 118 for path, file in files:\r\n 119 if path.endswith(source_file_path):\r\n 120 source_talks, _ = parse_zip_file(path, file.read())\r\n\r\nFile ~/git/bigscience-datasets/env/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py:596, in StreamingDownloadManager.iter_archive(self, urlpath_or_buf)\r\n 594 yield from _iter_archive(urlpath_or_buf)\r\n 595 else:\r\n--> 596 with xopen(urlpath_or_buf, \"rb\", use_auth_token=self.download_config.use_auth_token) as f:\r\n 597 yield from _iter_archive(f)\r\n\r\nFile ~/git/bigscience-datasets/env/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py:296, in xopen(file, mode, use_auth_token, *args, **kwargs)\r\n 294 new_kwargs = {}\r\n 295 kwargs = {**kwargs, **new_kwargs}\r\n--> 296 file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()\r\n 297 _add_retries_to_file_obj_read_method(file_obj)\r\n 298 return file_obj\r\n\r\nFile ~/git/bigscience-datasets/env/lib/python3.9/site-packages/fsspec/core.py:140, in OpenFile.open(self)\r\n 132 def open(self):\r\n 133 \"\"\"Materialise this as a real open file without context\r\n 134 \r\n 135 The file should be explicitly closed to avoid enclosed file\r\n (...)\r\n 138 been deleted; but a with-context is better style.\r\n 139 \"\"\"\r\n--> 140 out = self.__enter__()\r\n 141 closer = out.close\r\n 142 fobjects = self.fobjects.copy()[:-1]\r\n\r\nFile ~/git/bigscience-datasets/env/lib/python3.9/site-packages/fsspec/core.py:103, in OpenFile.__enter__(self)\r\n 100 def __enter__(self):\r\n 101 mode = self.mode.replace(\"t\", \"\").replace(\"b\", \"\") + \"b\"\r\n--> 103 f = self.fs.open(self.path, mode=mode)\r\n 105 self.fobjects = [f]\r\n 107 if self.compression is not None:\r\n\r\nFile ~/git/bigscience-datasets/env/lib/python3.9/site-packages/fsspec/spec.py:1009, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)\r\n 1007 else:\r\n 1008 ac = kwargs.pop(\"autocommit\", not self._intrans)\r\n-> 1009 f = self._open(\r\n 1010 path,\r\n 1011 mode=mode,\r\n 1012 block_size=block_size,\r\n 1013 autocommit=ac,\r\n 1014 cache_options=cache_options,\r\n 1015 **kwargs,\r\n 1016 )\r\n 1017 if compression is not None:\r\n 1018 from fsspec.compression import compr\r\n\r\nFile ~/git/bigscience-datasets/env/lib/python3.9/site-packages/fsspec/implementations/http.py:343, in HTTPFileSystem._open(self, path, mode, block_size, autocommit, cache_type, cache_options, size, **kwargs)\r\n 341 kw[\"asynchronous\"] = self.asynchronous\r\n 342 kw.update(kwargs)\r\n--> 343 size = size or self.info(path, **kwargs)[\"size\"]\r\n 344 session = sync(self.loop, self.set_session)\r\n 345 if block_size and size:\r\n\r\nFile ~/git/bigscience-datasets/env/lib/python3.9/site-packages/fsspec/asyn.py:91, in sync_wrapper.<locals>.wrapper(*args, **kwargs)\r\n 88 @functools.wraps(func)\r\n 89 def wrapper(*args, **kwargs):\r\n 90 self = obj or args[0]\r\n---> 91 return sync(self.loop, func, *args, **kwargs)\r\n\r\nFile ~/git/bigscience-datasets/env/lib/python3.9/site-packages/fsspec/asyn.py:71, in sync(loop, func, timeout, *args, **kwargs)\r\n 69 raise FSTimeoutError from return_result\r\n 70 elif isinstance(return_result, BaseException):\r\n---> 71 raise return_result\r\n 72 else:\r\n 73 return return_result\r\n\r\nFile ~/git/bigscience-datasets/env/lib/python3.9/site-packages/fsspec/asyn.py:25, in _runner(event, coro, result, timeout)\r\n 23 coro = asyncio.wait_for(coro, timeout=timeout)\r\n 24 try:\r\n---> 25 result[0] = await coro\r\n 26 except Exception as ex:\r\n 27 result[0] = ex\r\n\r\nFile ~/git/bigscience-datasets/env/lib/python3.9/site-packages/fsspec/implementations/http.py:396, in HTTPFileSystem._info(self, url, **kwargs)\r\n 393 except Exception as exc:\r\n 394 if policy == \"get\":\r\n 395 # If get failed, then raise a FileNotFoundError\r\n--> 396 raise FileNotFoundError(url) from exc\r\n 397 logger.debug(str(exc))\r\n 399 return {\"name\": url, \"size\": None, **info, \"type\": \"file\"}\r\n\r\nFileNotFoundError: https://drive.google.com/u/0/uc?id=1Cz1Un9p8Xn9IpEMMrg2kXSDt0dnjxc4z&export=download&confirm=1RJz\r\n```",
"Thanks @mariosasko.\r\n\r\nTo make this dataset streamable, we should first host the data on the Hub instead of current Google Drive. Do you know if their license allows to do so? ",
"This dataset is licensed under [cc-by-nc-4.0](https://creativecommons.org/licenses/by-nc/4.0/), so I think it should be"
] | 2022-01-26T18:07:56Z
| 2022-10-04T09:36:23Z
| 2022-10-03T09:44:47Z
|
CONTRIBUTOR
| null | 1
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3635.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3635",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3635.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3635"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3635/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3635/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4596
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4596/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4596/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4596/events
|
https://github.com/huggingface/datasets/issues/4596
| 1,288,381,735
|
I_kwDODunzps5MyyUn
| 4,596
|
Dataset Viewer issue for universal_dependencies
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16034009?v=4",
"events_url": "https://api.github.com/users/Jordy-VL/events{/privacy}",
"followers_url": "https://api.github.com/users/Jordy-VL/followers",
"following_url": "https://api.github.com/users/Jordy-VL/following{/other_user}",
"gists_url": "https://api.github.com/users/Jordy-VL/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Jordy-VL",
"id": 16034009,
"login": "Jordy-VL",
"node_id": "MDQ6VXNlcjE2MDM0MDA5",
"organizations_url": "https://api.github.com/users/Jordy-VL/orgs",
"received_events_url": "https://api.github.com/users/Jordy-VL/received_events",
"repos_url": "https://api.github.com/users/Jordy-VL/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Jordy-VL/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jordy-VL/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Jordy-VL"
}
|
[
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
] | null |
[
"Thanks, looking at it!",
"Finally fixed! We updated the dataset viewer and it fixed the issue.\r\n\r\nhttps://huggingface.co/datasets/universal_dependencies/viewer/aqz_tudet/train\r\n\r\n<img width=\"1561\" alt=\"Capture d’écran 2022-09-07 à 13 29 18\" src=\"https://user-images.githubusercontent.com/1676121/188867795-4f7dd438-d4f2-46cd-8a92-20a37fb2d6bc.png\">\r\n"
] | 2022-06-29T08:50:29Z
| 2022-09-07T11:29:28Z
| 2022-09-07T11:29:27Z
|
NONE
| null | null | null |
### Link
https://huggingface.co/datasets/universal_dependencies
### Description
invalid json response body at https://datasets-server.huggingface.co/splits?dataset=universal_dependencies reason: Unexpected token I in JSON at position 0
### Owner
_No response_
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4596/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4596/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/158
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/158/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/158/comments
|
https://api.github.com/repos/huggingface/datasets/issues/158/events
|
https://github.com/huggingface/datasets/pull/158
| 620,396,658
|
MDExOlB1bGxSZXF1ZXN0NDE5NjUyNTQy
| 158
|
add Toronto Books Corpus
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-05-18T17:54:45Z
| 2020-06-11T07:49:15Z
| 2020-05-19T07:34:56Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/158.diff",
"html_url": "https://github.com/huggingface/datasets/pull/158",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/158.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/158"
}
|
This PR adds the Toronto Books Corpus.
.
It on consider TMX and plain text files (Moses) defined in the table **Statistics and TMX/Moses Downloads** [here](http://opus.nlpl.eu/Books.php )
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/158/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/158/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3736
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3736/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3736/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3736/events
|
https://github.com/huggingface/datasets/pull/3736
| 1,140,134,483
|
PR_kwDODunzps4y7rMR
| 3,736
|
Local paths in common voice
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I just changed to `dl_manager.is_streaming` rather than an additional parameter `streaming` that has to be handled by the DatasetBuilder class - this way the streaming logic doesn't interfere with the base builder's code.\r\n\r\nI think it's better this way, but let me know if you preferred the previous way and I can revert\r\n\r\n> But on the other hand, IMHO, I think this specific solution adds complexity to handling streaming/non-streaming, and moves this complexity to the loading script and thus to the contributors/users who want to create the loading script for their canonical/community datasets (instead of keeping it hidden form the end users).\r\n\r\nI'm down to discuss this more in the future !",
"@lhoestq good idea: much cleaner this way! That way each class has its own responsibilities without mixing around..."
] | 2022-02-16T15:01:29Z
| 2022-09-21T14:58:38Z
| 2022-02-22T09:13:43Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3736.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3736",
"merged_at": "2022-02-22T09:13:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3736.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3736"
}
|
Continuation of https://github.com/huggingface/datasets/pull/3664:
- pass the `streaming` parameter to _split_generator
- update @anton-l's code to use this parameter for `common_voice`
- add a comment to explain why we use `download_and_extract` in non-streaming and `iter_archive` in streaming
Now the `common_voice` dataset has a local path back in `ds["path"]`, and this field is `None` in streaming mode.
cc @patrickvonplaten @anton-l @albertvillanova
Fix #3663.
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3736/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3736/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/8
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/8/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/8/comments
|
https://api.github.com/repos/huggingface/datasets/issues/8/events
|
https://github.com/huggingface/datasets/pull/8
| 601,783,243
|
MDExOlB1bGxSZXF1ZXN0NDA0OTg0NDUz
| 8
|
Fix issue 6: error when the citation is missing in the DatasetInfo
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jplu",
"id": 959590,
"login": "jplu",
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"repos_url": "https://api.github.com/users/jplu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jplu"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-04-17T08:04:26Z
| 2020-04-29T09:27:11Z
| 2020-04-20T13:24:12Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/8.diff",
"html_url": "https://github.com/huggingface/datasets/pull/8",
"merged_at": "2020-04-20T13:24:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/8.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/8/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/8/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/2187
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2187/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2187/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2187/events
|
https://github.com/huggingface/datasets/issues/2187
| 852,939,736
|
MDU6SXNzdWU4NTI5Mzk3MzY=
| 2,187
|
Question (potential issue?) related to datasets caching
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4",
"events_url": "https://api.github.com/users/ioana-blue/events{/privacy}",
"followers_url": "https://api.github.com/users/ioana-blue/followers",
"following_url": "https://api.github.com/users/ioana-blue/following{/other_user}",
"gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ioana-blue",
"id": 17202292,
"login": "ioana-blue",
"node_id": "MDQ6VXNlcjE3MjAyMjky",
"organizations_url": "https://api.github.com/users/ioana-blue/orgs",
"received_events_url": "https://api.github.com/users/ioana-blue/received_events",
"repos_url": "https://api.github.com/users/ioana-blue/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ioana-blue"
}
|
[
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] |
open
| false
| null |
[] | null |
[
"An educated guess: does this refer to the fact that depending on the custom column names in the dataset files (csv in this case), there is a dataset loader being created? and this dataset loader - using the \"custom data configuration\" is used among all jobs running using this particular csv files? (thinking out loud here...)\r\n\r\nIf this is the case, it may be ok for my use case (have to think about it more), still a bit surprising given that datasets caching is disabled (or so I hope) by the lines I pasted above. ",
"Hi ! Currently disabling the caching means that all the dataset transform like `map`, `filter` etc. ignore the cache: it doesn't write nor read processed cache files.\r\nHowever `load_dataset` reuses datasets that have already been prepared: it does reload prepared dataset files.\r\n\r\nIndeed from the documentation:\r\n> datasets.set_caching_enabled(boolean: bool)\r\n\r\n> When applying transforms on a dataset, the data are stored in cache files. The caching mechanism allows to reload an existing cache file if it’s already been computed.\r\n> Reloading a dataset is possible since the cache files are named using the dataset fingerprint, which is updated after each transform.\r\n> If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets. More precisely, if the caching is disabled:\r\n> - cache files are always recreated\r\n> - cache files are written to a temporary directory that is deleted when session closes\r\n> - cache files are named using a random hash instead of the dataset fingerprint - use datasets.Dataset.save_to_disk() to save a transformed dataset or it will be deleted when session closes\r\n> - caching doesn’t affect datasets.load_dataset(). If you want to regenerate a dataset from scratch you should use the download_mode parameter in datasets.load_dataset().",
"Thank you for the clarification. \r\n\r\nThis is a bit confusing. On one hand, it says that cache files are always recreated and written to a temporary directory that is removed; on the other hand the last bullet point makes me think that since the default according to the docs for `download_mode (Optional datasets.GenerateMode) – select the download/generate mode - Default to REUSE_DATASET_IF_EXISTS` => it almost sounds that it could reload prepared dataset files. Where are these files stored? I guess not in the temporary directory that is removed... \r\n\r\nI find this type of api design error-prone. When I see as a programmer `datasets.set_caching_enabled(False)` I expect no reuse of anything in the cache. ",
"It would be nice if the documentation elaborated on all the possible values for `download_mode` and/or a link to `datasets.GenerateMode`. \r\nThis info here:\r\n```\r\n \"\"\"`Enum` for how to treat pre-existing downloads and data.\r\n The default mode is `REUSE_DATASET_IF_EXISTS`, which will reuse both\r\n raw downloads and the prepared dataset if they exist.\r\n The generations modes:\r\n | | Downloads | Dataset |\r\n | -----------------------------------|-----------|---------|\r\n | `REUSE_DATASET_IF_EXISTS` (default)| Reuse | Reuse |\r\n | `REUSE_CACHE_IF_EXISTS` | Reuse | Fresh |\r\n | `FORCE_REDOWNLOAD` | Fresh | Fresh |\r\n```",
"I have another question. Assuming that I understood correctly and there is reuse of datasets files when caching is disabled (!), I'm guessing there is a directory that is created based on some information on the dataset file. I'm interested in the situation where I'm loading a (custom) dataset from local disk. What information is used to create the directory/filenames where the files are stored?\r\n\r\nI'm concerned about the following scenario: if I have a file, let's say `train.csv` at path `the_path`, run once, the dataset is prepared, some models are run, etc. Now let's say there is an issue and I recreate `train.csv` at the same path `the_path`. Is there enough information in the temporary name/hash to *not* reload the *old* prepared dataset (e.g., timestamp of the file)? Or is it going to reload the *old* prepared file? ",
"Thanks for the feedback, we'll work in improving this aspect of the documentation.\r\n\r\n> Where are these files stored? I guess not in the temporary directory that is removed...\r\n\r\nWe're using the Arrow file format to load datasets. Therefore each time you load a dataset, it is prepared as an arrow file on your disk. By default the file is located in the ~/.cache/huggingface/datasets/<dataset_name>/<config_id>/<version> directory.\r\n\r\n> What information is used to create the directory/filenames where the files are stored?\r\n\r\nThe config_id contains a hash that takes into account:\r\n- the dataset loader used and its source code (e.g. the \"csv\" loader)\r\n- the arguments passed to the loader (e.g. the csv delimiter)\r\n- metadata of the local data files if any (e.g. their timestamps)\r\n\r\n> I'm concerned about the following scenario: if I have a file, let's say train.csv at path the_path, run once, the dataset is prepared, some models are run, etc. Now let's say there is an issue and I recreate train.csv at the same path the_path. Is there enough information in the temporary name/hash to not reload the old prepared dataset (e.g., timestamp of the file)? Or is it going to reload the old prepared file?\r\n\r\nYes the timestamp of the local csv file is taken into account. If you edit your csv file, the config_id will change and loading the dataset will create a new arrow file.",
"Thank you for all your clarifications, really helpful! \r\n\r\nIf you have the bandwidth, please do revisit the api wrt cache disabling. Anywhere in the computer stack (hardware included) where you disable the cache, one assumes there is no caching that happens. ",
"That makes total sense indeed !\r\nI think we can do the change",
"I have another question about caching, this time in the case where FORCE_REDOWNLOAD is used to load the dataset, the datasets cache is one directory as defined by HF_HOME and there are multiple concurrent jobs running in a cluster using the same local dataset (i.e., same local files in the cluster). Does anything in the naming convention and/or file access/locking that you're using prevent race conditions between the concurrent jobs on the caching of the local dataset they all use?\r\n\r\nI noticed some errors (can provide more details if helpful) in load_dataset/prepare_split that lead to my question above. \r\n\r\nLet me know if my question is clear, I can elaborate more if needed @lhoestq Thank you!",
"I got another error that convinces me there is a race condition (one of the test files had zero samples at prediction time). I think it comes down to the fact that the `config_id` above (used in the naming for the cache) has no information on who's touching the data. If I have 2 concurrent jobs, both loading the same dataset and forcing redownload, they may step on each other foot/caching of the dataset. ",
"We're using a locking mechanism to prevent two processes from writing at the same time. The locking is based on the `filelock` module.\r\nAlso directories that are being written use a suffix \".incomplete\" so that reading is not possible on a dataset being written.\r\n\r\nDo you think you could provide a simple code to reproduce the race condition you experienced ?",
"I can provide details about the code I'm running (it's really-really close to some official samples from the huggingface transformers examples, I can point to the exact sample file, I kept a record of that). I can also describe in which conditions this race occurs (I'm convinced it has to do with forcing the redownloading of the dataset, I've been running hundreds of experiments before and didn't have a problem before I forced the redownload). I also can provide samples of the different stack errors I get and some details about the level of concurrency of jobs I was running. I can also try to imagine how the race manifests (I'm fairly sure that it's a combo of one job cleaning up and another job being in the middle of the run).\r\n\r\nHowever, I have to cleanup all this to make sure I'm no spilling any info I shouldn't be spilling. I'll try to do it by the end of the week, if you think all this is helpful. \r\n\r\nFor now, I have a workaround. Don't use forcing redownloading. And to be ultra careful (although I don't think this is a problem), I run a series of jobs that will prepare the datasets and I know there is no concurrency wrt the dataset. Once that's done (and I believe even having multiple jobs loading the datasets at the same time doesn't create problems, as long as REUSE_DATASET_IF_EXISTS is the policy for loading the dataset, so the filelock mechanism you're using is working in that scenario), the prepared datasets will be reused, no race possible in any way. \r\n\r\nThanks for all the details you provided, it helped me understand the underlying implementation and coming up with workarounds when I ran into issues. ",
"Hi! I have the same challenge with caching, where the **.cache** folder is required even though it isn't possible for me.\r\n\r\nI'd like to run transformers in Snowflake, using Snowpark for Python, this would mean I could provide configurable transformers in real-time for business users without having data leave an environment (for security reasons). With no need for data transfer,n the compute is faster. It is a large use case - is it possible to entirely disable caching in certain scenarios?\r\n@lhoestq ?\r\n",
"You can try to change the location of the cache folder using the `HF_CACHE_HOME` environment variable, and set a location where you have read/write access.",
"Thanks @lhoestq \r\n\r\nI wanted to do that, however, snowflake does not allow it to write at all. I'm asking around to see if they can help me out with that issue 😅"
] | 2021-04-08T00:16:28Z
| 2023-01-03T18:30:38Z
| null |
NONE
| null | null | null |
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.builder - Using custom data configuration default-888a87931cbc5877
04/07/2021 18:34:42 - WARNING - datasets.builder - Reusing dataset csv (xxxx/cache-transformers/datasets/csv/default-888a87931cbc5877/0.0.0/965b6429be0fc05f975b608ce64e1fa941cc8fb4f30629b523d2390f3c0e1a93
```
Can you please let me know what this reusing dataset csv means? I wouldn't expect any reusing with the datasets caching disabled. Thank you!
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2187/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2187/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/5494
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5494/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5494/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5494/events
|
https://github.com/huggingface/datasets/issues/5494
| 1,566,655,348
|
I_kwDODunzps5dYUN0
| 5,494
|
Update audio installation doc page
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
|
[
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] |
closed
| false
| null |
[] | null |
[
"Totally agree, the docs should be in sync with our code.\r\n\r\nIndeed to avoid confusing users, I think we should have updated the docs at the same time as this PR:\r\n- #5167",
"@albertvillanova yeah sure I should have, but I forgot back then, sorry for that 😶",
"No, @polinaeterna, nothing to be sorry about.\r\n\r\nMy comment was for all of us datasets team, as a reminder: when making a PR, but also when reviewing some other's PR, we should not forget to update the corresponding docstring and doc pages. It is something we can improve if we help each other in reminding about it... :hugs: ",
"@polinaeterna I think we can close this issue now as we no longer use `torchaudio` for decoding."
] | 2023-02-01T19:07:50Z
| 2023-03-02T16:08:17Z
| 2023-03-02T16:08:17Z
|
CONTRIBUTOR
| null | null | null |
Our [installation documentation page](https://huggingface.co/docs/datasets/installation#audio) says that one can use Datasets for mp3 only with `torchaudio<0.12`. `torchaudio>0.12` is actually supported too but requires a specific version of ffmpeg which is not easily installed on all linux versions but there is a custom ubuntu repo for it, we have insctructions in the code: https://github.com/huggingface/datasets/blob/main/src/datasets/features/audio.py#L327
So we should update the doc page. But first investigate [this issue](5488).
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5494/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5494/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3955
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3955/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3955/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3955/events
|
https://github.com/huggingface/datasets/pull/3955
| 1,172,246,647
|
PR_kwDODunzps40l5kG
| 3,955
|
Remove unncessary 'pylint disable' message in ReadMe
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/39181234?v=4",
"events_url": "https://api.github.com/users/Datta0/events{/privacy}",
"followers_url": "https://api.github.com/users/Datta0/followers",
"following_url": "https://api.github.com/users/Datta0/following{/other_user}",
"gists_url": "https://api.github.com/users/Datta0/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Datta0",
"id": 39181234,
"login": "Datta0",
"node_id": "MDQ6VXNlcjM5MTgxMjM0",
"organizations_url": "https://api.github.com/users/Datta0/orgs",
"received_events_url": "https://api.github.com/users/Datta0/received_events",
"repos_url": "https://api.github.com/users/Datta0/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Datta0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Datta0/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Datta0"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-03-17T11:16:55Z
| 2022-04-12T14:28:35Z
| 2022-04-12T14:28:35Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3955.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3955",
"merged_at": "2022-04-12T14:28:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3955.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3955"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3955/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3955/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3872
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3872/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3872/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3872/events
|
https://github.com/huggingface/datasets/issues/3872
| 1,163,853,026
|
I_kwDODunzps5FXvzi
| 3,872
|
HTTP error 504 Server Error: Gateway Time-out
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/83509215?v=4",
"events_url": "https://api.github.com/users/illiyas-sha/events{/privacy}",
"followers_url": "https://api.github.com/users/illiyas-sha/followers",
"following_url": "https://api.github.com/users/illiyas-sha/following{/other_user}",
"gists_url": "https://api.github.com/users/illiyas-sha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/illiyas-sha",
"id": 83509215,
"login": "illiyas-sha",
"node_id": "MDQ6VXNlcjgzNTA5MjE1",
"organizations_url": "https://api.github.com/users/illiyas-sha/orgs",
"received_events_url": "https://api.github.com/users/illiyas-sha/received_events",
"repos_url": "https://api.github.com/users/illiyas-sha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/illiyas-sha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/illiyas-sha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/illiyas-sha"
}
|
[] |
closed
| false
| null |
[] | null |
[
"is pushing directly with git (and git-lfs) an option for you?",
"I have installed git-lfs and doing this push with that\r\n",
"yes but is there any way you could try pushing with `git` command line directly instead of `push_to_hub`?",
"Okay. I didnt saved the dataset to my local machine. So, I processed the dataset and pushed it directly to the hub. I think I should try saving those dataset to my local machine by `save_to_disk` and then push it with git command line",
"cc @lhoestq @albertvillanova @LysandreJik because maybe I'm giving dumb advice here 😅 ",
"`push_to_hub` is the preferred way of uploading a dataset to the Hub, which can then be reloaded with `load_dataset`. Feel free to try again and see if the server is working as expected now. Maybe we can add a retry mechanism in the meantime to workaround 504 errors.\r\n\r\nRegarding `save_to_disk`, this must only be used for local serialization (because it's uncompressed and compatible with memory-mapping). If you upload a dataset saved with `save_to_disk` to the Hub, then to reload it you will have to download/clone the repository locally by yourself and use `load_from_disk`."
] | 2022-03-09T12:03:37Z
| 2022-03-15T16:19:50Z
| 2022-03-15T16:19:50Z
|
NONE
| null | null | null |
I am trying to push a large dataset(450000+) records with the help of `push_to_hub()`
While pushing, it gives some error like this.
```
Traceback (most recent call last):
File "data_split_speech.py", line 159, in <module>
data_new_2.push_to_hub("user-name/dataset-name",private=True)
File "/opt/conda/lib/python3.8/site-packages/datasets/dataset_dict.py", line 951, in push_to_hub
repo_id, split, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
File "/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3556, in _push_parquet_shards_to_hub
api.upload_file(
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1017, in upload_file
raise err
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1008, in upload_file
r.raise_for_status()
File "/opt/conda/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/user-name/dataset-name/upload/main/data/train2-00041-of-00064.parquet
```
Can anyone help me to resolve this issue.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3872/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3872/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2360
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2360/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2360/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2360/events
|
https://github.com/huggingface/datasets/issues/2360
| 891,965,964
|
MDU6SXNzdWU4OTE5NjU5NjQ=
| 2,360
|
Automatically detect datasets with compatible task schemas
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
] | null |
[] | 2021-05-14T14:23:40Z
| 2021-05-14T14:23:40Z
| null |
MEMBER
| null | null | null |
See description of #2255 for details.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2360/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2360/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3365
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3365/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3365/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3365/events
|
https://github.com/huggingface/datasets/issues/3365
| 1,069,195,887
|
I_kwDODunzps4_uqJv
| 3,365
|
Add task tags for multimodal datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"The Hub pulls these tags from [here](https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts) (allows multimodal tasks) now, so I'm closing this issue."
] | 2021-12-02T06:58:20Z
| 2023-07-25T18:21:33Z
| 2023-07-25T18:21:32Z
|
MEMBER
| null | null | null |
## **Is your feature request related to a problem? Please describe.**
Currently, task tags are either exclusively related to text or speech processing:
- https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/tasks.json
## **Describe the solution you'd like**
We should also add tasks related to:
- multimodality
- image
- video
CC: @VictorSanh @lewtun @lhoestq @merveenoyan @SBrandeis
|
{
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3365/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3365/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4486
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4486/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4486/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4486/events
|
https://github.com/huggingface/datasets/pull/4486
| 1,269,518,084
|
PR_kwDODunzps45kP88
| 4,486
|
Add CCAgT dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4",
"events_url": "https://api.github.com/users/johnnv1/events{/privacy}",
"followers_url": "https://api.github.com/users/johnnv1/followers",
"following_url": "https://api.github.com/users/johnnv1/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/johnnv1",
"id": 20444345,
"login": "johnnv1",
"node_id": "MDQ6VXNlcjIwNDQ0MzQ1",
"organizations_url": "https://api.github.com/users/johnnv1/orgs",
"received_events_url": "https://api.github.com/users/johnnv1/received_events",
"repos_url": "https://api.github.com/users/johnnv1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/johnnv1"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi! Excellent job @johnnv1! There were typos/missing words in the card, so I took the liberty to rewrite some parts to make them easier to understand. Let me know if you are ok with the changes. Also, feel free to add some info under the `Who are the annotators?` section.\r\n\r\nAdditionally, I fixed the issue with streaming and renamed the `digits` feature to `objects`.\r\n\r\n@lhoestq Are you ok with skipping the dummy data test here as it's tricky to generate it due to the splits separation logic?",
"I think I can also add instance segmentation: by exposing the segment of each instance, so it will be similar with object detection:\r\n\r\n- `instances`: a dictionary containing bounding boxes, segments, and labels of the cell objects \r\n - `bbox`: a list of bounding boxes\r\n - `segment`: a list of segments in format of `[polygon]`, where each polygon is `[x0, y0, ..., xn, yn]`\r\n - `label`: a list of integers representing the category\r\n\r\nDo you think it would be ok?",
"Don't you think it makes sense to keep the same category IDs for all approaches? \r\n\r\nNow we have:\r\n - nucleus category ID equals 0 for object detection and instance segmentation\r\n - background category ID equals 0 (on the masks) for semantic segmentation",
"I find it weird to have a dummy label in object detection just to align the mapping with semantic segmentation. Instead, let's explain in the card (under Data Fields -> annotation) what the pixel values mean (background + object detection labels)",
"Ok, I can do that in the next few days. I will create a `lapix` organization, and I will add this dataset and also #4565",
"So, I think we can close this PR? I have already moved these files there.\r\n\r\nThe link of CCAgT dataset is: https://huggingface.co/datasets/lapix/CCAgT\r\n\r\n🤗 ",
"Woohoo awesome !\r\n\r\nclosing this PR :)"
] | 2022-06-13T14:20:19Z
| 2022-07-04T14:37:03Z
| 2022-07-04T14:25:45Z
|
NONE
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4486.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4486",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4486.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4486"
}
|
As described in #4075
I could not generate the dummy data. Also, on the data repository isn't provided the split IDs, but I copy the functions that provide the correct data split. In summary, to have a better distribution, the data in this dataset should be separated based on the amount of NORs in each image.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4486/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4486/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3564
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3564/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3564/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3564/events
|
https://github.com/huggingface/datasets/pull/3564
| 1,099,214,403
|
PR_kwDODunzps4wzSOL
| 3,564
|
Add the KMWP & DKTC dataset.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42150335?v=4",
"events_url": "https://api.github.com/users/sooftware/events{/privacy}",
"followers_url": "https://api.github.com/users/sooftware/followers",
"following_url": "https://api.github.com/users/sooftware/following{/other_user}",
"gists_url": "https://api.github.com/users/sooftware/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sooftware",
"id": 42150335,
"login": "sooftware",
"node_id": "MDQ6VXNlcjQyMTUwMzM1",
"organizations_url": "https://api.github.com/users/sooftware/orgs",
"received_events_url": "https://api.github.com/users/sooftware/received_events",
"repos_url": "https://api.github.com/users/sooftware/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sooftware/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sooftware/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sooftware"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I reflect your review. cc. @lhoestq ",
"Ah sorry, I missed KMWP comment, wait.",
"I request 2 new pull requests. #3569 #3570"
] | 2022-01-11T14:14:08Z
| 2022-01-12T15:33:49Z
| 2022-01-12T15:33:28Z
|
NONE
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3564.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3564",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3564.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3564"
}
|
Add the DKTC dataset.
- https://github.com/tunib-ai/DKTC
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3564/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3564/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3305
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3305/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3305/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3305/events
|
https://github.com/huggingface/datasets/pull/3305
| 1,059,161,000
|
PR_kwDODunzps4uzZWv
| 3,305
|
asserts replaced with exception for ``fingerprint.py``, ``search.py``, ``arrow_writer.py`` and ``metric.py``
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/46553104?v=4",
"events_url": "https://api.github.com/users/Ishan-Kumar2/events{/privacy}",
"followers_url": "https://api.github.com/users/Ishan-Kumar2/followers",
"following_url": "https://api.github.com/users/Ishan-Kumar2/following{/other_user}",
"gists_url": "https://api.github.com/users/Ishan-Kumar2/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Ishan-Kumar2",
"id": 46553104,
"login": "Ishan-Kumar2",
"node_id": "MDQ6VXNlcjQ2NTUzMTA0",
"organizations_url": "https://api.github.com/users/Ishan-Kumar2/orgs",
"received_events_url": "https://api.github.com/users/Ishan-Kumar2/received_events",
"repos_url": "https://api.github.com/users/Ishan-Kumar2/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Ishan-Kumar2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ishan-Kumar2/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Ishan-Kumar2"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-11-20T14:51:23Z
| 2021-11-22T18:24:32Z
| 2021-11-22T17:08:13Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3305.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3305",
"merged_at": "2021-11-22T17:08:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3305.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3305"
}
|
Addresses #3171
Fixes exception for ``fingerprint.py``, ``search.py``, ``arrow_writer.py`` and ``metric.py`` and modified tests
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3305/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3305/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1650
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1650/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1650/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1650/events
|
https://github.com/huggingface/datasets/pull/1650
| 775,545,912
|
MDExOlB1bGxSZXF1ZXN0NTQ2MjA0MzYy
| 1,650
|
Update README.md
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15351802?v=4",
"events_url": "https://api.github.com/users/MisbahKhan789/events{/privacy}",
"followers_url": "https://api.github.com/users/MisbahKhan789/followers",
"following_url": "https://api.github.com/users/MisbahKhan789/following{/other_user}",
"gists_url": "https://api.github.com/users/MisbahKhan789/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MisbahKhan789",
"id": 15351802,
"login": "MisbahKhan789",
"node_id": "MDQ6VXNlcjE1MzUxODAy",
"organizations_url": "https://api.github.com/users/MisbahKhan789/orgs",
"received_events_url": "https://api.github.com/users/MisbahKhan789/received_events",
"repos_url": "https://api.github.com/users/MisbahKhan789/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MisbahKhan789/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MisbahKhan789/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MisbahKhan789"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-28T19:09:05Z
| 2020-12-29T10:43:14Z
| 2020-12-29T10:43:14Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1650.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1650",
"merged_at": "2020-12-29T10:43:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1650.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1650"
}
|
added dataset summary
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1650/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1650/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2925
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2925/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2925/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2925/events
|
https://github.com/huggingface/datasets/pull/2925
| 997,407,034
|
PR_kwDODunzps4rzJ9s
| 2,925
|
Add tutorial for no-code dataset upload
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
}
|
[
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] |
closed
| false
| null |
[] | null |
[
"Cool, love it ! :)\r\n\r\nFeel free to add a paragraph saying how to load the dataset:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"stevhliu/demo\")\r\n\r\n# or to separate each csv file into several splits\r\ndata_files = {\"train\": \"train.csv\", \"test\": \"test.csv\"}\r\ndataset = load_dataset(\"stevhliu/demo\", data_files=data_files)\r\nprint(dataset[\"train\"][0])\r\n```",
"Perfect, feel free to mark this PR ready for review :)\r\n\r\ncc @albertvillanova do you have any comment ? You can check the tutorial here:\r\nhttps://47389-250213286-gh.circle-artifacts.com/0/docs/_build/html/no_code_upload.html\r\n\r\nMaybe we can just add a list of supported file types:\r\n- csv\r\n- json\r\n- json lines\r\n- text\r\n- parquet",
"I just added a mention of the login for private datasets. Don't hesitate to edit or comment.\r\n\r\nOtherwise I think it's all good, feel free to merge it @stevhliu if you don't have other changes to make :)"
] | 2021-09-15T18:54:42Z
| 2021-09-27T17:51:55Z
| 2021-09-27T17:51:55Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2925.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2925",
"merged_at": "2021-09-27T17:51:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2925.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2925"
}
|
This PR is for a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, introduces the online tagging tool for creating tags, and the Dataset card template to get a head start on filling it out. The addition of this tutorial should make it easier for beginners to upload a dataset without accessing the terminal or knowing Git.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2925/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2925/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5484
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5484/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5484/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5484/events
|
https://github.com/huggingface/datasets/pull/5484
| 1,562,877,070
|
PR_kwDODunzps5I1oaq
| 5,484
|
Update docs for `nyu_depth_v2` dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36858976?v=4",
"events_url": "https://api.github.com/users/awsaf49/events{/privacy}",
"followers_url": "https://api.github.com/users/awsaf49/followers",
"following_url": "https://api.github.com/users/awsaf49/following{/other_user}",
"gists_url": "https://api.github.com/users/awsaf49/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/awsaf49",
"id": 36858976,
"login": "awsaf49",
"node_id": "MDQ6VXNlcjM2ODU4OTc2",
"organizations_url": "https://api.github.com/users/awsaf49/orgs",
"received_events_url": "https://api.github.com/users/awsaf49/received_events",
"repos_url": "https://api.github.com/users/awsaf49/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/awsaf49/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/awsaf49/subscriptions",
"type": "User",
"url": "https://api.github.com/users/awsaf49"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I think I need to create another PR on https://huggingface.co/datasets/huggingface/documentation-images/tree/main/datasets for hosting the images there?",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the update @awsaf49 !",
"> Thanks a lot for the updates!\r\n> \r\n> Just some minor things remain and the we should be good to ship this 🚀\r\n\r\n@sayakpaul I have updated the minor things. Please approve the workflows",
"I think this PR is good to go..\r\n@sayakpaul @lhoestq ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009064 / 0.011353 (-0.002289) | 0.005262 / 0.011008 (-0.005746) | 0.099608 / 0.038508 (0.061100) | 0.035015 / 0.023109 (0.011906) | 0.296501 / 0.275898 (0.020602) | 0.353619 / 0.323480 (0.030139) | 0.007903 / 0.007986 (-0.000083) | 0.004093 / 0.004328 (-0.000235) | 0.075260 / 0.004250 (0.071009) | 0.043142 / 0.037052 (0.006089) | 0.307755 / 0.258489 (0.049266) | 0.336340 / 0.293841 (0.042499) | 0.038596 / 0.128546 (-0.089950) | 0.011861 / 0.075646 (-0.063786) | 0.334226 / 0.419271 (-0.085045) | 0.051472 / 0.043533 (0.007940) | 0.298539 / 0.255139 (0.043400) | 0.316856 / 0.283200 (0.033656) | 0.108620 / 0.141683 (-0.033063) | 1.434901 / 1.452155 (-0.017254) | 1.468368 / 1.492716 (-0.024348) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208402 / 0.018006 (0.190395) | 0.445799 / 0.000490 (0.445309) | 0.003704 / 0.000200 (0.003504) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025435 / 0.037411 (-0.011976) | 0.105874 / 0.014526 (0.091348) | 0.115652 / 0.176557 (-0.060905) | 0.150872 / 0.737135 (-0.586263) | 0.121705 / 0.296338 (-0.174633) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397816 / 0.215209 (0.182607) | 3.977766 / 2.077655 (1.900111) | 1.850848 / 1.504120 (0.346728) | 1.686062 / 1.541195 (0.144867) | 1.786277 / 1.468490 (0.317787) | 0.696250 / 4.584777 (-3.888527) | 3.785255 / 3.745712 (0.039543) | 3.355013 / 5.269862 (-1.914849) | 1.818232 / 4.565676 (-2.747444) | 0.085408 / 0.424275 (-0.338867) | 0.012567 / 0.007607 (0.004960) | 0.524185 / 0.226044 (0.298140) | 5.061975 / 2.268929 (2.793047) | 2.299866 / 55.444624 (-53.144758) | 1.966709 / 6.876477 (-4.909768) | 2.018760 / 2.142072 (-0.123313) | 0.841341 / 4.805227 (-3.963886) | 0.166374 / 6.500664 (-6.334290) | 0.061854 / 0.075469 (-0.013615) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.221666 / 1.841788 (-0.620122) | 14.373194 / 8.074308 (6.298886) | 14.253614 / 10.191392 (4.062222) | 0.172979 / 0.680424 (-0.507445) | 0.029176 / 0.534201 (-0.505025) | 0.447399 / 0.579283 (-0.131884) | 0.443663 / 0.434364 (0.009299) | 0.537071 / 0.540337 (-0.003267) | 0.640539 / 1.386936 (-0.746397) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007019 / 0.011353 (-0.004334) | 0.005091 / 0.011008 (-0.005917) | 0.074588 / 0.038508 (0.036080) | 0.032391 / 0.023109 (0.009282) | 0.340548 / 0.275898 (0.064650) | 0.367159 / 0.323480 (0.043679) | 0.005594 / 0.007986 (-0.002392) | 0.004003 / 0.004328 (-0.000325) | 0.073946 / 0.004250 (0.069695) | 0.045921 / 0.037052 (0.008868) | 0.340245 / 0.258489 (0.081756) | 0.397958 / 0.293841 (0.104117) | 0.036539 / 0.128546 (-0.092007) | 0.012258 / 0.075646 (-0.063388) | 0.087406 / 0.419271 (-0.331865) | 0.049276 / 0.043533 (0.005743) | 0.345235 / 0.255139 (0.090096) | 0.361250 / 0.283200 (0.078050) | 0.100757 / 0.141683 (-0.040926) | 1.464644 / 1.452155 (0.012489) | 1.545852 / 1.492716 (0.053136) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222952 / 0.018006 (0.204945) | 0.434607 / 0.000490 (0.434117) | 0.000438 / 0.000200 (0.000238) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028834 / 0.037411 (-0.008577) | 0.107523 / 0.014526 (0.092997) | 0.122077 / 0.176557 (-0.054479) | 0.156574 / 0.737135 (-0.580561) | 0.122917 / 0.296338 (-0.173421) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417292 / 0.215209 (0.202083) | 4.165980 / 2.077655 (2.088325) | 1.996731 / 1.504120 (0.492611) | 1.802946 / 1.541195 (0.261751) | 1.878456 / 1.468490 (0.409966) | 0.711035 / 4.584777 (-3.873742) | 3.847357 / 3.745712 (0.101644) | 2.088354 / 5.269862 (-3.181508) | 1.344763 / 4.565676 (-3.220913) | 0.086356 / 0.424275 (-0.337919) | 0.012530 / 0.007607 (0.004923) | 0.511693 / 0.226044 (0.285648) | 5.126093 / 2.268929 (2.857165) | 2.490023 / 55.444624 (-52.954602) | 2.180274 / 6.876477 (-4.696202) | 2.221511 / 2.142072 (0.079438) | 0.836348 / 4.805227 (-3.968879) | 0.169554 / 6.500664 (-6.331110) | 0.064555 / 0.075469 (-0.010914) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.293466 / 1.841788 (-0.548321) | 14.785700 / 8.074308 (6.711392) | 13.858493 / 10.191392 (3.667101) | 0.161777 / 0.680424 (-0.518646) | 0.017794 / 0.534201 (-0.516407) | 0.426286 / 0.579283 (-0.152997) | 0.422517 / 0.434364 (-0.011847) | 0.530777 / 0.540337 (-0.009560) | 0.634822 / 1.386936 (-0.752114) |\n\n</details>\n</details>\n\n\n"
] | 2023-01-30T17:37:08Z
| 2023-09-29T06:43:11Z
| 2023-02-05T14:15:04Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5484.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5484",
"merged_at": "2023-02-05T14:15:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5484.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5484"
}
|
This PR will fix the issue mentioned in #5461. Here is brief overview,
## Bug:
Discrepancy between depth map of `nyu_depth_v2` dataset [here](https://huggingface.co/docs/datasets/main/en/depth_estimation) and actual depth map. Depth values somehow got **discretized/clipped** resulting in depth maps that are different from actual ones. Here is a side-by-side comparison,

## Fix:
When I first loaded the datasets from HF I noticed it was 30GB but in DenseDepth data is only 4GB with dtype=uint8. This means data from fast-depth (before loading to HF) must have high precision. So when I tried to dig deeper by directly loading depth_map from `h5py`, I found depth_map from `h5py` came with `float32`. But when the data is processed in HF with `datasets.Image()` it was directly converted to `uint8` from `float32` hence the **discretized** depth map.
https://github.com/huggingface/datasets/blob/c78559cacbb0ca6e0bc8bfc313cc0359f8c23ead/src/datasets/features/image.py#L91-L93
cc: @sayakpaul @lhoestq
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5484/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5484/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/149
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/149/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/149/comments
|
https://api.github.com/repos/huggingface/datasets/issues/149/events
|
https://github.com/huggingface/datasets/issues/149
| 619,735,739
|
MDU6SXNzdWU2MTk3MzU3Mzk=
| 149
|
[Feature request] Add Ubuntu Dialogue Corpus dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/28959268?v=4",
"events_url": "https://api.github.com/users/danth/events{/privacy}",
"followers_url": "https://api.github.com/users/danth/followers",
"following_url": "https://api.github.com/users/danth/following{/other_user}",
"gists_url": "https://api.github.com/users/danth/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/danth",
"id": 28959268,
"login": "danth",
"node_id": "MDQ6VXNlcjI4OTU5MjY4",
"organizations_url": "https://api.github.com/users/danth/orgs",
"received_events_url": "https://api.github.com/users/danth/received_events",
"repos_url": "https://api.github.com/users/danth/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/danth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danth/subscriptions",
"type": "User",
"url": "https://api.github.com/users/danth"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
closed
| false
| null |
[] | null |
[
"@AlphaMycelium the Ubuntu Dialogue Corpus [version 2]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator) is added. Note that it requires a manual download by following the download instructions in the [repos]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator).\r\nMaybe we can close this issue for now?"
] | 2020-05-17T15:42:39Z
| 2020-05-18T17:01:46Z
| 2020-05-18T17:01:46Z
|
NONE
| null | null | null |
https://github.com/rkadlec/ubuntu-ranking-dataset-creator or http://dataset.cs.mcgill.ca/ubuntu-corpus-1.0/
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/149/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/149/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/786
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/786/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/786/comments
|
https://api.github.com/repos/huggingface/datasets/issues/786/events
|
https://github.com/huggingface/datasets/issues/786
| 733,761,717
|
MDU6SXNzdWU3MzM3NjE3MTc=
| 786
|
feat(dataset): multiprocessing _generate_examples
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AmitMY",
"id": 5757359,
"login": "AmitMY",
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AmitMY"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I agree that would be cool :)\r\nRight now the only distributed dataset builder is based on Apache Beam so you can use distributed processing frameworks like Dataflow, Spark, Flink etc. to build your dataset but it's not really well suited for single-worker parallel processing afaik",
"`_generate_examples` can now be run in parallel thanks to https://github.com/huggingface/datasets/pull/5107. You can find more info [here](https://huggingface.co/docs/datasets/dataset_script#sharding)."
] | 2020-10-31T16:52:16Z
| 2023-01-16T10:59:13Z
| 2023-01-16T10:59:13Z
|
CONTRIBUTOR
| null | null | null |
forking this out of #741, this issue is only regarding multiprocessing
I'd love if there was a dataset configuration parameter `workers`, where when it is `1` it behaves as it does right now, and when its `>1` maybe `_generate_examples` can also get the `pool` and return an iterable using the pool.
In my use case, I would instead of:
```python
for datum in data:
yield self.load_datum(datum)
```
do:
```python
return pool.map(self.load_datum, data)
```
As the dataset in question, as an example, has **only** 7000 rows, and takes 10 seconds to load each row on average, it takes almost 20 hours to load the entire dataset.
If this was a larger dataset (and many such datasets exist), it would take multiple days to complete.
Using multiprocessing, for example, 40 cores, could speed it up dramatically. For this dataset, hopefully to fully load in under an hour.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/786/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/786/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3252
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3252/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3252/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3252/events
|
https://github.com/huggingface/datasets/pull/3252
| 1,051,124,749
|
PR_kwDODunzps4uagoy
| 3,252
|
Fix failing CER metric test in CI after update
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-11-11T15:57:16Z
| 2021-11-12T14:06:44Z
| 2021-11-12T14:06:43Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3252.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3252",
"merged_at": "2021-11-12T14:06:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3252.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3252"
}
|
Fixes the [failing CER metric test](https://app.circleci.com/pipelines/github/huggingface/datasets/8644/workflows/79816553-fa2f-4756-b022-d5937f00bf7b/jobs/53298) in CI by adding support for `jiwer==2.3.0`, which was released yesterday. Also, I verified that all the tests in `metrics/cer/test_cer.py` pass after the change, so the results should be the same irrespective of the `jiwer` version.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3252/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3252/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4158
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4158/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4158/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4158/events
|
https://github.com/huggingface/datasets/pull/4158
| 1,202,376,843
|
PR_kwDODunzps42ITg3
| 4,158
|
Add AUC ROC Metric
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/emibaylor",
"id": 27527747,
"login": "emibaylor",
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/emibaylor"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-12T20:53:28Z
| 2022-04-26T19:41:50Z
| 2022-04-26T19:35:22Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4158.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4158",
"merged_at": "2022-04-26T19:35:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4158.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4158"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4158/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4158/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5897
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5897/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5897/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5897/events
|
https://github.com/huggingface/datasets/pull/5897
| 1,726,135,494
|
PR_kwDODunzps5RXJaY
| 5,897
|
Fix `FixedSizeListArray` casting
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006213 / 0.011353 (-0.005140) | 0.004230 / 0.011008 (-0.006778) | 0.098014 / 0.038508 (0.059506) | 0.028659 / 0.023109 (0.005550) | 0.303272 / 0.275898 (0.027374) | 0.337186 / 0.323480 (0.013706) | 0.005126 / 0.007986 (-0.002860) | 0.003563 / 0.004328 (-0.000765) | 0.075295 / 0.004250 (0.071045) | 0.036836 / 0.037052 (-0.000216) | 0.309612 / 0.258489 (0.051123) | 0.346484 / 0.293841 (0.052643) | 0.025714 / 0.128546 (-0.102832) | 0.008562 / 0.075646 (-0.067085) | 0.323475 / 0.419271 (-0.095796) | 0.044072 / 0.043533 (0.000539) | 0.308261 / 0.255139 (0.053122) | 0.330903 / 0.283200 (0.047703) | 0.091805 / 0.141683 (-0.049878) | 1.517011 / 1.452155 (0.064856) | 1.570815 / 1.492716 (0.078099) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211265 / 0.018006 (0.193259) | 0.438860 / 0.000490 (0.438370) | 0.001127 / 0.000200 (0.000927) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023337 / 0.037411 (-0.014074) | 0.096243 / 0.014526 (0.081717) | 0.103529 / 0.176557 (-0.073028) | 0.161171 / 0.737135 (-0.575964) | 0.105904 / 0.296338 (-0.190435) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417042 / 0.215209 (0.201833) | 4.155067 / 2.077655 (2.077412) | 1.879657 / 1.504120 (0.375537) | 1.669341 / 1.541195 (0.128146) | 1.717623 / 1.468490 (0.249133) | 0.556246 / 4.584777 (-4.028531) | 3.484535 / 3.745712 (-0.261177) | 1.728845 / 5.269862 (-3.541017) | 0.997477 / 4.565676 (-3.568199) | 0.068355 / 0.424275 (-0.355920) | 0.012445 / 0.007607 (0.004837) | 0.519023 / 0.226044 (0.292978) | 5.173506 / 2.268929 (2.904577) | 2.332435 / 55.444624 (-53.112190) | 1.986348 / 6.876477 (-4.890129) | 2.076885 / 2.142072 (-0.065187) | 0.656738 / 4.805227 (-4.148489) | 0.135308 / 6.500664 (-6.365356) | 0.065486 / 0.075469 (-0.009984) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.208874 / 1.841788 (-0.632914) | 13.994200 / 8.074308 (5.919892) | 14.160978 / 10.191392 (3.969586) | 0.146009 / 0.680424 (-0.534415) | 0.016573 / 0.534201 (-0.517628) | 0.356082 / 0.579283 (-0.223202) | 0.387766 / 0.434364 (-0.046598) | 0.419130 / 0.540337 (-0.121208) | 0.508634 / 1.386936 (-0.878302) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006238 / 0.011353 (-0.005115) | 0.004221 / 0.011008 (-0.006788) | 0.075155 / 0.038508 (0.036646) | 0.028491 / 0.023109 (0.005382) | 0.355606 / 0.275898 (0.079708) | 0.388986 / 0.323480 (0.065506) | 0.005941 / 0.007986 (-0.002044) | 0.003510 / 0.004328 (-0.000819) | 0.074905 / 0.004250 (0.070655) | 0.039111 / 0.037052 (0.002059) | 0.358492 / 0.258489 (0.100003) | 0.398763 / 0.293841 (0.104922) | 0.025535 / 0.128546 (-0.103012) | 0.008580 / 0.075646 (-0.067067) | 0.080461 / 0.419271 (-0.338811) | 0.041381 / 0.043533 (-0.002152) | 0.355498 / 0.255139 (0.100359) | 0.379163 / 0.283200 (0.095963) | 0.096450 / 0.141683 (-0.045233) | 1.503248 / 1.452155 (0.051093) | 1.595616 / 1.492716 (0.102900) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238065 / 0.018006 (0.220058) | 0.422800 / 0.000490 (0.422311) | 0.002274 / 0.000200 (0.002074) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025746 / 0.037411 (-0.011665) | 0.103319 / 0.014526 (0.088793) | 0.112155 / 0.176557 (-0.064401) | 0.163034 / 0.737135 (-0.574101) | 0.113377 / 0.296338 (-0.182962) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440522 / 0.215209 (0.225313) | 4.398123 / 2.077655 (2.320468) | 2.143538 / 1.504120 (0.639418) | 1.946084 / 1.541195 (0.404890) | 1.996556 / 1.468490 (0.528066) | 0.550108 / 4.584777 (-4.034669) | 3.455774 / 3.745712 (-0.289938) | 2.862474 / 5.269862 (-2.407387) | 1.213446 / 4.565676 (-3.352230) | 0.067987 / 0.424275 (-0.356288) | 0.012413 / 0.007607 (0.004806) | 0.543990 / 0.226044 (0.317945) | 5.454807 / 2.268929 (3.185879) | 2.669195 / 55.444624 (-52.775429) | 2.332948 / 6.876477 (-4.543528) | 2.383870 / 2.142072 (0.241797) | 0.652017 / 4.805227 (-4.153210) | 0.135508 / 6.500664 (-6.365156) | 0.068238 / 0.075469 (-0.007231) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.322669 / 1.841788 (-0.519118) | 14.368136 / 8.074308 (6.293828) | 14.167431 / 10.191392 (3.976039) | 0.159371 / 0.680424 (-0.521052) | 0.016638 / 0.534201 (-0.517563) | 0.357106 / 0.579283 (-0.222177) | 0.392491 / 0.434364 (-0.041873) | 0.419458 / 0.540337 (-0.120880) | 0.504662 / 1.386936 (-0.882274) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006296 / 0.011353 (-0.005057) | 0.004185 / 0.011008 (-0.006823) | 0.096170 / 0.038508 (0.057662) | 0.029212 / 0.023109 (0.006102) | 0.315356 / 0.275898 (0.039458) | 0.335214 / 0.323480 (0.011734) | 0.005108 / 0.007986 (-0.002877) | 0.003634 / 0.004328 (-0.000694) | 0.074186 / 0.004250 (0.069936) | 0.038716 / 0.037052 (0.001663) | 0.311041 / 0.258489 (0.052551) | 0.341202 / 0.293841 (0.047361) | 0.025584 / 0.128546 (-0.102962) | 0.008499 / 0.075646 (-0.067148) | 0.318660 / 0.419271 (-0.100611) | 0.043745 / 0.043533 (0.000212) | 0.314824 / 0.255139 (0.059685) | 0.328117 / 0.283200 (0.044917) | 0.093425 / 0.141683 (-0.048258) | 1.478732 / 1.452155 (0.026578) | 1.531743 / 1.492716 (0.039027) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203484 / 0.018006 (0.185478) | 0.416131 / 0.000490 (0.415641) | 0.007352 / 0.000200 (0.007152) | 0.000211 / 0.000054 (0.000156) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022908 / 0.037411 (-0.014503) | 0.098641 / 0.014526 (0.084115) | 0.103426 / 0.176557 (-0.073131) | 0.161658 / 0.737135 (-0.575477) | 0.106506 / 0.296338 (-0.189832) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430781 / 0.215209 (0.215572) | 4.315677 / 2.077655 (2.238022) | 2.022302 / 1.504120 (0.518182) | 1.832043 / 1.541195 (0.290849) | 1.789302 / 1.468490 (0.320812) | 0.560484 / 4.584777 (-4.024293) | 3.448204 / 3.745712 (-0.297508) | 1.725016 / 5.269862 (-3.544846) | 1.002649 / 4.565676 (-3.563027) | 0.068480 / 0.424275 (-0.355795) | 0.012617 / 0.007607 (0.005010) | 0.532291 / 0.226044 (0.306246) | 5.319352 / 2.268929 (3.050423) | 2.520730 / 55.444624 (-52.923894) | 2.213881 / 6.876477 (-4.662596) | 2.352477 / 2.142072 (0.210404) | 0.662516 / 4.805227 (-4.142711) | 0.136481 / 6.500664 (-6.364183) | 0.066597 / 0.075469 (-0.008872) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.224537 / 1.841788 (-0.617251) | 13.849920 / 8.074308 (5.775612) | 14.026358 / 10.191392 (3.834966) | 0.131018 / 0.680424 (-0.549405) | 0.016756 / 0.534201 (-0.517445) | 0.358091 / 0.579283 (-0.221192) | 0.397709 / 0.434364 (-0.036655) | 0.450024 / 0.540337 (-0.090314) | 0.542609 / 1.386936 (-0.844327) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006179 / 0.011353 (-0.005174) | 0.004145 / 0.011008 (-0.006863) | 0.077482 / 0.038508 (0.038974) | 0.028005 / 0.023109 (0.004896) | 0.400010 / 0.275898 (0.124112) | 0.408206 / 0.323480 (0.084726) | 0.005049 / 0.007986 (-0.002937) | 0.003608 / 0.004328 (-0.000721) | 0.076841 / 0.004250 (0.072590) | 0.036714 / 0.037052 (-0.000338) | 0.406020 / 0.258489 (0.147531) | 0.412392 / 0.293841 (0.118551) | 0.025626 / 0.128546 (-0.102920) | 0.008560 / 0.075646 (-0.067087) | 0.084088 / 0.419271 (-0.335183) | 0.039707 / 0.043533 (-0.003826) | 0.396909 / 0.255139 (0.141770) | 0.403623 / 0.283200 (0.120424) | 0.095137 / 0.141683 (-0.046546) | 1.515670 / 1.452155 (0.063515) | 1.568379 / 1.492716 (0.075662) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.181802 / 0.018006 (0.163795) | 0.408778 / 0.000490 (0.408289) | 0.000393 / 0.000200 (0.000193) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025940 / 0.037411 (-0.011471) | 0.099992 / 0.014526 (0.085466) | 0.106280 / 0.176557 (-0.070276) | 0.161729 / 0.737135 (-0.575406) | 0.108625 / 0.296338 (-0.187713) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.459802 / 0.215209 (0.244593) | 4.603002 / 2.077655 (2.525347) | 2.406851 / 1.504120 (0.902732) | 2.265422 / 1.541195 (0.724227) | 2.306305 / 1.468490 (0.837815) | 0.553903 / 4.584777 (-4.030874) | 3.482052 / 3.745712 (-0.263660) | 2.969855 / 5.269862 (-2.300007) | 1.309285 / 4.565676 (-3.256391) | 0.068130 / 0.424275 (-0.356145) | 0.012189 / 0.007607 (0.004582) | 0.571299 / 0.226044 (0.345254) | 5.711420 / 2.268929 (3.442492) | 2.716748 / 55.444624 (-52.727876) | 2.369869 / 6.876477 (-4.506608) | 2.544240 / 2.142072 (0.402167) | 0.659955 / 4.805227 (-4.145272) | 0.136684 / 6.500664 (-6.363980) | 0.068962 / 0.075469 (-0.006507) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.297659 / 1.841788 (-0.544129) | 14.012758 / 8.074308 (5.938449) | 14.324644 / 10.191392 (4.133252) | 0.144894 / 0.680424 (-0.535530) | 0.016751 / 0.534201 (-0.517450) | 0.361547 / 0.579283 (-0.217736) | 0.396595 / 0.434364 (-0.037769) | 0.422375 / 0.540337 (-0.117962) | 0.508209 / 1.386936 (-0.878727) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006303 / 0.011353 (-0.005050) | 0.004043 / 0.011008 (-0.006965) | 0.096239 / 0.038508 (0.057731) | 0.029608 / 0.023109 (0.006498) | 0.321058 / 0.275898 (0.045160) | 0.367066 / 0.323480 (0.043587) | 0.005236 / 0.007986 (-0.002749) | 0.003342 / 0.004328 (-0.000987) | 0.074407 / 0.004250 (0.070157) | 0.038810 / 0.037052 (0.001757) | 0.332597 / 0.258489 (0.074108) | 0.363562 / 0.293841 (0.069721) | 0.025460 / 0.128546 (-0.103086) | 0.008426 / 0.075646 (-0.067221) | 0.316998 / 0.419271 (-0.102273) | 0.043621 / 0.043533 (0.000088) | 0.338043 / 0.255139 (0.082904) | 0.366441 / 0.283200 (0.083241) | 0.092061 / 0.141683 (-0.049622) | 1.461531 / 1.452155 (0.009376) | 1.538047 / 1.492716 (0.045331) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206796 / 0.018006 (0.188790) | 0.517959 / 0.000490 (0.517469) | 0.002745 / 0.000200 (0.002545) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022902 / 0.037411 (-0.014510) | 0.097901 / 0.014526 (0.083375) | 0.103664 / 0.176557 (-0.072893) | 0.163516 / 0.737135 (-0.573619) | 0.108561 / 0.296338 (-0.187778) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418964 / 0.215209 (0.203755) | 4.159113 / 2.077655 (2.081458) | 1.843946 / 1.504120 (0.339827) | 1.641083 / 1.541195 (0.099888) | 1.686848 / 1.468490 (0.218358) | 0.554583 / 4.584777 (-4.030194) | 3.409862 / 3.745712 (-0.335850) | 2.647904 / 5.269862 (-2.621958) | 1.355424 / 4.565676 (-3.210253) | 0.068229 / 0.424275 (-0.356046) | 0.012217 / 0.007607 (0.004610) | 0.515895 / 0.226044 (0.289851) | 5.144920 / 2.268929 (2.875991) | 2.298046 / 55.444624 (-53.146579) | 1.964735 / 6.876477 (-4.911741) | 2.075580 / 2.142072 (-0.066492) | 0.657104 / 4.805227 (-4.148123) | 0.134759 / 6.500664 (-6.365905) | 0.067545 / 0.075469 (-0.007924) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.233075 / 1.841788 (-0.608713) | 13.896762 / 8.074308 (5.822454) | 14.055143 / 10.191392 (3.863751) | 0.145507 / 0.680424 (-0.534917) | 0.016702 / 0.534201 (-0.517499) | 0.365157 / 0.579283 (-0.214126) | 0.385842 / 0.434364 (-0.048522) | 0.459993 / 0.540337 (-0.080344) | 0.547115 / 1.386936 (-0.839821) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006174 / 0.011353 (-0.005179) | 0.004191 / 0.011008 (-0.006817) | 0.078311 / 0.038508 (0.039803) | 0.028038 / 0.023109 (0.004928) | 0.360056 / 0.275898 (0.084158) | 0.398081 / 0.323480 (0.074602) | 0.005069 / 0.007986 (-0.002916) | 0.003464 / 0.004328 (-0.000864) | 0.077858 / 0.004250 (0.073608) | 0.039420 / 0.037052 (0.002367) | 0.361743 / 0.258489 (0.103254) | 0.404829 / 0.293841 (0.110988) | 0.025604 / 0.128546 (-0.102943) | 0.008573 / 0.075646 (-0.067074) | 0.084944 / 0.419271 (-0.334328) | 0.042652 / 0.043533 (-0.000881) | 0.368549 / 0.255139 (0.113410) | 0.385682 / 0.283200 (0.102482) | 0.099085 / 0.141683 (-0.042598) | 1.495815 / 1.452155 (0.043661) | 1.548168 / 1.492716 (0.055452) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193737 / 0.018006 (0.175730) | 0.421871 / 0.000490 (0.421381) | 0.002306 / 0.000200 (0.002106) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025928 / 0.037411 (-0.011483) | 0.103410 / 0.014526 (0.088885) | 0.107931 / 0.176557 (-0.068626) | 0.157127 / 0.737135 (-0.580008) | 0.111892 / 0.296338 (-0.184446) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.477562 / 0.215209 (0.262353) | 4.772711 / 2.077655 (2.695056) | 2.458725 / 1.504120 (0.954605) | 2.269871 / 1.541195 (0.728676) | 2.365502 / 1.468490 (0.897012) | 0.556182 / 4.584777 (-4.028595) | 3.408016 / 3.745712 (-0.337697) | 1.730639 / 5.269862 (-3.539222) | 1.000973 / 4.565676 (-3.564704) | 0.068293 / 0.424275 (-0.355982) | 0.012119 / 0.007607 (0.004512) | 0.581281 / 0.226044 (0.355236) | 5.811930 / 2.268929 (3.543001) | 2.890337 / 55.444624 (-52.554288) | 2.592156 / 6.876477 (-4.284321) | 2.687764 / 2.142072 (0.545691) | 0.664282 / 4.805227 (-4.140946) | 0.136029 / 6.500664 (-6.364635) | 0.067493 / 0.075469 (-0.007976) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.330723 / 1.841788 (-0.511064) | 14.379172 / 8.074308 (6.304864) | 14.153286 / 10.191392 (3.961894) | 0.142942 / 0.680424 (-0.537482) | 0.016698 / 0.534201 (-0.517503) | 0.361044 / 0.579283 (-0.218239) | 0.393174 / 0.434364 (-0.041190) | 0.423107 / 0.540337 (-0.117231) | 0.514299 / 1.386936 (-0.872637) |\n\n</details>\n</details>\n\n\n"
] | 2023-05-25T16:26:33Z
| 2023-05-26T12:22:04Z
| 2023-05-26T11:57:16Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5897.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5897",
"merged_at": "2023-05-26T11:57:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5897.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5897"
}
|
Fix cast on sliced `FixedSizeListArray`s.
Fix #5866
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5897/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5897/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4276
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4276/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4276/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4276/events
|
https://github.com/huggingface/datasets/issues/4276
| 1,224,949,252
|
I_kwDODunzps5JAz4E
| 4,276
|
OpenBookQA has missing and inconsistent field names
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4",
"events_url": "https://api.github.com/users/vblagoje/events{/privacy}",
"followers_url": "https://api.github.com/users/vblagoje/followers",
"following_url": "https://api.github.com/users/vblagoje/following{/other_user}",
"gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vblagoje",
"id": 458335,
"login": "vblagoje",
"node_id": "MDQ6VXNlcjQ1ODMzNQ==",
"organizations_url": "https://api.github.com/users/vblagoje/orgs",
"received_events_url": "https://api.github.com/users/vblagoje/received_events",
"repos_url": "https://api.github.com/users/vblagoje/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vblagoje"
}
|
[
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"Thanks for reporting, @vblagoje.\r\n\r\nIndeed, I noticed some of these issues while reviewing this PR:\r\n- #4259 \r\n\r\nThis is in my TODO list. ",
"Ok, awesome @albertvillanova How about #4275 ?",
"On the other hand, I am not sure if we should always preserve the original nested structure. I think we should also consider other factors as convenience or consistency.\r\n\r\nFor example, other datasets also flatten \"question.stem\" into \"question\":\r\n- ai2_arc:\r\n ```python\r\n question = data[\"question\"][\"stem\"]\r\n choices = data[\"question\"][\"choices\"]\r\n text_choices = [choice[\"text\"] for choice in choices]\r\n label_choices = [choice[\"label\"] for choice in choices]\r\n yield id_, {\r\n \"id\": id_,\r\n \"answerKey\": answerkey,\r\n \"question\": question,\r\n \"choices\": {\"text\": text_choices, \"label\": label_choices},\r\n }\r\n ```\r\n- commonsense_qa:\r\n ```python\r\n question = data[\"question\"]\r\n stem = question[\"stem\"]\r\n yield id_, {\r\n \"answerKey\": answerkey,\r\n \"question\": stem,\r\n \"choices\": {\"label\": labels, \"text\": texts},\r\n }\r\n ```\r\n- cos_e:\r\n ```python\r\n \"question\": cqa[\"question\"][\"stem\"],\r\n ```\r\n- qasc\r\n- quartz\r\n- wiqa\r\n\r\nExceptions:\r\n- exams\r\n\r\nI think we should agree on a CONVENIENT format for QA and use always CONSISTENTLY the same.",
"@albertvillanova I agree that we should be consistent. In the last month, I have come across tons of code that deals with OpenBookQA and CommonSenseQA and all of that code relies on the original data format structure. We can't expect users to adopt HF Datasets if we arbitrarily change the structure of the format just because we think something makes more sense. I am in that position now (downloading original data rather than using HF Datasets) and undoubtedly it hinders HF Datasets' widespread use and adoption. Missing fields like in the case of #4275 is definitely bad and not even up for a discussion IMHO! cc @lhoestq ",
"I'm opening a PR that adds the missing fields.\r\n\r\nLet's agree on the feature structure: @lhoestq @mariosasko @polinaeterna ",
"IMO we should always try to preserve the original structure unless there is a good reason not to (and I don't see one in this case).",
"I agree with @mariosasko . The transition to the original format could be done in one PR for the next minor release, clearly documenting all dataset changes just as @albertvillanova outlined them above and perhaps even providing a per dataset util method to convert the new valid format to the old for backward compatibility. Users who relied on the old format will update their code with either the util method for a quick fix or slightly more elaborate for the new. ",
"I don't have a strong opinion on this, besides the fact that whatever decision we agree on, should be applied to all datasets.\r\n\r\nThere is always the tension between:\r\n- preserving each dataset original structure (which has the advantage of not forcing users to learn other structure for the same dataset),\r\n- and on the other hand performing some kind of standardization/harmonization depending on the task (this has the advantage that once learnt, the same structure applies to all datasets; this has been done for e.g. POS tagging: all datasets have been adapted to a certain \"standard\" structure).\r\n - Another advantage: datasets can easily be interchanged (or joined) to be used by the same model\r\n\r\nRecently, in the BigScience BioMedical hackathon, they adopted a different approach:\r\n- they implement a \"source\" config, respecting the original structure as much as possible\r\n- they implement additional config for each task, with a \"standard\" nested structure per task, which is most useful for users.",
"@albertvillanova, thanks for the detailed answer and the new perspectives. I understand the friction for the best design approach much better now. Ultimately, it is essential to include all the missing fields and the correct data first. Whatever approach is determined to be optimal is important but not as crucial once all the data is there, and users can create lambda functions to create whatever structure serves them best. ",
"Datasets are not tracked in this repository anymore. I think we must move this thread to the [discussions tab of the dataset](https://huggingface.co/datasets/openbookqa/discussions)",
"Indeed @osbm thanks. I'm closing this issue if it's fine for you all then"
] | 2022-05-04T05:51:52Z
| 2022-10-11T17:11:53Z
| 2022-10-05T13:50:03Z
|
CONTRIBUTOR
| null | null | null |
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanScore'],
- 'clarity': row['clarity'],
- 'turkIdAnonymized': row['turkIdAnonymized']
3. Ensure the structure and every data item in the original OpenBookQA matches our OpenBookQA version.
## Expected results
The structure and every data item in the original OpenBookQA matches our OpenBookQA version.
## Actual results
TBD
## Environment info
- `datasets` version: 2.1.0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.13
- PyArrow version: 7.0.0
- Pandas version: 1.4.2
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4276/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4276/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1555
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1555/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1555/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1555/events
|
https://github.com/huggingface/datasets/pull/1555
| 765,681,607
|
MDExOlB1bGxSZXF1ZXN0NTM5MDMzMzIw
| 1,555
|
Added Opus TedTalks
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22396042?v=4",
"events_url": "https://api.github.com/users/rkc007/events{/privacy}",
"followers_url": "https://api.github.com/users/rkc007/followers",
"following_url": "https://api.github.com/users/rkc007/following{/other_user}",
"gists_url": "https://api.github.com/users/rkc007/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rkc007",
"id": 22396042,
"login": "rkc007",
"node_id": "MDQ6VXNlcjIyMzk2MDQy",
"organizations_url": "https://api.github.com/users/rkc007/orgs",
"received_events_url": "https://api.github.com/users/rkc007/received_events",
"repos_url": "https://api.github.com/users/rkc007/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rkc007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rkc007/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rkc007"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@lhoestq I saw some common changes you made on the other PR's (Similar Opus Datasets). I fixed those changes here. Can you please review it once ? \r\nThanks.",
"merging since the CI is fixed on master"
] | 2020-12-13T22:29:33Z
| 2020-12-18T09:44:43Z
| 2020-12-18T09:44:43Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1555.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1555",
"merged_at": "2020-12-18T09:44:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1555.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1555"
}
|
Dataset : http://opus.nlpl.eu/TedTalks.php
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1555/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1555/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/286
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/286/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/286/comments
|
https://api.github.com/repos/huggingface/datasets/issues/286/events
|
https://github.com/huggingface/datasets/pull/286
| 641,585,758
|
MDExOlB1bGxSZXF1ZXN0NDM2NzkzMjI4
| 286
|
Add ANLI dataset.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11016329?v=4",
"events_url": "https://api.github.com/users/easonnie/events{/privacy}",
"followers_url": "https://api.github.com/users/easonnie/followers",
"following_url": "https://api.github.com/users/easonnie/following{/other_user}",
"gists_url": "https://api.github.com/users/easonnie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/easonnie",
"id": 11016329,
"login": "easonnie",
"node_id": "MDQ6VXNlcjExMDE2MzI5",
"organizations_url": "https://api.github.com/users/easonnie/orgs",
"received_events_url": "https://api.github.com/users/easonnie/received_events",
"repos_url": "https://api.github.com/users/easonnie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/easonnie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/easonnie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/easonnie"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Awesome!! Thanks @easonnie.\r\nLet's wait for additional reviews maybe from @lhoestq @patrickvonplaten @jplu"
] | 2020-06-18T22:27:30Z
| 2020-06-22T12:23:27Z
| 2020-06-22T12:23:27Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/286.diff",
"html_url": "https://github.com/huggingface/datasets/pull/286",
"merged_at": "2020-06-22T12:23:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/286.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/286"
}
|
I completed all the steps in https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset and push the code for ANLI. Please let me know if there are any errors.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/286/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/286/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1102
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1102/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1102/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1102/events
|
https://github.com/huggingface/datasets/issues/1102
| 757,016,515
|
MDU6SXNzdWU3NTcwMTY1MTU=
| 1,102
|
Add retries to download manager
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abhishekkrthakur",
"id": 1183441,
"login": "abhishekkrthakur",
"node_id": "MDQ6VXNlcjExODM0NDE=",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abhishekkrthakur"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SBrandeis",
"id": 33657802,
"login": "SBrandeis",
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SBrandeis"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SBrandeis",
"id": 33657802,
"login": "SBrandeis",
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SBrandeis"
}
] | null |
[] | 2020-12-04T11:08:11Z
| 2020-12-22T15:34:06Z
| 2020-12-22T15:34:06Z
|
MEMBER
| null | null | null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1102/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1102/timeline
| null |
completed
| false
|
|
https://api.github.com/repos/huggingface/datasets/issues/2516
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2516/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2516/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2516/events
|
https://github.com/huggingface/datasets/issues/2516
| 924,597,470
|
MDU6SXNzdWU5MjQ1OTc0NzA=
| 2,516
|
datasets.map pickle issue resulting in invalid mapping function
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user}",
"gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/david-waterworth",
"id": 5028974,
"login": "david-waterworth",
"node_id": "MDQ6VXNlcjUwMjg5NzQ=",
"organizations_url": "https://api.github.com/users/david-waterworth/orgs",
"received_events_url": "https://api.github.com/users/david-waterworth/received_events",
"repos_url": "https://api.github.com/users/david-waterworth/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions",
"type": "User",
"url": "https://api.github.com/users/david-waterworth"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] | null |
[
"Hi ! `map` calls `__getstate__` using `dill` to hash your map function. This is used by the caching mechanism to recover previously computed results. That's why you don't see any `__setstate__` call.\r\n\r\nWhy do you change an attribute of your tokenizer when `__getstate__` is called ?",
"@lhoestq because if I try to pickle my custom tokenizer (it contains a pure python pretokenization step in an otherwise rust backed tokenizer) I get\r\n\r\n> Exception: Error while attempting to pickle Tokenizer: Custom PreTokenizer cannot be serialized\r\n\r\nSo I remove the Custom PreTokenizer in `__getstate__` and then restore it in `__setstate__` (since it doesn't contain any state). This is what my `__getstate__` / `__setstate__` looks like:\r\n\r\n def __getstate__(self):\r\n \"\"\"\r\n Removes pre_tokenizer since it cannot be pickled\r\n \"\"\"\r\n logger.debug(\"Copy state dict\")\r\n out = self.__dict__.copy()\r\n logger.debug(\"Detaching pre_tokenizer\")\r\n out['_tokenizer'].pre_tokenizer = tokenizers.pre_tokenizers.Sequence([]) \r\n return out\r\n\r\n def __setstate__(self, d):\r\n \"\"\"\r\n Reinstates pre_tokenizer\r\n \"\"\"\r\n logger.debug(\"Reattaching pre_tokenizer\")\r\n self.__dict__ = d\r\n self.backend_tokenizer.pre_tokenizer = self._pre_tokenizer()\r\n\r\nIf this is the case can you think of another way of avoiding my issue?",
"Actually, maybe I need to deep copy `self.__dict__`? That way `self` isn't modified. That was my intention and I thought it was working - I'll double-check after the weekend.",
"Doing a deep copy results in the warning:\r\n\r\n> 06/20/2021 16:02:15 - WARNING - datasets.fingerprint - Parameter 'function'=<function tokenize_function at 0x7f1e95f05d40> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.\r\n\r\n\r\n```\r\ndef __getstate__(self):\r\n \"\"\"\r\n Removes pre_tokenizer since it cannot be pickled\r\n \"\"\"\r\n logger.debug(\"Copy state dict\")\r\n out = copy.deepcopy(self.__dict__)\r\n logger.debug(\"Detaching pre_tokenizer\")\r\n out['_tokenizer'].pre_tokenizer = tokenizers.pre_tokenizers.Sequence([]) \r\n return out\r\n```",
"Looks like there is still an object that is not pickable in your `tokenize_function` function.\r\n\r\nYou can test if an object can be pickled and hashed by using \r\n```python\r\nfrom datasets.fingerprint import Hasher\r\n\r\nHasher.hash(my_object)\r\n```\r\n\r\nUnder the hood it pickles the object to compute its hash, so it calls `__getstate__` when applicable.",
"I figured it out, the problem is deep copy itself uses pickle (unless you implement `__deepcopy__`). So when I changed `__getstate__` it started throwing an error.\r\n\r\nI'm sure there's a better way of doing this, but in order to return the `__dict__` without the non-pikelable pre-tokeniser and without modifying self I removed the pre-tokenizers, did a deep copy and then re-generated it.\r\n\r\nIt does work - although I noticed Hasher doesn't call `__hash__` if the object being hashed implements it which I feel it should? If it did I could return a hash of the tokenizers.json file instead.\r\n\r\n```\r\n def __getstate__(self):\r\n \"\"\"\r\n Removes pre_tokenizer since it cannot be pickled\r\n \"\"\"\r\n logger.debug(\"Copy state dict\")\r\n self.backend_tokenizer.pre_tokenizer = tokenizers.pre_tokenizers.Sequence([])\r\n out = copy.deepcopy(self.__dict__) #self.__dict__.copy()\r\n self.backend_tokenizer.pre_tokenizer = self._pre_tokenizer()\r\n\r\n return out\r\n```\r\n",
"I'm glad you figured something out :)\r\n\r\nRegarding hashing: we're not using hashing for the same purpose as the python `__hash__` purpose (which is in general for dictionary lookups). For example it is allowed for python hashing to not return the same hash across sessions, while our hashing must return the same hashes across sessions for the caching to work properly."
] | 2021-06-18T06:47:26Z
| 2021-06-23T13:47:49Z
| null |
NONE
| null | null | null |
I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it's correct but it fails when I use it inside a function which is mapped to a dataset, i.e. in the manner of run_mlm.py and other huggingface scripts.
The following reproduces the issue - most likely I'm missing something
A simulated tokeniser which can be pickled
```
class CustomTokenizer:
def __init__(self):
self.state = "init"
def __getstate__(self):
print("__getstate__ called")
out = self.__dict__.copy()
self.state = "pickled"
return out
def __setstate__(self, d):
print("__setstate__ called")
self.__dict__ = d
self.state = "restored"
tokenizer = CustomTokenizer()
```
Test that it actually works - prints "__getstate__ called" and "__setstate__ called"
```
import pickle
serialized = pickle.dumps(tokenizer)
restored = pickle.loads(serialized)
assert restored.state == "restored"
```
Simulate a function that tokenises examples, when dataset.map is called, this function
```
def tokenize_function(examples):
assert tokenizer.state == "restored" # this shouldn't fail but it does
output = tokenizer(examples) # this will fail as tokenizer isn't really a tokenizer
return output
```
Use map to simulate tokenization
```
import glob
from datasets import load_dataset
assert tokenizer.state == "restored"
train_files = glob.glob('train*.csv')
validation_files = glob.glob('validation*.csv')
datasets = load_dataset("csv", data_files=dict(train=train_files, validation=validation_files))
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
)
```
What's happening is I can see that __getstate__ is called but not __setstate__, so the state of `tokenize_function` is invalid at the point that it's actually executed. This doesn't matter as far as I can see for the standard tokenizers as they don't use __getstate__ / __setstate__. I'm not sure if there's another hook I'm supposed to implement as well?
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-22-a2aef4f74aaa> in <module>
8 tokenized_datasets = datasets.map(
9 tokenize_function,
---> 10 batched=True,
11 )
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc)
487 desc=desc,
488 )
--> 489 for k, dataset in self.items()
490 }
491 )
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0)
487 desc=desc,
488 )
--> 489 for k, dataset in self.items()
490 }
491 )
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1633 fn_kwargs=fn_kwargs,
1634 new_fingerprint=new_fingerprint,
-> 1635 desc=desc,
1636 )
1637 else:
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
184 }
185 # apply actual function
--> 186 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
187 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
188 # re-apply format to the output
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
395 # Call actual function
396
--> 397 out = func(self, *args, **kwargs)
398
399 # Update fingerprint of in-place transforms + update in-place history of transforms
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, desc)
1961 indices,
1962 check_same_num_examples=len(input_dataset.list_indexes()) > 0,
-> 1963 offset=offset,
1964 )
1965 except NumExamplesMismatch:
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
1853 effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset
1854 processed_inputs = (
-> 1855 function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
1856 )
1857 if update_data is None:
<ipython-input-21-8ee4a8ba5b1b> in tokenize_function(examples)
1 def tokenize_function(examples):
----> 2 assert tokenizer.state == "restored"
3 tokenizer(examples)
4 return examples
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2516/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2516/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3544
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3544/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3544/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3544/events
|
https://github.com/huggingface/datasets/issues/3544
| 1,095,784,681
|
I_kwDODunzps5BUFjp
| 3,544
|
Ability to split a dataset in multiple files.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Dref360",
"id": 8976546,
"login": "Dref360",
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"repos_url": "https://api.github.com/users/Dref360/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Dref360"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2022-01-06T23:02:25Z
| 2022-01-06T23:02:25Z
| null |
CONTRIBUTOR
| null | null | null |
Hello,
**Is your feature request related to a problem? Please describe.**
My use case is that I have one writer that adds columns and multiple workers reading the same `Dataset`. Each worker should have access to columns added by the writer when they reload the dataset.
I understand that we shouldn't overwrite an arrow file as this could cause Segfault and so on. Before 1.16, I was able to overwrite the dataset and that would work most of the time with some retries.
**Describe the solution you'd like**
I was thinking that if we could append `Dataset._data_files`, when the workers reload the Dataset, they would get the new columns.
**Describe alternatives you've considered**
I currently need to
1. Save multiple "versions" of the dataset and load the latest.
2. Try working with cache files to get the latest columns.
**Additional context**
I think this would be a great addition to HFDataset as Parquet supports multi-files input out of the box!
I can make a PR myself with some pointers as needed :)
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3544/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3544/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4364
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4364/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4364/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4364/events
|
https://github.com/huggingface/datasets/pull/4364
| 1,238,976,106
|
PR_kwDODunzps43-bmq
| 4,364
|
Support complex feature types as `features` in packaged loaders
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-17T17:53:23Z
| 2022-05-31T12:26:23Z
| 2022-05-31T12:16:32Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4364.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4364",
"merged_at": "2022-05-31T12:16:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4364.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4364"
}
|
This PR adds `table_cast` to the packaged loaders to fix casting to the `Image`/`Audio`, `ArrayND` and `ClassLabel` types. If these types are not present in the `builder.config.features` dictionary, the built-in `pa.Table.cast` is used for better performance. Additionally, this PR adds `cast_storage` to `ClassLabel` to support the string to int conversion in `table_cast` and ensure that integer labels are in a valid range.
Fix https://github.com/huggingface/datasets/issues/4210
This PR is also a solution for these (popular) discussions: https://discuss.huggingface.co/t/converting-string-label-to-int/2816 and https://discuss.huggingface.co/t/class-labels-for-custom-datasets/15130/2
TODO:
* [x] tests
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4364/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4364/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2748
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2748/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2748/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2748/events
|
https://github.com/huggingface/datasets/pull/2748
| 958,889,041
|
MDExOlB1bGxSZXF1ZXN0NzAyMDg4NTk4
| 2,748
|
Generate metadata JSON for wikihow dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-08-03T08:55:40Z
| 2021-08-03T10:17:51Z
| 2021-08-03T10:17:51Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2748.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2748",
"merged_at": "2021-08-03T10:17:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2748.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2748"
}
|
Related to #2743.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2748/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2748/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5776
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5776/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5776/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5776/events
|
https://github.com/huggingface/datasets/issues/5776
| 1,677,116,100
|
I_kwDODunzps5j9sLE
| 5,776
|
Use Pandas' `read_json` in the JSON builder
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
] | null |
[] | 2023-04-20T17:15:49Z
| 2023-04-20T17:15:49Z
| null |
CONTRIBUTOR
| null | null | null |
Instead of PyArrow's `read_json`, we should use `pd.read_json` in the JSON builder for consistency with the CSV and SQL builders (e.g., to address https://github.com/huggingface/datasets/issues/5725).
In Pandas2.0, to get the same performance, we can set the `engine` to "pyarrow". The issue is that Colab still doesn't install Pandas 2.0 by default, so I think it's best to wait for this to be resolved on their side to avoid downgrading decoding performance in scenarios when Pandas 2.0 is not installed.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5776/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5776/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6504
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6504/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6504/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6504/events
|
https://github.com/huggingface/datasets/issues/6504
| 2,044,541,154
|
I_kwDODunzps553Tji
| 6,504
|
Error Pushing to Hub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/55055083?v=4",
"events_url": "https://api.github.com/users/Jiayi-Pan/events{/privacy}",
"followers_url": "https://api.github.com/users/Jiayi-Pan/followers",
"following_url": "https://api.github.com/users/Jiayi-Pan/following{/other_user}",
"gists_url": "https://api.github.com/users/Jiayi-Pan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Jiayi-Pan",
"id": 55055083,
"login": "Jiayi-Pan",
"node_id": "MDQ6VXNlcjU1MDU1MDgz",
"organizations_url": "https://api.github.com/users/Jiayi-Pan/orgs",
"received_events_url": "https://api.github.com/users/Jiayi-Pan/received_events",
"repos_url": "https://api.github.com/users/Jiayi-Pan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Jiayi-Pan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jiayi-Pan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Jiayi-Pan"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2023-12-16T01:05:22Z
| 2023-12-16T06:20:53Z
| 2023-12-16T06:20:53Z
|
NONE
| null | null | null |
### Describe the bug
Error when trying to push a dataset in a special format to hub
### Steps to reproduce the bug
```
import datasets
from datasets import Dataset
dataset_dict = {
"filename": ["apple", "banana"],
"token": [[[1,2],[3,4]],[[1,2],[3,4]]],
"label": [0, 1],
}
dataset = Dataset.from_dict(dataset_dict)
dataset = dataset.cast_column("token", datasets.features.features.Array2D(shape=(2, 2),dtype="int16"))
dataset.push_to_hub("SequenceModel/imagenet_val_256")
```
Error:
```
...
ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/tuple'
in "<unicode string>", line 8, column 16:
shape: !!python/tuple
^
```
### Expected behavior
Dataset being pushed to hub
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-5.19.0-1022-gcp-x86_64-with-glibc2.35
- Python version: 3.11.5
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6504/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6504/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1786
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1786/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1786/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1786/events
|
https://github.com/huggingface/datasets/issues/1786
| 795,462,816
|
MDU6SXNzdWU3OTU0NjI4MTY=
| 1,786
|
How to use split dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/78090287?v=4",
"events_url": "https://api.github.com/users/kkhan188/events{/privacy}",
"followers_url": "https://api.github.com/users/kkhan188/followers",
"following_url": "https://api.github.com/users/kkhan188/following{/other_user}",
"gists_url": "https://api.github.com/users/kkhan188/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kkhan188",
"id": 78090287,
"login": "kkhan188",
"node_id": "MDQ6VXNlcjc4MDkwMjg3",
"organizations_url": "https://api.github.com/users/kkhan188/orgs",
"received_events_url": "https://api.github.com/users/kkhan188/received_events",
"repos_url": "https://api.github.com/users/kkhan188/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kkhan188/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kkhan188/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kkhan188"
}
|
[
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] |
closed
| false
| null |
[] | null |
[
"By default, all 3 splits will be loaded if you run the following:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"lambada\")\r\nprint(dataset[\"train\"])\r\nprint(dataset[\"valid\"])\r\n\r\n```\r\n\r\nIf you wanted to do load this manually, you could do this:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndata_files = {\r\n \"train\": \"data/lambada/train.txt\",\r\n \"valid\": \"data/lambada/valid.txt\",\r\n \"test\": \"data/lambada/test.txt\",\r\n}\r\nds = load_dataset(\"text\", data_files=data_files)\r\n```",
"Thank you for the quick response! "
] | 2021-01-27T21:37:47Z
| 2021-04-23T15:17:39Z
| 2021-04-23T15:17:39Z
|
NONE
| null | null | null |

Hey,
I want to split the lambada dataset into corpus, test, train and valid txt files (like penn treebank) but I am not able to achieve this. What I am doing is, executing the lambada.py file in my project but its not giving desired results. Any help will be appreciated!
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1786/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1786/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1756
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1756/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1756/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1756/events
|
https://github.com/huggingface/datasets/issues/1756
| 790,380,028
|
MDU6SXNzdWU3OTAzODAwMjg=
| 1,756
|
Ccaligned multilingual translation dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/flozi00",
"id": 47894090,
"login": "flozi00",
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"repos_url": "https://api.github.com/users/flozi00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/flozi00"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
closed
| false
| null |
[] | null |
[] | 2021-01-20T22:18:44Z
| 2021-03-01T10:36:21Z
| 2021-03-01T10:36:21Z
|
CONTRIBUTOR
| null | null | null |
## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- CCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language identification on raw web-documents, and ensuring corresponding language codes were corresponding in the URLs of web documents. This pattern matching approach yielded more than 100 million aligned documents paired with English. Recognizing that each English document was often aligned to mulitple documents in different target language, we can join on English documents to obtain aligned documents that directly pair two non-English documents (e.g., Arabic-French).
- **Paper:** *link to the dataset paper if available*
- https://www.aclweb.org/anthology/2020.emnlp-main.480.pdf
- **Data:** *link to the Github repository or current dataset location*
- http://www.statmt.org/cc-aligned/
- **Motivation:** *what are some good reasons to have this dataset*
- The authors says it's an high quality dataset.
- it's pretty large and includes many language pairs. It could be interesting training mt5 on this task.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1756/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1756/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2601
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2601/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2601/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2601/events
|
https://github.com/huggingface/datasets/pull/2601
| 938,096,396
|
MDExOlB1bGxSZXF1ZXN0Njg0NTQyNjY5
| 2,601
|
Fix `filter` with multiprocessing in case all samples are discarded
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4904985?v=4",
"events_url": "https://api.github.com/users/mxschmdt/events{/privacy}",
"followers_url": "https://api.github.com/users/mxschmdt/followers",
"following_url": "https://api.github.com/users/mxschmdt/following{/other_user}",
"gists_url": "https://api.github.com/users/mxschmdt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mxschmdt",
"id": 4904985,
"login": "mxschmdt",
"node_id": "MDQ6VXNlcjQ5MDQ5ODU=",
"organizations_url": "https://api.github.com/users/mxschmdt/orgs",
"received_events_url": "https://api.github.com/users/mxschmdt/received_events",
"repos_url": "https://api.github.com/users/mxschmdt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mxschmdt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxschmdt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mxschmdt"
}
|
[] |
closed
| false
| null |
[] |
{
"closed_at": "2021-07-21T15:36:49Z",
"closed_issues": 29,
"created_at": "2021-06-08T18:48:33Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-05T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"id": 6836458,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"open_issues": 0,
"state": "closed",
"title": "1.10",
"updated_at": "2021-07-21T15:36:49Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6"
}
|
[] | 2021-07-06T17:06:28Z
| 2021-07-12T14:10:35Z
| 2021-07-07T12:50:31Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2601.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2601",
"merged_at": "2021-07-07T12:50:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2601.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2601"
}
|
Fixes #2600
Also I moved the check for `num_proc` larger than dataset size added in #2566 up so that multiprocessing is not used with one process.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2601/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2601/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6199
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6199/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6199/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6199/events
|
https://github.com/huggingface/datasets/issues/6199
| 1,875,165,185
|
I_kwDODunzps5vxMAB
| 6,199
|
Use load_dataset for local json files, but it not works
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/50519434?v=4",
"events_url": "https://api.github.com/users/Garen-in-bush/events{/privacy}",
"followers_url": "https://api.github.com/users/Garen-in-bush/followers",
"following_url": "https://api.github.com/users/Garen-in-bush/following{/other_user}",
"gists_url": "https://api.github.com/users/Garen-in-bush/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Garen-in-bush",
"id": 50519434,
"login": "Garen-in-bush",
"node_id": "MDQ6VXNlcjUwNTE5NDM0",
"organizations_url": "https://api.github.com/users/Garen-in-bush/orgs",
"received_events_url": "https://api.github.com/users/Garen-in-bush/received_events",
"repos_url": "https://api.github.com/users/Garen-in-bush/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Garen-in-bush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Garen-in-bush/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Garen-in-bush"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hugging Face's datasets library may prioritize remote configurations. Make sure there are no conflicting configurations causing the library to prefer downloading data\r\nMay be try debugging\r\nraw_datasets = load_dataset('json', data_files=data_files)\r\nprint(raw_datasets)\r\n",
"It doesn't download them but writes them to the local HF cache. The logging could indeed be better. Does loading the dataset succeed? If it doesn't, can you share the error stack trace?"
] | 2023-08-31T09:42:34Z
| 2023-08-31T19:05:07Z
| null |
NONE
| null | null | null |
### Describe the bug
when I use load_dataset to load my local datasets,it always goes to Hugging Face to download the data instead of loading the local dataset.
### Steps to reproduce the bug
`raw_datasets = load_dataset(
‘json’,
data_files=data_files)`
### Expected behavior

### Environment info
python version 3.8.5
datasets version 2.12
os version unbuntu 18.04
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6199/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6199/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2596
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2596/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2596/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2596/events
|
https://github.com/huggingface/datasets/issues/2596
| 937,598,914
|
MDU6SXNzdWU5Mzc1OTg5MTQ=
| 2,596
|
Transformer Class on dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/18707623?v=4",
"events_url": "https://api.github.com/users/arita37/events{/privacy}",
"followers_url": "https://api.github.com/users/arita37/followers",
"following_url": "https://api.github.com/users/arita37/following{/other_user}",
"gists_url": "https://api.github.com/users/arita37/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/arita37",
"id": 18707623,
"login": "arita37",
"node_id": "MDQ6VXNlcjE4NzA3NjIz",
"organizations_url": "https://api.github.com/users/arita37/orgs",
"received_events_url": "https://api.github.com/users/arita37/received_events",
"repos_url": "https://api.github.com/users/arita37/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/arita37/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arita37/subscriptions",
"type": "User",
"url": "https://api.github.com/users/arita37"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"Hi ! Do you have an example in mind that shows how this could be useful ?",
"Example:\n\nMerge 2 datasets into one datasets\n\nLabel extraction from dataset\n\ndataset(text, label)\n —> dataset(text, newlabel)\n\nTextCleaning.\n\n\nFor image dataset, \nTransformation are easier (ie linear algebra).\n\n\n\n\n\n\n> On Jul 6, 2021, at 17:39, Quentin Lhoest ***@***.***> wrote:\n> \n> \n> Hi ! Do you have an example in mind that shows how this could be useful ?\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n",
"There are already a few transformations that you can apply on a dataset using methods like `dataset.map()`.\r\nYou can find examples in the documentation here:\r\nhttps://huggingface.co/docs/datasets/processing.html\r\n\r\nYou can merge two datasets with `concatenate_datasets()` or do label extraction with `dataset.map()` for example",
"Ok, sure.\n\nThanks for pointing on functional part.\nMy question is more\n“Philosophical”/Design perspective.\n\nThere are 2 perspetive:\n Add transformation methods to \n Dataset Class\n\n\n OR Create a Transformer Class\n which operates on Dataset Class.\n\nT(Dataset) —> Dataset\n\ndatasetnew = MyTransform.transform(dataset)\ndatasetNew.save(path)\n\n\nWhat would be the difficulty\nof implementing a Transformer Class\noperating at dataset level ?\n\n\nthanks\n\n\n\n\n\n\n\n\n\n> On Jul 6, 2021, at 22:00, Quentin Lhoest ***@***.***> wrote:\n> \n> \n> There are already a few transformations that you can apply on a dataset using methods like dataset.map().\n> You can find examples in the documentation here:\n> https://huggingface.co/docs/datasets/processing.html\n> \n> You can merge two datasets with concatenate_datasets() or do label extraction with dataset.map() for example\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n",
"I can imagine that this would be a useful API to implement processing pipelines as transforms. They could be used to perform higher level transforms compared to the atomic transforms allowed by methods like map, filter, etc.\r\n\r\nI guess if you find any transform that could be useful for text dataset processing, image dataset processing etc. we could definitely start having such transforms :)",
"Thanks for reply.\n\nWhat would be the constraints\nto have\nDataset —> Dataset consistency ?\n\nMain issue would be\nlarger than memory dataset and\nserialization on disk.\n\nTechnically,\none still process at atomic level\nand try to wrap the full results\ninto Dataset…. (!)\n\nWhat would you think ?\n\n\n\n\n\n\n\n\n> On Jul 7, 2021, at 16:51, Quentin Lhoest ***@***.***> wrote:\n> \n> \n> I can imagine that this would be a useful API to implement processing pipelines as transforms. They could be used to perform higher level transforms compared to the atomic transforms allowed by methods like map, filter, etc.\n> \n> I guess if you find any transform that could be useful for text dataset processing, image dataset processing etc. we could definitely start having such transforms :)\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n",
"We can be pretty flexible and not impose any constraints for transforms.\r\n\r\nMoreover, this library is designed to support datasets bigger than memory. The datasets are loaded from the disk via memory mapping, without filling up RAM. Even processing functions like `map` work in a batched fashion to not fill up your RAM. So this shouldn't be an issue",
"Ok thanks.\n\nBut, Dataset has various flavors.\nIn current design of Dataset,\n how the serialization on disk is done (?)\n\n\nThe main issue is serialization \nof newdataset= Transform(Dataset)\n (ie thats why am referring to Out Of memory dataset…):\n\n Should be part of Transform or part of dataset ?\n\n\n\n\nMaybe, not, since the output is aimed to feed model in memory (?)\n\n\n\n\n\n\n\n\n> On Jul 7, 2021, at 18:04, Quentin Lhoest ***@***.***> wrote:\n> \n> \n> We can be pretty flexible and not impose any constraints for transforms.\n> \n> Moreover, this library is designed to support datasets bigger than memory. The datasets are loaded from the disk via memory mapping, without filling up RAM. Even processing functions like map work in a batched fashion to not fill up your RAM. So this shouldn't be an issue\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n",
"I'm not sure I understand, could you elaborate a bit more please ?\r\n\r\nEach dataset is a wrapper of a PyArrow Table that contains all the data. The table is loaded from an arrow file on the disk.\r\nWe have an ArrowWriter and ArrowReader class to write/read arrow tables on disk or in in-memory buffers."
] | 2021-07-06T07:27:15Z
| 2022-11-02T14:26:09Z
| 2022-11-02T14:26:09Z
|
NONE
| null | null | null |
Just wondering if you have intenttion to create
TransformerClass :
dataset --> dataset
and make determnistic transformation (ie not fit).
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2596/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2596/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3590
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3590/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3590/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3590/events
|
https://github.com/huggingface/datasets/pull/3590
| 1,106,784,860
|
PR_kwDODunzps4xMlGg
| 3,590
|
Update ANLI README.md
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6416600?v=4",
"events_url": "https://api.github.com/users/borgr/events{/privacy}",
"followers_url": "https://api.github.com/users/borgr/followers",
"following_url": "https://api.github.com/users/borgr/following{/other_user}",
"gists_url": "https://api.github.com/users/borgr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/borgr",
"id": 6416600,
"login": "borgr",
"node_id": "MDQ6VXNlcjY0MTY2MDA=",
"organizations_url": "https://api.github.com/users/borgr/orgs",
"received_events_url": "https://api.github.com/users/borgr/received_events",
"repos_url": "https://api.github.com/users/borgr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/borgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borgr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/borgr"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-01-18T11:22:53Z
| 2022-01-20T16:58:41Z
| 2022-01-20T16:58:41Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3590.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3590",
"merged_at": "2022-01-20T16:58:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3590.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3590"
}
|
Update license and little things concerning ANLI
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3590/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3590/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3613
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3613/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3613/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3613/events
|
https://github.com/huggingface/datasets/issues/3613
| 1,110,684,015
|
I_kwDODunzps5CM7Fv
| 3,613
|
Files not updating in dataset viewer
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1778297?v=4",
"events_url": "https://api.github.com/users/abidlabs/events{/privacy}",
"followers_url": "https://api.github.com/users/abidlabs/followers",
"following_url": "https://api.github.com/users/abidlabs/following{/other_user}",
"gists_url": "https://api.github.com/users/abidlabs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abidlabs",
"id": 1778297,
"login": "abidlabs",
"node_id": "MDQ6VXNlcjE3NzgyOTc=",
"organizations_url": "https://api.github.com/users/abidlabs/orgs",
"received_events_url": "https://api.github.com/users/abidlabs/received_events",
"repos_url": "https://api.github.com/users/abidlabs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abidlabs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abidlabs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abidlabs"
}
|
[
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] |
closed
| false
| null |
[] | null |
[
"Yes. The jobs queue is full right now, following an upgrade... Back to normality in the next hours hopefully. I'll look at your datasets to be sure the dataset viewer works as expected on them.",
"Should have been fixed now."
] | 2022-01-21T16:47:20Z
| 2022-01-22T08:13:13Z
| 2022-01-22T08:13:13Z
|
MEMBER
| null | null | null |
## Dataset viewer issue for '*name of the dataset*'
**Link:**
Some examples:
* https://huggingface.co/datasets/abidlabs/crowdsourced-speech4
* https://huggingface.co/datasets/abidlabs/test-audio-13
*short description of the issue*
It seems that the dataset viewer is reading a cached version of the dataset and it is not updating to reflect new files that are added to the dataset. I get this error:

Am I the one who added this dataset? Yes
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3613/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3613/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4685
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4685/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4685/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4685/events
|
https://github.com/huggingface/datasets/pull/4685
| 1,305,861,708
|
PR_kwDODunzps47dju8
| 4,685
|
Fix mock fsspec
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-15T10:23:12Z
| 2022-07-15T13:05:03Z
| 2022-07-15T12:52:40Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4685.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4685",
"merged_at": "2022-07-15T12:52:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4685.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4685"
}
|
This PR:
- Removes an unused method from `DummyTestFS`
- Refactors `mock_fsspec` to make it simpler
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4685/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4685/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/335
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/335/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/335/comments
|
https://api.github.com/repos/huggingface/datasets/issues/335/events
|
https://github.com/huggingface/datasets/pull/335
| 649,765,179
|
MDExOlB1bGxSZXF1ZXN0NDQzMzgwMjI1
| 335
|
BioMRC Dataset presented in BioNLP 2020 ACL Workshop
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15162021?v=4",
"events_url": "https://api.github.com/users/PetrosStav/events{/privacy}",
"followers_url": "https://api.github.com/users/PetrosStav/followers",
"following_url": "https://api.github.com/users/PetrosStav/following{/other_user}",
"gists_url": "https://api.github.com/users/PetrosStav/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PetrosStav",
"id": 15162021,
"login": "PetrosStav",
"node_id": "MDQ6VXNlcjE1MTYyMDIx",
"organizations_url": "https://api.github.com/users/PetrosStav/orgs",
"received_events_url": "https://api.github.com/users/PetrosStav/received_events",
"repos_url": "https://api.github.com/users/PetrosStav/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PetrosStav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PetrosStav/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PetrosStav"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I fixed the issues that you pointed out, re-run all the test and pushed the fixed code :-)",
"```\r\n=================================== FAILURES ===================================\r\n___________________ AWSDatasetTest.test_load_dataset_pandas ____________________\r\n\r\nself = <tests.test_dataset_common.AWSDatasetTest testMethod=test_load_dataset_pandas>\r\ndataset_name = 'pandas'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs)\r\n\r\ntests/test_dataset_common.py:231: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:125: in check_load_dataset\r\n dl_manager=mock_dl_manager, download_mode=GenerateMode.FORCE_REDOWNLOAD, ignore_verifications=True\r\n../.local/lib/python3.6/site-packages/nlp/builder.py:432: in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n../.local/lib/python3.6/site-packages/nlp/builder.py:466: in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <nlp.datasets.pandas.91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926.pandas.Pandas object at 0x7f3b84f655c0>\r\ndl_manager = <nlp.utils.mock_download_manager.MockDownloadManager object at 0x7f3b84f3d320>\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\" We handle string, list and dicts in datafiles\r\n \"\"\"\r\n if isinstance(self.config.data_files, (str, list, tuple)):\r\n files = self.config.data_files\r\n if isinstance(files, str):\r\n files = [files]\r\n return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={\"files\": files})]\r\n splits = []\r\n for split_name in [nlp.Split.TRAIN, nlp.Split.VALIDATION, nlp.Split.TEST]:\r\n> if split_name in self.config.data_files:\r\nE TypeError: argument of type 'NoneType' is not iterable\r\n\r\n../.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926/pandas.py:23: TypeError\r\n------------------------------ Captured log call -------------------------------\r\nINFO filelock:filelock.py:274 Lock 139893169180856 acquired on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nINFO nlp.utils.file_utils:file_utils.py:386 https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py not found in cache or force_download set to True, downloading to /home/circleci/.cache/huggingface/datasets/tmpwmbk8e8d\r\nINFO nlp.utils.file_utils:file_utils.py:391 storing https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py in cache at /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py\r\nINFO nlp.utils.file_utils:file_utils.py:394 creating metadata file for /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py\r\nINFO filelock:filelock.py:318 Lock 139893169180856 released on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nINFO nlp.load:load.py:157 Checking /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py for additional imports.\r\nINFO filelock:filelock.py:274 Lock 139893610536912 acquired on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nINFO nlp.load:load.py:320 Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas\r\nINFO nlp.load:load.py:333 Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926\r\nINFO nlp.load:load.py:346 Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py to /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926/pandas.py\r\nINFO nlp.load:load.py:354 Couldn't find dataset infos file at https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/dataset_infos.json\r\nINFO nlp.load:load.py:371 Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926/pandas.json\r\nINFO filelock:filelock.py:318 Lock 139893610536912 released on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nINFO filelock:filelock.py:274 Lock 139893610533608 acquired on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nINFO nlp.utils.file_utils:file_utils.py:386 https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py not found in cache or force_download set to True, downloading to /home/circleci/.cache/huggingface/datasets/tmp00hpyxrs\r\nINFO nlp.utils.file_utils:file_utils.py:391 storing https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py in cache at /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py\r\nINFO nlp.utils.file_utils:file_utils.py:394 creating metadata file for /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py\r\nINFO filelock:filelock.py:318 Lock 139893610533608 released on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nINFO nlp.load:load.py:157 Checking /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py for additional imports.\r\nINFO filelock:filelock.py:274 Lock 139893610371224 acquired on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nINFO nlp.load:load.py:320 Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas\r\nINFO nlp.load:load.py:333 Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926\r\nINFO nlp.load:load.py:346 Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py to /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926/pandas.py\r\nINFO nlp.load:load.py:354 Couldn't find dataset infos file at https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/dataset_infos.json\r\nINFO nlp.load:load.py:371 Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/pandas/pandas.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/pandas/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926/pandas.json\r\nINFO filelock:filelock.py:318 Lock 139893610371224 released on /home/circleci/.cache/huggingface/datasets/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nWARNING nlp.builder:builder.py:215 Using custom data configuration default\r\nINFO nlp.builder:builder.py:349 Generating dataset pandas (/tmp/tmp296h8eeg/pandas/default/0.0.0)\r\nINFO nlp.builder:builder.py:397 Dataset not on Hf google storage. Downloading and preparing it from source\r\n____________________ AWSDatasetTest.test_load_dataset_text _____________________\r\n\r\nself = <tests.test_dataset_common.AWSDatasetTest testMethod=test_load_dataset_text>\r\ndataset_name = 'text'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs)\r\n\r\ntests/test_dataset_common.py:231: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:125: in check_load_dataset\r\n dl_manager=mock_dl_manager, download_mode=GenerateMode.FORCE_REDOWNLOAD, ignore_verifications=True\r\n../.local/lib/python3.6/site-packages/nlp/builder.py:432: in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n../.local/lib/python3.6/site-packages/nlp/builder.py:466: in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <nlp.datasets.text.bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b.text.Text object at 0x7f3b6a111550>\r\ndl_manager = <nlp.utils.mock_download_manager.MockDownloadManager object at 0x7f3b85582908>\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\" The `datafiles` kwarg in load_dataset() can be a str, List[str], Dict[str,str], or Dict[str,List[str]].\r\n \r\n If str or List[str], then the dataset returns only the 'train' split.\r\n If dict, then keys should be from the `nlp.Split` enum.\r\n \"\"\"\r\n if isinstance(self.config.data_files, (str, list, tuple)):\r\n # Handle case with only one split\r\n files = self.config.data_files\r\n if isinstance(files, str):\r\n files = [files]\r\n return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={\"files\": files})]\r\n else:\r\n # Handle case with several splits and a dict mapping\r\n splits = []\r\n for split_name in [nlp.Split.TRAIN, nlp.Split.VALIDATION, nlp.Split.TEST]:\r\n> if split_name in self.config.data_files:\r\nE TypeError: argument of type 'NoneType' is not iterable\r\n\r\n../.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.py:24: TypeError\r\n------------------------------ Captured log call -------------------------------\r\nINFO filelock:filelock.py:274 Lock 139893159303656 acquired on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nINFO nlp.utils.file_utils:file_utils.py:386 https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py not found in cache or force_download set to True, downloading to /home/circleci/.cache/huggingface/datasets/tmpk63omy4v\r\nINFO nlp.utils.file_utils:file_utils.py:391 storing https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py in cache at /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py\r\nINFO nlp.utils.file_utils:file_utils.py:394 creating metadata file for /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py\r\nINFO filelock:filelock.py:318 Lock 139893159303656 released on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nINFO nlp.load:load.py:157 Checking /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py for additional imports.\r\nINFO filelock:filelock.py:274 Lock 139893159171352 acquired on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nINFO nlp.load:load.py:320 Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text\r\nINFO nlp.load:load.py:333 Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b\r\nINFO nlp.load:load.py:346 Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py to /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.py\r\nINFO nlp.load:load.py:354 Couldn't find dataset infos file at https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/dataset_infos.json\r\nINFO nlp.load:load.py:371 Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.json\r\nINFO filelock:filelock.py:318 Lock 139893159171352 released on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nINFO filelock:filelock.py:274 Lock 139893618479176 acquired on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nINFO nlp.utils.file_utils:file_utils.py:386 https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py not found in cache or force_download set to True, downloading to /home/circleci/.cache/huggingface/datasets/tmpkeykru_f\r\nINFO nlp.utils.file_utils:file_utils.py:391 storing https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py in cache at /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py\r\nINFO nlp.utils.file_utils:file_utils.py:394 creating metadata file for /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py\r\nINFO filelock:filelock.py:318 Lock 139893618479176 released on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nINFO nlp.load:load.py:157 Checking /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py for additional imports.\r\nINFO filelock:filelock.py:274 Lock 139893618423848 acquired on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nINFO nlp.load:load.py:320 Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text\r\nINFO nlp.load:load.py:333 Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b\r\nINFO nlp.load:load.py:346 Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py to /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.py\r\nINFO nlp.load:load.py:354 Couldn't find dataset infos file at https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/dataset_infos.json\r\nINFO nlp.load:load.py:371 Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/text/text.py at /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.json\r\nINFO filelock:filelock.py:318 Lock 139893618423848 released on /home/circleci/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nWARNING nlp.builder:builder.py:215 Using custom data configuration default\r\nINFO nlp.builder:builder.py:349 Generating dataset text (/tmp/tmpbu67mvue/text/default/0.0.0)\r\nINFO nlp.builder:builder.py:397 Dataset not on Hf google storage. Downloading and preparing it from source\r\n=============================== warnings summary ===============================\r\n/home/circleci/.local/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py:15\r\n /home/circleci/.local/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\n\r\ntests/test_dataset_common.py::LocalDatasetTest::test_builder_class_tydiqa\r\n /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/tydiqa/42d88245bde7c0db6c0d48c822dcaa26c7299e0b40cace7e8d6a9e3628135125/tydiqa.py:85: DeprecationWarning: invalid escape sequence \\G\r\n \"\"\"\r\n\r\ntests/test_dataset_common.py::AWSDatasetTest::test_builder_class_mwsc\r\n /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/mwsc/53c0daac11b6794ff62b52a3a46c4f9da1bef68fd664a2f97b8918917aead715/mwsc.py:70: DeprecationWarning: invalid escape sequence \\[\r\n pattern = \"\\[.*\\]\"\r\n\r\ntests/test_dataset_common.py::AWSDatasetTest::test_builder_class_squadshifts\r\n /home/circleci/.local/lib/python3.6/site-packages/nlp/datasets/squadshifts/15536d7296a785325b99f6d84dfdceafa427419dd6caad110eabb5e5b4156cc2/squadshifts.py:47: DeprecationWarning: invalid escape sequence \\ \r\n \"\"\"\r\n\r\n-- Docs: https://docs.pytest.org/en/latest/warnings.html\r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_pandas\r\nFAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_text\r\n===== 2 failed, 934 passed, 516 skipped, 4 warnings in 1562.46s (0:26:02) ======\r\n\r\nExited with code exit status 1\r\nCircleCI received exit code 1\r\n```\r\nI get this failed test on CircleCI , but all the tests that I run locally where successful. The error also seems not to have any, obvious at least, connection with my code.\r\n\r\nAny suggestions? Thanks! :-) "
] | 2020-07-02T09:03:41Z
| 2020-07-15T08:02:07Z
| 2020-07-15T08:02:07Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/335.diff",
"html_url": "https://github.com/huggingface/datasets/pull/335",
"merged_at": "2020-07-15T08:02:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/335.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/335"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/335/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/335/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/3022
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3022/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3022/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3022/events
|
https://github.com/huggingface/datasets/pull/3022
| 1,015,750,221
|
PR_kwDODunzps4sqve6
| 3,022
|
MeDAL dataset: Add further description and update download URL
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/21180505?v=4",
"events_url": "https://api.github.com/users/xhluca/events{/privacy}",
"followers_url": "https://api.github.com/users/xhluca/followers",
"following_url": "https://api.github.com/users/xhluca/following{/other_user}",
"gists_url": "https://api.github.com/users/xhluca/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xhluca",
"id": 21180505,
"login": "xhluca",
"node_id": "MDQ6VXNlcjIxMTgwNTA1",
"organizations_url": "https://api.github.com/users/xhluca/orgs",
"received_events_url": "https://api.github.com/users/xhluca/received_events",
"repos_url": "https://api.github.com/users/xhluca/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xhluca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xhluca/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xhluca"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@lhoestq I'm a bit confused by the error message. I haven't touched the YAML code at all - do you have any insight on that?",
"I just added the missing `pretty_name` tag in the YAML - sorry about that ;)",
"Thanks! Seems like it did the trick since the tests are passing. Let me know if there's anything else I can do in this PR!",
"It's all good thank you :)\r\n\r\nmerging !"
] | 2021-10-05T00:13:28Z
| 2021-10-13T09:03:09Z
| 2021-10-13T09:03:09Z
|
NONE
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3022.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3022",
"merged_at": "2021-10-13T09:03:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3022.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3022"
}
|
Added more details in the following sections:
* Dataset Structure
* Data Instances
* Data Splits
* Source Data
* Annotations
* Discussions of Biases
* LIcensing Information
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3022/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3022/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/323
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/323/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/323/comments
|
https://api.github.com/repos/huggingface/datasets/issues/323/events
|
https://github.com/huggingface/datasets/pull/323
| 647,521,308
|
MDExOlB1bGxSZXF1ZXN0NDQxNTMxOTY3
| 323
|
Add package path to sys when downloading package as github archive
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Sorry for the long diff, everything after the imports comes from `black` for code quality :/ ",
" I think it's fine and I can't think of another way to make the import work anyways.\r\n\r\nMaybe we can have the `sys.path` behavior inside `prepare_module` instead ? Currently it seems to come out of nowhere in the code ^^'\r\nWe could check if external imports have a `__init__.py` and if it is the case then we can add to directory to the `PYTHONPATH`"
] | 2020-06-29T16:46:01Z
| 2020-07-30T14:00:23Z
| 2020-07-30T14:00:23Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/323.diff",
"html_url": "https://github.com/huggingface/datasets/pull/323",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/323.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/323"
}
|
This fixes the `coval.py` metric so that imports within the downloaded module work correctly. We can use a similar trick to add the BLEURT metric (@ankparikh)
@thomwolf not sure how you feel about adding to the `PYTHONPATH` from the script. This is the only way I could make it work with my understanding of `importlib` but there might be a more elegant method.
This PR fixes https://github.com/huggingface/nlp/issues/305
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/323/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/323/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1788
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1788/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1788/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1788/events
|
https://github.com/huggingface/datasets/pull/1788
| 795,544,422
|
MDExOlB1bGxSZXF1ZXN0NTYyODc1NzA2
| 1,788
|
Doc2dial rc
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/songfeng",
"id": 2062185,
"login": "songfeng",
"node_id": "MDQ6VXNlcjIwNjIxODU=",
"organizations_url": "https://api.github.com/users/songfeng/orgs",
"received_events_url": "https://api.github.com/users/songfeng/received_events",
"repos_url": "https://api.github.com/users/songfeng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songfeng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/songfeng"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-01-27T23:51:00Z
| 2021-01-28T18:46:13Z
| 2021-01-28T18:46:13Z
|
CONTRIBUTOR
| null | 1
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1788.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1788",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1788.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1788"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1788/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1788/timeline
| null | null | true
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.