url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request | is_pull_request
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6483 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6483/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6483/comments | https://api.github.com/repos/huggingface/datasets/issues/6483/events | https://github.com/huggingface/datasets/issues/6483 | 2,032,946,981 | I_kwDODunzps55LE8l | 6,483 | Iterable Dataset: rename column clashes with remove column | {
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
... | [
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | closed | false | null | [] | null | [
"Column \"text\" doesn't exist anymore so you can't remove it",
"You can get the expected result by fixing typos in the snippet :)\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# load LS in streaming mode\r\ndataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\... | 2023-12-08T16:11:30Z | 2023-12-08T16:27:16Z | 2023-12-08T16:27:04Z | CONTRIBUTOR | null | ### Describe the bug
Suppose I have two iterable datasets, one with the features:
* `{"audio", "text", "column_a"}`
And the other with the features:
* `{"audio", "sentence", "column_b"}`
I want to combine both datasets using `interleave_datasets`, which requires me to unify the column names. I would typic... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6483/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6483/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6484 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6484/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6484/comments | https://api.github.com/repos/huggingface/datasets/issues/6484/events | https://github.com/huggingface/datasets/issues/6484 | 2,033,333,294 | I_kwDODunzps55MjQu | 6,484 | [Feature Request] Dataset versioning | {
"avatar_url": "https://avatars.githubusercontent.com/u/47979198?v=4",
"events_url": "https://api.github.com/users/kenfus/events{/privacy}",
"followers_url": "https://api.github.com/users/kenfus/followers",
"following_url": "https://api.github.com/users/kenfus/following{/other_user}",
"gists_url": "https://a... | [] | open | false | null | [] | null | [
"Hello @kenfus, this is meant to be possible to do yes. Let me ping @lhoestq or @mariosasko from the `datasets` team (`huggingface_hub` is only the underlying library to download files from the Hub but here it looks more like a `datasets` problem). ",
"Hi! https://github.com/huggingface/datasets/pull/6459 will fi... | 2023-12-08T16:01:35Z | 2023-12-08T21:29:30Z | null | NONE | null | **Is your feature request related to a problem? Please describe.**
I am working on a project, where I would like to test different preprocessing methods for my ML-data. Thus, I would like to work a lot with revisions and compare them. Currently, I was not able to make it work with the revision keyword because it was n... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6484/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6484/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6482 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6482/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6482/comments | https://api.github.com/repos/huggingface/datasets/issues/6482/events | https://github.com/huggingface/datasets/pull/6482 | 2,032,675,918 | PR_kwDODunzps5hhl23 | 6,482 | Fix max lock length on unix | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6482). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2023-12-08T13:39:30Z | 2023-12-08T18:21:32Z | null | MEMBER | null | reported in https://github.com/huggingface/datasets/pull/6482 | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6482/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6482/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6482.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6482",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6482.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6482"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6481 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6481/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6481/comments | https://api.github.com/repos/huggingface/datasets/issues/6481/events | https://github.com/huggingface/datasets/issues/6481 | 2,032,650,003 | I_kwDODunzps55J8cT | 6,481 | using torchrun, save_to_disk suddenly shows SIGTERM | {
"avatar_url": "https://avatars.githubusercontent.com/u/85916625?v=4",
"events_url": "https://api.github.com/users/Ariya12138/events{/privacy}",
"followers_url": "https://api.github.com/users/Ariya12138/followers",
"following_url": "https://api.github.com/users/Ariya12138/following{/other_user}",
"gists_url"... | [] | open | false | null | [] | null | [] | 2023-12-08T13:22:03Z | 2023-12-08T13:22:03Z | null | NONE | null | ### Describe the bug
When I run my code using the "torchrun" command, when the code reaches the "save_to_disk" part, suddenly I get the following warning and error messages:
Because the dataset is too large, the "save_to_disk" function splits it into 70 parts for saving. However, an error occurs suddenly when it reac... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6481/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6481/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6480 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6480/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6480/comments | https://api.github.com/repos/huggingface/datasets/issues/6480/events | https://github.com/huggingface/datasets/pull/6480 | 2,031,116,653 | PR_kwDODunzps5hcS7P | 6,480 | Add IterableDataset `__repr__` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6480). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | 2023-12-07T16:31:50Z | 2023-12-08T13:33:06Z | 2023-12-08T13:26:54Z | MEMBER | null | Example for glue sst2:
Dataset
```
DatasetDict({
test: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 1821
})
train: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 67349
})
validation: Dataset({
features: ['sentence',... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6480/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6480/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6480.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6480",
"merged_at": "2023-12-08T13:26:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6480.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6479 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6479/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6479/comments | https://api.github.com/repos/huggingface/datasets/issues/6479/events | https://github.com/huggingface/datasets/pull/6479 | 2,029,040,121 | PR_kwDODunzps5hVLom | 6,479 | More robust preupload retry mechanism | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6479). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | 2023-12-06T17:19:38Z | 2023-12-06T19:47:29Z | 2023-12-06T19:41:06Z | CONTRIBUTOR | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6479/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6479/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6479.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6479",
"merged_at": "2023-12-06T19:41:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6479.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6478 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6478/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6478/comments | https://api.github.com/repos/huggingface/datasets/issues/6478/events | https://github.com/huggingface/datasets/issues/6478 | 2,028,071,596 | I_kwDODunzps544eqs | 6,478 | How to load data from lakefs | {
"avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4",
"events_url": "https://api.github.com/users/d710055071/events{/privacy}",
"followers_url": "https://api.github.com/users/d710055071/followers",
"following_url": "https://api.github.com/users/d710055071/following{/other_user}",
"gists_url"... | [] | open | false | null | [] | null | [
"You can create a `pandas` DataFrame following [this](https://lakefs.io/data-version-control/dvc-using-python/) tutorial, and then convert this DataFrame to a `Dataset` with `datasets.Dataset.from_pandas`. For larger datasets (to memory map them), you can use `Dataset.from_generator` with a generator function that ... | 2023-12-06T09:04:11Z | 2023-12-07T02:19:44Z | null | NONE | null | My dataset is stored on the company's lakefs server. How can I write code to load the dataset? It would be great if I could provide code examples or provide some references
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6478/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6478/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6477 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6477/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6477/comments | https://api.github.com/repos/huggingface/datasets/issues/6477/events | https://github.com/huggingface/datasets/pull/6477 | 2,028,022,374 | PR_kwDODunzps5hRq_N | 6,477 | Fix PermissionError on Windows CI | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6477). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | 2023-12-06T08:34:53Z | 2023-12-06T09:24:11Z | 2023-12-06T09:17:52Z | MEMBER | null | Fix #6476. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6477/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6477/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6477.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6477",
"merged_at": "2023-12-06T09:17:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6477.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6476 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6476/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6476/comments | https://api.github.com/repos/huggingface/datasets/issues/6476/events | https://github.com/huggingface/datasets/issues/6476 | 2,028,018,596 | I_kwDODunzps544Ruk | 6,476 | CI on windows is broken: PermissionError | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 2023-12-06T08:32:53Z | 2023-12-06T09:17:53Z | 2023-12-06T09:17:53Z | MEMBER | null | See: https://github.com/huggingface/datasets/actions/runs/7104781624/job/19340572394
```
FAILED tests/test_load.py::test_loading_from_the_datasets_hub - NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\RUNNER~1\\AppData\\Local\\Temp\\tmpfcnps56i\\hf-internal-testing___dataset_with_script\... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6476/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6476/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6475 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6475/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6475/comments | https://api.github.com/repos/huggingface/datasets/issues/6475/events | https://github.com/huggingface/datasets/issues/6475 | 2,027,373,734 | I_kwDODunzps5410Sm | 6,475 | laion2B-en failed to load on Windows with PrefetchVirtualMemory failed | {
"avatar_url": "https://avatars.githubusercontent.com/u/2229300?v=4",
"events_url": "https://api.github.com/users/doctorpangloss/events{/privacy}",
"followers_url": "https://api.github.com/users/doctorpangloss/followers",
"following_url": "https://api.github.com/users/doctorpangloss/following{/other_user}",
... | [] | open | false | null | [] | null | [
"~~You will see this error if the cache dir filepath contains relative `..` paths. Use `os.path.realpath(_CACHE_DIR)` before passing it to the `load_dataset` function.~~",
"This is a real issue and not related to paths.",
"Based on the StackOverflow answer, this causes the error to go away:\r\n```diff\r\ndiff -... | 2023-12-06T00:07:34Z | 2023-12-06T23:26:23Z | null | NONE | null | ### Describe the bug
I have downloaded laion2B-en, and I'm receiving the following error trying to load it:
```
Resolving data files: 100%|██████████| 128/128 [00:00<00:00, 1173.79it/s]
Traceback (most recent call last):
File "D:\Art-Workspace\src\artworkspace\tokeneval\compute_frequencies.py", line 31, in <mo... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6475/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6475/timeline | null | reopened | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6474 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6474/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6474/comments | https://api.github.com/repos/huggingface/datasets/issues/6474/events | https://github.com/huggingface/datasets/pull/6474 | 2,027,006,715 | PR_kwDODunzps5hONZc | 6,474 | Deprecate Beam API and download from HF GCS bucket | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6474). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2023-12-05T19:51:33Z | 2023-12-08T19:02:47Z | null | CONTRIBUTOR | null | Deprecate the Beam API and download from the HF GCS bucket.
TODO:
- [ ] Deprecate the Beam-based [`wikipedia`](https://huggingface.co/datasets/wikipedia) in favor of [`wikimedia/wikipedia`](https://huggingface.co/datasets/wikimedia/wikipedia)
- [ ] Make [`natural_questions`](https://huggingface.co/datasets/natur... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6474/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6474/timeline | null | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6474.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6474",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6474.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6474"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6473 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6473/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6473/comments | https://api.github.com/repos/huggingface/datasets/issues/6473/events | https://github.com/huggingface/datasets/pull/6473 | 2,026,495,084 | PR_kwDODunzps5hMbvz | 6,473 | Fix CI quality | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6473). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | 2023-12-05T15:36:23Z | 2023-12-05T18:14:50Z | 2023-12-05T18:08:41Z | MEMBER | null | Fix #6472. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6473/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6473/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6473.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6473",
"merged_at": "2023-12-05T18:08:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6473.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6472 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6472/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6472/comments | https://api.github.com/repos/huggingface/datasets/issues/6472/events | https://github.com/huggingface/datasets/issues/6472 | 2,026,493,439 | I_kwDODunzps54ydX_ | 6,472 | CI quality is broken | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "d4c5f9",
"default": false,
"descrip... | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 2023-12-05T15:35:34Z | 2023-12-06T08:17:34Z | 2023-12-05T18:08:43Z | MEMBER | null | See: https://github.com/huggingface/datasets/actions/runs/7100835633/job/19327734359
```
Would reformat: src/datasets/features/image.py
1 file would be reformatted, 253 files left unchanged
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6472/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6472/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6471 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6471/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6471/comments | https://api.github.com/repos/huggingface/datasets/issues/6471/events | https://github.com/huggingface/datasets/pull/6471 | 2,026,100,761 | PR_kwDODunzps5hLEni | 6,471 | Remove delete doc CI | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6471). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | 2023-12-05T12:37:50Z | 2023-12-05T12:44:59Z | 2023-12-05T12:38:50Z | MEMBER | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6471/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6471/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6471.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6471",
"merged_at": "2023-12-05T12:38:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6471.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6470/comments | https://api.github.com/repos/huggingface/datasets/issues/6470/events | https://github.com/huggingface/datasets/issues/6470 | 2,024,724,319 | I_kwDODunzps54rtdf | 6,470 | If an image in a dataset is corrupted, we get unescapable error | {
"avatar_url": "https://avatars.githubusercontent.com/u/14337872?v=4",
"events_url": "https://api.github.com/users/chigozienri/events{/privacy}",
"followers_url": "https://api.github.com/users/chigozienri/followers",
"following_url": "https://api.github.com/users/chigozienri/following{/other_user}",
"gists_u... | [] | open | false | null | [] | null | [] | 2023-12-04T20:58:49Z | 2023-12-04T20:58:49Z | null | NONE | null | ### Describe the bug
Example discussed in detail here: https://huggingface.co/datasets/sasha/birdsnap/discussions/1
### Steps to reproduce the bug
```
from datasets import load_dataset, VerificationMode
dataset = load_dataset(
'sasha/birdsnap',
split="train",
verification_mode=VerificationMode.ALL_C... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6470/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6470/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6469 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6469/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6469/comments | https://api.github.com/repos/huggingface/datasets/issues/6469/events | https://github.com/huggingface/datasets/pull/6469 | 2,023,695,839 | PR_kwDODunzps5hC6xf | 6,469 | Don't expand_info in HF glob | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6469). All of your documentation changes will be reflected on that endpoint."
] | 2023-12-04T12:00:37Z | 2023-12-04T12:10:17Z | null | MEMBER | null | Finally fix https://github.com/huggingface/datasets/issues/5537 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6469/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6469/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6469.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6469",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6469.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6469"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6468 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6468/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6468/comments | https://api.github.com/repos/huggingface/datasets/issues/6468/events | https://github.com/huggingface/datasets/pull/6468 | 2,023,617,877 | PR_kwDODunzps5hCpbN | 6,468 | Use auth to get parquet export | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6468). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | 2023-12-04T11:18:27Z | 2023-12-04T17:21:22Z | 2023-12-04T17:15:11Z | MEMBER | null | added `token` to the `_datasets_server` functions | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6468/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6468/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6468.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6468",
"merged_at": "2023-12-04T17:15:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6468.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6467 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6467/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6467/comments | https://api.github.com/repos/huggingface/datasets/issues/6467/events | https://github.com/huggingface/datasets/issues/6467 | 2,023,174,233 | I_kwDODunzps54lzBZ | 6,467 | New version release request | {
"avatar_url": "https://avatars.githubusercontent.com/u/36994684?v=4",
"events_url": "https://api.github.com/users/LZHgrla/events{/privacy}",
"followers_url": "https://api.github.com/users/LZHgrla/followers",
"following_url": "https://api.github.com/users/LZHgrla/following{/other_user}",
"gists_url": "https:... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"We will publish it soon (we usually do it in intervals of 1-2 months, so probably next week)",
"Thanks!"
] | 2023-12-04T07:08:26Z | 2023-12-04T15:42:22Z | 2023-12-04T15:42:22Z | CONTRIBUTOR | null | ### Feature request
Hi!
I am using `datasets` in the `xtuner` library and am highly interested in the features introduced since v2.15.0.
To avoid installation from source in our pypi wheels, we are eagerly waiting for the new release. So, does your team have a new release plan for v2.15.1 and could you please share ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6467/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6467/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6466 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6466/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6466/comments | https://api.github.com/repos/huggingface/datasets/issues/6466/events | https://github.com/huggingface/datasets/issues/6466 | 2,022,601,176 | I_kwDODunzps54jnHY | 6,466 | Can't align optional features of struct | {
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https:/... | [] | open | false | null | [] | null | [] | 2023-12-03T15:57:07Z | 2023-12-03T15:59:03Z | null | CONTRIBUTOR | null | ### Describe the bug
Hello!
I'm currently experiencing an issue where I can't concatenate datasets if an inner field of a Feature is Optional.
I have a column named `speaker`, and this holds some information about a speaker.
```python
@dataclass
class Speaker:
name: str
email: Optional[str]
```
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6466/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6466/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6465 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6465/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6465/comments | https://api.github.com/repos/huggingface/datasets/issues/6465/events | https://github.com/huggingface/datasets/issues/6465 | 2,022,212,468 | I_kwDODunzps54iIN0 | 6,465 | `load_dataset` uses out-of-date cache instead of re-downloading a changed dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/3391297?v=4",
"events_url": "https://api.github.com/users/mnoukhov/events{/privacy}",
"followers_url": "https://api.github.com/users/mnoukhov/followers",
"following_url": "https://api.github.com/users/mnoukhov/following{/other_user}",
"gists_url": "http... | [] | open | false | null | [] | null | [
"Hi, thanks for reporting! https://github.com/huggingface/datasets/pull/6459 will fix this."
] | 2023-12-02T21:35:17Z | 2023-12-04T16:13:10Z | null | NONE | null | ### Describe the bug
When a dataset is updated on the hub, using `load_dataset` will load the locally cached dataset instead of re-downloading the updated dataset
### Steps to reproduce the bug
Here is a minimal example script to
1. create an initial dataset and upload
2. download it so it is stored in cache
3. c... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6465/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6465/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6464 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6464/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6464/comments | https://api.github.com/repos/huggingface/datasets/issues/6464/events | https://github.com/huggingface/datasets/pull/6464 | 2,020,860,462 | PR_kwDODunzps5g5djo | 6,464 | Add concurrent loading of shards to datasets.load_from_disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/51880718?v=4",
"events_url": "https://api.github.com/users/kkoutini/events{/privacy}",
"followers_url": "https://api.github.com/users/kkoutini/followers",
"following_url": "https://api.github.com/users/kkoutini/following{/other_user}",
"gists_url": "htt... | [] | open | false | null | [] | null | [
"If we use multithreading no need to ask for `num_proc`. And maybe use the same number of threads as tqdm by default (IIRC it's `max(32, cpu_count() + 4)`) - you can even use `tqdm.contrib.concurrent.thread_map` directly to simplify the code\r\n\r\nAlso you can ignore the `IN_MEMORY_MAX_SIZE` config for this. This ... | 2023-12-01T13:13:53Z | 2023-12-07T12:47:02Z | null | NONE | null | In some file systems (like Lustre), memory mapping arrow files takes time. This can be accelerated by performing the mmap in parallel on processes or threads.
- Threads seem to be faster than processes when gathering the list of tables from the workers (see https://github.com/huggingface/datasets/issues/2252).
- I'... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6464/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6464/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6464.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6464",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6464.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6464"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6463 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6463/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6463/comments | https://api.github.com/repos/huggingface/datasets/issues/6463/events | https://github.com/huggingface/datasets/pull/6463 | 2,020,702,967 | PR_kwDODunzps5g46_4 | 6,463 | Disable benchmarks in PRs | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"It's a way to detect regressions in performance-sensitive methods like map, and find the commit that led to the regression",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_bat... | 2023-12-01T11:35:30Z | 2023-12-01T12:09:09Z | 2023-12-01T12:03:04Z | MEMBER | null | In order to keep PR pages less spammy / more readable.
Having the benchmarks on commits on `main` is enough imo | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6463/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6463/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6463.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6463",
"merged_at": "2023-12-01T12:03:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6463.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6462 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6462/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6462/comments | https://api.github.com/repos/huggingface/datasets/issues/6462/events | https://github.com/huggingface/datasets/pull/6462 | 2,019,238,388 | PR_kwDODunzps5gz68T | 6,462 | Missing DatasetNotFoundError | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-11-30T18:09:43Z | 2023-11-30T18:36:40Z | 2023-11-30T18:30:30Z | MEMBER | null | continuation of https://github.com/huggingface/datasets/pull/6431
this should fix the CI in https://github.com/huggingface/datasets/pull/6458 too | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6462/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6462/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6462.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6462",
"merged_at": "2023-11-30T18:30:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6462.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6461 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6461/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6461/comments | https://api.github.com/repos/huggingface/datasets/issues/6461/events | https://github.com/huggingface/datasets/pull/6461 | 2,018,850,731 | PR_kwDODunzps5gykvO | 6,461 | Fix shard retry mechanism in `push_to_hub` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"@Wauplin Maybe `504` should be added to the `retry_on_status_codes` tuple [here](https://github.com/huggingface/huggingface_hub/blob/5eefebee2c150a2df950ab710db350e96c711433/src/huggingface_hub/lfs.py#L300) to guard against https://github.com/huggingface/datasets/issues/3872",
"We could but I'm not sure to have ... | 2023-11-30T14:57:14Z | 2023-12-01T17:57:39Z | 2023-12-01T17:51:33Z | CONTRIBUTOR | null | When it fails, `preupload_lfs_files` throws a [`RuntimeError`](https://github.com/huggingface/huggingface_hub/blob/5eefebee2c150a2df950ab710db350e96c711433/src/huggingface_hub/_commit_api.py#L402) error and chains the original HTTP error. This PR modifies the retry mechanism's error handling to account for that.
Fix... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6461/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6461/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6461.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6461",
"merged_at": "2023-12-01T17:51:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6461.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6460 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6460/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6460/comments | https://api.github.com/repos/huggingface/datasets/issues/6460/events | https://github.com/huggingface/datasets/issues/6460 | 2,017,433,899 | I_kwDODunzps54P5kr | 6,460 | jsonlines files don't load with `load_dataset` | {
"avatar_url": "https://avatars.githubusercontent.com/u/41377532?v=4",
"events_url": "https://api.github.com/users/serenalotreck/events{/privacy}",
"followers_url": "https://api.github.com/users/serenalotreck/followers",
"following_url": "https://api.github.com/users/serenalotreck/following{/other_user}",
"g... | [] | closed | false | null | [] | null | [
"Hi @serenalotreck,\r\n\r\nWe use Apache Arrow `pyarrow` to read jsonlines and it throws an error when trying to load your data files:\r\n```python\r\nIn [1]: import pyarrow as pa\r\n\r\nIn [2]: data = pa.json.read_json(\"train.jsonl\")\r\n---------------------------------------------------------------------------\... | 2023-11-29T21:20:11Z | 2023-12-05T14:02:12Z | 2023-12-05T13:30:53Z | NONE | null | ### Describe the bug
While [the docs](https://huggingface.co/docs/datasets/upload_dataset#upload-dataset) seem to state that `.jsonl` is a supported extension for `datasets`, loading the dataset results in a `JSONDecodeError`.
### Steps to reproduce the bug
Code:
```
from datasets import load_dataset
dset = load_... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6460/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6460/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6459 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6459/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6459/comments | https://api.github.com/repos/huggingface/datasets/issues/6459/events | https://github.com/huggingface/datasets/pull/6459 | 2,017,029,380 | PR_kwDODunzps5gsWlz | 6,459 | [WIP] Retrieve cached datasets that were pushed to hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | open | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-11-29T16:56:15Z | 2023-12-08T17:02:12Z | null | MEMBER | null | I drafted the logic to retrieve a no-script dataset in the cache.
For example it can reload datasets that were pushed to hub if they exist in the cache.
example:
```python
>>> Dataset.from_dict({"a": [1, 2]}).push_to_hub("lhoestq/tmp")
>>> load_dataset("lhoestq/tmp")
DatasetDict({
train: Dataset({
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6459/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6459/timeline | null | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6459.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6459",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6459.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6459"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6458 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6458/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6458/comments | https://api.github.com/repos/huggingface/datasets/issues/6458/events | https://github.com/huggingface/datasets/pull/6458 | 2,016,577,761 | PR_kwDODunzps5gqy4M | 6,458 | Lazy data files resolution | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | open | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-11-29T13:18:44Z | 2023-12-08T20:38:23Z | null | MEMBER | null | Related to discussion at https://github.com/huggingface/datasets/pull/6255
this makes this code run in 2sec instead of >10sec
```python
from datasets import load_dataset
ds = load_dataset("glue", "sst2", streaming=True, trust_remote_code=False)
```
For some datasets with many configs and files it can be u... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6458/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6458/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6458.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6458",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6458.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6458"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6457 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6457/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6457/comments | https://api.github.com/repos/huggingface/datasets/issues/6457/events | https://github.com/huggingface/datasets/issues/6457 | 2,015,650,563 | I_kwDODunzps54JGMD | 6,457 | `TypeError`: huggingface_hub.hf_file_system.HfFileSystem.find() got multiple values for keyword argument 'maxdepth' | {
"avatar_url": "https://avatars.githubusercontent.com/u/79070834?v=4",
"events_url": "https://api.github.com/users/wasertech/events{/privacy}",
"followers_url": "https://api.github.com/users/wasertech/followers",
"following_url": "https://api.github.com/users/wasertech/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | [
"Updating `fsspec>=2023.10.0` did solve the issue.",
"Maybe it should be pinned somewhere?",
"> Maybe this should go in datasets directly... anyways you can easily fix this error by updating datasets>=2.15.1.dev0.\r\n\r\n@lhoestq @mariosasko for what I understand this is a bug fixed in `datasets` already, righ... | 2023-11-29T01:57:36Z | 2023-11-29T15:39:03Z | 2023-11-29T02:02:38Z | NONE | null | ### Describe the bug
Please see https://github.com/huggingface/huggingface_hub/issues/1872
### Steps to reproduce the bug
Please see https://github.com/huggingface/huggingface_hub/issues/1872
### Expected behavior
Please see https://github.com/huggingface/huggingface_hub/issues/1872
### Environment info
Please s... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6457/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6457/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6456 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6456/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6456/comments | https://api.github.com/repos/huggingface/datasets/issues/6456/events | https://github.com/huggingface/datasets/pull/6456 | 2,015,186,090 | PR_kwDODunzps5gmDJY | 6,456 | Don't require trust_remote_code in inspect_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-11-28T19:47:07Z | 2023-11-30T10:40:23Z | 2023-11-30T10:34:12Z | MEMBER | null | don't require `trust_remote_code` in (deprecated) `inspect_dataset` (it defeats its purpose)
(not super important but we might as well keep it until the next major release)
this is needed to fix the tests in https://github.com/huggingface/datasets/pull/6448 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6456/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6456/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6456.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6456",
"merged_at": "2023-11-30T10:34:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6456.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6454 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6454/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6454/comments | https://api.github.com/repos/huggingface/datasets/issues/6454/events | https://github.com/huggingface/datasets/pull/6454 | 2,013,001,584 | PR_kwDODunzps5gej3H | 6,454 | Refactor `dill` logic | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-11-27T20:01:25Z | 2023-11-28T16:29:58Z | 2023-11-28T16:29:31Z | CONTRIBUTOR | null | Refactor the `dill` logic to make it easier to maintain (and fix some issues along the way)
It makes the following improvements to the serialization API:
* consistent order of a `dict`'s keys
* support for hashing `torch.compile`-ed modules and functions
* deprecates `datasets.fingerprint.hashregister` as the `ha... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6454/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6454/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6454.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6454",
"merged_at": "2023-11-28T16:29:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6454.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6453 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6453/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6453/comments | https://api.github.com/repos/huggingface/datasets/issues/6453/events | https://github.com/huggingface/datasets/pull/6453 | 2,011,907,787 | PR_kwDODunzps5ga0rv | 6,453 | Update hub-docs reference | {
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-11-27T09:57:20Z | 2023-11-27T10:23:44Z | 2023-11-27T10:17:34Z | CONTRIBUTOR | null | Follow up to huggingface/huggingface.js#296 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6453/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6453/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6453.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6453",
"merged_at": "2023-11-27T10:17:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6453.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6452 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6452/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6452/comments | https://api.github.com/repos/huggingface/datasets/issues/6452/events | https://github.com/huggingface/datasets/pull/6452 | 2,011,632,708 | PR_kwDODunzps5gZ5oe | 6,452 | Praveen_repo_pull_req | {
"avatar_url": "https://avatars.githubusercontent.com/u/151713216?v=4",
"events_url": "https://api.github.com/users/Praveenhh/events{/privacy}",
"followers_url": "https://api.github.com/users/Praveenhh/followers",
"following_url": "https://api.github.com/users/Praveenhh/following{/other_user}",
"gists_url": ... | [] | closed | false | null | [] | null | [] | 2023-11-27T07:07:50Z | 2023-11-27T09:28:00Z | 2023-11-27T09:28:00Z | NONE | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6452/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6452/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6452.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6452",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6452.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6452"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6451 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6451/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6451/comments | https://api.github.com/repos/huggingface/datasets/issues/6451/events | https://github.com/huggingface/datasets/issues/6451 | 2,010,693,912 | I_kwDODunzps532MEY | 6,451 | Unable to read "marsyas/gtzan" data | {
"avatar_url": "https://avatars.githubusercontent.com/u/32300890?v=4",
"events_url": "https://api.github.com/users/gerald-wrona/events{/privacy}",
"followers_url": "https://api.github.com/users/gerald-wrona/followers",
"following_url": "https://api.github.com/users/gerald-wrona/following{/other_user}",
"gist... | [] | closed | false | null | [] | null | [
"Hi! We've merged a [PR](https://huggingface.co/datasets/marsyas/gtzan/discussions/1) that fixes the script's path logic on Windows.",
"I have transferred the discussion to the corresponding dataset: https://huggingface.co/datasets/marsyas/gtzan/discussions/2\r\n\r\nLet's continue there.",
"@mariosasko @albertv... | 2023-11-25T15:13:17Z | 2023-12-01T12:53:46Z | 2023-11-27T09:36:25Z | NONE | null | Hi, this is my code and the error:
```
from datasets import load_dataset
gtzan = load_dataset("marsyas/gtzan", "all")
```
[error_trace.txt](https://github.com/huggingface/datasets/files/13464397/error_trace.txt)
[audio_yml.txt](https://github.com/huggingface/datasets/files/13464410/audio_yml.txt)
Python 3.11.5
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6451/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6451/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6450 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6450/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6450/comments | https://api.github.com/repos/huggingface/datasets/issues/6450/events | https://github.com/huggingface/datasets/issues/6450 | 2,009,491,386 | I_kwDODunzps53xme6 | 6,450 | Support multiple image/audio columns in ImageFolder/AudioFolder | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://ap... | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
},
{
"color": "a2eeef",
... | closed | false | null | [] | null | [
"A duplicate of https://github.com/huggingface/datasets/issues/5760"
] | 2023-11-24T10:34:09Z | 2023-11-28T11:07:17Z | 2023-11-24T17:24:38Z | CONTRIBUTOR | null | ### Feature request
Have a metadata.csv file with multiple columns that point to relative image or audio files.
### Motivation
Currently, ImageFolder allows one column, called `file_name`, pointing to relative image files. On the same model, AudioFolder allows one column, called `file_name`, pointing to relative aud... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6450/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6450/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6449 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6449/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6449/comments | https://api.github.com/repos/huggingface/datasets/issues/6449/events | https://github.com/huggingface/datasets/pull/6449 | 2,008,617,992 | PR_kwDODunzps5gQCVZ | 6,449 | Fix metadata file resolution when inferred pattern is `**` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-11-23T17:35:02Z | 2023-11-27T10:02:56Z | 2023-11-24T17:13:02Z | CONTRIBUTOR | null | Refetch metadata files in case they were dropped by `filter_extensions` in the previous step.
Fix #6442
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6449/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6449/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6449.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6449",
"merged_at": "2023-11-24T17:13:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6449.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6448 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6448/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6448/comments | https://api.github.com/repos/huggingface/datasets/issues/6448/events | https://github.com/huggingface/datasets/pull/6448 | 2,008,614,985 | PR_kwDODunzps5gQBsE | 6,448 | Use parquet export if possible | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-11-23T17:31:57Z | 2023-12-01T17:57:17Z | 2023-12-01T17:50:59Z | MEMBER | null | The idea is to make this code work for datasets with scripts if they have a Parquet export
```python
ds = load_dataset("squad", trust_remote_code=False)
```
And more generally, it means we use the Parquet export whenever it's possible (it's safer and faster than dataset scripts).
I also added a `config.USE_P... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 2,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6448/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6448/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6448.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6448",
"merged_at": "2023-12-01T17:50:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6448.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6447 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6447/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6447/comments | https://api.github.com/repos/huggingface/datasets/issues/6447/events | https://github.com/huggingface/datasets/issues/6447 | 2,008,195,298 | I_kwDODunzps53sqDi | 6,447 | Support one dataset loader per config when using YAML | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://ap... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2023-11-23T13:03:07Z | 2023-11-23T13:03:07Z | null | CONTRIBUTOR | null | ### Feature request
See https://huggingface.co/datasets/datasets-examples/doc-unsupported-1
I would like to use CSV loader for the "csv" config, JSONL loader for the "jsonl" config, etc.
### Motivation
It would be more flexible for the users
### Your contribution
No specific contribution | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6447/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6447/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6446 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6446/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6446/comments | https://api.github.com/repos/huggingface/datasets/issues/6446/events | https://github.com/huggingface/datasets/issues/6446 | 2,007,092,708 | I_kwDODunzps53oc3k | 6,446 | Speech Commands v2 dataset doesn't match AST-v2 config | {
"avatar_url": "https://avatars.githubusercontent.com/u/18024303?v=4",
"events_url": "https://api.github.com/users/vymao/events{/privacy}",
"followers_url": "https://api.github.com/users/vymao/followers",
"following_url": "https://api.github.com/users/vymao/following{/other_user}",
"gists_url": "https://api.... | [] | closed | false | null | [] | null | [
"You can use `.align_labels_with_mapping` on the dataset to align the labels with the model config.\r\n\r\nRegarding the number of labels, only the special `_silence_` label corresponding to noise is missing, which is consistent with the model paper (reports training on 35 labels). You can run a `.filter` to drop ... | 2023-11-22T20:46:36Z | 2023-11-28T14:46:08Z | 2023-11-28T14:46:08Z | NONE | null | ### Describe the bug
[According](https://huggingface.co/MIT/ast-finetuned-speech-commands-v2) to `MIT/ast-finetuned-speech-commands-v2`, the model was trained on the Speech Commands v2 dataset. However, while the model config says the model should have 35 class labels, the dataset itself has 36 class labels. Moreover,... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6446/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6446/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6445 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6445/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6445/comments | https://api.github.com/repos/huggingface/datasets/issues/6445/events | https://github.com/huggingface/datasets/pull/6445 | 2,006,958,595 | PR_kwDODunzps5gKg2d | 6,445 | Use `filelock` package for file locking | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-11-22T19:04:45Z | 2023-11-23T18:47:30Z | 2023-11-23T18:41:23Z | CONTRIBUTOR | null | Use the `filelock` package instead of `datasets.utils.filelock` for file locking to be consistent with `huggingface_hub` and not to be responsible for improving the `filelock` capabilities 🙂.
(Reverts https://github.com/huggingface/datasets/pull/859, but these `INFO` logs are not printed by default (anymore?), so ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6445/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6445/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6445.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6445",
"merged_at": "2023-11-23T18:41:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6445.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6444 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6444/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6444/comments | https://api.github.com/repos/huggingface/datasets/issues/6444/events | https://github.com/huggingface/datasets/pull/6444 | 2,006,842,179 | PR_kwDODunzps5gKG_e | 6,444 | Remove `Table.__getstate__` and `Table.__setstate__` | {
"avatar_url": "https://avatars.githubusercontent.com/u/36994684?v=4",
"events_url": "https://api.github.com/users/LZHgrla/events{/privacy}",
"followers_url": "https://api.github.com/users/LZHgrla/followers",
"following_url": "https://api.github.com/users/LZHgrla/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"Thanks for working on this! The [issue](https://bugs.python.org/issue24658) with pickling objects larger than 4GB seems to be patched in Python 3.8 (the minimal supported version was 3.6 at the time of implementing this), so a simple solution would be removing the `Table.__setstate__` and `Table.__getstate__` over... | 2023-11-22T17:55:10Z | 2023-11-23T15:19:43Z | 2023-11-23T15:13:28Z | CONTRIBUTOR | null | When using distributed training, the code of `os.remove(filename)` may be executed separately by each rank, leading to `FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmprxxxxxxx.arrow'`
```python
from torch import distributed as dist
if dist.get_rank() == 0:
dataset = process_dataset(*args, ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6444/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6444/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6444.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6444",
"merged_at": "2023-11-23T15:13:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6444.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6443 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6443/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6443/comments | https://api.github.com/repos/huggingface/datasets/issues/6443/events | https://github.com/huggingface/datasets/issues/6443 | 2,006,568,368 | I_kwDODunzps53mc2w | 6,443 | Trouble loading files defined in YAML explicitly | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://ap... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"There is a typo in one of the file names - `data/edf.csv` should be renamed to `data/def.csv` 🙂. ",
"wow, I reviewed it twice to avoid being ashamed like that, but... I didn't notice the typo.\r\n\r\n---\r\n\r\nBesides this: do you think we would be able to improve the error message to make this clearer?"
] | 2023-11-22T15:18:10Z | 2023-11-23T09:06:20Z | null | CONTRIBUTOR | null | Look at https://huggingface.co/datasets/severo/doc-yaml-2
It's a reproduction of the example given in the docs at https://huggingface.co/docs/hub/datasets-manual-configuration
```
You can select multiple files per split using a list of paths:
my_dataset_repository/
├── README.md
├── data/
│ ├── abc.csv
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6443/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6443/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6442 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6442/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6442/comments | https://api.github.com/repos/huggingface/datasets/issues/6442/events | https://github.com/huggingface/datasets/issues/6442 | 2,006,086,907 | I_kwDODunzps53knT7 | 6,442 | Trouble loading image folder with additional features - metadata file ignored | {
"avatar_url": "https://avatars.githubusercontent.com/u/57615435?v=4",
"events_url": "https://api.github.com/users/linoytsaban/events{/privacy}",
"followers_url": "https://api.github.com/users/linoytsaban/followers",
"following_url": "https://api.github.com/users/linoytsaban/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | null | [
"I reproduced too:\r\n- root: metadata file is ignored (https://huggingface.co/datasets/severo/doc-image-3)\r\n- data/ dir: metadata file is ignored (https://huggingface.co/datasets/severo/doc-image-4)\r\n- train/ dir: works (https://huggingface.co/datasets/severo/doc-image-5)"
] | 2023-11-22T11:01:35Z | 2023-11-24T17:13:03Z | 2023-11-24T17:13:03Z | NONE | null | ### Describe the bug
Loading image folder with a caption column using `load_dataset(<image_folder_path>)` doesn't load the captions.
When loading a local image folder with captions using `datasets==2.13.0`
```
from datasets import load_dataset
data = load_dataset(<image_folder_path>)
data.column_names
```
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6442/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6442/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6441 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6441/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6441/comments | https://api.github.com/repos/huggingface/datasets/issues/6441/events | https://github.com/huggingface/datasets/issues/6441 | 2,004,985,857 | I_kwDODunzps53gagB | 6,441 | Trouble Loading a Gated Dataset For User with Granted Permission | {
"avatar_url": "https://avatars.githubusercontent.com/u/124715309?v=4",
"events_url": "https://api.github.com/users/e-trop/events{/privacy}",
"followers_url": "https://api.github.com/users/e-trop/followers",
"following_url": "https://api.github.com/users/e-trop/following{/other_user}",
"gists_url": "https://... | [] | open | false | null | [] | null | [
"> Also when they try to click the url link for the dataset they get a 404 error.\r\n\r\nThis seems to be a Hub error then (cc @SBrandeis)",
"Could you report this to https://discuss.huggingface.co/c/hub/23, providing the URL of the dataset, or at least if the dataset is public or private?"
] | 2023-11-21T19:24:36Z | 2023-11-23T09:02:04Z | null | NONE | null | ### Describe the bug
I have granted permissions to several users to access a gated huggingface dataset. The users accepted the invite and when trying to load the dataset using their access token they get
`FileNotFoundError: Couldn't find a dataset script at .....` . Also when they try to click the url link for the d... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6441/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6441/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6440 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6440/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6440/comments | https://api.github.com/repos/huggingface/datasets/issues/6440/events | https://github.com/huggingface/datasets/issues/6440 | 2,004,509,301 | I_kwDODunzps53emJ1 | 6,440 | `.map` not hashing under python 3.9 | {
"avatar_url": "https://avatars.githubusercontent.com/u/9058204?v=4",
"events_url": "https://api.github.com/users/changyeli/events{/privacy}",
"followers_url": "https://api.github.com/users/changyeli/followers",
"following_url": "https://api.github.com/users/changyeli/following{/other_user}",
"gists_url": "h... | [] | closed | false | null | [] | null | [
"Tried to upgrade Python to 3.11 - still get this message. A partial solution is to NOT use `num_proc` at all. It will be considerably longer to finish the job.",
"Hi! The `model = torch.compile(model)` line is problematic for our hashing logic. We would have to merge https://github.com/huggingface/datasets/pull/... | 2023-11-21T15:14:54Z | 2023-11-28T16:29:33Z | 2023-11-28T16:29:33Z | NONE | null | ### Describe the bug
The `.map` function cannot hash under python 3.9. Tried to use [the solution here](https://github.com/huggingface/datasets/issues/4521#issuecomment-1205166653), but still get the same message:
`Parameter 'function'=<function map_to_pred at 0x7fa0b49ead30> of the transform datasets.arrow_data... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6440/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6440/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6439 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6439/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6439/comments | https://api.github.com/repos/huggingface/datasets/issues/6439/events | https://github.com/huggingface/datasets/issues/6439 | 2,002,916,514 | I_kwDODunzps53YhSi | 6,439 | Download + preparation speed of datasets.load_dataset is 20x slower than huggingface hub snapshot and manual loding | {
"avatar_url": "https://avatars.githubusercontent.com/u/10792502?v=4",
"events_url": "https://api.github.com/users/AntreasAntoniou/events{/privacy}",
"followers_url": "https://api.github.com/users/AntreasAntoniou/followers",
"following_url": "https://api.github.com/users/AntreasAntoniou/following{/other_user}"... | [] | open | false | null | [] | null | [] | 2023-11-20T20:07:23Z | 2023-11-20T20:07:37Z | null | NONE | null | ### Describe the bug
I am working with a dataset I am trying to publish.
The path is Antreas/TALI.
It's a fairly large dataset, and contains images, video, audio and text.
I have been having multiple problems when the dataset is being downloaded using the load_dataset function -- even with 64 workers takin... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6439/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6439/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6438 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6438/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6438/comments | https://api.github.com/repos/huggingface/datasets/issues/6438/events | https://github.com/huggingface/datasets/issues/6438 | 2,002,032,804 | I_kwDODunzps53VJik | 6,438 | Support GeoParquet | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://ap... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Thank you, @severo ! I would be more than happy to help in any way I can. I am not familiar with this repo's codebase, but I would be eager to contribute. :)\r\n\r\nFor the preview in Datasets Hub, I think it makes sense to just display the geospatial column as text. If there were a dataset loader, though, I think... | 2023-11-20T11:54:58Z | 2023-11-20T14:10:23Z | null | CONTRIBUTOR | null | ### Feature request
Support the GeoParquet format
### Motivation
GeoParquet (https://geoparquet.org/) is a common format for sharing vectorial geospatial data on the cloud, along with "traditional" data columns.
It would be nice to be able to load this format with datasets, and more generally, in the Datasets Hub... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6438/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6438/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6437 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6437/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6437/comments | https://api.github.com/repos/huggingface/datasets/issues/6437/events | https://github.com/huggingface/datasets/issues/6437 | 2,001,272,606 | I_kwDODunzps53SP8e | 6,437 | Problem in training iterable dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/38107672?v=4",
"events_url": "https://api.github.com/users/21Timothy/events{/privacy}",
"followers_url": "https://api.github.com/users/21Timothy/followers",
"following_url": "https://api.github.com/users/21Timothy/following{/other_user}",
"gists_url": "... | [] | open | false | null | [] | null | [
"Has anyone ever encountered this problem before?",
"`split_dataset_by_node` doesn't give the exact same number of examples to each node in the case of iterable datasets, though it tries to be as equal as possible. In particular if your dataset is sharded and you have a number of shards that is a factor of the nu... | 2023-11-20T03:04:02Z | 2023-11-29T11:11:15Z | null | NONE | null | ### Describe the bug
I am using PyTorch DDP (Distributed Data Parallel) to train my model. Since the data is too large to load into memory at once, I am using load_dataset to read the data as an iterable dataset. I have used datasets.distributed.split_dataset_by_node to distribute the dataset. However, I have notice... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6437/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6437/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6436 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6436/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6436/comments | https://api.github.com/repos/huggingface/datasets/issues/6436/events | https://github.com/huggingface/datasets/issues/6436 | 2,000,844,474 | I_kwDODunzps53Qna6 | 6,436 | TypeError: <lambda>() takes 0 positional arguments but 1 was given | {
"avatar_url": "https://avatars.githubusercontent.com/u/47111429?v=4",
"events_url": "https://api.github.com/users/ahmadmustafaanis/events{/privacy}",
"followers_url": "https://api.github.com/users/ahmadmustafaanis/followers",
"following_url": "https://api.github.com/users/ahmadmustafaanis/following{/other_use... | [] | closed | false | null | [] | null | [
"This looks like a problem with your environment rather than `datasets`."
] | 2023-11-19T13:10:20Z | 2023-11-29T16:28:34Z | 2023-11-29T16:28:34Z | NONE | null | ### Describe the bug
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-35-7b6becee3685>](https://localhost:8080/#) in <cell line: 1>()
----> 1 from datasets import Dataset
9 frames
[/usr/lo... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6436/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6436/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6435 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6435/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6435/comments | https://api.github.com/repos/huggingface/datasets/issues/6435/events | https://github.com/huggingface/datasets/issues/6435 | 2,000,690,513 | I_kwDODunzps53QB1R | 6,435 | Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method | {
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.... | [] | closed | false | null | [] | null | [
"[This doc section](https://huggingface.co/docs/datasets/main/en/process#multiprocessing) explains how to modify the script to avoid this error.",
"@mariosasko thank you very much, i'll check it"
] | 2023-11-19T04:21:16Z | 2023-12-04T16:57:44Z | 2023-12-04T16:57:43Z | NONE | null | ### Describe the bug
1. I ran dataset mapping with `num_proc=6` in it and got this error:
`RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method`
I can't actually find a way to run multi-GPU dataset mapping. Can you help?
### Steps to... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6435/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6435/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6434 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6434/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6434/comments | https://api.github.com/repos/huggingface/datasets/issues/6434/events | https://github.com/huggingface/datasets/pull/6434 | 1,999,554,915 | PR_kwDODunzps5fxgUO | 6,434 | Use `ruff` for formatting | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-11-17T16:53:22Z | 2023-11-21T14:19:21Z | 2023-11-21T14:13:13Z | CONTRIBUTOR | null | Use `ruff` instead of `black` for formatting to be consistent with `transformers` ([PR](https://github.com/huggingface/transformers/pull/27144)) and `huggingface_hub` ([PR 1](https://github.com/huggingface/huggingface_hub/pull/1783) and [PR 2](https://github.com/huggingface/huggingface_hub/pull/1789)). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6434/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6434/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6434.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6434",
"merged_at": "2023-11-21T14:13:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6434.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6433 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6433/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6433/comments | https://api.github.com/repos/huggingface/datasets/issues/6433/events | https://github.com/huggingface/datasets/pull/6433 | 1,999,419,105 | PR_kwDODunzps5fxDoG | 6,433 | Better `tqdm` wrapper | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-11-17T15:45:15Z | 2023-11-22T16:48:18Z | 2023-11-22T16:42:08Z | CONTRIBUTOR | null | This PR aligns the `tqdm` logic with `huggingface_hub` (without introducing breaking changes), as the current one is error-prone.
Additionally, it improves the doc page about the `datasets`' utilities, and the handling of local `fsspec` paths in `cached_path`.
Fix #6409 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6433/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6433/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6433.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6433",
"merged_at": "2023-11-22T16:42:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6433.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6432 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6432/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6432/comments | https://api.github.com/repos/huggingface/datasets/issues/6432/events | https://github.com/huggingface/datasets/issues/6432 | 1,999,258,140 | I_kwDODunzps53KkIc | 6,432 | load_dataset does not load all of the data in my input file | {
"avatar_url": "https://avatars.githubusercontent.com/u/121301001?v=4",
"events_url": "https://api.github.com/users/demongolem-biz2/events{/privacy}",
"followers_url": "https://api.github.com/users/demongolem-biz2/followers",
"following_url": "https://api.github.com/users/demongolem-biz2/following{/other_user}... | [] | open | false | null | [] | null | [
"You should use `datasets.load_dataset` instead of `nlp.load_dataset`, as the `nlp` package is outdated.\r\n\r\nIf switching to `datasets.load_dataset` doesn't fix the issue, sharing the JSON file (feel free to replace the data with dummy data) would be nice so that we can reproduce it ourselves."
] | 2023-11-17T14:28:50Z | 2023-11-22T17:34:58Z | null | NONE | null | ### Describe the bug
I have 127 elements in my input dataset. When I do a len on the dataset after loaded, it is only 124 elements.
### Steps to reproduce the bug
train_dataset = nlp.load_dataset(data_args.dataset_path, name=data_args.qg_format, split=nlp.Split.TRAIN)
valid_dataset = nlp.load_dataset(data_... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6432/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6432/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6431 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6431/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6431/comments | https://api.github.com/repos/huggingface/datasets/issues/6431/events | https://github.com/huggingface/datasets/pull/6431 | 1,997,202,770 | PR_kwDODunzps5fpfos | 6,431 | Create DatasetNotFoundError and DataFilesNotFoundError | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-11-16T16:02:55Z | 2023-11-22T15:18:51Z | 2023-11-22T15:12:33Z | MEMBER | null | Create `DatasetNotFoundError` and `DataFilesNotFoundError`.
Fix #6397.
CC: @severo | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6431/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6431/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6431.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6431",
"merged_at": "2023-11-22T15:12:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6431.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6429 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6429/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6429/comments | https://api.github.com/repos/huggingface/datasets/issues/6429/events | https://github.com/huggingface/datasets/pull/6429 | 1,996,723,698 | PR_kwDODunzps5fn1r_ | 6,429 | Add trust_remote_code argument | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-11-16T12:12:54Z | 2023-11-28T16:10:39Z | 2023-11-28T16:03:43Z | MEMBER | null | Draft about adding `trust_remote_code` to `load_dataset`.
```python
ds = load_dataset(..., trust_remote_code=True) # run remote code (current default)
```
It would default to `True` (current behavior) and in the next major release it will prompt the user to check the code before running it (we'll communicate o... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6429/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6429/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6429.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6429",
"merged_at": "2023-11-28T16:03:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6429.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6428 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6428/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6428/comments | https://api.github.com/repos/huggingface/datasets/issues/6428/events | https://github.com/huggingface/datasets/pull/6428 | 1,996,306,394 | PR_kwDODunzps5fmakS | 6,428 | Set dev version | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6428). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | 2023-11-16T08:12:55Z | 2023-11-16T08:19:39Z | 2023-11-16T08:13:28Z | MEMBER | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6428/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6428/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6428.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6428",
"merged_at": "2023-11-16T08:13:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6428.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6427 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6427/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6427/comments | https://api.github.com/repos/huggingface/datasets/issues/6427/events | https://github.com/huggingface/datasets/pull/6427 | 1,996,248,605 | PR_kwDODunzps5fmN1_ | 6,427 | Release: 2.15.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-11-16T07:37:20Z | 2023-11-16T08:12:12Z | 2023-11-16T07:43:05Z | MEMBER | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6427/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6427/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6427.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6427",
"merged_at": "2023-11-16T07:43:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6427.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6426 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6426/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6426/comments | https://api.github.com/repos/huggingface/datasets/issues/6426/events | https://github.com/huggingface/datasets/pull/6426 | 1,995,363,264 | PR_kwDODunzps5fjOEK | 6,426 | More robust temporary directory deletion | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6426). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | 2023-11-15T19:06:42Z | 2023-12-01T15:37:32Z | 2023-12-01T15:31:19Z | CONTRIBUTOR | null | While fixing the Windows errors in #6362, I noticed that `PermissionError` can still easily be thrown on the session exit by the temporary cache directory's finalizer (we would also have to keep track of intermediate datasets, copies, etc.). ~~Due to the low usage of `datasets` on Windows, this PR takes a simpler appro... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6426/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6426/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6426.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6426",
"merged_at": "2023-12-01T15:31:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6426.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6425 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6425/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6425/comments | https://api.github.com/repos/huggingface/datasets/issues/6425/events | https://github.com/huggingface/datasets/pull/6425 | 1,995,269,382 | PR_kwDODunzps5fi5ye | 6,425 | Fix deprecation warning when building conda package | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | open | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-11-15T18:00:11Z | 2023-11-15T18:05:11Z | null | MEMBER | null | When building/releasing conda package, we get this deprecation warning:
```
/usr/share/miniconda/envs/build-datasets/bin/conda-build:11: DeprecationWarning: conda_build.cli.main_build.main is deprecated and will be removed in 4.0.0. Use `conda build` instead.
```
This PR fixes the deprecation warning by using `co... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6425/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6425/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6425.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6425",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6425.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6425"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6424 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6424/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6424/comments | https://api.github.com/repos/huggingface/datasets/issues/6424/events | https://github.com/huggingface/datasets/pull/6424 | 1,995,224,516 | PR_kwDODunzps5fiwDC | 6,424 | [docs] troubleshooting guide | {
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url":... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6424). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | 2023-11-15T17:28:14Z | 2023-11-30T17:29:55Z | 2023-11-30T17:23:46Z | CONTRIBUTOR | null | Hi all! This is a PR adding a troubleshooting guide for Datasets docs.
I went through the library's GitHub Issues and Forum questions and identified a few issues that are common enough that I think it would be valuable to include them in the troubleshooting guide. These are:
- creating a dataset from a folder and n... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6424/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6424/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6424.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6424",
"merged_at": "2023-11-30T17:23:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6424.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6423 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6423/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6423/comments | https://api.github.com/repos/huggingface/datasets/issues/6423/events | https://github.com/huggingface/datasets/pull/6423 | 1,994,946,847 | PR_kwDODunzps5fhzD6 | 6,423 | Fix conda release by adding pyarrow-hotfix dependency | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-11-15T14:57:12Z | 2023-11-15T17:15:33Z | 2023-11-15T17:09:24Z | MEMBER | null | Fix conda release by adding pyarrow-hotfix dependency.
Note that conda release failed in latest 2.14.7 release: https://github.com/huggingface/datasets/actions/runs/6874667214/job/18696761723
```
Traceback (most recent call last):
File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/t... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6423/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6423/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6423.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6423",
"merged_at": "2023-11-15T17:09:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6423.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6422 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6422/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6422/comments | https://api.github.com/repos/huggingface/datasets/issues/6422/events | https://github.com/huggingface/datasets/issues/6422 | 1,994,579,267 | I_kwDODunzps524t1D | 6,422 | Allow to choose the `writer_batch_size` when using `save_to_disk` | {
"avatar_url": "https://avatars.githubusercontent.com/u/38216711?v=4",
"events_url": "https://api.github.com/users/NathanGodey/events{/privacy}",
"followers_url": "https://api.github.com/users/NathanGodey/followers",
"following_url": "https://api.github.com/users/NathanGodey/following{/other_user}",
"gists_u... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"We have a config variable that controls the batch size in `save_to_disk`:\r\n```python\r\nimport datasets\r\ndatasets.config.DEFAULT_MAX_BATCH_SIZE = <smaller_batch_size>\r\n...\r\nds.save_to_disk(...)\r\n```",
"Thank you for your answer!\r\n\r\nFrom what I am reading in `https://github.com/huggingface/datasets/... | 2023-11-15T11:18:34Z | 2023-11-16T10:00:21Z | null | NONE | null | ### Feature request
Add an argument in `save_to_disk` regarding batch size, which would be passed to `shard` and other methods.
### Motivation
The `Dataset.save_to_disk` method currently calls `shard` without passing a `writer_batch_size` argument, thus implicitly using the default value (1000). This can result in R... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6422/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6422/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6421 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6421/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6421/comments | https://api.github.com/repos/huggingface/datasets/issues/6421/events | https://github.com/huggingface/datasets/pull/6421 | 1,994,451,553 | PR_kwDODunzps5fgG1h | 6,421 | Add pyarrow-hotfix to release docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-11-15T10:06:44Z | 2023-11-15T13:49:55Z | 2023-11-15T13:38:22Z | MEMBER | null | Add `pyarrow-hotfix` to release docs. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6421/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6421/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6421.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6421",
"merged_at": "2023-11-15T13:38:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6421.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6420 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6420/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6420/comments | https://api.github.com/repos/huggingface/datasets/issues/6420/events | https://github.com/huggingface/datasets/pull/6420 | 1,994,278,903 | PR_kwDODunzps5ffhdi | 6,420 | Set dev version | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6420). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | 2023-11-15T08:22:19Z | 2023-11-15T08:33:36Z | 2023-11-15T08:22:33Z | MEMBER | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6420/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6420/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6420.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6420",
"merged_at": "2023-11-15T08:22:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6420.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6419 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6419/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6419/comments | https://api.github.com/repos/huggingface/datasets/issues/6419/events | https://github.com/huggingface/datasets/pull/6419 | 1,994,257,873 | PR_kwDODunzps5ffc7d | 6,419 | Release: 2.14.7 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-11-15T08:07:37Z | 2023-11-15T17:35:30Z | 2023-11-15T08:12:59Z | MEMBER | null | Release 2.14.7. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6419/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6419/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6419.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6419",
"merged_at": "2023-11-15T08:12:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6419.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6418 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6418/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6418/comments | https://api.github.com/repos/huggingface/datasets/issues/6418/events | https://github.com/huggingface/datasets/pull/6418 | 1,993,224,629 | PR_kwDODunzps5fb7lu | 6,418 | Remove token value from warnings | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-11-14T17:34:06Z | 2023-11-14T22:26:04Z | 2023-11-14T22:19:45Z | CONTRIBUTOR | null | Fix #6412 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6418/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6418/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6418.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6418",
"merged_at": "2023-11-14T22:19:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6418.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6417 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6417/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6417/comments | https://api.github.com/repos/huggingface/datasets/issues/6417/events | https://github.com/huggingface/datasets/issues/6417 | 1,993,149,416 | I_kwDODunzps52zQvo | 6,417 | Bug: LayoutLMv3 finetuning on FUNSD Notebook; Arrow Error | {
"avatar_url": "https://avatars.githubusercontent.com/u/57496007?v=4",
"events_url": "https://api.github.com/users/Davo00/events{/privacy}",
"followers_url": "https://api.github.com/users/Davo00/followers",
"following_url": "https://api.github.com/users/Davo00/following{/other_user}",
"gists_url": "https://a... | [] | closed | false | null | [] | null | [
"Very strange: `datasets-cli env`\r\n> \r\n> Copy-and-paste the text below in your GitHub issue.\r\n> \r\n> - `datasets` version: 2.9.0\r\n> - Platform: macOS-14.0-arm64-arm-64bit\r\n> - Python version: 3.9.13\r\n> - PyArrow version: 8.0.0\r\n> - Pandas version: 1.3.5\r\n\r\nAfter updating datasets and pyarrow on b... | 2023-11-14T16:53:20Z | 2023-11-16T20:23:41Z | 2023-11-16T20:23:41Z | NONE | null | ### Describe the bug
Arrow issues when running the example Notebook laptop locally on Mac with M1. Works on Google Collab.
**Notebook**: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv3/Fine_tune_LayoutLMv3_on_FUNSD_(HuggingFace_Trainer).ipynb
**Error**: `ValueError: Arrow type extensi... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6417/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6417/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6416 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6416/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6416/comments | https://api.github.com/repos/huggingface/datasets/issues/6416/events | https://github.com/huggingface/datasets/pull/6416 | 1,992,954,723 | PR_kwDODunzps5fbA4H | 6,416 | Rename audio_classificiation.py to audio_classification.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/1595907?v=4",
"events_url": "https://api.github.com/users/carlthome/events{/privacy}",
"followers_url": "https://api.github.com/users/carlthome/followers",
"following_url": "https://api.github.com/users/carlthome/following{/other_user}",
"gists_url": "h... | [] | closed | false | null | [] | null | [
"Oh good catch. Can you also rename it in `src/datasets/tasks/__init__.py` ?",
"Fixed! \r\n\r\n(I think, tough word to spell right TBH)",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show... | 2023-11-14T15:15:29Z | 2023-11-15T11:59:32Z | 2023-11-15T11:53:20Z | CONTRIBUTOR | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6416/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6416/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6416.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6416",
"merged_at": "2023-11-15T11:53:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6416.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6415 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6415/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6415/comments | https://api.github.com/repos/huggingface/datasets/issues/6415/events | https://github.com/huggingface/datasets/pull/6415 | 1,992,917,248 | PR_kwDODunzps5fa4n7 | 6,415 | Fix multi gpu map example | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-11-14T14:57:18Z | 2023-11-22T15:48:27Z | 2023-11-22T15:42:19Z | MEMBER | null | - use `torch.cuda.set_device` instead of `CUDA_VISIBLE_DEVICES `
- add `if __name__ == "__main__"`
fix https://github.com/huggingface/datasets/issues/6186 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6415/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6415/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6415.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6415",
"merged_at": "2023-11-22T15:42:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6415.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6414 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6414/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6414/comments | https://api.github.com/repos/huggingface/datasets/issues/6414/events | https://github.com/huggingface/datasets/pull/6414 | 1,992,482,491 | PR_kwDODunzps5fZZ2l | 6,414 | Set `usedforsecurity=False` in hashlib methods (FIPS compliance) | {
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-11-14T10:47:09Z | 2023-11-17T14:23:20Z | 2023-11-17T14:17:00Z | CONTRIBUTOR | null | Related to https://github.com/huggingface/transformers/issues/27034 and https://github.com/huggingface/huggingface_hub/pull/1782.
**TL;DR:** `hashlib` is not a secure library for cryptography-related stuff. We are only using `hashlib` for non-security-related purposes in `datasets` so it's fine. From Python 3.9 we s... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6414/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6414/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6414.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6414",
"merged_at": "2023-11-17T14:17:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6414.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6412 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6412/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6412/comments | https://api.github.com/repos/huggingface/datasets/issues/6412/events | https://github.com/huggingface/datasets/issues/6412 | 1,992,401,594 | I_kwDODunzps52waK6 | 6,412 | User token is printed out! | {
"avatar_url": "https://avatars.githubusercontent.com/u/25702692?v=4",
"events_url": "https://api.github.com/users/mohsen-goodarzi/events{/privacy}",
"followers_url": "https://api.github.com/users/mohsen-goodarzi/followers",
"following_url": "https://api.github.com/users/mohsen-goodarzi/following{/other_user}"... | [] | closed | false | null | [] | null | [
"Indeed, this is not a good practice. I've opened a PR that removes the token value from the (deprecation) warning."
] | 2023-11-14T10:01:34Z | 2023-11-14T22:19:46Z | 2023-11-14T22:19:46Z | NONE | null | This line prints user token on command line! Is it safe?
https://github.com/huggingface/datasets/blob/12ebe695b4748c5a26e08b44ed51955f74f5801d/src/datasets/load.py#L2091 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6412/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6412/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6411 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6411/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6411/comments | https://api.github.com/repos/huggingface/datasets/issues/6411/events | https://github.com/huggingface/datasets/pull/6411 | 1,992,386,630 | PR_kwDODunzps5fZE9F | 6,411 | Fix dependency conflict within CI build documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2023-11-14T09:52:51Z | 2023-11-14T10:05:59Z | 2023-11-14T10:05:35Z | MEMBER | null | Manually fix dependency conflict on `typing-extensions` version originated by `apache-beam` + `pydantic` (now a dependency of `huggingface-hub`).
This is a temporary hot fix of our CI build documentation until we stop using `apache-beam`.
Fix #6406. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6411/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6411/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6411.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6411",
"merged_at": "2023-11-14T10:05:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6411.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6410 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6410/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6410/comments | https://api.github.com/repos/huggingface/datasets/issues/6410/events | https://github.com/huggingface/datasets/issues/6410 | 1,992,100,209 | I_kwDODunzps52vQlx | 6,410 | Datasets does not load HuggingFace Repository properly | {
"avatar_url": "https://avatars.githubusercontent.com/u/40600201?v=4",
"events_url": "https://api.github.com/users/MikeDoes/events{/privacy}",
"followers_url": "https://api.github.com/users/MikeDoes/followers",
"following_url": "https://api.github.com/users/MikeDoes/following{/other_user}",
"gists_url": "htt... | [] | open | false | null | [] | null | [
"Hi! You can avoid the error by requesting only the `jsonl` files. `dataset = load_dataset(\"ai4privacy/pii-masking-200k\", data_files=[\"*.jsonl\"])`.\r\n\r\nOur data file inference does not filter out (incompatible) `json` files because `json` and `jsonl` use the same builder. Still, I think the inference should... | 2023-11-14T06:50:49Z | 2023-11-16T06:54:36Z | null | NONE | null | ### Describe the bug
Dear Datasets team,
We just have published a dataset on Huggingface:
https://huggingface.co/ai4privacy
However, when trying to read it using the Dataset library we get an error. As I understand jsonl files are compatible, could you please clarify how we can solve the issue? Please let me ... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6410/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6410/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6409 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6409/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6409/comments | https://api.github.com/repos/huggingface/datasets/issues/6409/events | https://github.com/huggingface/datasets/issues/6409 | 1,991,960,865 | I_kwDODunzps52uukh | 6,409 | using DownloadManager to download from local filesystem and disable_progress_bar, there will be an exception | {
"avatar_url": "https://avatars.githubusercontent.com/u/16574677?v=4",
"events_url": "https://api.github.com/users/neiblegy/events{/privacy}",
"followers_url": "https://api.github.com/users/neiblegy/followers",
"following_url": "https://api.github.com/users/neiblegy/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [] | 2023-11-14T04:21:01Z | 2023-11-22T16:42:09Z | 2023-11-22T16:42:09Z | NONE | null | ### Describe the bug
i'm using datasets.download.download_manager.DownloadManager to download files like "file:///a/b/c.txt", and i disable_progress_bar() to disable bar. there will be an exception as follows:
`AttributeError: 'function' object has no attribute 'close'
Exception ignored in: <function TqdmCallback.... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6409/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6409/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6408/comments | https://api.github.com/repos/huggingface/datasets/issues/6408/events | https://github.com/huggingface/datasets/issues/6408 | 1,991,902,972 | I_kwDODunzps52ugb8 | 6,408 | `IterableDataset` lost but not keep columns when map function adding columns with names in `remove_columns` | {
"avatar_url": "https://avatars.githubusercontent.com/u/24571857?v=4",
"events_url": "https://api.github.com/users/shmily326/events{/privacy}",
"followers_url": "https://api.github.com/users/shmily326/followers",
"following_url": "https://api.github.com/users/shmily326/following{/other_user}",
"gists_url": "... | [] | open | false | null | [] | null | [] | 2023-11-14T03:12:08Z | 2023-11-16T06:24:10Z | null | NONE | null | ### Describe the bug
IterableDataset lost but not keep columns when map function adding columns with names in remove_columns,
Dataset not.
May be related to the code below:
https://github.com/huggingface/datasets/blob/06c3ffb8d068b6307b247164b10f7c7311cefed4/src/datasets/iterable_dataset.py#L750-L756
### Steps t... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6408/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6408/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6407/comments | https://api.github.com/repos/huggingface/datasets/issues/6407/events | https://github.com/huggingface/datasets/issues/6407 | 1,991,514,079 | I_kwDODunzps52tBff | 6,407 | Loading the dataset from private S3 bucket gives "TypeError: cannot pickle '_contextvars.Context' object" | {
"avatar_url": "https://avatars.githubusercontent.com/u/1741779?v=4",
"events_url": "https://api.github.com/users/eawer/events{/privacy}",
"followers_url": "https://api.github.com/users/eawer/followers",
"following_url": "https://api.github.com/users/eawer/following{/other_user}",
"gists_url": "https://api.g... | [] | open | false | null | [] | null | [] | 2023-11-13T21:27:43Z | 2023-11-13T21:27:43Z | null | NONE | null | ### Describe the bug
I'm trying to read the parquet file from the private s3 bucket using the `load_dataset` function, but I receive `TypeError: cannot pickle '_contextvars.Context' object` error
I'm working on a machine with `~/.aws/credentials` file. I can't give credentials and the path to a file in a private bu... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6407/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6407/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6406/comments | https://api.github.com/repos/huggingface/datasets/issues/6406/events | https://github.com/huggingface/datasets/issues/6406 | 1,990,469,045 | I_kwDODunzps52pCW1 | 6,406 | CI Build PR Documentation is broken: ImportError: cannot import name 'TypeAliasType' from 'typing_extensions' | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [] | 2023-11-13T11:36:10Z | 2023-11-14T10:05:36Z | 2023-11-14T10:05:36Z | MEMBER | null | Our CI Build PR Documentation is broken. See: https://github.com/huggingface/datasets/actions/runs/6799554060/job/18486828777?pr=6390
```
ImportError: cannot import name 'TypeAliasType' from 'typing_extensions'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6406/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6406/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6405 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6405/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6405/comments | https://api.github.com/repos/huggingface/datasets/issues/6405/events | https://github.com/huggingface/datasets/issues/6405 | 1,990,358,743 | I_kwDODunzps52onbX | 6,405 | ConfigNamesError on a simple CSV file | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://ap... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"The viewer is working now. \r\n\r\nBased on the repo commit history, the bug was due to the incorrect format of the `features` field in the README YAML (`Value` requires `dtype`, e.g., `Value(\"string\")`, but it was not specified)",
"Feel free to close the issue",
"Oh, OK! Thanks. So, there was no reason to o... | 2023-11-13T10:28:29Z | 2023-11-13T20:01:24Z | 2023-11-13T20:01:24Z | CONTRIBUTOR | null | See https://huggingface.co/datasets/Nguyendo1999/mmath/discussions/1
```
Error code: ConfigNamesError
Exception: TypeError
Message: __init__() missing 1 required positional argument: 'dtype'
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runn... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6405/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6405/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6404 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6404/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6404/comments | https://api.github.com/repos/huggingface/datasets/issues/6404/events | https://github.com/huggingface/datasets/pull/6404 | 1,990,211,901 | PR_kwDODunzps5fRrJ- | 6,404 | Support pyarrow 14.0.1 and fix vulnerability CVE-2023-47248 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-11-13T09:15:39Z | 2023-11-14T10:29:48Z | 2023-11-14T10:23:29Z | MEMBER | null | Support `pyarrow` 14.0.1 and fix vulnerability [CVE-2023-47248](https://github.com/advisories/GHSA-5wvp-7f3h-6wmm).
Fix #6396. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6404/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6404/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6404.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6404",
"merged_at": "2023-11-14T10:23:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6404.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6403 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6403/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6403/comments | https://api.github.com/repos/huggingface/datasets/issues/6403/events | https://github.com/huggingface/datasets/issues/6403 | 1,990,098,817 | I_kwDODunzps52nn-B | 6,403 | Cannot import datasets on google colab (python 3.10.12) | {
"avatar_url": "https://avatars.githubusercontent.com/u/15389235?v=4",
"events_url": "https://api.github.com/users/nabilaannisa/events{/privacy}",
"followers_url": "https://api.github.com/users/nabilaannisa/followers",
"following_url": "https://api.github.com/users/nabilaannisa/following{/other_user}",
"gist... | [] | closed | false | null | [] | null | [
"You are most likely using an outdated version of `datasets` in the notebook, which can be verified with the `!datasets-cli env` command. You can run `!pip install -U datasets` to update the installation.",
"okay, it works! thank you so much! 😄 "
] | 2023-11-13T08:14:43Z | 2023-11-16T05:04:22Z | 2023-11-16T05:04:21Z | NONE | null | ### Describe the bug
I'm trying A full colab demo notebook of zero-shot-distillation from https://github.com/huggingface/transformers/tree/main/examples/research_projects/zero-shot-distillation but i got this type of error when importing datasets on my google colab (python version is 3.10.12)
 instead of (H, W, C). See #6394 for motivation. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6402/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6402/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6402.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6402",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6402.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6402"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6401 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6401/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6401/comments | https://api.github.com/repos/huggingface/datasets/issues/6401/events | https://github.com/huggingface/datasets/issues/6401 | 1,988,710,061 | I_kwDODunzps52iU6t | 6,401 | dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text") not working | {
"avatar_url": "https://avatars.githubusercontent.com/u/47074021?v=4",
"events_url": "https://api.github.com/users/userbox020/events{/privacy}",
"followers_url": "https://api.github.com/users/userbox020/followers",
"following_url": "https://api.github.com/users/userbox020/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"Seems like it's a problem with the dataset, since in the [README](https://huggingface.co/datasets/Hyperspace-Technologies/scp-wiki-text/blob/main/README.md) the validation is not specified. Try cloning the dataset, removing the README (or validation split), and loading it locally. ",
"@VarunNSrivastava thanks br... | 2023-11-11T04:09:07Z | 2023-11-20T17:45:20Z | 2023-11-20T17:45:20Z | NONE | null | ### Describe the bug
```
(datasets) mruserbox@guru-X99:/media/10TB_HHD/_LLM_DATASETS$ python dataset.py
Downloading readme: 100%|███████████████████████████████████| 360/360 [00:00<00:00, 2.16MB/s]
Downloading data: 100%|█████████████████████████████████| 65.1M/65.1M [00:19<00:00, 3.38MB/s]
Downloading data: 100... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6401/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6401/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6400 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6400/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6400/comments | https://api.github.com/repos/huggingface/datasets/issues/6400/events | https://github.com/huggingface/datasets/issues/6400 | 1,988,571,317 | I_kwDODunzps52hzC1 | 6,400 | Safely load datasets by disabling execution of dataset loading script | {
"avatar_url": "https://avatars.githubusercontent.com/u/14367635?v=4",
"events_url": "https://api.github.com/users/irenedea/events{/privacy}",
"followers_url": "https://api.github.com/users/irenedea/followers",
"following_url": "https://api.github.com/users/irenedea/following{/other_user}",
"gists_url": "htt... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | [
"great idea IMO\r\n\r\nthis could be a `trust_remote_code=True` flag like in transformers. We could also default to loading the Parquet conversion rather than executing code (for dataset repos that have both)",
"@julien-c that would be great!"
] | 2023-11-10T23:48:29Z | 2023-11-15T14:46:43Z | null | NONE | null | ### Feature request
Is there a way to disable execution of dataset loading script using `load_dataset`? This is a security vulnerability that could lead to arbitrary code execution.
Any suggested workarounds are welcome as well.
### Motivation
This is a security vulnerability that could lead to arbitrary code e... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6400/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6400/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6399 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6399/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6399/comments | https://api.github.com/repos/huggingface/datasets/issues/6399/events | https://github.com/huggingface/datasets/issues/6399 | 1,988,368,503 | I_kwDODunzps52hBh3 | 6,399 | TypeError: Cannot convert pyarrow.lib.ChunkedArray to pyarrow.lib.Array | {
"avatar_url": "https://avatars.githubusercontent.com/u/76236359?v=4",
"events_url": "https://api.github.com/users/y-hwang/events{/privacy}",
"followers_url": "https://api.github.com/users/y-hwang/followers",
"following_url": "https://api.github.com/users/y-hwang/following{/other_user}",
"gists_url": "https:... | [] | open | false | null | [] | null | [] | 2023-11-10T20:48:46Z | 2023-11-10T20:48:46Z | null | NONE | null | ### Describe the bug
Hi, I am preprocessing a large custom dataset with numpy arrays. I am running into this TypeError during writing in a dataset.map() function. I've tried decreasing writer batch size, but this error persists. This error does not occur for smaller datasets.
Thank you!
### Steps to repro... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6399/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6399/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6398 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6398/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6398/comments | https://api.github.com/repos/huggingface/datasets/issues/6398/events | https://github.com/huggingface/datasets/pull/6398 | 1,987,786,446 | PR_kwDODunzps5fJlP7 | 6,398 | Remove redundant condition in builders | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-11-10T14:56:43Z | 2023-11-14T10:49:15Z | 2023-11-14T10:43:00Z | MEMBER | null | Minor refactoring to remove redundant condition. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6398/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6398/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6398.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6398",
"merged_at": "2023-11-14T10:43:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6398.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6397 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6397/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6397/comments | https://api.github.com/repos/huggingface/datasets/issues/6397/events | https://github.com/huggingface/datasets/issues/6397 | 1,987,622,152 | I_kwDODunzps52eLUI | 6,397 | Raise a different exception for inexisting dataset vs files without known extension | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://ap... | [] | closed | false | null | [] | null | [] | 2023-11-10T13:22:14Z | 2023-11-22T15:12:34Z | 2023-11-22T15:12:34Z | CONTRIBUTOR | null | See https://github.com/huggingface/datasets-server/issues/2082#issuecomment-1805716557
We have the same error for:
- https://huggingface.co/datasets/severo/a_dataset_that_does_not_exist: a dataset that does not exist
- https://huggingface.co/datasets/severo/test_files_without_extension: a dataset with files withou... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6397/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6397/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6396 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6396/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6396/comments | https://api.github.com/repos/huggingface/datasets/issues/6396/events | https://github.com/huggingface/datasets/issues/6396 | 1,987,308,077 | I_kwDODunzps52c-ot | 6,396 | Issue with pyarrow 14.0.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://ap... | [] | closed | false | null | [] | null | [
"Looks like we should stop using `PyExtensionType` and use `ExtensionType` instead\r\n\r\nsee https://github.com/apache/arrow/commit/f14170976372436ec1d03a724d8d3f3925484ecf",
"https://github.com/huggingface/datasets-server/pull/2089#pullrequestreview-1724449532\r\n\r\n> Yes, I understand now: they have disabled ... | 2023-11-10T10:02:12Z | 2023-11-14T10:23:30Z | 2023-11-14T10:23:30Z | CONTRIBUTOR | null | See https://github.com/huggingface/datasets-server/pull/2089 for reference
```
from datasets import (Array2D, Dataset, Features)
feature_type = Array2D(shape=(2, 2), dtype="float32")
content = [[0.0, 0.0], [0.0, 0.0]]
features = Features({"col": feature_type})
dataset = Dataset.from_dict({"col": [content]}, fea... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6396/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6396/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6395 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6395/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6395/comments | https://api.github.com/repos/huggingface/datasets/issues/6395/events | https://github.com/huggingface/datasets/issues/6395 | 1,986,484,124 | I_kwDODunzps52Z1ec | 6,395 | Add ability to set lock type | {
"avatar_url": "https://avatars.githubusercontent.com/u/37735580?v=4",
"events_url": "https://api.github.com/users/leoleoasd/events{/privacy}",
"followers_url": "https://api.github.com/users/leoleoasd/followers",
"following_url": "https://api.github.com/users/leoleoasd/following{/other_user}",
"gists_url": "... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"We've replaced our filelock implementation with the `filelock` package, so their repo is the right place to request this feature.\r\n\r\nIn the meantime, the following should work: \r\n```python\r\nimport filelock\r\nfilelock.FileLock = filelock.SoftFileLock\r\n\r\nimport datasets\r\n...\r\n```"
] | 2023-11-09T22:12:30Z | 2023-11-23T18:50:00Z | 2023-11-23T18:50:00Z | NONE | null | ### Feature request
Allow setting file lock type, maybe from an environment variable
Currently, it only depends on whether fnctl is available:
https://github.com/huggingface/datasets/blob/12ebe695b4748c5a26e08b44ed51955f74f5801d/src/datasets/utils/filelock.py#L463-L470C16
### Motivation
In my environment... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6395/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6395/timeline | null | not_planned | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6394 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6394/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6394/comments | https://api.github.com/repos/huggingface/datasets/issues/6394/events | https://github.com/huggingface/datasets/issues/6394 | 1,985,947,116 | I_kwDODunzps52XyXs | 6,394 | TorchFormatter images (H, W, C) instead of (C, H, W) format | {
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https:... | [] | open | false | null | [] | null | [
"Here's a PR for that. https://github.com/huggingface/datasets/pull/6402\r\n\r\nIt's not backward compatible, unfortunately. "
] | 2023-11-09T16:02:15Z | 2023-11-11T19:41:03Z | null | NONE | null | ### Describe the bug
Using .set_format("torch") leads to images having shape (H, W, C), the same as in numpy.
However, pytorch normally uses (C, H, W) format.
Maybe I'm missing something but this makes the format a lot less useful as I then have to permute it anyways.
If not using the format it is possible to ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6394/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6394/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6393 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6393/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6393/comments | https://api.github.com/repos/huggingface/datasets/issues/6393/events | https://github.com/huggingface/datasets/issues/6393 | 1,984,913,259 | I_kwDODunzps52T19r | 6,393 | Filter occasionally hangs | {
"avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4",
"events_url": "https://api.github.com/users/dakinggg/events{/privacy}",
"followers_url": "https://api.github.com/users/dakinggg/followers",
"following_url": "https://api.github.com/users/dakinggg/following{/other_user}",
"gists_url": "htt... | [] | open | false | null | [] | null | [
"It looks like I may not be the first to encounter this: https://github.com/huggingface/datasets/issues/3172",
"Adding some more information, it seems to occur more frequently with large (millions of samples) datasets.",
"More information. My code is structured as (1) load (2) map (3) filter (4) filter. It was ... | 2023-11-09T06:18:30Z | 2023-11-21T17:39:26Z | null | NONE | null | ### Describe the bug
A call to `.filter` occasionally hangs (after the filter is complete, according to tqdm)
There is a trace produced
```
Exception ignored in: <function Dataset.__del__ at 0x7efb48130c10>
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/datasets/arrow_dataset.py", l... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6393/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6393/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6392 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6392/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6392/comments | https://api.github.com/repos/huggingface/datasets/issues/6392/events | https://github.com/huggingface/datasets/issues/6392 | 1,984,369,545 | I_kwDODunzps52RxOJ | 6,392 | `push_to_hub` is not robust to hub closing connection | {
"avatar_url": "https://avatars.githubusercontent.com/u/577139?v=4",
"events_url": "https://api.github.com/users/msis/events{/privacy}",
"followers_url": "https://api.github.com/users/msis/followers",
"following_url": "https://api.github.com/users/msis/following{/other_user}",
"gists_url": "https://api.githu... | [] | closed | false | null | [] | null | [
"Hi! We made some improvements to `push_to_hub` to make it more robust a couple of weeks ago but haven't published a release in the meantime, so it would help if you could install `datasets` from `main` (`pip install https://github.com/huggingface/datasets`) and let us know if this improved version of `push_to_hub`... | 2023-11-08T20:44:53Z | 2023-12-01T17:51:34Z | 2023-12-01T17:51:34Z | NONE | null | ### Describe the bug
Like to #6172, `push_to_hub` will crash if Hub resets the connection and raise the following error:
```
Pushing dataset shards to the dataset hub: 32%|███▏ | 54/171 [06:38<14:23, 7.38s/it]
Traceback (most recent call last):
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6392/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6392/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6391 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6391/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6391/comments | https://api.github.com/repos/huggingface/datasets/issues/6391/events | https://github.com/huggingface/datasets/pull/6391 | 1,984,091,776 | PR_kwDODunzps5e9BDO | 6,391 | Webdataset dataset builder | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I added an error message if the first examples don't appear to be in webdataset format\r\n```\r\n\"The TAR archives of the dataset should be in Webdataset format, \"\r\n\"but the files in the archive don't share the same prefix or th... | 2023-11-08T17:31:59Z | 2023-11-28T16:33:33Z | 2023-11-28T16:33:10Z | MEMBER | null | Allow `load_dataset` to support the Webdataset format.
It allows users to download/stream data from local files or from the Hugging Face Hub.
Moreover it will enable the Dataset Viewer for Webdataset datasets on HF.
## Implementation details
- I added a new Webdataset builder
- dataset with TAR files are n... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6391/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6391/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6391.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6391",
"merged_at": "2023-11-28T16:33:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6391.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6390 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6390/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6390/comments | https://api.github.com/repos/huggingface/datasets/issues/6390/events | https://github.com/huggingface/datasets/pull/6390 | 1,983,725,707 | PR_kwDODunzps5e7xQ3 | 6,390 | handle future deprecation argument | {
"avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4",
"events_url": "https://api.github.com/users/winglian/events{/privacy}",
"followers_url": "https://api.github.com/users/winglian/followers",
"following_url": "https://api.github.com/users/winglian/following{/other_user}",
"gists_url": "https... | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-11-08T14:21:25Z | 2023-11-21T02:10:24Z | 2023-11-14T15:15:59Z | CONTRIBUTOR | null | getting this error:
```
/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/datasets/table.py:1387: FutureWarning: promote has been superseded by mode='default'.
return cls._concat_blocks(pa_tables_to_concat_vertically, axis=0)
```
Since datasets supports arrow greater than 8.0.0, we need to handle both ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6390/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6390/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6390.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6390",
"merged_at": "2023-11-14T15:15:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6390.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/6389 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6389/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6389/comments | https://api.github.com/repos/huggingface/datasets/issues/6389/events | https://github.com/huggingface/datasets/issues/6389 | 1,983,545,744 | I_kwDODunzps52OoGQ | 6,389 | Index 339 out of range for dataset of size 339 <-- save_to_file() | {
"avatar_url": "https://avatars.githubusercontent.com/u/20318973?v=4",
"events_url": "https://api.github.com/users/jaggzh/events{/privacy}",
"followers_url": "https://api.github.com/users/jaggzh/followers",
"following_url": "https://api.github.com/users/jaggzh/following{/other_user}",
"gists_url": "https://a... | [] | open | false | null | [] | null | [
"Hi! Can you make the above reproducer self-contained by adding code that generates the data?",
"I managed a workaround eventually but I don't know what it was (I made a lot of changes to seq2seq). I'll try to include generating code in the future. (If I close, I don't know if you see it. Feel free to close; I'l... | 2023-11-08T12:52:09Z | 2023-11-24T09:14:13Z | null | NONE | null | ### Describe the bug
When saving out some Audio() data.
The data is audio recordings with associated 'sentences'.
(They use the audio 'bytes' approach because they're clips within audio files).
Code is below the traceback (I can't upload the voice audio/text (it's not even me)).
```
Traceback (most recent call ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6389/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6389/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6388 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6388/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6388/comments | https://api.github.com/repos/huggingface/datasets/issues/6388/events | https://github.com/huggingface/datasets/issues/6388 | 1,981,136,093 | I_kwDODunzps52Fbzd | 6,388 | How to create 3d medical imgae dataset? | {
"avatar_url": "https://avatars.githubusercontent.com/u/41177312?v=4",
"events_url": "https://api.github.com/users/QingYunA/events{/privacy}",
"followers_url": "https://api.github.com/users/QingYunA/followers",
"following_url": "https://api.github.com/users/QingYunA/following{/other_user}",
"gists_url": "htt... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2023-11-07T11:27:36Z | 2023-11-07T11:28:53Z | null | NONE | null | ### Feature request
I am newer to huggingface, after i look up `datasets` docs, I can't find how to create the dataset contains 3d medical image (ends with '.mhd', '.dcm', '.nii')
### Motivation
help us to upload 3d medical dataset to huggingface!
### Your contribution
I'll submit a PR if I find a way to... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6388/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6388/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6387 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6387/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6387/comments | https://api.github.com/repos/huggingface/datasets/issues/6387/events | https://github.com/huggingface/datasets/issues/6387 | 1,980,224,020 | I_kwDODunzps52B9IU | 6,387 | How to load existing downloaded dataset ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/73068772?v=4",
"events_url": "https://api.github.com/users/liming-ai/events{/privacy}",
"followers_url": "https://api.github.com/users/liming-ai/followers",
"following_url": "https://api.github.com/users/liming-ai/following{/other_user}",
"gists_url": "... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Feel free to use `dataset.save_to_disk(...)`, then scp the directory containing the saved dataset and reload it on your other machine using `dataset = load_from_disk(...)`"
] | 2023-11-06T22:51:44Z | 2023-11-16T18:07:01Z | 2023-11-16T18:07:01Z | NONE | null | Hi @mariosasko @lhoestq @katielink
Thanks for your contribution and hard work.
### Feature request
First, I download a dataset as normal by:
```
from datasets import load_dataset
dataset = load_dataset('username/data_name', cache_dir='data')
```
The dataset format in `data` directory will be:
```
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6387/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6387/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6386 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6386/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6386/comments | https://api.github.com/repos/huggingface/datasets/issues/6386/events | https://github.com/huggingface/datasets/issues/6386 | 1,979,878,014 | I_kwDODunzps52Aop- | 6,386 | Formatting overhead | {
"avatar_url": "https://avatars.githubusercontent.com/u/320321?v=4",
"events_url": "https://api.github.com/users/d-miketa/events{/privacy}",
"followers_url": "https://api.github.com/users/d-miketa/followers",
"following_url": "https://api.github.com/users/d-miketa/following{/other_user}",
"gists_url": "https... | [] | closed | false | null | [] | null | [
"Ah I think the `line-profiler` log is off-by-one and it is in fact the `extract_batch` method that's taking forever. Will investigate further.",
"I tracked it down to a quirk of my setup. Apologies."
] | 2023-11-06T19:06:38Z | 2023-11-06T23:56:12Z | 2023-11-06T23:56:12Z | NONE | null | ### Describe the bug
Hi! I very recently noticed that my training time is dominated by batch formatting. Using Lightning's profilers, I located the bottleneck within `datasets.formatting.formatting` and then narrowed it down with `line-profiler`. It turns out that almost all of the overhead is due to creating new inst... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6386/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6386/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6385 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6385/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6385/comments | https://api.github.com/repos/huggingface/datasets/issues/6385/events | https://github.com/huggingface/datasets/issues/6385 | 1,979,308,338 | I_kwDODunzps51-dky | 6,385 | Get an error when i try to concatenate the squad dataset with my own dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/149378500?v=4",
"events_url": "https://api.github.com/users/CCDXDX/events{/privacy}",
"followers_url": "https://api.github.com/users/CCDXDX/followers",
"following_url": "https://api.github.com/users/CCDXDX/following{/other_user}",
"gists_url": "https://... | [] | closed | false | null | [] | null | [
"The `answers.text` field in the JSON dataset needs to be a list of strings, not a string.\r\n\r\nSo, here is the fixed code:\r\n```python\r\nfrom huggingface_hub import notebook_login\r\nfrom datasets import load_dataset\r\n\r\n\r\n\r\nnotebook_login(\"mymailadresse\", \"mypassword\")\r\nsquad = load_dataset(\"squ... | 2023-11-06T14:29:22Z | 2023-11-06T16:50:45Z | 2023-11-06T16:50:45Z | NONE | null | ### Describe the bug
Hello,
I'm new here and I need to concatenate the squad dataset with my own dataset i created. I find the following error when i try to do it: Traceback (most recent call last):
Cell In[9], line 1
concatenated_dataset = concatenate_datasets([train_dataset, dataset1])
File ~\ana... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6385/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6385/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6384 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6384/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6384/comments | https://api.github.com/repos/huggingface/datasets/issues/6384/events | https://github.com/huggingface/datasets/issues/6384 | 1,979,117,069 | I_kwDODunzps519u4N | 6,384 | Load the local dataset folder from other place | {
"avatar_url": "https://avatars.githubusercontent.com/u/54439582?v=4",
"events_url": "https://api.github.com/users/OrangeSodahub/events{/privacy}",
"followers_url": "https://api.github.com/users/OrangeSodahub/followers",
"following_url": "https://api.github.com/users/OrangeSodahub/following{/other_user}",
"g... | [] | closed | false | null | [] | null | [
"Solved"
] | 2023-11-06T13:07:04Z | 2023-11-19T05:42:06Z | 2023-11-19T05:42:05Z | NONE | null | This is from https://github.com/huggingface/diffusers/issues/5573
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6384/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6384/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6383 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6383/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6383/comments | https://api.github.com/repos/huggingface/datasets/issues/6383/events | https://github.com/huggingface/datasets/issues/6383 | 1,978,189,389 | I_kwDODunzps516MZN | 6,383 | imagenet-1k downloads over and over | {
"avatar_url": "https://avatars.githubusercontent.com/u/6847529?v=4",
"events_url": "https://api.github.com/users/seann999/events{/privacy}",
"followers_url": "https://api.github.com/users/seann999/followers",
"following_url": "https://api.github.com/users/seann999/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | [] | 2023-11-06T02:58:58Z | 2023-11-06T06:02:39Z | 2023-11-06T06:02:39Z | NONE | null | ### Describe the bug
What could be causing this?
```
$ python3
Python 3.8.13 (default, Mar 28 2022, 11:38:47)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> load_dataset("imagenet-1k")
Downloading builder ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6383/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6383/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6382 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6382/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6382/comments | https://api.github.com/repos/huggingface/datasets/issues/6382/events | https://github.com/huggingface/datasets/issues/6382 | 1,977,400,799 | I_kwDODunzps513L3f | 6,382 | Add CheXpert dataset for vision | {
"avatar_url": "https://avatars.githubusercontent.com/u/61241031?v=4",
"events_url": "https://api.github.com/users/SauravMaheshkar/events{/privacy}",
"followers_url": "https://api.github.com/users/SauravMaheshkar/followers",
"following_url": "https://api.github.com/users/SauravMaheshkar/following{/other_user}"... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "e99695",
"default": fals... | open | false | null | [] | null | [
"Hey @SauravMaheshkar ! Just responded to your email.\r\n\r\n_For transparency, copying part of my response here:_\r\nI agree, it would be really great to have this and other BenchMD datasets easily accessible on the hub.\r\n\r\nI think the main limiting factor is that the ChexPert dataset is currently hosted on th... | 2023-11-04T15:36:11Z | 2023-12-09T15:14:42Z | null | NONE | null | ### Feature request
### Name
**CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison**
### Paper
https://arxiv.org/abs/1901.07031
### Data
https://stanfordaimi.azurewebsites.net/datasets/8cbd9ed4-2eb9-4565-affc-111cf4f7ebe2
### Motivation
CheXpert is one of the fund... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6382/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6382/timeline | null | null | null | null | false |