| column | dtype | values |
|---|---|---|
| url | string | lengths 58-61 |
| repository_url | string | 1 class |
| labels_url | string | lengths 72-75 |
| comments_url | string | lengths 67-70 |
| events_url | string | lengths 65-68 |
| html_url | string | lengths 46-51 |
| id | int64 | 599M-1.83B |
| node_id | string | lengths 18-32 |
| number | int64 | 1-6.09k |
| title | string | lengths 1-290 |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 1 class |
| milestone | dict | |
| comments | int64 | 0-54 |
| created_at | string | length 20 (fixed) |
| updated_at | string | length 20 (fixed) |
| closed_at | string | length 20 (fixed) |
| active_lock_reason | null | |
| body | string | lengths 0-228k |
| reactions | dict | |
| timeline_url | string | lengths 67-70 |
| performed_via_github_app | null | |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
| comments_text | list | |
https://api.github.com/repos/huggingface/datasets/issues/5272
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5272/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5272/comments
https://api.github.com/repos/huggingface/datasets/issues/5272/events
https://github.com/huggingface/datasets/issues/5272
1,456,940,021
I_kwDODunzps5W1yP1
5,272
Use pyarrow Tensor dtype
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
15
2022-11-20T15:18:41Z
2023-07-04T04:57:50Z
null
null
### Feature request

I was going through the discussion of converting tensors to lists. Is there a way to leverage pyarrow's Tensors for nested arrays / embeddings? For example:

```python
import pyarrow as pa
import numpy as np

x = np.array([[2, 2, 4], [4, 5, 100]], np.int32)
pa.Tensor.from_numpy(x, dim_names=["dim1", "dim2"])
```

[Apache docs](https://arrow.apache.org/docs/python/generated/pyarrow.Tensor.html)

Maybe this belongs in the pyarrow features / repo.

### Motivation

Working with big data, we need to make sure to use the best data structures and IO out there.

### Your contribution

I can try a PR if code changes are necessary.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5272/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5272/timeline
null
null
null
null
false
[ "Hi ! We're using the Arrow format for the datasets, and PyArrow tensors are not part of the Arrow format AFAIK:\r\n\r\n> There is no direct support in the arrow columnar format to store Tensors as column values.\r\n\r\nsource: https://github.com/apache/arrow/issues/4802#issuecomment-508494694", "@wesm @rok its b...
https://api.github.com/repos/huggingface/datasets/issues/1574
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1574/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1574/comments
https://api.github.com/repos/huggingface/datasets/issues/1574/events
https://github.com/huggingface/datasets/pull/1574
767,015,317
MDExOlB1bGxSZXF1ZXN0NTM5ODY1Mzcy
1,574
Diplomacy detection 3
[]
closed
false
null
0
2020-12-14T23:28:51Z
2020-12-14T23:29:32Z
2020-12-14T23:29:32Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1574/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1574/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1574.diff", "html_url": "https://github.com/huggingface/datasets/pull/1574", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1574.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1574" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2137
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2137/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2137/comments
https://api.github.com/repos/huggingface/datasets/issues/2137/events
https://github.com/huggingface/datasets/pull/2137
843,502,835
MDExOlB1bGxSZXF1ZXN0NjAyODc0MDYw
2,137
Fix missing infos from concurrent dataset loading
[]
closed
false
null
0
2021-03-29T15:46:12Z
2021-03-31T10:35:56Z
2021-03-31T10:35:55Z
null
This should fix issue #2131. When calling `load_dataset` at the same time from 2 workers, one of the workers could end up with missing split infos when reloading the dataset from the cache.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2137/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2137/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2137.diff", "html_url": "https://github.com/huggingface/datasets/pull/2137", "merged_at": "2021-03-31T10:35:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/2137.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2137" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2904
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2904/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2904/comments
https://api.github.com/repos/huggingface/datasets/issues/2904/events
https://github.com/huggingface/datasets/issues/2904
995,814,222
I_kwDODunzps47WutO
2,904
FORCE_REDOWNLOAD does not work
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
3
2021-09-14T09:45:26Z
2021-10-06T09:37:19Z
null
null
## Describe the bug

With GenerateMode.FORCE_REDOWNLOAD, the documentation says:

| | Downloads | Dataset |
|---|---|---|
| `REUSE_DATASET_IF_EXISTS` (default) | Reuse | Reuse |
| `REUSE_CACHE_IF_EXISTS` | Reuse | Fresh |
| `FORCE_REDOWNLOAD` | Fresh | Fresh |

However, the old dataset is loaded even when FORCE_REDOWNLOAD is chosen.

## Steps to reproduce the bug

```python
import pandas as pd
from datasets import load_dataset, GenerateMode

pd.DataFrame(range(5), columns=['numbers']).to_csv('/tmp/test.tsv.gz', index=False)
ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD)
print(ee)
pd.DataFrame(range(10), columns=['numerals']).to_csv('/tmp/test.tsv.gz', index=False)
ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD)
print(ee)
```

## Expected results

```
Dataset({ features: ['numbers'], num_rows: 5 })
Dataset({ features: ['numerals'], num_rows: 10 })
```

## Actual results

```
Dataset({ features: ['numbers'], num_rows: 5 })
Dataset({ features: ['numbers'], num_rows: 5 })
```

## Environment info

- `datasets` version: 1.8.0
- Platform: Linux-4.14.181-108.257.amzn1.x86_64-x86_64-with-glibc2.10
- Python version: 3.7.10
- PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2904/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2904/timeline
null
null
null
null
false
[ "Hi ! Thanks for reporting. The error seems to happen only if you use compressed files.\r\n\r\nThe second dataset is prepared in another dataset cache directory than the first - which is normal, since the source file is different. However, it doesn't uncompress the new data file because it finds the old uncompresse...
https://api.github.com/repos/huggingface/datasets/issues/4673
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4673/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4673/comments
https://api.github.com/repos/huggingface/datasets/issues/4673/events
https://github.com/huggingface/datasets/issues/4673
1,301,010,331
I_kwDODunzps5Ni9eb
4,673
load_datasets on csv returns everything as a string
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
2
2022-07-11T17:30:24Z
2022-07-12T13:33:09Z
2022-07-12T13:33:08Z
null
## Describe the bug

If you use `conll_dataset.to_csv("ner_conll.csv")`, it will create a csv file with all of your data as expected. However, when you load it with `conll_dataset = load_dataset("csv", data_files="ner_conll.csv")`, everything is read in as a string. For example, if I look at everything in 'ner_tags' I get back `['[3 0 7 0 0 0 7 0 0]', '[1 2]', '[5 0]']` instead of what I originally saved, which was `[[3, 0, 7, 0, 0, 0, 7, 0, 0], [1, 2], [5, 0]]`. I think maybe there is something funky going on with the csv delimiter.

## Steps to reproduce the bug

```python
# Sample code to reproduce the bug

# load original conll dataset
orig_conll = load_dataset("conll2003")

# save original conll as a csv
orig_conll.to_csv("ner_conll.csv")

# reload conll data as a csv
new_conll = load_dataset("csv", data_files="ner_conll.csv")
```

## Expected results

I would expect the data to be returned as the data type I saved it as, i.e. if I save a list of ints `[[3, 0, 7, 0, 0, 0, 7, 0, 0]]`, I shouldn't get back a string `['[3 0 7 0 0 0 7 0 0]']`. I also get back a string when I pass a list of strings `['EU', 'rejects', 'German', 'call', 'to', 'boycott', 'British', 'lamb', '.']`.

## Actual results

A list of strings: `['[3 0 7 0 0 0 7 0 0]', '[1 2]', '[5 0]']`
A string: `"['EU' 'rejects' 'German' 'call' 'to' 'boycott' 'British' 'lamb' '.']"`

## Environment info

- `datasets` version: 1.18.3
- Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyArrow version: 8.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4673/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4673/timeline
null
completed
null
null
false
[ "Hi @courtneysprouse, thanks for reporting.\r\n\r\nYes, you are right: by default the \"csv\" loader loads all columns as strings. \r\n\r\nYou could tweak this behavior by passing the `feature` argument to `load_dataset`, but it is also true that currently it is not possible to perform some kind of casts, due to la...
https://api.github.com/repos/huggingface/datasets/issues/6065
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6065/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6065/comments
https://api.github.com/repos/huggingface/datasets/issues/6065/events
https://github.com/huggingface/datasets/pull/6065
1,819,334,932
PR_kwDODunzps5WR8jI
6,065
Add column type guessing from map return function
[]
closed
false
null
5
2023-07-25T00:34:17Z
2023-07-26T15:13:45Z
2023-07-26T15:13:44Z
null
As discussed [here](https://github.com/huggingface/datasets/issues/5965), there are some cases where datasets is unable to automatically promote columns during mapping. The fix is to explicitly provide a `features` definition so pyarrow can configure itself with the right column types from the outset.

This PR provides an alternative approach, functionally equivalent to specifying features but a bit cleaner within a larger mapping pipeline. It allows clients to typehint the return variable coming from the mapper function - if we find one of these type annotations specified, and no explicit features have been passed in, we'll try to convert it into a Features map. If the map function runs and casting fails, it will raise a DatasetTransformationNotAllowedError that indicates the typehint may be to blame. It works for batched and non-batched mapping functions.

Currently supported column types:
- builtin primitives: string, int, float, bool
- dictionaries, lists (nested and one-deep)
- Optional types and None-Unions (synonymous with optional types)

It's used like:

```python
class DatasetTyped(TypedDict):
    texts: list[str]

def dataset_typed_map(batch) -> DatasetTyped:
    return {"texts": [text.split() for text in batch["raw_text"]]}

dataset = {"raw_text": ["", "This is a test", "This is another test"]}

with Dataset.from_dict(dataset) as dset:
    new_dataset = dset.map(
        dataset_typed_map,
        batched=True,
        batch_size=1,
        num_proc=1,
    )
```

Open questions:
- Should logging indicate we have automatically guessed these types? Or proceed quietly until we hit an error (as is the current implementation)?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6065/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6065/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/6065.diff", "html_url": "https://github.com/huggingface/datasets/pull/6065", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6065.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6065" }
true
[ "Thanks for working on this. However, having thought about this issue a bit more, supporting this doesn't seem like a good idea - it's better to be explicit than implicit, according to the Zen of Python 🙂. Also, I don't think many users would use this, so this raises the question of whether this is something we wa...
https://api.github.com/repos/huggingface/datasets/issues/4060
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4060/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4060/comments
https://api.github.com/repos/huggingface/datasets/issues/4060/events
https://github.com/huggingface/datasets/pull/4060
1,186,281,033
PR_kwDODunzps41Tbmg
4,060
Deprecate canonical Multilingual Librispeech
[]
closed
false
null
7
2022-03-30T10:56:56Z
2022-04-01T12:54:05Z
2022-04-01T12:48:51Z
null
Deprecate canonical Multilingual Librispeech in favor of [the community one](https://huggingface.co/datasets/facebook/multilingual_librispeech), which supports streaming. However, there is a problem regarding the new ASR template schema: since it has changed, I guess all community datasets that use this template (including MLS) no longer work with the new version of the library. Should we somehow notify users about that, or is it possible to change this line ourselves? For MLS specifically, I cannot change the code directly as I'm not a member of the Facebook org. Hm, and the code should be changed after the release, no?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4060/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4060/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4060.diff", "html_url": "https://github.com/huggingface/datasets/pull/4060", "merged_at": "2022-04-01T12:48:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/4060.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4060" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Yes, as discussed in #4006 we should update facebook/multilingual_librispeech indeed before we do a release. @anton-l could you help taking care of updating facebook/multilingual_librispeech ? We need to update the task template\r\n`...
https://api.github.com/repos/huggingface/datasets/issues/162
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/162/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/162/comments
https://api.github.com/repos/huggingface/datasets/issues/162/events
https://github.com/huggingface/datasets/pull/162
620,513,554
MDExOlB1bGxSZXF1ZXN0NDE5NzQ4Mzky
162
fix prev files hash in map
[]
closed
false
null
3
2020-05-18T21:20:51Z
2020-05-18T21:36:21Z
2020-05-18T21:36:20Z
null
Fix the `.map` issue in #160. This makes sure the previous files are taken into account when computing the hash.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/162/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/162/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/162.diff", "html_url": "https://github.com/huggingface/datasets/pull/162", "merged_at": "2020-05-18T21:36:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/162.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/162" }
true
[ "Awesome! ", "Hi, yes, this seems to fix #160 -- I cloned the branch locally and verified", "Perfect then :)" ]
https://api.github.com/repos/huggingface/datasets/issues/5992
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5992/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5992/comments
https://api.github.com/repos/huggingface/datasets/issues/5992/events
https://github.com/huggingface/datasets/pull/5992
1,776,460,964
PR_kwDODunzps5UAk3C
5,992
speedup
[]
closed
false
null
1
2023-06-27T09:17:58Z
2023-06-27T09:23:07Z
2023-06-27T09:18:04Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5992/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5992/timeline
null
null
true
{ "diff_url": "https://github.com/huggingface/datasets/pull/5992.diff", "html_url": "https://github.com/huggingface/datasets/pull/5992", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5992.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5992" }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5992). All of your documentation changes will be reflected on that endpoint." ]
https://api.github.com/repos/huggingface/datasets/issues/133
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/133/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/133/comments
https://api.github.com/repos/huggingface/datasets/issues/133/events
https://github.com/huggingface/datasets/issues/133
619,094,954
MDU6SXNzdWU2MTkwOTQ5NTQ=
133
[Question] Using/adding a local dataset
[]
closed
false
null
5
2020-05-15T16:26:06Z
2020-07-23T16:44:09Z
2020-07-23T16:44:09Z
null
Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets. It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not exactly sure how to go about doing this. A notebook/example script demonstrating this would be very helpful.
{ "+1": 6, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 6, "url": "https://api.github.com/repos/huggingface/datasets/issues/133/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/133/timeline
null
completed
null
null
false
[ "Hi @zphang,\r\n\r\nSo you can just give the local path to a dataset script file and it should work.\r\n\r\nHere is an example:\r\n- you can download one of the scripts in the `datasets` folder of the present repo (or clone the repo)\r\n- then you can load it with `load_dataset('PATH/TO/YOUR/LOCAL/SCRIPT.py')`\r\n\...
https://api.github.com/repos/huggingface/datasets/issues/39
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/39/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/39/comments
https://api.github.com/repos/huggingface/datasets/issues/39/events
https://github.com/huggingface/datasets/pull/39
611,712,135
MDExOlB1bGxSZXF1ZXN0NDEyODIxNTA4
39
[Test] improve slow testing
[]
closed
false
null
0
2020-05-04T08:58:33Z
2020-05-04T08:59:50Z
2020-05-04T08:59:49Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/39/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/39/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/39.diff", "html_url": "https://github.com/huggingface/datasets/pull/39", "merged_at": "2020-05-04T08:59:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/39.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/39" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/660
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/660/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/660/comments
https://api.github.com/repos/huggingface/datasets/issues/660/events
https://github.com/huggingface/datasets/pull/660
706,324,032
MDExOlB1bGxSZXF1ZXN0NDkwODkyMjQ0
660
add openwebtext
[]
closed
false
null
3
2020-09-22T12:05:22Z
2020-10-06T09:20:10Z
2020-09-28T09:07:26Z
null
This adds [The OpenWebText Corpus](https://skylion007.github.io/OpenWebTextCorpus/), which is a clean and large text corpus for nlp pretraining. It is an open source effort to reproduce OpenAI's WebText dataset used by GPT-2, and it is also needed to reproduce ELECTRA. It solves #132.

### Besides the dataset building script, I made some changes to the library.

1. Extract a large number of compressed files with multiprocessing. I add a `num_proc` argument to `DownloadManager.extract` and pass this `num_proc` to `map_nested`, so I can decompress 20 thousand compressed files faster. The `num_proc` I add defaults to `None`, so it shouldn't break anything else.
2. In `cached_path`, I change the order in which different kinds of compressed files (zip, tar, gzip) are handled; a sketch follows after this list. Because there is no way to 100% detect that a file is a zip file (see [this](https://stackoverflow.com/questions/18194688/how-can-i-determine-if-a-file-is-a-zip-file)), I found it wrongly detected `'./datasets/downloads/extracted/58764bd6898fa339b25d92e7fbbc3d8dbf64fb504edff1a30a1d7d99d1561027/openwebtext/urlsf_subset13-630_data.xz'` as a zip and tried to decompress it as one, which of course raised an error. So I made it detect whether the file is tar or gzip first and check for zip last.
3. `MockDownloadManager.extract`. Because I pass `num_proc` to `DownloadManager.extract`, I also have to make `MockDownloadManager.extract` accept extra keyword arguments. So I make it `extract(path, *args, **kwargs)`, but it just returns the path as in the original implementation.

**Note**: If there is a better way to handle the points mentioned above, I would like to help; but unless we can solve point 4 (making dataset building fast), I may not be able to afford rebuilding the dataset again if the dataset script changes (building the dataset cost me 4 days).

### There is something I think we can improve

4. Long time to decompress compressed files. Even though I decompress those 20 thousand compressed files with 12 processes on my 16-core 3.x GHz server, it still took about 3-4 days to complete the dataset build. Most of the time was spent decompressing those files.

### Info about the source data

The source data is a tar.xz file with the following structure; the files/directories below the compressed file are what we get after decompressing it.

```
openwebtext.tar.xz
|__ openwebtext
    |__ subset000.xz
    |    |__ ....txt
    |    |__ ....txt
    |    ...
    |__ subset001.xz
    |    ....
```

And this is the structure of the dummy data, same as the original one.

```
dummy_data.zip
|__ dummy_data
    |__ openwebtext
        |__ fake_subset-1_data-dirxz  # actually it is a directory
        |    |__ ....txt
        |    |__ ....txt
        |__ fake_subset-2_data-dirxz
             |__ ....txt
             |__ ....txt
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 1, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/660/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/660/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/660.diff", "html_url": "https://github.com/huggingface/datasets/pull/660", "merged_at": "2020-09-28T09:07:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/660.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/660" }
true
[ "BTW, is there a one-line command to make our building scripts pass flake8 test? (included code quality test), I got like trailing space or mixed space and tab warning and error, and fixed them manually.", "> BTW, is there a one-line command to make our building scripts pass flake8 test? (included code quality te...
https://api.github.com/repos/huggingface/datasets/issues/2672
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2672/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2672/comments
https://api.github.com/repos/huggingface/datasets/issues/2672/events
https://github.com/huggingface/datasets/pull/2672
947,294,605
MDExOlB1bGxSZXF1ZXN0NjkyMjk2NDQ4
2,672
Fix potential DuplicatedKeysError in LibriSpeech
[]
closed
false
null
0
2021-07-19T06:00:49Z
2021-07-19T06:28:57Z
2021-07-19T06:28:56Z
null
DONE:
- Fix unnecessary path join.
- Fix potential DuplicatedKeysError by ensuring keys are unique.

We should promote it as good practice that keys be generated programmatically so they are unique, instead of read from the data (which might not be unique).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2672/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2672/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2672.diff", "html_url": "https://github.com/huggingface/datasets/pull/2672", "merged_at": "2021-07-19T06:28:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/2672.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2672" }
true
[]
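A small sketch of the practice this PR advocates: keys generated from a counter are unique by construction, whereas keys read from the data may collide (`generate_examples` and its argument are hypothetical stand-ins for a builder's `_generate_examples`):

```python
def generate_examples(audio_paths):
    # enumerate guarantees unique keys, avoiding DuplicatedKeysError.
    for key, path in enumerate(sorted(audio_paths)):
        yield key, {"file": path}

for key, example in generate_examples(["b.flac", "a.flac"]):
    print(key, example)
```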
https://api.github.com/repos/huggingface/datasets/issues/3565
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3565/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3565/comments
https://api.github.com/repos/huggingface/datasets/issues/3565/events
https://github.com/huggingface/datasets/pull/3565
1,099,296,693
PR_kwDODunzps4wzjhH
3,565
Add parameter `preserve_index` to `from_pandas`
[]
closed
false
null
2
2022-01-11T15:26:37Z
2022-01-12T16:11:27Z
2022-01-12T16:11:27Z
null
Added an optional parameter so that the user can avoid preserving a useless index. [Issue](https://github.com/huggingface/datasets/issues/3563)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3565/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3565/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3565.diff", "html_url": "https://github.com/huggingface/datasets/pull/3565", "merged_at": "2022-01-12T16:11:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/3565.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3565" }
true
[ "> \r\n\r\nI did `make style` and it affected over 500 files\r\n\r\n```\r\nAll done! ✨ 🍰 ✨\r\n575 files reformatted, 372 files left unchanged.\r\nisort tests src benchmarks datasets/**/*.py metri\r\n```\r\n\r\n(result)\r\n![image](https://user-images.githubusercontent.com/20703486/149166681-2f9d1bc4-116a-4f53-ad42...
https://api.github.com/repos/huggingface/datasets/issues/3441
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3441/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3441/comments
https://api.github.com/repos/huggingface/datasets/issues/3441/events
https://github.com/huggingface/datasets/issues/3441
1,081,571,784
I_kwDODunzps5Ad3nI
3,441
Add QuALITY dataset
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
1
2021-12-15T22:26:19Z
2021-12-28T15:17:05Z
null
null
## Adding a Dataset

- **Name:** QuALITY
- **Description:** A challenging question-answering dataset with very long contexts (Twitter [thread](https://twitter.com/sleepinyourhat/status/1471225421794529281?s=20))
- **Paper:** No ArXiv link yet, but the draft is [here](https://github.com/nyu-mll/quality/blob/main/quality_preprint.pdf)
- **Data:** GitHub repo [here](https://github.com/nyu-mll/quality)
- **Motivation:** This dataset would serve as a nice way to benchmark long-range Transformer models like BigBird, Longformer and their descendants. In particular, it would be very interesting to see how the S4 model fares on this, given its impressive performance on the Long Range Arena.

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3441/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3441/timeline
null
null
null
null
false
[ "I'll take this one if no one hasn't yet!" ]
https://api.github.com/repos/huggingface/datasets/issues/2636
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2636/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2636/comments
https://api.github.com/repos/huggingface/datasets/issues/2636/events
https://github.com/huggingface/datasets/pull/2636
943,044,514
MDExOlB1bGxSZXF1ZXN0Njg4NzEyMTY4
2,636
Streaming for the Pandas loader
[]
closed
false
null
0
2021-07-13T09:18:21Z
2021-07-13T14:37:24Z
2021-07-13T14:37:23Z
null
The builder was not using `open`, so `pd.read_pickle` could fail when streaming from a private repo, for example. Indeed, when streaming, `open` is extended to support reading from remote files and handles authentication to the HF Hub.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2636/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2636/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2636.diff", "html_url": "https://github.com/huggingface/datasets/pull/2636", "merged_at": "2021-07-13T14:37:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/2636.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2636" }
true
[]
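A sketch of the shape of the fix, assuming the streaming-aware `open` that `datasets` injects into builders (the function name is illustrative, not the builder's actual code):

```python
import pandas as pd

def generate_tables(files):
    # Going through `open` (which datasets extends during streaming) makes
    # remote paths and Hub authentication work; a raw path handed straight
    # to pandas would not.
    for i, file in enumerate(files):
        with open(file, "rb") as f:
            yield i, pd.read_pickle(f)
```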
https://api.github.com/repos/huggingface/datasets/issues/2335
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2335/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2335/comments
https://api.github.com/repos/huggingface/datasets/issues/2335/events
https://github.com/huggingface/datasets/issues/2335
881,291,887
MDU6SXNzdWU4ODEyOTE4ODc=
2,335
Index error in Dataset.map
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
0
2021-05-08T20:44:57Z
2021-05-10T13:26:12Z
2021-05-10T13:26:12Z
null
The following code, if executed on master, raises an IndexError (due to overflow):

```python
>>> from datasets import *
>>> d = load_dataset("bookcorpus", split="train")
Reusing dataset bookcorpus (C:\Users\Mario\.cache\huggingface\datasets\bookcorpus\plain_text\1.0.0\44662c4a114441c35200992bea923b170e6f13f2f0beb7c14e43759cec498700)
2021-05-08 21:23:46.859818: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
>>> d.map(lambda ex: ex)
  0%|▎ | 289430/74004228 [00:13<58:41, 20935.33ex/s]c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py:84: RuntimeWarning: overflow encountered in int_scalars
  k = i + ((j - i) * (x - arr[i]) // (arr[j] - arr[i]))
  0%|▎ | 290162/74004228 [00:13<59:11, 20757.23ex/s]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1498, in map
    new_fingerprint=new_fingerprint,
  File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 174, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "c:\users\mario\desktop\projects\datasets-1\src\datasets\fingerprint.py", line 340, in wrapper
    out = func(self, *args, **kwargs)
  File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1799, in _map_single
    for i, example in enumerate(pbar):
  File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\site-packages\tqdm\std.py", line 1133, in __iter__
    for obj in iterable:
  File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1145, in __iter__
    format_kwargs=format_kwargs,
  File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1337, in _getitem
    pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
  File "c:\users\mario\desktop\projects\datasets-1\src\datasets\formatting\formatting.py", line 368, in query_table
    pa_subtable = _query_table(table, key)
  File "c:\users\mario\desktop\projects\datasets-1\src\datasets\formatting\formatting.py", line 79, in _query_table
    return table.fast_slice(key % table.num_rows, 1)
  File "c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py", line 128, in fast_slice
    i = _interpolation_search(self._offsets, offset)
  File "c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py", line 91, in _interpolation_search
    raise IndexError(f"Invalid query '{x}' for size {arr[-1] if len(arr) else 'none'}.")
IndexError: Invalid query '290162' for size 74004228.
```

Tested on Windows, can run on Linux if needed.

EDIT: It seems like for this to happen, the default NumPy dtype has to be np.int32.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2335/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2335/timeline
null
completed
null
null
false
[]
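For context, the overflow comes from doing the interpolation arithmetic in `np.int32`. A sketch of the same search written with plain Python integers, which are arbitrary precision and cannot overflow (modeled on `datasets.table._interpolation_search`, not its exact code):

```python
def interpolation_search(arr, x):
    """Find k such that arr[k] <= x < arr[k + 1], for sorted offsets arr.
    The product (j - i) * (x - arr[i]) is computed with Python ints,
    so it cannot overflow the way np.int32 arithmetic does."""
    i, j = 0, len(arr) - 1
    while i < j and arr[i] <= x < arr[j]:
        k = i + (j - i) * (x - arr[i]) // (arr[j] - arr[i])
        if arr[k] <= x < arr[k + 1]:
            return k
        elif arr[k] < x:
            i = k + 1
        else:
            j = k
    raise IndexError(f"Invalid query '{x}'.")

offsets = [0, 5, 10, 74004228]
print(interpolation_search(offsets, 290162))  # 2
```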
https://api.github.com/repos/huggingface/datasets/issues/4592
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4592/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4592/comments
https://api.github.com/repos/huggingface/datasets/issues/4592/events
https://github.com/huggingface/datasets/issues/4592
1,288,029,377
I_kwDODunzps5MxcTB
4,592
Issue with jalFaizy/detect_chess_pieces when running datasets-cli test
[]
closed
false
null
3
2022-06-29T00:15:54Z
2022-06-29T10:30:03Z
2022-06-29T07:49:27Z
null
### Link

https://huggingface.co/datasets/jalFaizy/detect_chess_pieces

### Description

I am trying to write an appropriate data loader for [a custom dataset](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces) using [this script](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/blob/main/detect_chess_pieces.py).

When I run the command `$ datasets-cli test "D:\workspace\HF\detect_chess_pieces" --save_infos --all_configs` it gives the following error:

```
Using custom data configuration default
Traceback (most recent call last):
  File "c:\users\faiza\anaconda3\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "c:\users\faiza\anaconda3\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\faiza\anaconda3\Scripts\datasets-cli.exe\__main__.py", line 7, in <module>
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\commands\datasets_cli.py", line 39, in main
    service.run()
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\commands\test.py", line 132, in run
    for j, builder in enumerate(get_builders()):
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\commands\test.py", line 125, in get_builders
    yield builder_cls(
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 1148, in __init__
    super().__init__(*args, **kwargs)
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 306, in __init__
    info = self.get_exported_dataset_info()
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 405, in get_exported_dataset_info
    return self.get_all_exported_dataset_infos().get(self.config.name, DatasetInfo())
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 390, in get_all_exported_dataset_infos
    return DatasetInfosDict.from_directory(cls.get_imported_module_dir())
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 309, in from_directory
    dataset_infos_dict = {
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 310, in <dictcomp>
    config_name: DatasetInfo.from_dict(dataset_info_dict)
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 272, in from_dict
    return cls(**{k: v for k, v in dataset_info_dict.items() if k in field_names})
  File "<string>", line 20, in __init__
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 160, in __post_init__
    templates = [
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 161, in <listcomp>
    template if isinstance(template, TaskTemplate) else task_template_from_dict(template)
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\tasks\__init__.py", line 43, in task_template_from_dict
    return template.from_dict(task_template_dict)
AttributeError: 'NoneType' object has no attribute 'from_dict'
```

My assumption is that there is some kind of issue in how the "task_templates" are read, because the same error occurs even if I keep them as None or leave the argument out entirely.

### Owner

Yes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4592/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4592/timeline
null
completed
null
null
false
[ "Hi @faizankshaikh\r\n\r\nPlease note that we have recently launched the Community feature, specifically targeted to create Discussions (about issues/questions/asking-for-help) on each Dataset on the Hub:\r\n- Blog post: https://huggingface.co/blog/community-update\r\n- Docs: https://huggingface.co/docs/hub/reposit...
https://api.github.com/repos/huggingface/datasets/issues/6027
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6027/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6027/comments
https://api.github.com/repos/huggingface/datasets/issues/6027/events
https://github.com/huggingface/datasets/pull/6027
1,803,008,486
PR_kwDODunzps5Va4g3
6,027
Delete `task_templates` in `IterableDataset` when they are no longer valid
[]
closed
false
null
3
2023-07-13T13:16:17Z
2023-07-13T14:06:20Z
2023-07-13T13:57:35Z
null
Fix #6025
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6027/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6027/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/6027.diff", "html_url": "https://github.com/huggingface/datasets/pull/6027", "merged_at": "2023-07-13T13:57:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/6027.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6027" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
https://api.github.com/repos/huggingface/datasets/issues/1166
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1166/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1166/comments
https://api.github.com/repos/huggingface/datasets/issues/1166/events
https://github.com/huggingface/datasets/pull/1166
757,721,208
MDExOlB1bGxSZXF1ZXN0NTMzMDQ1NDUy
1,166
Opus montenegrinsubs
[]
closed
false
null
1
2020-12-05T17:00:44Z
2020-12-07T11:02:49Z
2020-12-07T11:02:49Z
null
Opus montenegrinsubs - language pair en-me. More info: http://opus.nlpl.eu/MontenegrinSubs.php
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1166/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1166/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1166.diff", "html_url": "https://github.com/huggingface/datasets/pull/1166", "merged_at": "2020-12-07T11:02:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/1166.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1166" }
true
[ "merging since the CI is fixed on master" ]
https://api.github.com/repos/huggingface/datasets/issues/3146
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3146/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3146/comments
https://api.github.com/repos/huggingface/datasets/issues/3146/events
https://github.com/huggingface/datasets/issues/3146
1,033,605,947
I_kwDODunzps49m5M7
3,146
CLI test command throws NonMatchingSplitsSizesError when saving infos
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
0
2021-10-22T13:50:53Z
2021-10-27T08:01:49Z
2021-10-27T08:01:49Z
null
When trying to generate a dataset's JSON metadata, a `NonMatchingSplitsSizesError` is thrown:

```
$ datasets-cli test datasets/arabic_billion_words --save_infos --all_configs
Testing builder 'Alittihad' (1/10)
Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: Unknown size, post-processed: Unknown size, total: 332.13 MiB) to .cache\arabic_billion_words\Alittihad\1.1.0\8175ff1c9714c6d5d15b1141b6042e5edf048276bb81a9c14e35e149a7a62ae4...
Traceback (most recent call last):
  File "path\huggingface\datasets\.venv\Scripts\datasets-cli-script.py", line 33, in <module>
    sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())
  File "path\huggingface\datasets\src\datasets\commands\datasets_cli.py", line 33, in main
    service.run()
  File "path\huggingface\datasets\src\datasets\commands\test.py", line 144, in run
    builder.download_and_prepare(
  File "path\huggingface\datasets\src\datasets\builder.py", line 607, in download_and_prepare
    self._download_and_prepare(
  File "path\huggingface\datasets\src\datasets\builder.py", line 709, in _download_and_prepare
    verify_splits(self.info.splits, split_dict)
  File "path\huggingface\datasets\src\datasets\utils\info_utils.py", line 74, in verify_splits
    raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='arabic_billion_words'), 'recorded': SplitInfo(name='train', num_bytes=1601790302, num_examples=349342, dataset_name='arabic_billion_words')}]
```

This is because a previous run generated a wrong `dataset_info.json`. The error can be avoided by passing `--ignore_verifications`, but I think this should be assumed when passing `--save_infos`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3146/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3146/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/2479
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2479/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2479/comments
https://api.github.com/repos/huggingface/datasets/issues/2479/events
https://github.com/huggingface/datasets/pull/2479
918,672,431
MDExOlB1bGxSZXF1ZXN0NjY4MDc3NTI4
2,479
❌ load_datasets ❌
[]
closed
false
null
0
2021-06-11T12:14:36Z
2021-06-11T14:46:25Z
2021-06-11T14:46:25Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2479/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2479/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2479.diff", "html_url": "https://github.com/huggingface/datasets/pull/2479", "merged_at": "2021-06-11T14:46:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/2479.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2479" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1445
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1445/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1445/comments
https://api.github.com/repos/huggingface/datasets/issues/1445/events
https://github.com/huggingface/datasets/pull/1445
761,057,851
MDExOlB1bGxSZXF1ZXN0NTM1NzgzMzY2
1,445
Added dataset clickbait_news_bg
[]
closed
false
null
2
2020-12-10T09:17:28Z
2020-12-15T07:45:19Z
2020-12-15T07:45:19Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1445/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1445/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1445.diff", "html_url": "https://github.com/huggingface/datasets/pull/1445", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1445.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1445" }
true
[ "Looks like this PR includes changes about many other files than the ones for clickbait_news_bg\r\n\r\nCan you create another branch and another PR please ?", "I created a new branch with the dataset code and submitted a new PR for it: https://github.com/huggingface/datasets/pull/1568" ]
https://api.github.com/repos/huggingface/datasets/issues/1131
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1131/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1131/comments
https://api.github.com/repos/huggingface/datasets/issues/1131/events
https://github.com/huggingface/datasets/pull/1131
757,278,341
MDExOlB1bGxSZXF1ZXN0NTMyNjgxMTI0
1,131
Adding XQUAD-R Dataset
[]
closed
false
null
0
2020-12-04T17:35:43Z
2020-12-04T18:27:22Z
2020-12-04T18:27:22Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1131/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1131/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1131.diff", "html_url": "https://github.com/huggingface/datasets/pull/1131", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1131.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1131" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5249
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5249/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5249/comments
https://api.github.com/repos/huggingface/datasets/issues/5249/events
https://github.com/huggingface/datasets/issues/5249
1,451,692,247
I_kwDODunzps5WhxDX
5,249
Protect the main branch from inadvertent direct pushes
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
open
false
null
1
2022-11-16T14:19:03Z
2023-07-21T14:34:44Z
null
null
We have decided to implement a protection mechanism in this repository, so that nobody (not even administrators) can inadvertently push directly to the main branch.

See context here:
- d7c942228b8dcf4de64b00a3053dce59b335f618

To do:
- [x] Protect main branch
  - Settings > Branches > Branch protection rules > main > Edit
- [x] Check: Do not allow bypassing the above settings
  - The above settings will apply to administrators and custom roles with the "bypass branch protections" permission.
- [x] Additionally, uncheck: Require approvals [under "Require a pull request before merging", which was already checked]
  - Before, we could exceptionally merge a non-approved PR using an Administrator bypass.
  - Now that Administrator bypass is no longer possible, we would always need an approval to be able to merge, and pull request authors cannot approve their own pull requests. This could be an inconvenience in some exceptional circumstances when an urgent fix is needed.
  - Nevertheless, although it is no longer enforced, it is strongly recommended to merge PRs only if they have at least one approval.
- [ ] #5250
  - So that direct pushes to the main branch are no longer necessary
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5249/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5249/timeline
null
null
null
null
false
[ "It seems all the tasks have been addressed, meaning this issue can be closed, no?" ]
https://api.github.com/repos/huggingface/datasets/issues/4162
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4162/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4162/comments
https://api.github.com/repos/huggingface/datasets/issues/4162/events
https://github.com/huggingface/datasets/pull/4162
1,203,421,909
PR_kwDODunzps42LtGO
4,162
Add Conceptual 12M
[]
closed
false
null
2
2022-04-13T14:57:23Z
2022-04-15T08:13:01Z
2022-04-15T08:06:25Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4162/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4162/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4162.diff", "html_url": "https://github.com/huggingface/datasets/pull/4162", "merged_at": "2022-04-15T08:06:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/4162.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4162" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Looks like your dummy_data.zip file is not in the right location ;)\r\ndatasets/datasets/conceptual_12m/dummy/default/0.0.0/dummy_data.zip\r\n->\r\ndatasets/conceptual_12m/dummy/default/0.0.0/dummy_data.zip" ]
https://api.github.com/repos/huggingface/datasets/issues/4384
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4384/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4384/comments
https://api.github.com/repos/huggingface/datasets/issues/4384/events
https://github.com/huggingface/datasets/pull/4384
1,243,919,748
PR_kwDODunzps44OwFr
4,384
Refactor download
[]
closed
false
null
4
2022-05-21T08:49:24Z
2022-05-25T10:52:02Z
2022-05-25T10:43:43Z
null
This PR refactors the download functionality by proposing a modular solution and moving it to its own package, "download". Some motivating arguments:

- understandability: from a logical partitioning of the library, it makes sense to have all download functionalities grouped together instead of scattered in a much larger directory containing many more different functionalities
- abstraction: the level of abstraction of "download" (higher) is not the same as "utils" (lower); putting different levels of abstraction together makes dependencies more intricate (potential circular dependencies) and the system more tightly coupled; when the levels of abstraction are clearly separated, the dependencies flow in a neat direction from higher to lower
- architectural: "download" is a domain-specific functionality of our library/application (a dataset builder performs several actions: download, generate dataset and cache it); these functionalities are at the core of our library; on the other hand, "utils" are always a low-level set of functionalities not directly related to our domain/business core logic (all libraries have "utils"), thus at the periphery of our lib architecture

Also note that when a library is not architecturally designed following simple, neat, clean principles, this has a negative impact on extensibility, making it more and more difficult to make enhancements.

As a concrete example in this case, please see: https://app.circleci.com/pipelines/github/huggingface/datasets/12185/workflows/ff25a790-8e3f-45a1-aadd-9d79dfb73c4d/jobs/72860

- After an extension, a circular import is found
- Diving into the cause of this circular import, see the dependency flow, which should run from higher to lower levels of abstraction:

```
ImportError while loading conftest '/home/circleci/datasets/tests/conftest.py'.
tests/conftest.py:12: in <module>
    import datasets
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/__init__.py:37: in <module>
    from .arrow_dataset import Dataset, concatenate_datasets
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/arrow_dataset.py:59: in <module>
    from . import config
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/config.py:8: in <module>
    from .utils.logging import get_logger
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/__init__.py:30: in <module>
    from .download_manager import DownloadConfig, DownloadManager, DownloadMode
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/download_manager.py:39: in <module>
    from .py_utils import NestedDataStructure, map_nested, size_str
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/py_utils.py:608: in <module>
    if config.DILL_VERSION < version.parse("0.3.5"):
E   AttributeError: module 'datasets.config' has no attribute 'DILL_VERSION'
```

Imports:
- datasets
  - Dataset: lower level than datasets
    - config: lower level than Dataset
      - logger: lower level than config
        - DownloadManager: HIGHER level of abstraction than logger!! Why does importing the logger require importing DownloadManager?!
- Logically, it does not make sense
- This is due to an error in the design/architecture of our library:
  - To import the logger, we need to import it from `.utils.logging`
  - To import `.utils.logging` we need to import `.utils`
  - The import of `.utils` requires the import of all its submodules defined in `utils/__init__.py`, among them: `.utils.download_manager`!

When putting `logging` and `download_manager` both inside `utils`, in order to import `logging` we need to import `download_manager` first. This is a strong coupling between modules, and moreover between modules at different levels of abstraction (to import a lower-level module, we are required to import a higher-level module). Additionally, it is clear that it makes no sense that importing `logging` requires importing `download_manager` first.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4384/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4384/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4384.diff", "html_url": "https://github.com/huggingface/datasets/pull/4384", "merged_at": "2022-05-25T10:43:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/4384.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4384" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "This looks like a breaking change no ?\r\nAlso could you explain why it would be better this way ?", "The might be only there to help type checkers, but I am not too familiar with the code base to know for sure. I think this might ...
https://api.github.com/repos/huggingface/datasets/issues/1456
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1456/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1456/comments
https://api.github.com/repos/huggingface/datasets/issues/1456/events
https://github.com/huggingface/datasets/pull/1456
761,231,296
MDExOlB1bGxSZXF1ZXN0NTM1OTI4MTc2
1,456
Add CC100 Dataset
[]
closed
false
null
0
2020-12-10T13:14:37Z
2020-12-14T10:20:09Z
2020-12-14T10:20:08Z
null
Closes #773
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1456/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1456/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1456.diff", "html_url": "https://github.com/huggingface/datasets/pull/1456", "merged_at": "2020-12-14T10:20:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/1456.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1456" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/6060
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6060/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6060/comments
https://api.github.com/repos/huggingface/datasets/issues/6060/events
https://github.com/huggingface/datasets/issues/6060
1,816,614,120
I_kwDODunzps5sR1To
6,060
Dataset.map() executes twice when in PyTorch DDP mode
[]
open
false
null
3
2023-07-22T05:06:43Z
2023-07-24T19:29:55Z
null
null
### Describe the bug I use `torchrun --standalone --nproc_per_node=2 train.py` to start training, and I wrote the code following the [docs](https://huggingface.co/docs/datasets/process#distributed-usage). The trick of using `torch.distributed.barrier()` so that only the main process executes `map` doesn't always work. When I am training a model, it maps twice. When I am running a test of the dataset and dataloader (just printing the batches), it works. The dataset-loading code is the same in both. On another server with 30 CPU cores, using 2 GPUs, it doesn't work either. I have tried checking with `rank` and `local_rank`, but neither made a difference. ### Steps to reproduce the bug Use `torchrun --standalone --nproc_per_node=2 train.py` or `torchrun --standalone train.py` to run. This is my code: ```python if args.distributed and world_size > 1: if args.local_rank > 0: print(f"Rank {args.rank}: Gpu {args.gpu} waiting for main process to perform the mapping", force=True) torch.distributed.barrier() print("Mapping dataset") dataset = dataset.map(lambda x: cut_reorder_keys(x, num_stations_list=args.num_stations_list, is_pad=True, is_train=True), num_proc=8, desc="cut_reorder_keys") dataset = dataset.map(lambda x: random_shift(x, shift_range=(-160, 0), feature_scale=16), num_proc=8, desc="random_shift") dataset_test = dataset_test.map(lambda x: cut_reorder_keys(x, num_stations_list=args.num_stations_list, is_pad=True, is_train=False), num_proc=8, desc="cut_reorder_keys") if args.local_rank == 0: print("Mapping finished, loading results from main process") torch.distributed.barrier() ``` ### Expected behavior Only the main process should execute `map`, while the subprocesses load the cache from disk. ### Environment info server with 64 CPU cores (AMD Ryzen Threadripper PRO 5995WX 64-Cores) and 2 RTX 4090 - `python==3.9.16` - `datasets==2.13.1` - `torch==2.0.1+cu117` - `22.04.1-Ubuntu` server with 30 CPU cores (Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz) and 2 RTX 4090 - `python==3.9.0` - `datasets==2.13.1` - `torch==2.0.1+cu117` - `Ubuntu 20.04`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6060/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6060/timeline
null
null
null
null
false
[ "Sorry for asking a duplicate question about `num_proc`, I searched the forum and find the solution.\r\n\r\nBut I still can't make the trick with `torch.distributed.barrier()` to only map at the main process work. The [post on forum]( https://discuss.huggingface.co/t/slow-processing-with-map-when-using-deepspeed-or...
https://api.github.com/repos/huggingface/datasets/issues/1929
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1929/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1929/comments
https://api.github.com/repos/huggingface/datasets/issues/1929/events
https://github.com/huggingface/datasets/pull/1929
813,929,669
MDExOlB1bGxSZXF1ZXN0NTc3OTk1MTE4
1,929
Improve typing and style and fix some inconsistencies
[]
closed
false
null
2
2021-02-22T22:47:41Z
2021-02-24T16:16:14Z
2021-02-24T14:03:54Z
null
This PR: * improves typing (mostly more consistent use of `typing.Optional`) * `DatasetDict.cleanup_cache_files` now correctly returns a dict * replaces `dict()` with the corresponding literal * uses `dict_to_copy.copy()` instead of `dict(dict_to_copy)` for shallow copying
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1929/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1929/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1929.diff", "html_url": "https://github.com/huggingface/datasets/pull/1929", "merged_at": "2021-02-24T14:03:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/1929.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1929" }
true
[ "@lhoestq Thanks for the quick review.", "I merged master to this branch to re-run the CI before merging :)" ]
https://api.github.com/repos/huggingface/datasets/issues/3412
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3412/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3412/comments
https://api.github.com/repos/huggingface/datasets/issues/3412/events
https://github.com/huggingface/datasets/pull/3412
1,075,846,368
PR_kwDODunzps4voLs4
3,412
Fix flaky test again for s3 serialization
[]
closed
false
null
0
2021-12-09T17:54:41Z
2021-12-09T18:00:52Z
2021-12-09T18:00:52Z
null
Following https://github.com/huggingface/datasets/pull/3388 that wasn't enough (see CI error [here](https://app.circleci.com/pipelines/github/huggingface/datasets/9080/workflows/b971fb27-ff20-4220-9416-c19acdfdf6f4/jobs/55985))
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3412/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3412/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3412.diff", "html_url": "https://github.com/huggingface/datasets/pull/3412", "merged_at": "2021-12-09T18:00:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/3412.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3412" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2496
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2496/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2496/comments
https://api.github.com/repos/huggingface/datasets/issues/2496/events
https://github.com/huggingface/datasets/issues/2496
920,216,314
MDU6SXNzdWU5MjAyMTYzMTQ=
2,496
Dataset fingerprint changes after moving the cache directory, which prevents cache reload when using `map`
[]
closed
false
null
0
2021-06-14T09:20:26Z
2021-06-21T15:05:03Z
2021-06-21T15:05:03Z
null
`Dataset.map` uses the dataset fingerprint (a hash) for caching. However, the fingerprint seems to change when someone moves the cache directory of the dataset. This is because it uses the default fingerprint generation: 1. the dataset path is used to get the fingerprint 2. the modification time of the arrow file is also used to get the fingerprint To fix that we could set the fingerprint of the dataset to be a hash of (<dataset_name>, <config_name>, <version>, <script_hash>), i.e. a hash of the cache path relative to the cache directory.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2496/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2496/timeline
null
completed
null
null
false
[]
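A sketch of the fix proposed in #2496 above. It reuses `datasets.fingerprint.Hasher` (the same helper the maintainers point to elsewhere in this tracker); the function name and the example arguments are made up:

```python
from datasets.fingerprint import Hasher


def relocatable_fingerprint(dataset_name: str, config_name: str, version: str, script_hash: str) -> str:
    # Derive the fingerprint from identifiers that survive a move of the cache
    # directory, instead of the absolute arrow path and its modification time.
    return Hasher.hash((dataset_name, config_name, version, script_hash))


print(relocatable_fingerprint("squad", "plain_text", "1.0.0", "deadbeef"))
```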
https://api.github.com/repos/huggingface/datasets/issues/3523
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3523/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3523/comments
https://api.github.com/repos/huggingface/datasets/issues/3523/events
https://github.com/huggingface/datasets/pull/3523
1,093,819,227
PR_kwDODunzps4wiJc2
3,523
Added links to licensing and PII message in vctk dataset
[]
closed
false
null
0
2022-01-04T22:56:58Z
2022-01-06T19:33:50Z
2022-01-06T19:33:50Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3523/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3523/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3523.diff", "html_url": "https://github.com/huggingface/datasets/pull/3523", "merged_at": "2022-01-06T19:33:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/3523.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3523" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/686
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/686/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/686/comments
https://api.github.com/repos/huggingface/datasets/issues/686/events
https://github.com/huggingface/datasets/issues/686
711,385,739
MDU6SXNzdWU3MTEzODU3Mzk=
686
Dataset browser url is still https://huggingface.co/nlp/viewer/
[]
closed
false
null
2
2020-09-29T19:21:52Z
2021-01-08T18:29:26Z
2021-01-08T18:29:26Z
null
Might be worth updating to https://huggingface.co/datasets/viewer/
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/686/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/686/timeline
null
completed
null
null
false
[ "Yes! might do it with @srush one of these days. Hopefully it won't break too many links (we can always redirect from old url to new)", "This was fixed but forgot to close the issue. cc @lhoestq @yjernite \r\n\r\nThanks @jarednielsen!" ]
https://api.github.com/repos/huggingface/datasets/issues/2267
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2267/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2267/comments
https://api.github.com/repos/huggingface/datasets/issues/2267/events
https://github.com/huggingface/datasets/issues/2267
868,291,129
MDU6SXNzdWU4NjgyOTExMjk=
2,267
DatasetDict save/load: failing test in 1.6, not in 1.5
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
6
2021-04-27T00:03:25Z
2021-05-28T15:27:34Z
null
null
## Describe the bug We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema. Downgrading to `<1.6` fixes the problem. ## Steps to reproduce the bug ```python ### Load a dataset dict from jsonl path = '/test/foo' ds_dict.save_to_disk(path) ds_from_disk = DatasetDict.load_from_disk(path) ## <-- this is where I see the error on 1.6 ``` ## Expected results Upgrading to 1.6 shouldn't break that test. We should be able to serialize to and from disk. ## Actual results ``` # Infer features if None inferred_features = Features.from_arrow_schema(arrow_table.schema) if self.info.features is None: self.info.features = inferred_features # Infer fingerprint if None if self._fingerprint is None: self._fingerprint = generate_fingerprint(self) # Sanity checks assert self.features is not None, "Features can't be None in a Dataset object" assert self._fingerprint is not None, "Fingerprint can't be None in a Dataset object" if self.info.features.type != inferred_features.type: > raise ValueError( "External features info don't match the dataset:\nGot\n{}\nwith type\n{}\n\nbut expected something like\n{}\nwith type\n{}".format( self.info.features, self.info.features.type, inferred_features, inferred_features.type ) ) E ValueError: External features info don't match the dataset: E Got E {'_input_hash': Value(dtype='int64', id=None), '_task_hash': Value(dtype='int64', id=None), '_view_id': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'encoding__ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'encoding__offsets': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'encoding__overflowing': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'encoding__tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'encoding__words': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_labels': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'relations': [{'child': Value(dtype='int64', id=None), 'child_span': {'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None)}, 'color': Value(dtype='string', id=None), 'head': Value(dtype='int64', id=None), 'head_span': {'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None)}, 'label': Value(dtype='string', id=None)}], 'spans': [{'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None), 'token_end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'type': Value(dtype='string', id=None)}], 'text': Value(dtype='string', id=None), 'tokens': [{'disabled': Value(dtype='bool', id=None), 'end': Value(dtype='int64', id=None), 'id': Value(dtype='int64', id=None), 'start': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None), 'ws': Value(dtype='bool', id=None)}]} E with type E struct<_input_hash: int64, _task_hash: int64, _view_id: string, answer: string, encoding__ids: list<item: int64>, encoding__offsets: 
list<item: list<item: int64>>, encoding__overflowing: list<item: null>, encoding__tokens: list<item: string>, encoding__words: list<item: int64>, ner_ids: list<item: int64>, ner_labels: list<item: string>, relations: list<item: struct<child: int64, child_span: struct<end: int64, label: string, start: int64, token_end: int64, token_start: int64>, color: string, head: int64, head_span: struct<end: int64, label: string, start: int64, token_end: int64, token_start: int64>, label: string>>, spans: list<item: struct<end: int64, label: string, start: int64, text: string, token_end: int64, token_start: int64, type: string>>, text: string, tokens: list<item: struct<disabled: bool, end: int64, id: int64, start: int64, text: string, ws: bool>>> E E but expected something like E {'_input_hash': Value(dtype='int64', id=None), '_task_hash': Value(dtype='int64', id=None), '_view_id': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'encoding__ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'encoding__offsets': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'encoding__overflowing': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'encoding__tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'encoding__words': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_labels': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'relations': [{'head': Value(dtype='int64', id=None), 'child': Value(dtype='int64', id=None), 'head_span': {'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None)}, 'child_span': {'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None)}, 'color': Value(dtype='string', id=None), 'label': Value(dtype='string', id=None)}], 'spans': [{'text': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'type': Value(dtype='string', id=None), 'label': Value(dtype='string', id=None)}], 'text': Value(dtype='string', id=None), 'tokens': [{'text': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'id': Value(dtype='int64', id=None), 'ws': Value(dtype='bool', id=None), 'disabled': Value(dtype='bool', id=None)}]} E with type E struct<_input_hash: int64, _task_hash: int64, _view_id: string, answer: string, encoding__ids: list<item: int64>, encoding__offsets: list<item: list<item: int64>>, encoding__overflowing: list<item: null>, encoding__tokens: list<item: string>, encoding__words: list<item: int64>, ner_ids: list<item: int64>, ner_labels: list<item: string>, relations: list<item: struct<head: int64, child: int64, head_span: struct<start: int64, end: int64, token_start: int64, token_end: int64, label: string>, child_span: struct<start: int64, end: int64, token_start: int64, token_end: int64, label: string>, color: string, label: string>>, spans: list<item: struct<text: string, start: int64, token_start: int64, token_end: int64, end: 
int64, type: string, label: string>>, text: string, tokens: list<item: struct<text: string, start: int64, end: int64, id: int64, ws: bool, disabled: bool>>> ../../../../../.virtualenvs/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:274: ValueError ``` ## Versions - Datasets: 1.6.1 - Python: 3.8.5 (default, Jan 26 2021, 10:01:04) [Clang 12.0.0 (clang-1200.0.32.2)] - Platform: macOS-10.15.7-x86_64-i386-64bit
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2267/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2267/timeline
null
null
null
null
false
[ "Thanks for reporting ! We're looking into it", "I'm not able to reproduce this, do you think you can provide a code that creates a DatasetDict that has this issue when saving and reloading ?", "Hi, I just ran into a similar error. Here is the minimal code to reproduce:\r\n```python\r\nfrom datasets import load...
https://api.github.com/repos/huggingface/datasets/issues/1374
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1374/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1374/comments
https://api.github.com/repos/huggingface/datasets/issues/1374/events
https://github.com/huggingface/datasets/pull/1374
760,288,291
MDExOlB1bGxSZXF1ZXN0NTM1MTQ1Mzgw
1,374
Add OPUS Tilde Model Dataset
[]
closed
false
null
1
2020-12-09T12:29:23Z
2020-12-10T16:11:29Z
2020-12-10T16:11:28Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1374/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1374/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1374.diff", "html_url": "https://github.com/huggingface/datasets/pull/1374", "merged_at": "2020-12-10T16:11:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/1374.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1374" }
true
[ "merging since the CI is fixed on master" ]
https://api.github.com/repos/huggingface/datasets/issues/4154
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4154/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4154/comments
https://api.github.com/repos/huggingface/datasets/issues/4154/events
https://github.com/huggingface/datasets/pull/4154
1,202,145,721
PR_kwDODunzps42Hh14
4,154
Generate tasks.json taxonomy from `huggingface_hub`
[]
closed
false
null
7
2022-04-12T17:12:46Z
2022-04-14T10:32:32Z
2022-04-14T10:26:13Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4154/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4154/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4154.diff", "html_url": "https://github.com/huggingface/datasets/pull/4154", "merged_at": "2022-04-14T10:26:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/4154.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4154" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Ok recomputed the json file, this should be ready to review now! @lhoestq ", "Note: the generated JSON from `hf/hub-docs` can be found in the output of a GitHub Action run on that repo, for instance in https://github.com/huggingfac...
https://api.github.com/repos/huggingface/datasets/issues/5354
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5354/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5354/comments
https://api.github.com/repos/huggingface/datasets/issues/5354/events
https://github.com/huggingface/datasets/issues/5354
1,492,174,125
I_kwDODunzps5Y8MUt
5,354
Consider using "Sequence" instead of "List"
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true...
open
false
null
7
2022-12-12T15:39:45Z
2023-07-26T16:25:51Z
null
null
### Feature request Hi, please consider using the `Sequence` type annotation instead of `List` in function arguments such as in [`Dataset.from_parquet()`](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L1088). Using `List` leads to type-checking errors; see below. **How to reproduce** ```py list_of_filenames = ["foo.parquet", "bar.parquet"] ds = Dataset.from_parquet(list_of_filenames) ``` **Expected mypy output:** ``` Success: no issues found ``` **Actual mypy output:** ```py test.py:19: error: Argument 1 to "from_parquet" of "Dataset" has incompatible type "List[str]"; expected "Union[Union[str, bytes, PathLike[Any]], List[Union[str, bytes, PathLike[Any]]]]" [arg-type] test.py:19: note: "List" is invariant -- see https://mypy.readthedocs.io/en/stable/common_issues.html#variance test.py:19: note: Consider using "Sequence" instead, which is covariant ``` **Env:** mypy 0.991, Python 3.10.0, datasets 2.7.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5354/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5354/timeline
null
null
null
null
false
[ "Hi! Linking a comment to provide more info on the issue: https://stackoverflow.com/a/39458225. This means we should replace all (most of) the occurrences of `List` with `Sequence` in function signatures.\r\n\r\n@tranhd95 Would you be interested in submitting a PR?", "Hi all! I tried to reproduce this issue and d...
https://api.github.com/repos/huggingface/datasets/issues/581
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/581/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/581/comments
https://api.github.com/repos/huggingface/datasets/issues/581/events
https://github.com/huggingface/datasets/issues/581
695,120,517
MDU6SXNzdWU2OTUxMjA1MTc=
581
Better error message when input file does not exist
[]
closed
false
null
0
2020-09-07T13:47:59Z
2020-09-09T09:00:07Z
2020-09-09T09:00:07Z
null
In the following scenario, when `data_files` is an empty list, the stack trace and error message could be improved. This can probably be solved by checking for each file whether it actually exists and/or whether the argument is not false-y. ```python dataset = load_dataset("text", data_files=[]) ``` Example error trace. ``` Using custom data configuration default Downloading and preparing dataset text/default-d18f9b6611eb8e16 (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to C:\Users\bramv\.cache\huggingface\datasets\text\default-d18f9b6611eb8e16\0.0.0\3a79870d85f1982d6a2af884fde86a71c771747b4b161fd302d28ad22adf985b... Traceback (most recent call last): File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 424, in incomplete_dir yield tmp_dir File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 462, in download_and_prepare self._download_and_prepare( File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 537, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 813, in _prepare_split num_examples, num_bytes = writer.finalize() File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\arrow_writer.py", line 217, in finalize self.pa_writer.close() AttributeError: 'NoneType' object has no attribute 'close' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:/dev/python/dutch-simplification/main.py", line 7, in <module> dataset = load_dataset("text", data_files=files) File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\load.py", line 548, in load_dataset builder_instance.download_and_prepare( File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 470, in download_and_prepare self._save_info() File "c:\users\bramv\appdata\local\programs\python\python38\lib\contextlib.py", line 131, in __exit__ self.gen.throw(type, value, traceback) File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 430, in incomplete_dir shutil.rmtree(tmp_dir) File "c:\users\bramv\appdata\local\programs\python\python38\lib\shutil.py", line 737, in rmtree return _rmtree_unsafe(path, onerror) File "c:\users\bramv\appdata\local\programs\python\python38\lib\shutil.py", line 615, in _rmtree_unsafe onerror(os.unlink, fullname, sys.exc_info()) File "c:\users\bramv\appdata\local\programs\python\python38\lib\shutil.py", line 613, in _rmtree_unsafe os.unlink(fullname) PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\bramv\\.cache\\huggingface\\datasets\\text\\default-d18f9b6611eb8e16\\0.0.0\\3a79870d85f1982d6a2af884fde86a71c771747b4b161fd302d28ad22adf985b.incomplete\\text-train.arrow' ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/581/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/581/timeline
null
completed
null
null
false
[]
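A sketch of the upfront validation suggested in #581 above. `validate_data_files` is a hypothetical helper for illustration, not the check that actually landed in the library:

```python
import os


def validate_data_files(data_files):
    # Fail fast with a clear message instead of a confusing traceback later.
    if not data_files:
        raise ValueError("`data_files` is empty: pass at least one input file.")
    missing = [f for f in data_files if not os.path.isfile(f)]
    if missing:
        raise FileNotFoundError(f"Input files not found: {missing}")


validate_data_files(["train.txt"])  # raises unless train.txt exists
```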
https://api.github.com/repos/huggingface/datasets/issues/658
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/658/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/658/comments
https://api.github.com/repos/huggingface/datasets/issues/658/events
https://github.com/huggingface/datasets/pull/658
706,206,247
MDExOlB1bGxSZXF1ZXN0NDkwNzk4MDc0
658
Fix squad metric's Features
[]
closed
false
null
1
2020-09-22T09:09:52Z
2020-09-29T15:58:30Z
2020-09-29T15:58:30Z
null
Resolves issue [657](https://github.com/huggingface/datasets/issues/657).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/658/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/658/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/658.diff", "html_url": "https://github.com/huggingface/datasets/pull/658", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/658.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/658" }
true
[ "Closing this one in favor of #670 \r\n\r\nThanks again for reporting the issue and proposing this fix !\r\nLet me know if you have other remarks" ]
https://api.github.com/repos/huggingface/datasets/issues/946
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/946/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/946/comments
https://api.github.com/repos/huggingface/datasets/issues/946/events
https://github.com/huggingface/datasets/pull/946
754,278,632
MDExOlB1bGxSZXF1ZXN0NTMwMjA1Nzgw
946
add PEC dataset
[]
closed
false
null
3
2020-12-01T10:41:41Z
2020-12-03T02:47:14Z
2020-12-03T02:47:14Z
null
A persona-based empathetic conversation dataset published at EMNLP 2020.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/946/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/946/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/946.diff", "html_url": "https://github.com/huggingface/datasets/pull/946", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/946.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/946" }
true
[ "The checks failed again even if I didn't make any changes.", "you just need to rebase from master to fix the CI :)", "Sorry for the mess, I'm confused by the rebase and thus created a new branch." ]
https://api.github.com/repos/huggingface/datasets/issues/3397
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3397/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3397/comments
https://api.github.com/repos/huggingface/datasets/issues/3397/events
https://github.com/huggingface/datasets/pull/3397
1,073,502,444
PR_kwDODunzps4vgh1U
3,397
add BNL newspapers
[]
closed
false
null
9
2021-12-07T15:43:21Z
2022-01-17T18:35:34Z
2022-01-17T18:35:34Z
null
This pull request adds the BNL's [processed newspaper collections](https://data.bnl.lu/data/historical-newspapers/) as a dataset. This is partly done to support BigScience; see https://github.com/bigscience-workshop/data_tooling/issues/192. The dataset card is sparser than I would like, but I plan to open a separate pull request later to make it more complete. I had to manually add the `dummy_data`, but I believe I've done this correctly (the tests pass locally).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3397/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3397/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3397.diff", "html_url": "https://github.com/huggingface/datasets/pull/3397", "merged_at": "2022-01-17T18:35:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/3397.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3397" }
true
[ "\r\n> Also, maybe calling the dataset as \"bnl_historical_newspapers\" and setting \"processed\" as one configuration name?\r\n\r\nThis sounds like a good idea but my only question around this is how easy it would be to use the same approach for processing the other newspaper collections [https://data.bnl.lu/data/...
https://api.github.com/repos/huggingface/datasets/issues/458
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/458/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/458/comments
https://api.github.com/repos/huggingface/datasets/issues/458/events
https://github.com/huggingface/datasets/pull/458
668,972,666
MDExOlB1bGxSZXF1ZXN0NDU5Mzk5ODg2
458
Install CoVal metric from github
[]
closed
false
null
0
2020-07-30T16:59:25Z
2020-07-31T13:56:33Z
2020-07-31T13:56:33Z
null
Changed the import statements in `coval.py` to direct the user to install the original package from github if it's not already installed (the warning will only display properly after merging [PR455](https://github.com/huggingface/nlp/pull/455)) Also changed the function call to use named rather than positional arguments.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/458/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/458/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/458.diff", "html_url": "https://github.com/huggingface/datasets/pull/458", "merged_at": "2020-07-31T13:56:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/458.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/458" }
true
[]
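A sketch of the import pattern described in #458 above. The exact submodule path and install command are assumptions based on the upstream CoVal repository:

```python
# Direct the user to the original package when the import fails.
try:
    from coval.conll import reader  # noqa: F401  # assumed CoVal module path
except ImportError as err:
    raise ImportError(
        "CoVal is not installed. Install it from source, for example:\n"
        "  pip install git+https://github.com/ns-moosavi/coval.git"
    ) from err
```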
https://api.github.com/repos/huggingface/datasets/issues/5929
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5929/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5929/comments
https://api.github.com/repos/huggingface/datasets/issues/5929/events
https://github.com/huggingface/datasets/issues/5929
1,744,478,456
I_kwDODunzps5n-qD4
5,929
Importing PyTorch reduces multiprocessing performance for map
[]
closed
false
null
2
2023-06-06T19:42:25Z
2023-06-16T13:09:12Z
2023-06-16T13:09:12Z
null
### Describe the bug I noticed that the performance of my dataset preprocessing with `map(...,num_proc=32)` decreases when PyTorch is imported. ### Steps to reproduce the bug I created two example scripts to reproduce this behavior: ``` import datasets datasets.disable_caching() from datasets import Dataset import time PROC=32 if __name__ == "__main__": dataset = [True] * 10000000 dataset = Dataset.from_dict({'train': dataset}) start = time.time() dataset.map(lambda x: x, num_proc=PROC) end = time.time() print(end - start) ``` It takes around 4 seconds on my machine, while the same code with an `import torch` added: ``` import datasets datasets.disable_caching() from datasets import Dataset import time import torch PROC=32 if __name__ == "__main__": dataset = [True] * 10000000 dataset = Dataset.from_dict({'train': dataset}) start = time.time() dataset.map(lambda x: x, num_proc=PROC) end = time.time() print(end - start) ``` takes around 22 seconds. ### Expected behavior I would expect the import of torch not to have such a significant effect on the performance of map using multiprocessing. ### Environment info - `datasets` version: 2.12.0 - Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35 - Python version: 3.11.3 - Huggingface_hub version: 0.15.1 - PyArrow version: 12.0.0 - Pandas version: 2.0.2 - torch: 2.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5929/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5929/timeline
null
completed
null
null
false
[ "Hi! The times match when I run this code locally or on Colab.\r\n\r\nAlso, we use `multiprocess`, not `multiprocessing`, for parallelization, and torch's `__init__.py` (executed on `import torch` ) slightly modifies the latter.", "Hey Mariosasko,\r\n\r\nThanks for looking into it. We further did some investigati...
https://api.github.com/repos/huggingface/datasets/issues/5109
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5109/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5109/comments
https://api.github.com/repos/huggingface/datasets/issues/5109/events
https://github.com/huggingface/datasets/issues/5109
1,407,434,706
I_kwDODunzps5T47_S
5,109
Map caching not working for some class methods
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
2
2022-10-13T09:12:58Z
2022-10-17T10:38:45Z
2022-10-17T10:38:45Z
null
## Describe the bug The cache loading is not working as expected for some class methods with a model stored in an attribute. The new fingerprint for `_map_single` is not the same at each run. The hasher generates a different hash for the class method. This comes from the `dumps` function in `datasets.utils.py_utils`, which generates a different dump at each run. ## Steps to reproduce the bug ```python from datasets import load_dataset from transformers import AutoConfig, AutoModel, AutoTokenizer dataset = load_dataset("ethos", "binary") BASE_MODELNAME = "sentence-transformers/all-MiniLM-L6-v2" class Object: def __init__(self): config = AutoConfig.from_pretrained(BASE_MODELNAME) self.bert = AutoModel.from_config(config=config, add_pooling_layer=False) self.tok = AutoTokenizer.from_pretrained(BASE_MODELNAME) def tokenize(self, examples): tokenized_texts = self.tok( examples["text"], padding="max_length", truncation=True, max_length=256, ) return tokenized_texts instance = Object() result = dict() for phase in ["train"]: result[phase] = dataset[phase].map(instance.tokenize, batched=True, load_from_cache_file=True, num_proc=2) ``` ## Expected results Load the cache instead of recomputing the result. ## Actual results The result is recomputed from scratch at each run. The cache works fine when deleting the `bert` attribute. ## Environment info - `datasets` version: 2.5.3.dev0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.9.13 - PyArrow version: 7.0.0 - Pandas version: 1.5.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5109/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5109/timeline
null
completed
null
null
false
[ "The hash used for caching is computed by pickling recursively the function passed to `map`. Maybe some objects don't have the same hash across sessions. In particular you can check the hash of your model using\r\n```python\r\nfrom datasets.fingerprint import Hasher\r\nobj = AutoModel.from_config(config=config, ad...
https://api.github.com/repos/huggingface/datasets/issues/3247
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3247/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3247/comments
https://api.github.com/repos/huggingface/datasets/issues/3247/events
https://github.com/huggingface/datasets/issues/3247
1,049,699,088
I_kwDODunzps4-kSMQ
3,247
Loading big json dataset raises pyarrow.lib.ArrowNotImplementedError
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
3
2021-11-10T11:17:59Z
2022-04-10T14:05:57Z
2022-04-10T14:05:57Z
null
## Describe the bug When trying to create a dataset from a json file of around 25MB, the following error is raised: `pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct` Splitting the big file into smaller ones and then loading them with the `load_dataset` method did not work either. Creating a pandas dataframe from it and then loading it with `Dataset.from_pandas` works. ## Steps to reproduce the bug ```python load_dataset("json", data_files="test.json") ``` test.json ~25MB ```json {"a": {"c": 8, "b": 5}} {"a": {"b": 7, "c": 6}} {"a": {"c": 8, "b": 5}} {"a": {"b": 7, "c": 6}} {"a": {"c": 8, "b": 5}} ... ``` working.json ~160bytes ```json {"a": {"c": 8, "b": 5}} {"a": {"b": 7, "c": 6}} {"a": {"c": 8, "b": 5}} {"a": {"b": 7, "c": 6}} {"a": {"c": 8, "b": 5}} ``` ## Expected results It should load the dataset from the json file without error. ## Actual results It raises the exception `pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct` ``` Traceback (most recent call last): File "/Users/m/workspace/xxx/project/main.py", line 60, in <module> dataset = load_dataset("json", data_files="result.json") File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/load.py", line 1627, in load_dataset builder_instance.download_and_prepare( File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/builder.py", line 697, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/builder.py", line 1159, in _prepare_split writer.write_table(table) File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/arrow_writer.py", line 428, in write_table pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File "pyarrow/table.pxi", line 1685, in pyarrow.lib.Table.from_arrays File "pyarrow/table.pxi", line 630, in pyarrow.lib._sanitize_arrays File "pyarrow/array.pxi", line 338, in pyarrow.lib.asarray File "pyarrow/table.pxi", line 304, in pyarrow.lib.ChunkedArray.cast File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/pyarrow/compute.py", line 309, in cast return call_function("cast", [arr], options) File "pyarrow/_compute.pyx", line 528, in pyarrow._compute.call_function File "pyarrow/_compute.pyx", line 327, in pyarrow._compute.Function.call File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 120, in pyarrow.lib.check_status pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct ``` ## Environment info - `datasets` version: 1.14.0 - Platform: macOS-12.0.1-arm64-arm-64bit - Python version: 3.9.7 - PyArrow version: 6.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3247/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3247/timeline
null
completed
null
null
false
[ "Hi,\r\n\r\nthis issue is similar to https://github.com/huggingface/datasets/issues/3093, so you can either use the solution provided there or try to load the data in one chunk (you can control the chunk size by specifying the `chunksize` parameter (`int`) in `load_dataset`).\r\n\r\n@lhoestq Is this worth opening a...
https://api.github.com/repos/huggingface/datasets/issues/1533
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1533/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1533/comments
https://api.github.com/repos/huggingface/datasets/issues/1533/events
https://github.com/huggingface/datasets/pull/1533
764,835,913
MDExOlB1bGxSZXF1ZXN0NTM4NzE4MDAz
1,533
add id_panl_bppt, a parallel corpus for en-id
[]
closed
false
null
2
2020-12-13T03:11:27Z
2020-12-21T10:40:36Z
2020-12-21T10:40:36Z
null
Parallel Text Corpora for English - Indonesian
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1533/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1533/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1533.diff", "html_url": "https://github.com/huggingface/datasets/pull/1533", "merged_at": "2020-12-21T10:40:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/1533.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1533" }
true
[ "Hi @lhoestq, thanks for the review. I will have a look and update it accordingly.", "Strange error message :-)\r\n\r\n```\r\n> tf_context = tf.python.context.context() # eager mode context\r\nE AttributeError: module 'tensorflow' has no attribute 'python'\r\n```\r\n" ]
https://api.github.com/repos/huggingface/datasets/issues/3868
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3868/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3868/comments
https://api.github.com/repos/huggingface/datasets/issues/3868/events
https://github.com/huggingface/datasets/pull/3868
1,162,914,114
PR_kwDODunzps40HnWA
3,868
Ignore duplicate keys if `ignore_verifications=True`
[]
closed
false
null
2
2022-03-08T17:14:56Z
2022-03-09T13:50:45Z
2022-03-09T13:50:44Z
null
Currently, it's impossible to generate a dataset if some keys from `_generate_examples` are duplicated. This PR allows skipping the check for duplicate keys if `ignore_verifications` is set to `True`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3868/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3868/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3868.diff", "html_url": "https://github.com/huggingface/datasets/pull/3868", "merged_at": "2022-03-09T13:50:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/3868.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3868" }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3868). All of your documentation changes will be reflected on that endpoint.", "Cool thanks ! Could you add a test please ?" ]
https://api.github.com/repos/huggingface/datasets/issues/1055
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1055/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1055/comments
https://api.github.com/repos/huggingface/datasets/issues/1055/events
https://github.com/huggingface/datasets/pull/1055
756,298,372
MDExOlB1bGxSZXF1ZXN0NTMxODY1NjM4
1,055
Add hebrew-sentiment
[]
closed
false
null
4
2020-12-03T15:24:31Z
2022-02-21T15:26:05Z
2020-12-04T11:24:16Z
null
hebrew-sentiment dataset is ready! (including tests, tags etc)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1055/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1055/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1055.diff", "html_url": "https://github.com/huggingface/datasets/pull/1055", "merged_at": "2020-12-04T11:24:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/1055.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1055" }
true
[ "@elronbandel it looks like something went wrong with the renaming, as the old files are still in the PR. Can you `git rm datasets/hebrew-sentiment` ?", "merging since the CI is fixed on master", "This is the old version of the data.\r\nHere is the fixed version.\r\nhttps://github.com/OnlpLab/Hebrew-Sentiment-D...
https://api.github.com/repos/huggingface/datasets/issues/5488
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5488/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5488/comments
https://api.github.com/repos/huggingface/datasets/issues/5488/events
https://github.com/huggingface/datasets/issues/5488
1,565,025,262
I_kwDODunzps5dSGPu
5,488
Error loading MP3 files from CommonVoice
[]
closed
false
null
4
2023-01-31T21:25:33Z
2023-03-02T16:25:14Z
2023-03-02T16:25:13Z
null
### Describe the bug When loading a CommonVoice dataset with `datasets==2.9.0` and `torchaudio>=0.12.0`, I get an error reading the audio arrays: ```python --------------------------------------------------------------------------- LibsndfileError Traceback (most recent call last) ~/.local/lib/python3.8/site-packages/datasets/features/audio.py in _decode_mp3(self, path_or_file) 310 try: # try torchaudio anyway because sometimes it works (depending on the os and os packages installed) --> 311 array, sampling_rate = self._decode_mp3_torchaudio(path_or_file) 312 except RuntimeError: ~/.local/lib/python3.8/site-packages/datasets/features/audio.py in _decode_mp3_torchaudio(self, path_or_file) 351 --> 352 array, sampling_rate = torchaudio.load(path_or_file, format="mp3") 353 if self.sampling_rate and self.sampling_rate != sampling_rate: ~/.local/lib/python3.8/site-packages/torchaudio/backend/soundfile_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format) 204 """ --> 205 with soundfile.SoundFile(filepath, "r") as file_: 206 if file_.format != "WAV" or normalize: ~/.local/lib/python3.8/site-packages/soundfile.py in __init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd) 654 format, subtype, endian) --> 655 self._file = self._open(file, mode_int, closefd) 656 if set(mode).issuperset('r+') and self.seekable(): ~/.local/lib/python3.8/site-packages/soundfile.py in _open(self, file, mode_int, closefd) 1212 err = _snd.sf_error(file_ptr) -> 1213 raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name)) 1214 if mode_int == _snd.SFM_WRITE: LibsndfileError: Error opening <_io.BytesIO object at 0x7fa539462090>: File contains data in an unknown format. ``` I assume this is because there's some issue with the mp3 decoding process. I've verified that I have `ffmpeg>=4` (on a Linux distro), which appears to be the fallback backend for `torchaudio,` (at least according to #4889). ### Steps to reproduce the bug ```python dataset = load_dataset("mozilla-foundation/common_voice_11_0", "be", split="train") dataset[0] ``` ### Expected behavior Similar behavior to `torchaudio<0.12.0`, which doesn't result in a `LibsndfileError` ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 10.0.1 - Pandas version: 1.5.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5488/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5488/timeline
null
completed
null
null
false
[ "Hi @kradonneoh, thanks for reporting.\r\n\r\nPlease note that to work with audio datasets (and specifically with MP3 files) we have detailed installation instructions in our docs: https://huggingface.co/docs/datasets/installation#audio\r\n- one of the requirements is torchaudio<0.12.0\r\n\r\nLet us know if the pro...
https://api.github.com/repos/huggingface/datasets/issues/2368
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2368/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2368/comments
https://api.github.com/repos/huggingface/datasets/issues/2368/events
https://github.com/huggingface/datasets/pull/2368
893,411,076
MDExOlB1bGxSZXF1ZXN0NjQ1OTI5NzM0
2,368
Allow "other-X" in licenses
[]
closed
false
null
0
2021-05-17T14:47:54Z
2021-05-17T16:36:27Z
2021-05-17T16:36:27Z
null
This PR allows "other-X" licenses during metadata validation. @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2368/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2368/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2368.diff", "html_url": "https://github.com/huggingface/datasets/pull/2368", "merged_at": "2021-05-17T16:36:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/2368.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2368" }
true
[]
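A sketch of the validation rule described in #2368 above. The regex and the known-license set are illustrative, not the repository's actual validation code:

```python
import re

KNOWN_LICENSES = {"mit", "apache-2.0", "cc-by-4.0"}  # illustrative subset


def is_valid_license(tag: str) -> bool:
    # Accept any known license tag, plus custom "other-X" tags.
    return tag in KNOWN_LICENSES or re.fullmatch(r"other-.+", tag) is not None


print(is_valid_license("other-my-internal-license"))  # True
print(is_valid_license("unknown"))                    # False
```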
https://api.github.com/repos/huggingface/datasets/issues/1433
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1433/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1433/comments
https://api.github.com/repos/huggingface/datasets/issues/1433/events
https://github.com/huggingface/datasets/pull/1433
760,813,539
MDExOlB1bGxSZXF1ZXN0NTM1NTgxNzE3
1,433
Adding the ASSIN 2 dataset
[]
closed
false
null
0
2020-12-10T01:57:02Z
2020-12-11T14:32:56Z
2020-12-11T14:32:56Z
null
Adding the ASSIN 2 dataset, a Portuguese language dataset for Natural Language Inference and Semantic Similarity Scoring
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1433/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1433/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1433.diff", "html_url": "https://github.com/huggingface/datasets/pull/1433", "merged_at": "2020-12-11T14:32:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/1433.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1433" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2497
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2497/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2497/comments
https://api.github.com/repos/huggingface/datasets/issues/2497/events
https://github.com/huggingface/datasets/pull/2497
920,250,382
MDExOlB1bGxSZXF1ZXN0NjY5NDI3OTU3
2,497
Use default cast for sliced list arrays if pyarrow >= 4
[]
closed
false
{ "closed_at": "2021-07-09T05:50:07Z", "closed_issues": 12, "created_at": "2021-05-31T16:13:06Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-07-08T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/5", "id": 6808903, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels", "node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==", "number": 5, "open_issues": 0, "state": "closed", "title": "1.9", "updated_at": "2021-07-12T14:12:00Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/5" }
2
2021-06-14T10:02:47Z
2021-06-15T18:06:18Z
2021-06-14T14:24:37Z
null
Since pyarrow version 4, casting sliced list arrays is supported. This PR uses the default pyarrow cast in Datasets to cast sliced list arrays if the pyarrow version is >= 4. Related to PRs #2461 and #2490. cc: @lhoestq, @abhi1thakur, @SBrandeis
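For illustration, here is roughly what now works out of the box (a minimal sketch, assuming pyarrow >= 4 is installed):
```python
import pyarrow as pa

# A list array whose slice no longer starts at offset 0.
arr = pa.array([[1, 2], [3], [4, 5, 6]])
sliced = arr.slice(1)  # [[3], [4, 5, 6]]

# On pyarrow >= 4 this cast works directly; older versions raised an
# error on sliced (non-zero offset) list arrays, hence the custom cast in Datasets.
casted = sliced.cast(pa.list_(pa.int32()))
print(casted.type)  # list<item: int32>
```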
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2497/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2497/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2497.diff", "html_url": "https://github.com/huggingface/datasets/pull/2497", "merged_at": "2021-06-14T14:24:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/2497.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2497" }
true
[ "I believe we don't use PyArrow >= 4.0.0 because of some segfault issues:\r\nhttps://github.com/huggingface/datasets/blob/1206ffbcd42dda415f6bfb3d5040708f50413c93/setup.py#L78\r\nCan you confirm @lhoestq ?", "@SBrandeis pyarrow version 4.0.1 has fixed that issue: #2489 😉 " ]
https://api.github.com/repos/huggingface/datasets/issues/2036
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2036/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2036/comments
https://api.github.com/repos/huggingface/datasets/issues/2036/events
https://github.com/huggingface/datasets/issues/2036
829,909,258
MDU6SXNzdWU4Mjk5MDkyNTg=
2,036
Cannot load wikitext
[]
closed
false
null
1
2021-03-12T09:09:39Z
2021-03-15T08:45:02Z
2021-03-15T08:44:44Z
null
When I execute this code:
```
>>> from datasets import load_dataset
>>> test_dataset = load_dataset("wikitext")
```
I get an error; any help?
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset
    path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
  File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
    local_path = cached_path(file_path, download_config=download_config)
  File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
    use_etag=download_config.use_etag,
  File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 487, in get_from_cache
    raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/wikitext/wikitext.py
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2036/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2036/timeline
null
completed
null
null
false
[ "Solved!" ]
https://api.github.com/repos/huggingface/datasets/issues/4466
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4466/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4466/comments
https://api.github.com/repos/huggingface/datasets/issues/4466/events
https://github.com/huggingface/datasets/pull/4466
1,266,159,920
PR_kwDODunzps45ZLsd
4,466
Optimize contiguous shard and select
[]
closed
false
null
3
2022-06-09T13:45:39Z
2022-06-14T16:04:30Z
2022-06-14T15:54:45Z
null
Currently `.shard()` and `.select()` always create an indices mapping. However, if the requested data are contiguous, it's much more optimized to simply slice the Arrow table instead of building an indices mapping. In particular:
- the shard/select operation will be much faster
- reading speed will be much faster in the resulting dataset, since it won't have to do a lookup step in the indices mapping

Since `.shard()` is also used for `.map()` with `num_proc>1`, it will also significantly improve the reading speed of multiprocessed `.map()` operations.

Here is an example of the speed-up:
```python
>>> import io
>>> import numpy as np
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"a": np.random.rand(10_000_000)})
>>> shard = ds.shard(num_shards=4, index=0, contiguous=True)  # this calls `.select(range(2_500_000))`
>>> buf = io.BytesIO()
>>> %time shard.to_json(buf)
Creating json from Arrow format: 100%|██████████████████| 100/100 [00:00<00:00, 376.17ba/s]
CPU times: user 258 ms, sys: 9.06 ms, total: 267 ms
Wall time: 266 ms
```
while previously it was
```python
Creating json from Arrow format: 100%|███████████████████| 100/100 [00:03<00:00, 29.41ba/s]
CPU times: user 3.33 s, sys: 69.1 ms, total: 3.39 s
Wall time: 3.4 s
```
In this simple case the speed-up is x10, but @sayakpaul experienced a x100 speed-up on their data when exporting to JSON.

## Implementation details

I mostly improved `.select()`: it now checks if the input corresponds to a contiguous chunk of data, and then it slices the main Arrow table (or the indices mapping table if it exists). To check if the input indices are contiguous, it checks two possibilities:
- if the indices are of type `range`, it checks that start >= 0 and step == 1
- otherwise, in the general case, it iterates over the indices. If all the indices are contiguous then we're good; otherwise we have to build an indices mapping.

Having to iterate over the indices doesn't cause performance issues IMO because:
- either they are contiguous, and in this case the cost of iterating over the indices is much less than the cost of creating an indices mapping
- or they are not contiguous, and then iterating generally stops quickly when it encounters the first index that is not contiguous.
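For reference, a rough sketch of the contiguity check described above (`_is_contiguous` is a hypothetical helper, not the exact code in this PR):
```python
def _is_contiguous(indices) -> bool:
    """Hypothetical helper: True if `indices` select one contiguous, increasing chunk."""
    if isinstance(indices, range):
        return indices.start >= 0 and indices.step == 1
    prev = None
    for i in indices:
        if prev is not None and i != prev + 1:
            return False  # bail out at the first non-contiguous index
        prev = i
    return True

# If contiguous, a zero-copy slice replaces the indices mapping:
# table.slice(start, length) instead of table.take(indices)
assert _is_contiguous(range(2_500_000))
assert not _is_contiguous([0, 1, 5])
```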
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4466/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4466/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4466.diff", "html_url": "https://github.com/huggingface/datasets/pull/4466", "merged_at": "2022-06-14T15:54:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/4466.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4466" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "I thought of just mentioning the benefits I got. Here's the code that @lhoestq provided:\r\n\r\n```py\r\nimport os\r\nfrom datasets import load_dataset\r\nfrom tqdm.auto import tqdm\r\n\r\nds = load_dataset(\"squad\", split=\"train\"...
https://api.github.com/repos/huggingface/datasets/issues/3997
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3997/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3997/comments
https://api.github.com/repos/huggingface/datasets/issues/3997/events
https://github.com/huggingface/datasets/pull/3997
1,178,566,568
PR_kwDODunzps4058xr
3,997
Sync Features dictionaries
[]
closed
false
null
1
2022-03-23T19:23:51Z
2022-04-13T15:52:27Z
2022-04-13T15:46:19Z
null
This PR adds a wrapper to the `Features` class to keep the secondary dict, `_column_requires_decoding`, aligned with the main dict (as discussed in https://github.com/huggingface/datasets/pull/3723#discussion_r806912731). A more elegant approach would be to subclass `UserDict` and override `__setitem__` and `__delitem__`, but this PR doesn't implement it for the following reasons:
* it requires replacing all occurrences of `isinstance(obj, dict)` with `isinstance(obj, Mapping)` in `features.py`, and the latter is five times slower than `isinstance(obj, dict)` on my machine
* it is a breaking change, i.e., `isinstance(Features(...), dict)` would return `False` afterwards
* IMO, it makes sense to be consistent in the user-facing API and subclass either `dict` or `UserDict`. The problem with the latter is that it can't be used for `DatasetDict`, because `DatasetDict` exposes the `data` property, which is also used by `UserDict`, so this would result in a collision.
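For context, a minimal sketch of the `UserDict` alternative discussed above (illustrative only; the `decode_example` check is a stand-in for the real predicate):
```python
from collections import UserDict

class SyncedFeatures(UserDict):
    """Keeps a secondary dict in sync with the main mapping on every update."""

    def __init__(self, *args, **kwargs):
        # Must exist before UserDict.__init__, which calls __setitem__ via update().
        self._column_requires_decoding = {}
        super().__init__(*args, **kwargs)

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        # Hypothetical predicate; the real check inspects the feature type.
        self._column_requires_decoding[key] = hasattr(value, "decode_example")

    def __delitem__(self, key):
        super().__delitem__(key)
        self._column_requires_decoding.pop(key, None)
```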
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3997/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3997/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3997.diff", "html_url": "https://github.com/huggingface/datasets/pull/3997", "merged_at": "2022-04-13T15:46:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/3997.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3997" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/2604
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2604/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2604/comments
https://api.github.com/repos/huggingface/datasets/issues/2604/events
https://github.com/huggingface/datasets/issues/2604
938,602,237
MDU6SXNzdWU5Mzg2MDIyMzc=
2,604
Add option to delete temporary files (e.g. extracted files) when loading dataset
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "closed_at": "2021-07-21T15:36:49Z", "closed_issues": 29, "created_at": "2021-06-08T18:48:33Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-08-05T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/6", "id": 6836458, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "open_issues": 0, "state": "closed", "title": "1.10", "updated_at": "2021-07-21T15:36:49Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/6" }
14
2021-07-07T07:56:16Z
2021-07-19T09:08:18Z
2021-07-19T09:08:18Z
null
I'm loading a dataset consisting of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180 GB of Arrow cache tables. Having a simple way to delete the extracted files after usage (or even better, to stream extraction/deletion) would be nice to avoid disk clutter. I can maybe tackle this one in the JSON script unless you want a more general solution.
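A possible shape for this, as a sketch (assuming a `delete_extracted` flag on `DownloadConfig`; double-check the exact field name in your version):
```python
from datasets import DownloadConfig, load_dataset

# Delete each extracted file once it has been consumed, to avoid keeping
# ~200 GB of uncompressed JSON next to the Arrow cache.
dl_config = DownloadConfig(delete_extracted=True)
ds = load_dataset("json", data_files="data/*.json.gz", download_config=dl_config)
```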
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2604/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2604/timeline
null
completed
null
null
false
[ "Hi !\r\nIf we want something more general, we could either\r\n1. delete the extracted files after the arrow data generation automatically, or \r\n2. delete each extracted file during the arrow generation right after it has been closed.\r\n\r\nSolution 2 is better to save disk space during the arrow generation. Is ...
https://api.github.com/repos/huggingface/datasets/issues/5002
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5002/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5002/comments
https://api.github.com/repos/huggingface/datasets/issues/5002/events
https://github.com/huggingface/datasets/issues/5002
1,380,589,402
I_kwDODunzps5SSh9a
5,002
Dataset Viewer issue for loubnabnl/humaneval-x
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
2
2022-09-21T09:06:17Z
2022-09-21T11:49:49Z
2022-09-21T11:49:49Z
null
### Link https://huggingface.co/datasets/loubnabnl/humaneval-x/viewer/ ### Description The dataset has subsets but the viewer gets stuck in the default subset even when I select another one (the data loading of the subsets works fine) ### Owner Yes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5002/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5002/timeline
null
completed
null
null
false
[ "It's a bug! Thanks for reporting, I'm looking at it", "Fixed." ]
https://api.github.com/repos/huggingface/datasets/issues/5513
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5513/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5513/comments
https://api.github.com/repos/huggingface/datasets/issues/5513/events
https://github.com/huggingface/datasets/issues/5513
1,576,300,803
I_kwDODunzps5d9HED
5,513
Some functions use a param named `type`; shouldn't that be avoided since it's a Python reserved name?
[]
closed
false
null
4
2023-02-08T15:13:46Z
2023-07-24T16:02:18Z
2023-07-24T14:27:59Z
null
Hi @mariosasko, @lhoestq, or whoever reads this! :) After going through `ArrowDataset.set_format` I found out that the `type` param is actually named `type`, which, as you may already know, is a Python reserved name. Shouldn't that be renamed to `format_type` before 3.0.0 is released? Just wanted to get your input and, if applicable, tackle this issue myself! Thanks 🤗
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5513/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5513/timeline
null
completed
null
null
false
[ "Hi! Let's not do this - renaming it would be a breaking change, and going through the deprecation cycle is only worth it if it improves user experience.", "Hi @mariosasko, ok it makes sense. Anyway, don't you think it's worth it at some point to start a deprecation cycle e.g. `fs` in `load_from_disk`? It doesn't...
https://api.github.com/repos/huggingface/datasets/issues/6046
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6046/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6046/comments
https://api.github.com/repos/huggingface/datasets/issues/6046/events
https://github.com/huggingface/datasets/issues/6046
1,808,154,414
I_kwDODunzps5rxj8u
6,046
Support proxy and user-agent in fsspec calls
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "BDE59C", "default": fals...
open
false
null
0
2023-07-17T16:39:26Z
2023-07-17T16:40:37Z
null
null
Since we switched to the new HfFileSystem, we no longer apply the user's proxy and user-agent. Using the HTTP_PROXY and HTTPS_PROXY environment variables does work, though, since we use aiohttp to call the HF Hub. This can be implemented in `_prepare_single_hop_path_and_storage_options`, though ideally `HfFileSystem` itself could support passing at least the proxies.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6046/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6046/timeline
null
null
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/1495
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1495/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1495/comments
https://api.github.com/repos/huggingface/datasets/issues/1495/events
https://github.com/huggingface/datasets/pull/1495
763,025,562
MDExOlB1bGxSZXF1ZXN0NTM3NTE2ODE4
1,495
Opus DGT added
[]
closed
false
null
1
2020-12-11T23:05:09Z
2020-12-17T14:38:41Z
2020-12-17T14:38:41Z
null
Dataset: http://opus.nlpl.eu/DGT.php
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1495/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1495/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1495.diff", "html_url": "https://github.com/huggingface/datasets/pull/1495", "merged_at": "2020-12-17T14:38:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/1495.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1495" }
true
[ "merging since the CI is fixed on master" ]
https://api.github.com/repos/huggingface/datasets/issues/3911
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3911/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3911/comments
https://api.github.com/repos/huggingface/datasets/issues/3911/events
https://github.com/huggingface/datasets/pull/3911
1,168,652,374
PR_kwDODunzps40aQHz
3,911
Create README.md for CER metric
[]
closed
false
null
1
2022-03-14T16:54:51Z
2022-03-17T17:49:40Z
2022-03-17T17:45:54Z
null
Initial proposal for a CER metric card. cc @patrickvonplaten - wdyt this time around? :smile:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3911/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3911/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3911.diff", "html_url": "https://github.com/huggingface/datasets/pull/3911", "merged_at": "2022-03-17T17:45:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/3911.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3911" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/210
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/210/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/210/comments
https://api.github.com/repos/huggingface/datasets/issues/210/events
https://github.com/huggingface/datasets/pull/210
626,504,243
MDExOlB1bGxSZXF1ZXN0NDI0NDgyNDgz
210
fix xnli metric kwargs description
[]
closed
false
null
0
2020-05-28T13:21:44Z
2020-05-28T13:22:11Z
2020-05-28T13:22:10Z
null
The text was wrong as noticed in #202
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/210/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/210/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/210.diff", "html_url": "https://github.com/huggingface/datasets/pull/210", "merged_at": "2020-05-28T13:22:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/210.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/210" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2213
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2213/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2213/comments
https://api.github.com/repos/huggingface/datasets/issues/2213/events
https://github.com/huggingface/datasets/pull/2213
856,025,320
MDExOlB1bGxSZXF1ZXN0NjEzNjcwODk2
2,213
Fix lc_quad download checksum
[]
closed
false
null
0
2021-04-12T14:16:59Z
2021-04-14T22:04:54Z
2021-04-14T13:42:25Z
null
Fixes #2211
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2213/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2213/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2213.diff", "html_url": "https://github.com/huggingface/datasets/pull/2213", "merged_at": "2021-04-14T13:42:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/2213.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2213" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5405
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5405/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5405/comments
https://api.github.com/repos/huggingface/datasets/issues/5405/events
https://github.com/huggingface/datasets/issues/5405
1,517,879,386
I_kwDODunzps5aeQBa
5,405
size_in_bytes the same for all splits
[]
open
false
null
1
2023-01-03T20:25:48Z
2023-01-04T09:22:59Z
null
null
### Describe the bug Hi, it looks like whenever you pull a dataset and get size_in_bytes, it returns the same size for all splits (and that size is the combined size of all splits). It seems like this shouldn't be the intended behavior since it is misleading. Here's an example: ``` >>> from datasets import load_dataset >>> x = load_dataset("glue", "wnli") Found cached dataset glue (/Users/breakend/.cache/huggingface/datasets/glue/wnli/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1097.70it/s] >>> x["train"].size_in_bytes 186159 >>> x["validation"].size_in_bytes 186159 >>> x["test"].size_in_bytes 186159 >>> ``` ### Steps to reproduce the bug ``` >>> from datasets import load_dataset >>> x = load_dataset("glue", "wnli") >>> x["train"].size_in_bytes 186159 >>> x["validation"].size_in_bytes 186159 >>> x["test"].size_in_bytes 186159 ``` ### Expected behavior The expected behavior is that it should return the separate sizes for all splits. ### Environment info - `datasets` version: 2.7.1 - Platform: macOS-12.5-arm64-arm-64bit - Python version: 3.10.8 - PyArrow version: 10.0.1 - Pandas version: 1.5.2
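In the meantime, the per-split sizes can be read from the split metadata; a minimal sketch:
```python
from datasets import load_dataset

ds = load_dataset("glue", "wnli")
# The per-split numbers live in the split metadata rather than size_in_bytes:
for name, split_info in ds["train"].info.splits.items():
    print(name, split_info.num_examples, split_info.num_bytes)
```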
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5405/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5405/timeline
null
null
null
null
false
[ "Hi @Breakend,\r\n\r\nIndeed, the attribute `size_in_bytes` refers to the size of the entire dataset configuration, for all splits (size of downloaded files + Arrow files), not the specific split.\r\nThis is also the case for `download_size` (downloaded files) and `dataset_size` (Arrow files).\r\n\r\nThe size of th...
https://api.github.com/repos/huggingface/datasets/issues/416
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/416/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/416/comments
https://api.github.com/repos/huggingface/datasets/issues/416/events
https://github.com/huggingface/datasets/pull/416
661,635,393
MDExOlB1bGxSZXF1ZXN0NDUzMjg1NTM4
416
Fix xtreme panx directory
[]
closed
false
null
1
2020-07-20T10:09:17Z
2020-07-21T08:15:46Z
2020-07-21T08:15:44Z
null
Fix #412
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/416/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/416/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/416.diff", "html_url": "https://github.com/huggingface/datasets/pull/416", "merged_at": "2020-07-21T08:15:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/416.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/416" }
true
[ "great, I think I did not download the data the way you do, but yours is more reasonable." ]
https://api.github.com/repos/huggingface/datasets/issues/2701
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2701/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2701/comments
https://api.github.com/repos/huggingface/datasets/issues/2701/events
https://github.com/huggingface/datasets/pull/2701
950,422,403
MDExOlB1bGxSZXF1ZXN0Njk0OTcxMzM3
2,701
Fix download_mode docstrings
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
0
2021-07-22T08:30:25Z
2021-07-22T09:33:31Z
2021-07-22T09:33:31Z
null
Fix `download_mode` docstrings.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2701/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2701/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2701.diff", "html_url": "https://github.com/huggingface/datasets/pull/2701", "merged_at": "2021-07-22T09:33:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/2701.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2701" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5397
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5397/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5397/comments
https://api.github.com/repos/huggingface/datasets/issues/5397/events
https://github.com/huggingface/datasets/pull/5397
1,514,412,246
PR_kwDODunzps5GYirs
5,397
Unpin pydantic test dependency
[]
closed
false
null
2
2022-12-30T10:22:09Z
2022-12-30T10:53:11Z
2022-12-30T10:43:40Z
null
Once pydantic-1.10.3 has been yanked, we can unpin it: https://pypi.org/project/pydantic/1.10.3/ See reply by pydantic team https://github.com/pydantic/pydantic/issues/4885#issuecomment-1367819807 ``` v1.10.3 has been yanked. ``` in response to spacy request: https://github.com/pydantic/pydantic/issues/4885#issuecomment-1367810049 ``` On behalf of spacy-related packages: would it be possible for you to temporarily yank v1.10.3? To address this and be compatible with v1.10.4, we'd have to release new versions of a whole series of packages and nearly everyone (including me) is currently on vacation. Even if v1.10.4 is released with a fix, pip would still back off to v1.10.3 for spacy, etc. because of its current pins for typing_extensions. If it could instead back off to v1.10.2, we'd have a bit more breathing room to make the updates on our end. ``` Close #5398.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5397/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5397/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5397.diff", "html_url": "https://github.com/huggingface/datasets/pull/5397", "merged_at": "2022-12-30T10:43:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/5397.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5397" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
https://api.github.com/repos/huggingface/datasets/issues/3970
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3970/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3970/comments
https://api.github.com/repos/huggingface/datasets/issues/3970/events
https://github.com/huggingface/datasets/pull/3970
1,174,327,367
PR_kwDODunzps40sSfx
3,970
Apply index-filters on scores in get_nearest_examples and get_nearest…
[]
closed
false
null
0
2022-03-19T18:32:31Z
2022-03-19T18:38:12Z
2022-03-19T18:38:12Z
null
Updated search.py to resolve the issue mentioned in https://github.com/huggingface/datasets/issues/3961: applied index filters on scores in the get_nearest_examples and get_nearest_examples_batch methods of search.py.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3970/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3970/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3970.diff", "html_url": "https://github.com/huggingface/datasets/pull/3970", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3970.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3970" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5206
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5206/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5206/comments
https://api.github.com/repos/huggingface/datasets/issues/5206/events
https://github.com/huggingface/datasets/issues/5206
1,437,223,894
I_kwDODunzps5VqkvW
5,206
Use logging instead of printing to console
[]
closed
false
null
1
2022-11-05T23:48:02Z
2022-11-06T00:06:00Z
2022-11-06T00:05:59Z
null
### Describe the bug
Some logs ([here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L778), [here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L786), and [here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L830)) generated by the `DatasetBuilder` are printed to the console instead of being passed to the `datasets` logger.

### Steps to reproduce the bug
```python
>>> import datasets
>>> datasets.load_dataset("some-dataset")
Downloading and preparing dataset csv/data to <path>...
Downloading data files: 100%|██████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 7729.06it/s]
Extracting data files: 100%|████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 527.23it/s]
Dataset csv downloaded and prepared to <path>. Subsequent calls will reuse this data.
```

### Expected behavior
The logs should not be printed to the console directly but passed to the logger, so that the user can redirect them wherever they want.

### Environment info
- `datasets` version: 2.6.1
- Platform: macOS-13.0-x86_64-i386-64bit
- Python version: 3.9.15
- PyArrow version: 10.0.0
- Pandas version: 1.5.1
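For what it's worth, the messages that do go through the library's logging can already be controlled; a sketch (this should not affect the plain `print` calls flagged above, which bypass the logging module):
```python
import datasets

# Silence messages routed through the `datasets` logger...
datasets.logging.set_verbosity_error()
# ...and the tqdm progress bars.
datasets.disable_progress_bar()
```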
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5206/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5206/timeline
null
completed
null
null
false
[ "Actually upon closer inspection, it is documented in the code that this behavior is intentional, so I'll close this." ]
https://api.github.com/repos/huggingface/datasets/issues/3740
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3740/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3740/comments
https://api.github.com/repos/huggingface/datasets/issues/3740/events
https://github.com/huggingface/datasets/pull/3740
1,140,720,739
PR_kwDODunzps4y9XAP
3,740
Support streaming for pubmed
[]
closed
false
null
3
2022-02-17T00:18:22Z
2022-02-18T14:42:13Z
2022-02-18T14:42:13Z
null
This PR makes some minor changes to the `pubmed` dataset to allow for `streaming=True`. Fixes #3739.

Basically, I followed the C4 dataset, which works in streaming mode, as an example, and made the following changes:
* Change the URL prefix from `ftp://` to `https://`
* Explicitly `open` the filename and pass the XML contents to `etree.fromstring(xml_str)`

The GitHub diff tool makes it look like the changes are larger than they are, sorry about that. I tested locally and the `pubmed` dataset now works in both normal and streaming modes. There is some overhead at the start of each shard in streaming mode, as building the XML tree online is quite slow (each pubmed .xml.gz file is ~20 MB), but the overhead gets amortized over all the samples in the shard. On my laptop with a single CPU worker I am able to stream at about ~600 samples/s.
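The second change amounts to something like this (a simplified, hypothetical sketch; `open_fn` stands in for the streaming-aware `open` that `datasets` patches in):
```python
from xml.etree import ElementTree as ET

def parse_pubmed_shard(filepath, open_fn=open):
    """Read the (possibly remote) file explicitly, then build the tree
    from the contents instead of calling ET.parse(filepath) on a path."""
    with open_fn(filepath, "rb") as f:
        xml_str = f.read()
    return ET.fromstring(xml_str)
```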
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3740/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3740/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3740.diff", "html_url": "https://github.com/huggingface/datasets/pull/3740", "merged_at": "2022-02-18T14:42:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/3740.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3740" }
true
[ "@albertvillanova just FYI, since you were so helpful with the previous pubmed issue :) ", "IIRC streaming from FTP is not fully tested yet, so I'm fine with switching to HTTPS for now, as long as the download speed/availability is great", "@albertvillanova Thanks for pointing me to the `ET` module replacement....
https://api.github.com/repos/huggingface/datasets/issues/1217
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1217/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1217/comments
https://api.github.com/repos/huggingface/datasets/issues/1217/events
https://github.com/huggingface/datasets/pull/1217
758,008,321
MDExOlB1bGxSZXF1ZXN0NTMzMjU2MjU4
1,217
adding DataCommons fact checking
[]
closed
false
null
0
2020-12-06T19:56:12Z
2020-12-16T16:22:48Z
2020-12-16T16:22:48Z
null
Adding the data from https://datacommons.org/factcheck/

Had to cheat a bit with the dummy data, as the test doesn't recognize `.txt.gz`: I had to rename the uncompressed files with the `.gz` extension manually, without actually compressing them.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1217/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1217/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1217.diff", "html_url": "https://github.com/huggingface/datasets/pull/1217", "merged_at": "2020-12-16T16:22:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/1217.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1217" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1181
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1181/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1181/comments
https://api.github.com/repos/huggingface/datasets/issues/1181/events
https://github.com/huggingface/datasets/pull/1181
757,791,992
MDExOlB1bGxSZXF1ZXN0NTMzMTAwNjYz
1,181
added emotions detection in arabic dataset
[]
closed
false
null
3
2020-12-05T22:08:46Z
2020-12-21T09:53:51Z
2020-12-21T09:53:51Z
null
Dataset for emotion detection in Arabic text. More info: https://github.com/AmrMehasseb/Emotional-Tone
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1181/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1181/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1181.diff", "html_url": "https://github.com/huggingface/datasets/pull/1181", "merged_at": "2020-12-21T09:53:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/1181.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1181" }
true
[ "Hi @abdulelahsm did you manage to fix your issue ?\r\nFeel free to ping me if you have questions or if you're ready for a review", "@lhoestq fixed it! ready to merge. I hope haha", "merging since the CI is fixed on master" ]
https://api.github.com/repos/huggingface/datasets/issues/804
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/804/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/804/comments
https://api.github.com/repos/huggingface/datasets/issues/804/events
https://github.com/huggingface/datasets/issues/804
736,858,507
MDU6SXNzdWU3MzY4NTg1MDc=
804
Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa')
[]
closed
false
null
3
2020-11-05T11:38:01Z
2020-11-09T14:14:59Z
2020-11-09T14:14:58Z
null
# The issue

It's all in the title; the train and validation sets appear to be fine. Is there some kind of mapping to do, like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md)?

# How to reproduce
```py
from datasets import load_dataset
kilt_tasks = load_dataset("kilt_tasks")
trivia_qa = load_dataset('trivia_qa', 'unfiltered.nocontext')

# both in "kilt_tasks"
In [18]: any([output['answer'] for output in kilt_tasks['test_triviaqa']['output']])
Out[18]: False

# and "trivia_qa"
In [13]: all([answer['value'] == '<unk>' for answer in trivia_qa['test']['answer']])
Out[13]: True

# appears to be fine on the train and validation sets.
In [14]: all([answer['value'] == '<unk>' for answer in trivia_qa['train']['answer']])
Out[14]: False

In [15]: all([answer['value'] == '<unk>' for answer in trivia_qa['validation']['answer']])
Out[15]: False

In [16]: any([output['answer'] for output in kilt_tasks['train_triviaqa']['output']])
Out[16]: True

In [17]: any([output['answer'] for output in kilt_tasks['validation_triviaqa']['output']])
Out[17]: True
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/804/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/804/timeline
null
completed
null
null
false
[ "cc @yjernite is this expected ?", "Yes: TriviaQA has a private test set for the leaderboard [here](https://competitions.codalab.org/competitions/17208)\r\n\r\nFor the KILT training and validation portions, you need to link the examples from the TriviaQA dataset as detailed here:\r\nhttps://github.com/huggingface...
https://api.github.com/repos/huggingface/datasets/issues/3080
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3080/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3080/comments
https://api.github.com/repos/huggingface/datasets/issues/3080/events
https://github.com/huggingface/datasets/issues/3080
1,026,380,626
I_kwDODunzps49LVNS
3,080
Error related to timeout keyword argument
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
0
2021-10-14T13:10:58Z
2021-10-14T14:39:51Z
2021-10-14T14:39:51Z
null
## Describe the bug As reported by @patrickvonplaten, a TypeError is raised when trying to load a dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean") ``` ## Actual results ``` TypeError: dataset_info() got an unexpected keyword argument 'timeout' ```
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3080/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3080/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/5554
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5554/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5554/comments
https://api.github.com/repos/huggingface/datasets/issues/5554/events
https://github.com/huggingface/datasets/pull/5554
1,592,285,062
PR_kwDODunzps5KXhZh
5,554
Add resampy dep
[]
closed
false
null
5
2023-02-20T18:15:43Z
2023-02-21T12:46:10Z
2023-02-21T12:43:38Z
null
In librosa 0.10 they removed the `resampy` dependency and made it optional. However, it is necessary for resampling. I added it to the "audio" extra dependencies.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5554/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5554/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5554.diff", "html_url": "https://github.com/huggingface/datasets/pull/5554", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5554.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5554" }
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
https://api.github.com/repos/huggingface/datasets/issues/2531
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2531/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2531/comments
https://api.github.com/repos/huggingface/datasets/issues/2531/events
https://github.com/huggingface/datasets/pull/2531
927,017,924
MDExOlB1bGxSZXF1ZXN0Njc1MjM2MDYz
2,531
Fix dev version
[]
closed
false
null
0
2021-06-22T09:17:10Z
2021-06-22T09:47:10Z
2021-06-22T09:47:09Z
null
The dev version that ends in `.dev0` should be greater than the current version. However, it happens that `1.8.0 > 1.8.0.dev0`, for example. Therefore we need to use `1.8.1.dev0` in this case. I updated the dev version to use `1.8.1.dev0`, and I also added a comment about this in the release steps in setup.py.
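The ordering quirk is easy to verify with `packaging` (a quick sketch):
```python
from packaging.version import Version

assert Version("1.8.0.dev0") < Version("1.8.0")  # dev pre-releases sort below the release
assert Version("1.8.1.dev0") > Version("1.8.0")  # hence bumping the patch number
```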
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2531/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2531/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2531.diff", "html_url": "https://github.com/huggingface/datasets/pull/2531", "merged_at": "2021-06-22T09:47:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/2531.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2531" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1625
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1625/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1625/comments
https://api.github.com/repos/huggingface/datasets/issues/1625/events
https://github.com/huggingface/datasets/pull/1625
773,771,596
MDExOlB1bGxSZXF1ZXN0NTQ0Nzk4MDM1
1,625
Fixed bug in the shape property
[]
closed
false
null
0
2020-12-23T13:33:21Z
2021-01-02T23:22:52Z
2020-12-23T14:13:13Z
null
Fix for the bug reported in issue #1622: replaced `return tuple(self._indices.num_rows, self._data.num_columns)` with `return (self._indices.num_rows, self._data.num_columns)`.
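For context, the original expression fails because `tuple()` takes a single iterable argument; a quick sketch:
```python
rows, cols = 3, 2
try:
    shape = tuple(rows, cols)  # the buggy form: tuple() takes one iterable
except TypeError as err:
    print(err)  # tuple expected at most 1 argument, got 2
shape = (rows, cols)  # the fix: a plain tuple literal
print(shape)  # (3, 2)
```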
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1625/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1625/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1625.diff", "html_url": "https://github.com/huggingface/datasets/pull/1625", "merged_at": "2020-12-23T14:13:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/1625.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1625" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4876
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4876/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4876/comments
https://api.github.com/repos/huggingface/datasets/issues/4876/events
https://github.com/huggingface/datasets/issues/4876
1,348,202,678
I_kwDODunzps5QW_C2
4,876
Move DatasetInfo from `datasets_infos.json` to the YAML tags in `README.md`
[]
closed
false
null
15
2022-08-23T16:16:41Z
2022-10-03T09:11:13Z
2022-10-03T09:11:13Z
null
Currently there are two places to find metadata for datasets:
- datasets_infos.json, which contains **per dataset config**:
  - description
  - citation
  - license
  - splits and sizes
  - checksums of the data files
  - feature types
  - and more
- YAML tags, which contain:
  - license
  - language
  - train-eval-index
  - and more

It would be nice to have a single place instead. We can rely on the YAML tags more than the JSON file for consistency with models. And it would all be indexed by our back-end directly, which is nice to have.

One way would be to move everything to the YAML tags except the checksums (there can be tens of thousands of them). The description and citation are already in the dataset card, so we probably don't need them in the YAML tags; that would be redundant.

Here is an example for SQuAD:
```yaml
download_size: 35142551
dataset_size: 89789763
version: 1.0.0
splits:
- name: train
  num_examples: 87599
  num_bytes: 79317110
- name: validation
  num_examples: 10570
  num_bytes: 10472653
features:
- name: id
  dtype: string
- name: title
  dtype: string
- name: context
  dtype: string
- name: question
  dtype: string
- name: answers
  struct:
  - name: text
    list:
      dtype: string
  - name: answer_start
    list:
      dtype: int32
```
Since there is only one configuration for SQuAD, this structure is ok. For datasets with several configs, we can decide in a second step, but IMO it would be ok to have these fields per config using another syntax:
```yaml
configs:
- config: unlabeled
  splits:
  - name: train
    num_examples: 10000
  features:
  - name: text
    dtype: string
- config: labeled
  splits:
  - name: train
    num_examples: 100
  features:
  - name: text
    dtype: string
  - name: label
    dtype: ClassLabel
    names:
    - negative
    - positive
```
So in the end you could specify a YAML tag either at the top level (for all configs) or per config in the `configs` field. Alternatively, we could keep config-specific stuff in the `dataset_infos.json` as it is today.

Not sure yet what the best approach is here, but cc @julien-c @mariosasko @albertvillanova @polinaeterna for feedback :)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 4, "heart": 3, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 7, "url": "https://api.github.com/repos/huggingface/datasets/issues/4876/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4876/timeline
null
completed
null
null
false
[ "also @osanseviero @Pierrci @SBrandeis potentially", "Love this in principle 🚀 \r\n\r\nLet's keep in mind users might rely on `dataset_infos.json` already.\r\n\r\nI'm not convinced by the two-syntax solution, wouldn't it be simpler to have only one syntax with a `default` config for datasets with only one config...
https://api.github.com/repos/huggingface/datasets/issues/5237
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5237/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5237/comments
https://api.github.com/repos/huggingface/datasets/issues/5237/events
https://github.com/huggingface/datasets/pull/5237
1,448,202,491
PR_kwDODunzps5C2KGz
5,237
Encode path only for old versions of hfh
[]
closed
false
null
1
2022-11-14T14:46:57Z
2022-11-14T17:38:18Z
2022-11-14T17:35:59Z
null
The next version of `huggingface-hub`, 0.11, does encode the `path`, and we don't want to encode it twice.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5237/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5237/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5237.diff", "html_url": "https://github.com/huggingface/datasets/pull/5237", "merged_at": "2022-11-14T17:35:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/5237.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5237" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4649
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4649/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4649/comments
https://api.github.com/repos/huggingface/datasets/issues/4649/events
https://github.com/huggingface/datasets/issues/4649
1,296,673,712
I_kwDODunzps5NSauw
4,649
Add PAQ dataset
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
1
2022-07-07T01:29:42Z
2022-07-14T02:06:27Z
2022-07-14T02:06:27Z
null
## Adding a Dataset - **Name:** *PAQ* - **Description:** *This repository contains code and models to support the research paper PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them* - **Paper:** *https://arxiv.org/abs/2102.07033* - **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/PAQ_pairs.jsonl.gz* - **Motivation:** *Dataset for training and evaluating models of conversational response*
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4649/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4649/timeline
null
completed
null
null
false
[ "uploaded dataset [here](https://huggingface.co/datasets/embedding-data/PAQ_pairs)" ]
https://api.github.com/repos/huggingface/datasets/issues/3466
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3466/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3466/comments
https://api.github.com/repos/huggingface/datasets/issues/3466/events
https://github.com/huggingface/datasets/pull/3466
1,085,722,837
PR_kwDODunzps4wII3w
3,466
Add CRASS dataset
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
2
2021-12-21T11:17:22Z
2022-10-03T09:37:06Z
2022-10-03T09:37:06Z
null
Added the CRASS dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3466/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3466/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3466.diff", "html_url": "https://github.com/huggingface/datasets/pull/3466", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3466.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3466" }
true
[ "Hi Albert,\r\nThank you for your comments.\r\nI hope I have uploaded my local git repo to include the dummy files and style reworkings.\r\nAdded YAML in Readme as well.\r\n\r\nPlease check again.\r\n\r\nHope it works now :)", "Thanks for your contribution, @apergo-ai. \r\n\r\nWe are removing the dataset scripts ...
https://api.github.com/repos/huggingface/datasets/issues/152
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/152/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/152/comments
https://api.github.com/repos/huggingface/datasets/issues/152/events
https://github.com/huggingface/datasets/pull/152
619,971,900
MDExOlB1bGxSZXF1ZXN0NDE5MzA4OTE2
152
Add GLUE config name check
[]
closed
false
null
5
2020-05-18T07:23:43Z
2020-05-27T22:09:12Z
2020-05-27T22:09:12Z
null
Fixes #130 by adding a name check to the Glue class
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/152/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/152/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/152.diff", "html_url": "https://github.com/huggingface/datasets/pull/152", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/152.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/152" }
true
[ "If tests are being added, any guidance on where to add tests would be helpful!\r\n\r\nTagging @thomwolf for review", "Looks good to me. Is this compatible with the way we are doing tests right now @patrickvonplaten ?", "If the tests pass it should be fine :-) \r\n\r\n@Bharat123rox could you check whether the t...
https://api.github.com/repos/huggingface/datasets/issues/2028
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2028/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2028/comments
https://api.github.com/repos/huggingface/datasets/issues/2028/events
https://github.com/huggingface/datasets/pull/2028
828,721,393
MDExOlB1bGxSZXF1ZXN0NTkwNDk1NzEx
2,028
Adding PersiNLU reading-comprehension
[]
closed
false
null
3
2021-03-11T04:41:13Z
2021-03-15T09:39:57Z
2021-03-15T09:39:57Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2028/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2028/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2028.diff", "html_url": "https://github.com/huggingface/datasets/pull/2028", "merged_at": "2021-03-15T09:39:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/2028.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2028" }
true
[ "@lhoestq I think I have addressed all your comments. ", "Thanks! @lhoestq Let me know if you want me to address anything to get this merged. ", "It's all good thanks ;)\r\nmerging" ]
https://api.github.com/repos/huggingface/datasets/issues/3316
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3316/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3316/comments
https://api.github.com/repos/huggingface/datasets/issues/3316/events
https://github.com/huggingface/datasets/issues/3316
1,062,185,822
I_kwDODunzps4_T6te
3,316
Add RedCaps dataset
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc",...
closed
false
null
0
2021-11-24T09:23:02Z
2022-01-12T14:13:15Z
2022-01-12T14:13:15Z
null
## Adding a Dataset - **Name:** RedCaps - **Description:** Web-curated image-text data created by the people, for the people - **Paper:** https://arxiv.org/abs/2111.11431 - **Data:** https://redcaps.xyz/ - **Motivation:** Multimodal image-text dataset: 12M+ Image-text pairs Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Proposed by @patil-suraj
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3316/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3316/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/3697
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3697/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3697/comments
https://api.github.com/repos/huggingface/datasets/issues/3697/events
https://github.com/huggingface/datasets/pull/3697
1,129,795,724
PR_kwDODunzps4yXeXo
3,697
Add code-fill datasets for pretraining/finetuning/evaluating
[]
closed
false
null
1
2022-02-10T10:31:48Z
2022-07-06T15:19:58Z
2022-07-06T15:19:58Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3697/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3697/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3697.diff", "html_url": "https://github.com/huggingface/datasets/pull/3697", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3697.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3697" }
true
[ "Hi ! Thanks for adding this dataset :)\r\n\r\nIt looks like your PR contains many changes in files that are unrelated to your changes, I think it might come from running `make style` with an outdated version of `black`. Could you try opening a new PR that only contains your additions ? (or force push to this PR)" ...
https://api.github.com/repos/huggingface/datasets/issues/4251
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4251/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4251/comments
https://api.github.com/repos/huggingface/datasets/issues/4251/events
https://github.com/huggingface/datasets/pull/4251
1,219,116,354
PR_kwDODunzps4293dB
4,251
Metric card for the XTREME-S dataset
[]
closed
false
null
1
2022-04-28T18:32:19Z
2022-04-29T16:46:11Z
2022-04-29T16:38:46Z
null
Proposing a metric card for the XTREME-S dataset :hugs:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4251/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4251/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4251.diff", "html_url": "https://github.com/huggingface/datasets/pull/4251", "merged_at": "2022-04-29T16:38:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/4251.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4251" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4678
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4678/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4678/comments
https://api.github.com/repos/huggingface/datasets/issues/4678/events
https://github.com/huggingface/datasets/issues/4678
1,303,741,432
I_kwDODunzps5NtYP4
4,678
Can't pass streaming dataset to dataloader after take()
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
1
2022-07-13T17:34:18Z
2022-07-14T13:07:21Z
null
null
## Describe the bug I am trying to pass a streaming version of c4 to a dataloader, but it can't be passed after I call `dataset.take(n)`. Some functions such as `shuffle()` can be applied without breaking the dataloader, but `take()` cannot. ## Steps to reproduce the bug ```python import datasets import torch dset = datasets.load_dataset(path='c4', name='en', split="train", streaming=True) dset = dset.take(50_000) dset = dset.with_format("torch") num_workers = 8 batch_size = 512 loader = torch.utils.data.DataLoader(dataset=dset, batch_size=batch_size, num_workers=num_workers) for batch in loader: ... ``` ## Expected results No error thrown when iterating over the dataloader ## Actual results Original Traceback (most recent call last): File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop data = fetcher.fetch(index) File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch data.append(next(self.dataset_iter)) File "/root/.local/lib/python3.9/site-packages/datasets/formatting/dataset_wrappers/torch_iterable_dataset.py", line 48, in __iter__ for key, example in self._iter_shard(shard_idx): File "/root/.local/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 586, in _iter_shard yield from ex_iterable.shard_data_sources(shard_idx) File "/root/.local/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 60, in shard_data_sources raise NotImplementedError(f"{type(self)} doesn't implement shard_data_sources yet") NotImplementedError: <class 'datasets.iterable_dataset.TakeExamplesIterable'> doesn't implement shard_data_sources yet ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.31 - Python version: 3.9.13 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4678/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4678/timeline
null
null
null
null
false
[ "Hi! Calling `take` on an iterable/streamable dataset makes it not possible to shard the dataset, which in turn disables multi-process loading (attempts to split the workload over the shards), so to go past this limitation, you can either use single-process loading in `DataLoader` (`num_workers=None`) or fetch the ...
https://api.github.com/repos/huggingface/datasets/issues/4518
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4518/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4518/comments
https://api.github.com/repos/huggingface/datasets/issues/4518/events
https://github.com/huggingface/datasets/pull/4518
1,274,010,628
PR_kwDODunzps45zMnB
4,518
Patch tests for hfh v0.8.0
[]
closed
false
null
1
2022-06-16T19:45:32Z
2022-06-17T16:15:57Z
2022-06-17T16:06:07Z
null
This PR patches testing utilities that would otherwise fail with hfh v0.8.0.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4518/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4518/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4518.diff", "html_url": "https://github.com/huggingface/datasets/pull/4518", "merged_at": "2022-06-17T16:06:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/4518.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4518" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/5654
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5654/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5654/comments
https://api.github.com/repos/huggingface/datasets/issues/5654/events
https://github.com/huggingface/datasets/issues/5654
1,633,523,705
I_kwDODunzps5hXZf5
5,654
Offset overflow when executing Dataset.map
[]
open
false
null
2
2023-03-21T09:33:27Z
2023-03-21T10:32:07Z
null
null
### Describe the bug Hi, I'm trying to use the `.map` method to cache multiple random crops from the image to speed up data processing during training, as the image size is too big. The map function executes all iterations, and then returns the following error: ```bash Traceback (most recent call last): File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3353, in _map_single writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 582, in finalize self.write_examples_on_file() File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 446, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 555, in write_batch self.write_table(pa_table, writer_batch_size) File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 567, in write_table pa_table = pa_table.combine_chunks() File "pyarrow/table.pxi", line 3315, in pyarrow.lib.Table.combine_chunks File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays ``` Here is the minimal code (`/home/datasets/DIV2K_train_HR` is just a folder of images that can be replaced by any appropriate one): ### Steps to reproduce the bug ```python from glob import glob import torch from datasets import Dataset, Image from torchvision.transforms import PILToTensor, RandomCrop file_paths = glob("/home/datasets/DIV2K_train_HR/*") to_tensor = PILToTensor() crop_transf = RandomCrop(size=256) def prepare_data(example): tensor = to_tensor(example["image"].convert("RGB")) return {"hr": torch.stack([crop_transf(tensor) for _ in range(25)])} train_data = Dataset.from_dict({"image": file_paths}).cast_column("image", Image()) train_data = train_data.map( prepare_data, cache_file_name="/home/datasets/DIV2K_train_HR_crops.tmp", desc="Caching multiple random crops of image", remove_columns="image", ) print(train_data[0].keys(), train_data[0]["hr"].shape) ``` ### Expected behavior Cached file is stored at `"/home/datasets/DIV2K_train_HR_crops.tmp"`, output is `dict_keys(['hr']) torch.Size([25, 3, 256, 256])` ### Environment info - `datasets` version: 2.10.1 - Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.10 - Python version: 3.8.16 - PyArrow version: 11.0.0 - Pandas version: 1.5.3 - Pytorch version: 2.0.0+cu117 - torchvision version: 0.15.1+cu117
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5654/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5654/timeline
null
null
null
null
false
[ "Upd. the above code works if we replace `25` with `1`, but the result value at key \"hr\" is not a tensor but a list of lists of lists of uint8.\r\n\r\nAdding `train_data.set_format(\"torch\")` after map fixes this, but the original issue remains\r\n\r\n", "As a workaround, one can replace\r\n`return {\"hr\": to...
https://api.github.com/repos/huggingface/datasets/issues/1546
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1546/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1546/comments
https://api.github.com/repos/huggingface/datasets/issues/1546/events
https://github.com/huggingface/datasets/pull/1546
765,559,923
MDExOlB1bGxSZXF1ZXN0NTM4OTkwMjgw
1,546
Add persian ner dataset
[]
closed
false
null
3
2020-12-13T17:45:48Z
2020-12-23T09:53:03Z
2020-12-23T09:53:03Z
null
Adding the following dataset: https://github.com/HaniehP/PersianNER
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1546/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1546/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1546.diff", "html_url": "https://github.com/huggingface/datasets/pull/1546", "merged_at": "2020-12-23T09:53:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/1546.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1546" }
true
[ "HI @SBrandeis. Thanks for all the comments - very helpful. I realised that the tests had failed and had been trying to figure out what was causing them to do so. All the tests pass when I run the load_real_dataset test however when I run `RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_...
https://api.github.com/repos/huggingface/datasets/issues/1059
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1059/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1059/comments
https://api.github.com/repos/huggingface/datasets/issues/1059/events
https://github.com/huggingface/datasets/pull/1059
756,348,623
MDExOlB1bGxSZXF1ZXN0NTMxOTA3ODYy
1,059
Add TLC
[]
closed
false
null
3
2020-12-03T16:23:06Z
2020-12-04T11:15:33Z
2020-12-04T11:15:33Z
null
Added TLC dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1059/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1059/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1059.diff", "html_url": "https://github.com/huggingface/datasets/pull/1059", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1059.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1059" }
true
[ "I have reduced the size of the dummy file and added README sections as you suggested. ", "I have a little issue to run the test. It seems there is no failed case in my machine. ", "Thanks !\r\nIt looks like the PR includes changes to many other files than the ones of `tlc`, can you create another branch and an...
https://api.github.com/repos/huggingface/datasets/issues/2186
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2186/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2186/comments
https://api.github.com/repos/huggingface/datasets/issues/2186/events
https://github.com/huggingface/datasets/pull/2186
852,840,819
MDExOlB1bGxSZXF1ZXN0NjExMDMxNzE0
2,186
GEM: new challenge sets
[]
closed
false
null
1
2021-04-07T21:39:07Z
2021-04-07T21:56:35Z
2021-04-07T21:56:35Z
null
This PR updates the GEM dataset to: - remove extraneous fields in WikiAuto after https://github.com/huggingface/datasets/pull/2171 fixed the source - add context and services to Schema Guided Dialog - Add new or update challenge sets for MLSUM ES and DE, XSUM, and SGD
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2186/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2186/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2186.diff", "html_url": "https://github.com/huggingface/datasets/pull/2186", "merged_at": "2021-04-07T21:56:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/2186.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2186" }
true
[ "cc @sebastiangehrmann" ]
https://api.github.com/repos/huggingface/datasets/issues/4904
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4904/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4904/comments
https://api.github.com/repos/huggingface/datasets/issues/4904/events
https://github.com/huggingface/datasets/pull/4904
1,353,002,837
PR_kwDODunzps4959Ad
4,904
[LibriSpeech] Fix dev split local_extracted_archive for 'all' config
[]
closed
false
null
2
2022-08-27T10:04:57Z
2022-08-30T10:06:21Z
2022-08-30T10:03:25Z
null
We define the keys for the `_DL_URLS` of the dev split as `dev.clean` and `dev.other`: https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L60-L61 These keys get forwarded to the `dl_manager` and thus the `local_extracted_archive`. However, when calling `SplitGenerator` for the dev sets, we query the `local_extracted_archive` keys `validation.clean` and `validation.other`: https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L212 https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L219 The consequence of this is that the `local_extracted_archive` arg passed to `_generate_examples` is always `None`, as the keys `validation.clean` and `validation.other` do not exist in the `local_extracted_archive`. When defining the `audio_file` in `_generate_examples`, since `local_extracted_archive` is always `None`, we always omit the `local_extracted_archive` path from the `audio_file` path, **even** in non-streaming mode: https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L259-L263 Thus, `audio_file` will only ever be the streaming path (`audio_file`, not `os.path.join(local_extracted_archive, audio_file)`). This PR fixes the `.get()` keys for the `local_extracted_archive` for the dev splits.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4904/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4904/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4904.diff", "html_url": "https://github.com/huggingface/datasets/pull/4904", "merged_at": "2022-08-30T10:03:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/4904.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4904" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "This PR fixes a bug introduced in:\r\n- #4184" ]
https://api.github.com/repos/huggingface/datasets/issues/2471
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2471/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2471/comments
https://api.github.com/repos/huggingface/datasets/issues/2471/events
https://github.com/huggingface/datasets/issues/2471
917,067,165
MDU6SXNzdWU5MTcwNjcxNjU=
2,471
Fix PermissionError on Windows when using tqdm >=4.50.0
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "closed_at": "2021-07-09T05:50:07Z", "closed_issues": 12, "created_at": "2021-05-31T16:13:06Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-07-08T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/5", "id": 6808903, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels", "node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==", "number": 5, "open_issues": 0, "state": "closed", "title": "1.9", "updated_at": "2021-07-12T14:12:00Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/5" }
0
2021-06-10T08:31:49Z
2021-06-11T15:11:50Z
2021-06-11T15:11:50Z
null
See: https://app.circleci.com/pipelines/github/huggingface/datasets/235/workflows/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369/jobs/1111 ``` PermissionError: [WinError 32] The process cannot access the file because it is being used by another process ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2471/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2471/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/3837
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3837/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3837/comments
https://api.github.com/repos/huggingface/datasets/issues/3837/events
https://github.com/huggingface/datasets/pull/3837
1,161,109,031
PR_kwDODunzps40BwE1
3,837
Release: 1.18.4
[]
closed
false
null
0
2022-03-07T09:13:29Z
2022-03-07T11:07:35Z
2022-03-07T11:07:02Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3837/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3837/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3837.diff", "html_url": "https://github.com/huggingface/datasets/pull/3837", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3837.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3837" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/616
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/616/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/616/comments
https://api.github.com/repos/huggingface/datasets/issues/616/events
https://github.com/huggingface/datasets/issues/616
699,462,293
MDU6SXNzdWU2OTk0NjIyOTM=
616
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
[]
open
false
null
14
2020-09-11T15:39:16Z
2021-07-22T21:12:21Z
null
null
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this strange UserWarning without a stack trace: > Set __getitem__(key) output type to torch for ['input_ids', 'sembedding'] columns (when key is int or slice) and don't output other (un-formatted) columns. > C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\datasets\arrow_dataset.py:835: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:141.) > return torch.tensor(x, **format_kwargs) The first one might not be related to the warning, but it is odd that it is shown, too. It is unclear whether that is something that I should do or something that the program is doing at that moment. Snippet: ``` dataset = Dataset.from_dict(torch.load("data/dummy.pt.pt")) print(dataset) tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") keys_to_retain = {"input_ids", "sembedding"} dataset = dataset.map(lambda example: tokenizer(example["text"], padding='max_length'), batched=True) dataset.remove_columns_(set(dataset.column_names) - keys_to_retain) dataset.set_format(type="torch", columns=["input_ids", "sembedding"]) dataloader = torch.utils.data.DataLoader(dataset, batch_size=2) print(next(iter(dataloader))) ``` PS: the input type for `remove_columns_` should probably be an Iterable rather than just a List.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 4, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/616/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/616/timeline
null
null
null
null
false
[ "I have the same issue", "Same issue here when Trying to load a dataset from disk.", "I am also experiencing this issue, and don't know if it's affecting my training.", "Same here. I hope the dataset is not being modified in-place.", "I think the only way to avoid this warning would be to do a copy of the n...
https://api.github.com/repos/huggingface/datasets/issues/2016
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2016/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2016/comments
https://api.github.com/repos/huggingface/datasets/issues/2016/events
https://github.com/huggingface/datasets/pull/2016
825,965,493
MDExOlB1bGxSZXF1ZXN0NTg4MDA5NjEz
2,016
Not all languages have 2-digit codes.
[]
closed
false
null
0
2021-03-09T13:53:39Z
2021-03-11T18:01:03Z
2021-03-11T18:01:03Z
null
.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2016/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2016/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2016.diff", "html_url": "https://github.com/huggingface/datasets/pull/2016", "merged_at": "2021-03-11T18:01:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/2016.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2016" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/3605
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3605/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3605/comments
https://api.github.com/repos/huggingface/datasets/issues/3605/events
https://github.com/huggingface/datasets/pull/3605
1,108,738,561
PR_kwDODunzps4xS9rX
3,605
Adding Turkic X-WMT evaluation set for machine translation
[]
closed
false
null
5
2022-01-20T01:40:29Z
2022-01-31T09:50:57Z
2022-01-31T09:50:57Z
null
This dataset is a human-translated evaluation set for MT, crowdsourced and provided by the [Turkic Interlingua](https://turkic-interlingua.org) community. It contains eval sets for 8 Turkic languages covering 88 language directions. Languages being covered are: Azerbaijani (az) Bashkir (ba) English (en) Karakalpak (kaa) Kazakh (kk) Kirghiz (ky) Russian (ru) Turkish (tr) Sakha (sah) Uzbek (uz) More info about the corpus is here: [https://github.com/turkic-interlingua/til-mt/tree/master/xwmt](https://github.com/turkic-interlingua/til-mt/tree/master/xwmt) A paper describing the test set is here: [https://arxiv.org/abs/2109.04593](https://arxiv.org/abs/2109.04593)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3605/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3605/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3605.diff", "html_url": "https://github.com/huggingface/datasets/pull/3605", "merged_at": "2022-01-31T09:50:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/3605.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3605" }
true
[ "hi! Thank you for all the comments! I believe I addressed them all. Let me know if there is anything else", "Hi there! I was wondering if there is anything else to change before this can be merged", "@lhoestq Hi! Just a gentle reminder about the steps to merge this one! ", "Thanks for the heads up ! I think ...
https://api.github.com/repos/huggingface/datasets/issues/1499
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1499/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1499/comments
https://api.github.com/repos/huggingface/datasets/issues/1499/events
https://github.com/huggingface/datasets/pull/1499
763,464,693
MDExOlB1bGxSZXF1ZXN0NTM3OTIyNjA3
1,499
update the dataset id_newspapers_2018
[]
closed
false
null
0
2020-12-12T08:47:12Z
2020-12-14T15:28:07Z
2020-12-14T15:28:07Z
null
Hi, I need to update the link to the dataset. The link in the previous PR was to a small test dataset. Thanks
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1499/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1499/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1499.diff", "html_url": "https://github.com/huggingface/datasets/pull/1499", "merged_at": "2020-12-14T15:28:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/1499.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1499" }
true
[]