| Column | Type | Min | Max |
|---|---|---|---|
| id | int64 | 599M | 3.48B |
| number | int64 | 1 | 7.8k |
| title | string (length) | 1 | 290 |
| state | string (2 classes) | | |
| comments | list (length) | 0 | 30 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-10-05 06:37:50 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-10-05 10:32:43 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-10-01 13:56:03 |
| body | string (length) | 0 | 228k |
| user | string (length) | 3 | 26 |
| html_url | string (length) | 46 | 51 |
| pull_request | dict | | |
| is_pull_request | bool (2 classes) | | |
987,676,420
2,869
TypeError: 'NoneType' object is not callable
closed
[ "Hi, @Chenfei-Kang.\r\n\r\nI'm sorry, but I'm not able to reproduce your bug:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"glue\", 'cola')\r\nds\r\n```\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 8551\r\n ...
2021-09-03T11:27:39
2025-02-19T09:57:34
2021-09-08T09:24:55
## Describe the bug TypeError: 'NoneType' object is not callable ## Steps to reproduce the bug ```python from datasets import load_dataset, load_metric dataset = datasets.load_dataset("glue", 'cola') ``` ## Expected results A clear and concise description of the expected results. ## Actual results Speci...
Chenfei-Kang
https://github.com/huggingface/datasets/issues/2869
null
false
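A minimal sketch for readers who want to re-run the report in issue 2869 above (assuming a recent `datasets` release, under which the maintainers could not reproduce the error):

```python
from datasets import load_dataset

# Same call as in the issue body; the maintainers' comment shows it
# returning a DatasetDict with train/validation/test splits.
ds = load_dataset("glue", "cola")
print(ds)
print(ds["train"].features)
```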
987,139,146
2,868
Add Common Objects in 3D (CO3D)
open
[]
2021-09-02T20:36:12
2024-01-17T12:03:59
null
## Adding a Dataset - **Name:** *Common Objects in 3D (CO3D)* - **Description:** *See blog post [here](https://ai.facebook.com/blog/common-objects-in-3d-dataset-for-3d-reconstruction)* - **Paper:** *[link to paper](https://arxiv.org/abs/2109.00512)* - **Data:** *[link to data](https://ai.facebook.com/datasets/co3d-...
nateraw
https://github.com/huggingface/datasets/issues/2868
null
false
986,971,224
2,867
Add CaSiNo dataset
closed
[ "Hi @lhoestq \r\n\r\nJust a request to look at the dataset. Please let me know if any changes are necessary before merging it into the repo. Thank you.", "Hey @lhoestq \r\n\r\nThanks for merging it. One question: I still cannot find the dataset on https://huggingface.co/datasets. Does it take some time or did I ...
2021-09-02T17:06:23
2021-09-16T15:12:54
2021-09-16T09:23:44
Hi. I request you to add our dataset to the repository. This data was recently published at NAACL 2021: https://aclanthology.org/2021.naacl-main.254.pdf
kushalchawla
https://github.com/huggingface/datasets/pull/2867
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2867", "html_url": "https://github.com/huggingface/datasets/pull/2867", "diff_url": "https://github.com/huggingface/datasets/pull/2867.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2867.patch", "merged_at": "2021-09-16T09:23...
true
986,706,676
2,866
"counter" dataset raises an error in normal mode, but not in streaming mode
closed
[ "Hi @severo, thanks for reporting.\r\n\r\nJust note that currently not all canonical datasets support streaming mode: this is one case!\r\n\r\nAll datasets that use `pathlib` joins (using `/`) instead of `os.path.join` (as in this dataset) do not support streaming mode yet.", "OK. Do you think it's possible to de...
2021-09-02T13:10:53
2021-10-14T09:24:09
2021-10-14T09:24:09
## Describe the bug `counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode. ## Steps to reproduce the bug ```python >>> import datasets as ds >>> a = ds.load_dataset('counter', split="train", streaming=False) Using custom data configuration default Dow...
severo
https://github.com/huggingface/datasets/issues/2866
null
false
986,460,698
2,865
Add MultiEURLEX dataset
closed
[ "Hi @lhoestq, we have this new cool multilingual dataset coming at EMNLP 2021. It would be really nice if we could have it in Hugging Face asap. Thanks! ", "Hi @lhoestq, I adopted most of your suggestions:\r\n\r\n- Dummy data files reduced, including the 2 smallest documents per subset JSONL.\r\n- README was upda...
2021-09-02T09:42:24
2021-09-10T11:50:06
2021-09-10T11:50:06
**Add new MultiEURLEX Dataset** MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is mult...
iliaschalkidis
https://github.com/huggingface/datasets/pull/2865
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2865", "html_url": "https://github.com/huggingface/datasets/pull/2865", "diff_url": "https://github.com/huggingface/datasets/pull/2865.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2865.patch", "merged_at": "2021-09-10T11:50...
true
986,159,438
2,864
Fix data URL in ToTTo dataset
closed
[]
2021-09-02T05:25:08
2021-09-02T06:47:40
2021-09-02T06:47:40
Data source host changed their data URL: google-research-datasets/ToTTo@cebeb43. Fix #2860.
albertvillanova
https://github.com/huggingface/datasets/pull/2864
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2864", "html_url": "https://github.com/huggingface/datasets/pull/2864", "diff_url": "https://github.com/huggingface/datasets/pull/2864.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2864.patch", "merged_at": "2021-09-02T06:47...
true
986,156,755
2,863
Update dataset URL
closed
[ "Superseded by PR #2864.\r\n\r\n@mrm8488 next time you would like to work on an issue, you can first self-assign it to you (by writing `#self-assign` in a comment on the issue). That way, other people can see you are already working on it and there are not multiple people working on the same issue. 😉 " ]
2021-09-02T05:22:18
2021-09-02T08:10:50
2021-09-02T08:10:50
null
mrm8488
https://github.com/huggingface/datasets/pull/2863
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2863", "html_url": "https://github.com/huggingface/datasets/pull/2863", "diff_url": "https://github.com/huggingface/datasets/pull/2863.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2863.patch", "merged_at": null }
true
985,081,871
2,861
fix: 🐛 be more specific when catching exceptions
closed
[ "To give more context: after our discussion, if I understood properly, you are trying to fix a call to `datasets` that takes 15 minutes: https://github.com/huggingface/datasets-preview-backend/issues/17 Is this right?\r\n\r\n", "Yes, that's it. And to do that I'm trying to use https://pypi.org/project/stopit/, wh...
2021-09-01T12:18:12
2021-09-02T09:53:36
2021-09-02T09:52:03
The same specific exception is caught in other parts of the same function.
severo
https://github.com/huggingface/datasets/pull/2861
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2861", "html_url": "https://github.com/huggingface/datasets/pull/2861", "diff_url": "https://github.com/huggingface/datasets/pull/2861.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2861.patch", "merged_at": null }
true
985,013,339
2,860
Cannot download TOTTO dataset
closed
[ "Hola @mrm8488, thanks for reporting.\r\n\r\nApparently, the data source host changed their URL one week ago: https://github.com/google-research-datasets/ToTTo/commit/cebeb430ec2a97747e704d16a9354f7d9073ff8f\r\n\r\nI'm fixing it." ]
2021-09-01T11:04:10
2021-09-02T06:47:40
2021-09-02T06:47:40
Error: Couldn't find file at https://storage.googleapis.com/totto/totto_data.zip `datasets version: 1.11.0` # How to reproduce: ```py from datasets import load_dataset dataset = load_dataset('totto') ```
mrm8488
https://github.com/huggingface/datasets/issues/2860
null
false
984,324,500
2,859
Loading allenai/c4 in streaming mode does too many HEAD requests
closed
[ "https://github.com/huggingface/datasets/blob/6c766f9115d686182d76b1b937cb27e099c45d68/src/datasets/builder.py#L179-L186", "Thanks a lot!!!" ]
2021-08-31T21:11:04
2021-10-12T07:35:52
2021-10-11T11:05:51
This does 60,000+ HEAD requests to get all the ETags of all the data files: ```python from datasets import load_dataset load_dataset("allenai/c4", streaming=True) ``` It makes loading the dataset completely impractical. The ETags are used to compute the config id (it must depend on the data files being used). ...
lhoestq
https://github.com/huggingface/datasets/issues/2859
null
false
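For context on issue 2859 above, a hedged sketch of the streaming call in question; the `"en"` config name is an assumption here (the issue calls `load_dataset("allenai/c4", streaming=True)` directly). Before the fix, resolving the ETag of every data file meant 60,000+ HEAD requests.

```python
from datasets import load_dataset

# Streaming load from the issue; iterating fetches data lazily, but config-id
# computation used to resolve every file's ETag up front.
c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)
print(next(iter(c4)))
```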
984,145,568
2,858
Fix s3fs version in CI
closed
[]
2021-08-31T18:05:43
2021-09-06T13:33:35
2021-08-31T21:29:51
The latest s3fs version has new constraints on aiobotocore, and therefore on boto3 and botocore. This PR changes the constraints to avoid the new conflicts. In particular, it pins the version of s3fs.
lhoestq
https://github.com/huggingface/datasets/pull/2858
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2858", "html_url": "https://github.com/huggingface/datasets/pull/2858", "diff_url": "https://github.com/huggingface/datasets/pull/2858.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2858.patch", "merged_at": "2021-08-31T21:29...
true
984,093,938
2,857
Update: Openwebtext - update size
closed
[ "merging since the CI error in unrelated to this PR and fixed on master" ]
2021-08-31T17:11:03
2022-02-15T10:38:03
2021-09-07T09:44:32
Update the size of the Openwebtext dataset. I also regenerated the dataset_infos.json, but the data file checksum didn't change, and neither did the number of examples (8013769 examples). Close #2839, close #726.
lhoestq
https://github.com/huggingface/datasets/pull/2857
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2857", "html_url": "https://github.com/huggingface/datasets/pull/2857", "diff_url": "https://github.com/huggingface/datasets/pull/2857.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2857.patch", "merged_at": "2021-09-07T09:44...
true
983,876,734
2,856
fix: 🐛 remove URL's query string only if it's ?dl=1
closed
[]
2021-08-31T13:40:07
2021-08-31T14:22:12
2021-08-31T14:22:12
A lot of URLs use query strings, for example http://opus.nlpl.eu/download.php?f=Bianet/v1/moses/en-ku.txt.zip, so we must not remove them when trying to detect the protocol. We thus remove the query string only when it is ?dl=1, which occurs on Dropbox and dl.orangedox.com. Also: add unit tests. See ht...
severo
https://github.com/huggingface/datasets/pull/2856
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2856", "html_url": "https://github.com/huggingface/datasets/pull/2856", "diff_url": "https://github.com/huggingface/datasets/pull/2856.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2856.patch", "merged_at": "2021-08-31T14:22...
true
983,858,229
2,855
Fix windows CI CondaError
closed
[]
2021-08-31T13:22:02
2021-08-31T13:35:34
2021-08-31T13:35:33
From this thread: https://github.com/conda/conda/issues/6057 We can fix the conda error ``` CondaError: Cannot link a source that does not exist. C:\Users\...\Anaconda3\Scripts\conda.exe ``` by doing ```bash conda update conda ``` before doing any install in the windows CI
lhoestq
https://github.com/huggingface/datasets/pull/2855
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2855", "html_url": "https://github.com/huggingface/datasets/pull/2855", "diff_url": "https://github.com/huggingface/datasets/pull/2855.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2855.patch", "merged_at": "2021-08-31T13:35...
true
983,726,084
2,854
Fix caching when moving script
closed
[ "Merging since the CI failure is unrelated to this PR" ]
2021-08-31T10:58:35
2021-08-31T13:13:36
2021-08-31T13:13:36
When caching the result of a `map` function, the hash that is computed depends on many properties of this function, such as all the python objects it uses, its code and also the location of this code. Using the full path of the python script for the location of the code makes the hash change if a script like `run_ml...
lhoestq
https://github.com/huggingface/datasets/pull/2854
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2854", "html_url": "https://github.com/huggingface/datasets/pull/2854", "diff_url": "https://github.com/huggingface/datasets/pull/2854.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2854.patch", "merged_at": "2021-08-31T13:13...
true
983,692,026
2,853
Add AMI dataset
closed
[ "Hey @cahya-wirawan, \r\n\r\nI played around with the dataset a bit and it looks already very good to me! That's exactly how it should be constructed :-) I can help you a bit with defining the config, etc... on Monday!", "@lhoestq - I think the dataset is ready to be merged :-) \r\n\r\nAt the moment, I don't real...
2021-08-31T10:19:01
2021-09-29T09:19:19
2021-09-29T09:19:19
This is an initial commit for AMI dataset
cahya-wirawan
https://github.com/huggingface/datasets/pull/2853
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2853", "html_url": "https://github.com/huggingface/datasets/pull/2853", "diff_url": "https://github.com/huggingface/datasets/pull/2853.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2853.patch", "merged_at": "2021-09-29T09:19...
true
983,609,352
2,852
Fix: linnaeus - fix url
closed
[ "Merging since the CI error is unrelated this this PR" ]
2021-08-31T08:51:13
2021-08-31T13:12:10
2021-08-31T13:12:09
The url was causing a `ConnectionError` because of the "/" at the end Close https://github.com/huggingface/datasets/issues/2821
lhoestq
https://github.com/huggingface/datasets/pull/2852
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2852", "html_url": "https://github.com/huggingface/datasets/pull/2852", "diff_url": "https://github.com/huggingface/datasets/pull/2852.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2852.patch", "merged_at": "2021-08-31T13:12...
true
982,789,593
2,851
Update `column_names` showed as `:func:` in exploring.rst
closed
[]
2021-08-30T13:21:46
2021-09-01T08:42:11
2021-08-31T14:45:46
Hi, one mention of `column_names` in exploring.rst was showing it as `:func:` instead of `:attr:`.
ClementRomac
https://github.com/huggingface/datasets/pull/2851
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2851", "html_url": "https://github.com/huggingface/datasets/pull/2851", "diff_url": "https://github.com/huggingface/datasets/pull/2851.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2851.patch", "merged_at": "2021-08-31T14:45...
true
982,654,644
2,850
Wound segmentation datasets
open
[]
2021-08-30T10:44:32
2021-12-08T12:02:00
null
## Adding a Dataset - **Name:** Wound segmentation datasets - **Description:** annotated wound image dataset - **Paper:** https://www.nature.com/articles/s41598-020-78799-w - **Data:** https://github.com/uwm-bigdata/wound-segmentation - **Motivation:** Interesting simple image dataset, useful for segmentation, wi...
osanseviero
https://github.com/huggingface/datasets/issues/2850
null
false
982,631,420
2,849
Add Open Catalyst Project Dataset
open
[]
2021-08-30T10:14:39
2021-08-30T10:14:39
null
## Adding a Dataset - **Name:** Open Catalyst 2020 (OC20) Dataset - **Website:** https://opencatalystproject.org/ - **Data:** https://github.com/Open-Catalyst-Project/ocp/blob/master/DATASET.md Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATAS...
osanseviero
https://github.com/huggingface/datasets/issues/2849
null
false
981,953,908
2,848
Update README.md
closed
[ "Merging since the CI error is unrelated to this PR and fixed on master" ]
2021-08-28T23:58:26
2021-09-07T09:40:32
2021-09-07T09:40:32
Changed 'Tain' to 'Train'.
odellus
https://github.com/huggingface/datasets/pull/2848
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2848", "html_url": "https://github.com/huggingface/datasets/pull/2848", "diff_url": "https://github.com/huggingface/datasets/pull/2848.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2848.patch", "merged_at": "2021-09-07T09:40...
true
981,589,693
2,847
fix regex to accept negative timezone
closed
[]
2021-08-27T20:54:05
2021-09-13T20:39:50
2021-09-07T09:34:23
fix #2846
jadermcs
https://github.com/huggingface/datasets/pull/2847
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2847", "html_url": "https://github.com/huggingface/datasets/pull/2847", "diff_url": "https://github.com/huggingface/datasets/pull/2847.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2847.patch", "merged_at": "2021-09-07T09:34...
true
981,587,590
2,846
Negative timezone
closed
[ "Fixed by #2847." ]
2021-08-27T20:50:33
2021-09-10T11:51:07
2021-09-10T11:51:07
## Describe the bug The load_dataset method does not accept a parquet file with a negative timezone, as it has the following regex: ``` "^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+:]*)$" ``` So a valid timestamp ```timestamp[us, tz=-03:00]``` returns an error when loading parquet files. ## Steps to reproduce the bug ```py...
jadermcs
https://github.com/huggingface/datasets/issues/2846
null
false
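An illustrative sketch of the idea behind the fix for issue 2846 above (not necessarily the exact patch merged in #2847): adding "-" to the timezone character class lets negative offsets match.

```python
import re

# Widened variant of the pattern quoted in the issue; only the "\-" inside
# the character class is new.
PATTERN = re.compile(r"^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+\-:]*)$")

assert PATTERN.match("us, tz=-03:00")        # previously rejected
assert PATTERN.match("us, tz=+05:30")
assert PATTERN.match("s, tz=America/Sao_Paulo")
```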
981,487,861
2,845
[feature request] adding easy to remember `datasets.cache_dataset()` + `datasets.is_dataset_cached()`
open
[]
2021-08-27T18:21:51
2021-08-27T18:24:05
null
Often, there is a need to prepare a dataset but not use it immediately, e.g. think tests suite setup, so it'd be really useful to be able to do: ``` if not datasets.is_dataset_cached(ds): datasets.cache_dataset(ds) ``` This can already be done with: ``` builder = load_dataset_builder(ds) if not os.path.idsi...
stas00
https://github.com/huggingface/datasets/issues/2845
null
false
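A sketch along the lines of the workaround the request above alludes to; `"imdb"` is just an example dataset name, and treating `download_and_prepare()` as the caching step is the idea, not an official `cache_dataset()` API.

```python
from datasets import load_dataset_builder

# Prepare (and thus cache) a dataset without returning a Dataset object.
builder = load_dataset_builder("imdb")
builder.download_and_prepare()
```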
981,382,806
2,844
Fix: wikicorpus - fix keys
closed
[ "The CI error is unrelated to this PR\r\n\r\n... merging !" ]
2021-08-27T15:56:06
2021-09-06T14:07:28
2021-09-06T14:07:27
As mentioned in https://github.com/huggingface/datasets/issues/2552, there is a duplicate keys error in `wikicorpus`. I fixed that by taking into account the file index in the keys
lhoestq
https://github.com/huggingface/datasets/pull/2844
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2844", "html_url": "https://github.com/huggingface/datasets/pull/2844", "diff_url": "https://github.com/huggingface/datasets/pull/2844.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2844.patch", "merged_at": "2021-09-06T14:07...
true
981,317,775
2,843
Fix extraction protocol inference from urls with params
closed
[ "merging since the windows error is just a CircleCI issue", "It works, eg https://observablehq.com/@huggingface/datasets-preview-backend-client#{%22datasetId%22%3A%22discovery%22} and https://datasets-preview.huggingface.tech/rows?dataset=discovery&config=discovery&split=train", "Nice !" ]
2021-08-27T14:40:57
2021-08-30T17:11:49
2021-08-30T13:12:01
Previously it was unable to infer the compression protocol for files at URLs like ``` https://foo.bar/train.json.gz?dl=1 ``` because of the query parameters. I fixed that, this should allow 10+ datasets to work in streaming mode: ``` "discovery", "emotion", "grail_qa", "guardian_authorship", "pra...
lhoestq
https://github.com/huggingface/datasets/pull/2843
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2843", "html_url": "https://github.com/huggingface/datasets/pull/2843", "diff_url": "https://github.com/huggingface/datasets/pull/2843.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2843.patch", "merged_at": "2021-08-30T13:12...
true
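A hypothetical helper illustrating the idea behind PR 2843 above (not the actual `datasets` internals): infer the compression protocol from the URL path alone, so query parameters such as `?dl=1` no longer get in the way.

```python
from typing import Optional
from urllib.parse import urlparse

def infer_extraction_protocol(url: str) -> Optional[str]:
    path = urlparse(url).path  # drops "?dl=1" and friends
    for protocol, extension in [("gzip", ".gz"), ("zip", ".zip"), ("bz2", ".bz2")]:
        if path.endswith(extension):
            return protocol
    return None

assert infer_extraction_protocol("https://foo.bar/train.json.gz?dl=1") == "gzip"
```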
980,725,899
2,842
always requiring the username in the dataset name when there is one
closed
[ "From what I can understand, you want the saved arrow file directory to have username as well instead of just dataset name if it was downloaded with the user prefix?", "I don't think the user cares of how this is done, but the 2nd command should fail, IMHO, as its dataset name is invalid:\r\n```\r\n# first run\r\...
2021-08-26T23:31:53
2021-10-22T09:43:35
2021-10-22T09:43:35
Another person and I have now been bitten by `datasets`' non-strictness about requiring a dataset creator's username when it's due. Both of us started with `stas/openwebtext-10k`, somewhere along the line lost `stas/`, and continued using `openwebtext-10k`; all was good until we published the software an...
stas00
https://github.com/huggingface/datasets/issues/2842
null
false
980,497,321
2,841
Adding GLUECoS Hinglish and Spanglish code-switching benchmark
open
[ "Hi @yjernite I am interested in adding this dataset. \r\nIn the repo they have also added a code mixed MT task from English to Hinglish [here](https://github.com/microsoft/GLUECoS#code-mixed-machine-translation-task). I think this could be a good dataset addition in itself and then I can add the rest of the GLUECo...
2021-08-26T17:47:39
2021-10-20T18:41:20
null
## Adding a Dataset - **Name:** GLUECoS - **Description:** a Microsoft Benchmark to evaluate code-switching for only two language pairs but a variety of tasks - **Paper:** https://aclanthology.org/2020.acl-main.329/ - **Data:** https://github.com/microsoft/GLUECoS - **Motivation:** We currently only have [one othe...
yjernite
https://github.com/huggingface/datasets/issues/2841
null
false
980,489,074
2,840
How can I compute BLEU-4 score use `load_metric` ?
closed
[]
2021-08-26T17:36:37
2021-08-27T08:13:24
2021-08-27T08:13:24
I have found the sacrebleu metric, but I do not know the difference between it and BLEU-4. If I want to compute a BLEU-4 score, what can I do?
Doragd
https://github.com/huggingface/datasets/issues/2840
null
false
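To answer the question in issue 2840 above: sacrebleu computes corpus BLEU with n-grams up to order 4 by default, which is exactly BLEU-4, so no extra configuration is needed. A minimal sketch with the `load_metric` API of that era:

```python
from datasets import load_metric

# predictions: list of strings; references: list of lists of strings.
metric = load_metric("sacrebleu")
result = metric.compute(
    predictions=["the cat sat on the mat"],
    references=[["the cat sat on the mat"]],
)
print(result["score"])  # 100.0 for a perfect match
```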
980,271,715
2,839
OpenWebText: NonMatchingSplitsSizesError
closed
[ "Thanks for reporting, I'm updating the verifications metadata", "I just regenerated the verifications metadata and noticed that nothing changed: the data file is fine (the checksum didn't change), and the number of examples is still 8013769. Not sure how you managed to get 7982430 examples.\r\n\r\nCan you try to...
2021-08-26T13:50:26
2021-09-21T14:12:40
2021-09-21T14:09:43
## Describe the bug When downloading `openwebtext`, I'm getting: ``` datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', num_bytes=39611023912, num_examples=7982430...
thomasw21
https://github.com/huggingface/datasets/issues/2839
null
false
980,067,186
2,838
Add error_bad_chunk to the JSON loader
open
[ "Somebody reported the following error message which I think this is related to the goal of this PR:\r\n```Python\r\n03/24/2022 02:19:45 - INFO - __main__ - Step 5637: {'lr': 0.00018773333333333333, 'samples': 360768, 'batch_offset': 5637, 'completed_steps': 704, 'loss/train': 4.473083972930908, 'tokens/s': 6692.61...
2021-08-26T10:07:32
2023-09-25T09:06:42
null
Add the `error_bad_chunk` parameter to the JSON loader. Setting `error_bad_chunk=False` allows to skip an unparsable chunk of JSON data without raising an error. Additional note: In case of an unparsable JSON chunk, the JSON loader no longer tries to load the full JSON (which could take a lot of time in stream...
lhoestq
https://github.com/huggingface/datasets/pull/2838
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2838", "html_url": "https://github.com/huggingface/datasets/pull/2838", "diff_url": "https://github.com/huggingface/datasets/pull/2838.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2838.patch", "merged_at": null }
true
979,298,297
2,837
prepare_module issue when loading from read-only fs
closed
[ "Hello, I opened #2887 to fix this." ]
2021-08-25T15:21:26
2021-10-05T17:58:22
2021-10-05T17:58:22
## Describe the bug When we use prepare_module from a readonly file system, we create a FileLock using the `local_path`. This path is not necessarily writable. `lock_path = local_path + ".lock"` ## Steps to reproduce the bug Run `load_dataset` on a readonly python loader file. ```python ds = load_datas...
Dref360
https://github.com/huggingface/datasets/issues/2837
null
false
979,230,142
2,836
Optimize Dataset.filter to only compute the indices to keep
closed
[ "Maybe worth updating the docs here as well?", "Yup, will do !" ]
2021-08-25T14:41:22
2021-09-14T14:51:53
2021-09-13T15:50:21
Optimize `Dataset.filter` to only compute the indices of the rows to keep, instead of creating a new Arrow table with the rows to keep. Creating a new table was an issue because it could take a lot of disk space. This will be useful to process audio datasets for example cc @patrickvonplaten
lhoestq
https://github.com/huggingface/datasets/pull/2836
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2836", "html_url": "https://github.com/huggingface/datasets/pull/2836", "diff_url": "https://github.com/huggingface/datasets/pull/2836.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2836.patch", "merged_at": "2021-09-13T15:50...
true
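A short sketch of `Dataset.filter` as discussed in PR 2836 above: after the optimization it keeps only the indices of matching rows instead of writing a new Arrow table to disk.

```python
from datasets import Dataset

# Build a toy dataset and keep even values; the returned dataset is backed
# by an indices mapping rather than a fresh on-disk table.
ds = Dataset.from_dict({"a": list(range(10))})
even = ds.filter(lambda example: example["a"] % 2 == 0)
print(len(even))  # 5
```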
979,209,394
2,835
Update: timit_asr - make the dataset streamable
closed
[]
2021-08-25T14:22:49
2021-09-07T13:15:47
2021-09-07T13:15:46
The TIMIT ASR dataset had two issues that were preventing it from being streamable: 1. it was missing a call to `open` before `pd.read_csv` 2. it was using `os.path.dirname`, which is not supported for streaming. I made the dataset streamable by using `open` to load the CSV, and by adding support for `os.path.d...
lhoestq
https://github.com/huggingface/datasets/pull/2835
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2835", "html_url": "https://github.com/huggingface/datasets/pull/2835", "diff_url": "https://github.com/huggingface/datasets/pull/2835.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2835.patch", "merged_at": "2021-09-07T13:15...
true
978,309,749
2,834
Fix IndexError by ignoring empty RecordBatch
closed
[]
2021-08-24T17:06:13
2021-08-24T17:21:18
2021-08-24T17:21:18
We need to ignore the empty record batches for the interpolation search to work correctly when querying arrow tables Close #2833 cc @SaulLu
lhoestq
https://github.com/huggingface/datasets/pull/2834
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2834", "html_url": "https://github.com/huggingface/datasets/pull/2834", "diff_url": "https://github.com/huggingface/datasets/pull/2834.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2834.patch", "merged_at": "2021-08-24T17:21...
true
978,296,140
2,833
IndexError when accessing first element of a Dataset if first RecordBatch is empty
closed
[]
2021-08-24T16:49:20
2021-08-24T17:21:17
2021-08-24T17:21:17
The computation of the offsets of the underlying Table of a Dataset has some issues if the first RecordBatch is empty. ```python from datasets import Dataset import pyarrow as pa pa_table = pa.Table.from_pydict({"a": [1]}) pa_table2 = pa.Table.from_pydict({"a": []}, schema=pa_table.schema) ds_table = pa.conca...
lhoestq
https://github.com/huggingface/datasets/issues/2833
null
false
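A sketch of the reproduction described in issue 2833 above (the snippet in the issue body is truncated): put an empty record batch in front of a non-empty one.

```python
import pyarrow as pa
from datasets import Dataset

pa_table = pa.Table.from_pydict({"a": [1]})
pa_table2 = pa.Table.from_pydict({"a": []}, schema=pa_table.schema)
ds = Dataset(pa.concat_tables([pa_table2, pa_table]))
print(ds[0])  # raised IndexError before the fix in #2834
```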
978,012,800
2,832
Logging levels not taken into account
closed
[ "I just take a look at all the outputs produced by `datasets` using the different log-levels.\r\nAs far as i can tell using `datasets==1.17.0` they overall issue seems to be fixed.\r\n\r\nHowever, I noticed that there is one tqdm based progress indicator appearing on STDERR that I can simply not suppress.\r\n```\r\...
2021-08-24T11:50:41
2023-07-12T17:19:30
2023-07-12T17:19:29
## Describe the bug The `logging` module isn't working as intended relative to the levels to set. ## Steps to reproduce the bug ```python from datasets import logging logging.set_verbosity_debug() logger = logging.get_logger() logger.error("ERROR") logger.warning("WARNING") logger.info("INFO") logge...
LysandreJik
https://github.com/huggingface/datasets/issues/2832
null
false
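The snippet from issue 2832 above, completed along its obvious pattern (the issue body is truncated after `logge...`): with the verbosity set to DEBUG, every one of these calls should emit a line.

```python
from datasets import logging

logging.set_verbosity_debug()
logger = logging.get_logger()
logger.error("ERROR")
logger.warning("WARNING")
logger.info("INFO")
logger.debug("DEBUG")
```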
977,864,600
2,831
ArrowInvalid when mapping dataset with missing values
open
[ "Hi ! It fails because of the feature type inference.\r\n\r\nBecause the first 1000 examples all have null values in the \"match\" field, then it infers that the type for this field is `null` type before writing the data on disk. But as soon as it tries to map an example with a non-null \"match\" field, then it fai...
2021-08-24T08:50:42
2021-08-31T14:15:34
null
## Describe the bug I encountered an `ArrowInvalid` when mapping dataset with missing values. Here are the files for a minimal example. The exception is only thrown when the first line in the csv has a missing value (if you move the last line to the top it isn't thrown). [data_small.csv](https://github.com/huggingf...
uniquefine
https://github.com/huggingface/datasets/issues/2831
null
false
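Following the maintainer's explanation in issue 2831 above: when the first examples of a column (`match`, per the thread) are all null, type inference fails, so the schema can be passed explicitly. A hedged sketch; note that `Features` must cover every CSV column, and `match` is the only name known from the thread.

```python
from datasets import Features, Value, load_dataset

# Declare the type of the problematic column up front instead of letting
# pyarrow infer "null" from the first batch.
features = Features({"match": Value("string")})
ds = load_dataset("csv", data_files="data_small.csv", features=features)
```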
977,563,947
2,830
Add imagefolder dataset
closed
[ "@lhoestq @albertvillanova it would be super cool if we could get the Image Classification task to work with this. I'm not sure how to have the dataset find the unique label names _after_ the dataset has been loaded. Is that even possible? \r\n\r\nMy hacky community version [here](https://huggingface.co/datasets/na...
2021-08-23T23:34:06
2022-03-01T16:29:44
2022-03-01T16:29:44
A generic imagefolder dataset inspired by `torchvision.datasets.ImageFolder`. Resolves #2508 --- Example Usage: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/954fa8cba4ff806f6147a782fa9efd1a/imagefolder-official-example.ipynb)
nateraw
https://github.com/huggingface/datasets/pull/2830
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2830", "html_url": "https://github.com/huggingface/datasets/pull/2830", "diff_url": "https://github.com/huggingface/datasets/pull/2830.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2830.patch", "merged_at": "2022-03-01T16:29...
true
977,233,360
2,829
Optimize streaming from TAR archives
closed
[ "Closed by: \r\n- #3066" ]
2021-08-23T16:56:40
2022-09-21T14:29:46
2022-09-21T14:08:39
Hi ! As you know TAR has some constraints for data streaming. While it is optimized for buffering, the files in the TAR archive **need to be streamed in order**. It means that we can't choose which file to stream from, and this notation is to be avoided for TAR archives: ``` tar://books_large_p1.txt::https://storage....
lhoestq
https://github.com/huggingface/datasets/issues/2829
null
false
977,181,517
2,828
Add code-mixed Kannada Hope speech dataset
closed
[]
2021-08-23T15:55:09
2021-10-01T17:21:03
2021-10-01T17:21:03
## Adding a Dataset - **Name:** *KanHope* - **Description:** *A code-mixed English-Kannada dataset for Hope speech detection* - **Paper:** *https://arxiv.org/abs/2108.04616* - **Data:** *https://github.com/adeepH/KanHope/tree/main/dataset* - **Motivation:** *The dataset is amongst the very few resources available...
adeepH
https://github.com/huggingface/datasets/pull/2828
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2828", "html_url": "https://github.com/huggingface/datasets/pull/2828", "diff_url": "https://github.com/huggingface/datasets/pull/2828.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2828.patch", "merged_at": null }
true
976,976,552
2,827
add a text classification dataset
closed
[]
2021-08-23T12:24:41
2021-08-23T15:51:18
2021-08-23T15:51:18
null
adeepH
https://github.com/huggingface/datasets/pull/2827
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2827", "html_url": "https://github.com/huggingface/datasets/pull/2827", "diff_url": "https://github.com/huggingface/datasets/pull/2827.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2827.patch", "merged_at": null }
true
976,974,254
2,826
Add a Text Classification dataset: KanHope
closed
[ "Hi ! In your script it looks like you're trying to load the dataset `bn_hate_speech,`, not KanHope.\r\n\r\nMoreover the error `KeyError: ' '` means that you have a feature of type ClassLabel, but for a certain example of the dataset, it looks like the label is empty (it's just a string with a space). Can you make ...
2021-08-23T12:21:58
2021-10-01T18:06:59
2021-10-01T18:06:59
## Adding a Dataset - **Name:** *KanHope* - **Description:** *A code-mixed English-Kannada dataset for Hope speech detection* - **Paper:** *https://arxiv.org/abs/2108.04616* (I am the author of the paper) - **Author:** *[AdeepH](https://github.com/adeepH)* - **Data:** *https://github.com/adeepH/KanHope/tree/main/d...
adeepH
https://github.com/huggingface/datasets/issues/2826
null
false
976,584,926
2,825
The datasets.map function does not load cached dataset after moving python script
closed
[ "This also happened to me on COLAB.\r\nDetails:\r\nI ran the `run_mlm.py` in two different notebooks. \r\nIn the first notebook, I do tokenization since I can get 4 CPU cores without any GPUs, and save the cache into a folder which I copy to drive.\r\nIn the second notebook, I copy the cache folder from drive and r...
2021-08-23T03:23:37
2024-07-29T11:25:50
2021-08-31T13:13:36
## Describe the bug The datasets.map function caches the processed data to a certain directory. When the map function is called another time with totally the same parameters, the cached data are supposed to be reloaded instead of re-processing. However, it doesn't reuse cached data sometimes. I use the common data pro...
hobbitlzy
https://github.com/huggingface/datasets/issues/2825
null
false
976,394,721
2,824
Fix defaults in cache_dir docstring in load.py
closed
[]
2021-08-22T14:48:37
2021-08-26T13:23:32
2021-08-26T11:55:16
Fix defaults in the `cache_dir` docstring.
mariosasko
https://github.com/huggingface/datasets/pull/2824
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2824", "html_url": "https://github.com/huggingface/datasets/pull/2824", "diff_url": "https://github.com/huggingface/datasets/pull/2824.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2824.patch", "merged_at": "2021-08-26T11:55...
true
976,135,355
2,823
HF_DATASETS_CACHE variable in Windows
closed
[ "Agh - I'm a muppet. No quote marks are needed.\r\nset HF_DATASETS_CACHE = C:\\Datasets\r\nworks as intended." ]
2021-08-21T13:17:44
2021-08-21T13:20:11
2021-08-21T13:20:11
I can't seem to use a custom Cache directory in Windows. I have tried: set HF_DATASETS_CACHE = "C:\Datasets" set HF_DATASETS_CACHE = "C:/Datasets" set HF_DATASETS_CACHE = "C:\\Datasets" set HF_DATASETS_CACHE = "r'C:\Datasets'" set HF_DATASETS_CACHE = "\Datasets" set HF_DATASETS_CACHE = "/Datasets" In each in...
rp2839
https://github.com/huggingface/datasets/issues/2823
null
false
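A Python equivalent of the working command from the comment in issue 2823 above (`set HF_DATASETS_CACHE=C:\Datasets`, with no spaces or quotes), applied before importing `datasets` so the cache location is picked up; a sketch, not the only way to configure the cache.

```python
import os

os.environ["HF_DATASETS_CACHE"] = r"C:\Datasets"

import datasets  # noqa: E402  (imported after setting the variable on purpose)

print(datasets.config.HF_DATASETS_CACHE)
```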
975,744,463
2,822
Add url prefix convention for many compression formats
closed
[ "Thanks for the feedback :) I will also complete the documentation to explain this convention", "I just added some documentation about how streaming works with chained URLs.\r\n\r\nI will also add some docs about how to use chained URLs directly in `load_dataset` in #2662, since #2662 does change the documentatio...
2021-08-20T16:11:23
2021-08-23T15:59:16
2021-08-23T15:59:14
## Intro When doing dataset streaming, the uncompression of compressed files is done on the fly using `fsspec`. In particular, the download manager method `download_and_extract` doesn't return a path to the local download and extracted file, but instead a chained URL so that the uncompression can be done when the...
lhoestq
https://github.com/huggingface/datasets/pull/2822
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2822", "html_url": "https://github.com/huggingface/datasets/pull/2822", "diff_url": "https://github.com/huggingface/datasets/pull/2822.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2822.patch", "merged_at": "2021-08-23T15:59...
true
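The chaining convention described in PR 2822 above, shown with fsspec's standard zip filesystem; the archive URL and member name here are hypothetical.

```python
import fsspec

# "zip://<member>::<remote URL>" chains the archive protocol with the
# download URL, so decompression happens on the fly while streaming.
with fsspec.open("zip://train.json::https://foo.bar/archive.zip", "rt") as f:
    print(f.readline())
```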
975,556,032
2,821
Cannot load linnaeus dataset
closed
[ "Thanks for reporting ! #2852 fixed this error\r\n\r\nWe'll do a new release of `datasets` soon :)" ]
2021-08-20T12:15:15
2021-08-31T13:13:02
2021-08-31T13:12:09
## Describe the bug The [linnaeus](https://huggingface.co/datasets/linnaeus) dataset cannot be loaded. To reproduce: ``` from datasets import load_dataset datasets = load_dataset("linnaeus") ``` This results in: ``` Downloading and preparing dataset linnaeus/linnaeus (download: 17.36 MiB, generated: 8.74 MiB,...
NielsRogge
https://github.com/huggingface/datasets/issues/2821
null
false
975,210,712
2,820
Downloading “reddit” dataset keeps timing out.
closed
[ "```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset reddit/default (download: 2.93 GiB, generated: 17.64 GiB, post-processed: Unknown size, total: 20.57 GiB) to /Volumes/My Passport for Mac/og-chat-data/reddit/default/1.0.0/98ba5abea674d3178f7588aa6518a5510dc0c6fa8176d9653a3546d5afc...
2021-08-20T02:52:36
2021-09-08T14:52:02
2021-09-08T14:52:02
## Describe the bug A clear and concise description of what the bug is. Every time I try to download the reddit dataset it times out before finishing and I have to try again. There is some timeout error that I will post once it happens again. ## Steps to reproduce the bug ```python from datasets import load_d...
smeyerhot
https://github.com/huggingface/datasets/issues/2820
null
false
974,683,155
2,819
Added XL-Sum dataset
closed
[ "Thanks for adding this one ! I just did some minor changes and set the timeout back to 100sec instead of 1000", "The CI failure is unrelated to this PR - let me take a look", "> Thanks for adding this one! I just did some minor changes and set the timeout back to 100sec instead of 1000\r\n\r\nThank you for upd...
2021-08-19T13:47:45
2021-09-29T08:13:44
2021-09-23T17:49:05
Added the XL-Sum dataset published in ACL-IJCNLP 2021 (https://aclanthology.org/2021.findings-acl.413/). The default timeout values in `src/datasets/utils/file_utils.py` were increased to enable downloading from the original Google Drive links.
abhik1505040
https://github.com/huggingface/datasets/pull/2819
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2819", "html_url": "https://github.com/huggingface/datasets/pull/2819", "diff_url": "https://github.com/huggingface/datasets/pull/2819.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2819.patch", "merged_at": null }
true
974,552,009
2,818
cannot load data from my local path
closed
[ "Hi ! The `data_files` parameter must be a string, a list/tuple or a python dict.\r\n\r\nCan you check the type of your `config.train_path` please ? Or use `data_files=str(config.train_path)` ?" ]
2021-08-19T11:13:30
2023-07-25T17:42:15
2023-07-25T17:42:15
## Describe the bug I just want to directly load data from my local path, but I found a bug. I compared it with pandas to prove that my local path is real. Here is my code ```python3 # print my local path print(config.train_path) # read data and print data length tarin=pd.read_csv(config.train_path) print(len(tari...
yang-collect
https://github.com/huggingface/datasets/issues/2818
null
false
974,486,051
2,817
Rename The Pile subsets
closed
[ "Sounds good. Should we also have a “the_pile” dataset with the subsets as configuration?", "I think the main `the_pile` datasets will be the one that is the mix of all the subsets: https://the-eye.eu/public/AI/pile/\r\n\r\nWe can also add configurations for each subset, and even allow users to specify the subset...
2021-08-19T09:56:22
2021-08-23T16:24:10
2021-08-23T16:24:09
After discussing with @yjernite we think it's better to have the subsets of The Pile explicitly have "the_pile" in their names. I'm doing the changes for the subsets that @richarddwang added: - [x] books3 -> the_pile_books3 https://github.com/huggingface/datasets/pull/2801 - [x] stack_exchange -> the_pile_stack_ex...
lhoestq
https://github.com/huggingface/datasets/pull/2817
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2817", "html_url": "https://github.com/huggingface/datasets/pull/2817", "diff_url": "https://github.com/huggingface/datasets/pull/2817.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2817.patch", "merged_at": "2021-08-23T16:24...
true
974,031,404
2,816
Add Mostly Basic Python Problems Dataset
open
[ "I started working on that." ]
2021-08-18T20:28:39
2021-09-10T08:04:20
null
## Adding a Dataset - **Name:** Mostly Basic Python Problems Dataset - **Description:** The benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry level programmers, covering programming fundamentals, standard library functionality, and so on. Each problem consi...
osanseviero
https://github.com/huggingface/datasets/issues/2816
null
false
973,862,024
2,815
Tiny typo fixes of "fo" -> "of"
closed
[]
2021-08-18T16:36:11
2021-08-19T08:03:02
2021-08-19T08:03:02
Noticed a few of these when reading docs- feel free to ignore the PR and just fix on some main contributor branch if more helpful. Thanks for the great library! :)
aronszanto
https://github.com/huggingface/datasets/pull/2815
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2815", "html_url": "https://github.com/huggingface/datasets/pull/2815", "diff_url": "https://github.com/huggingface/datasets/pull/2815.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2815.patch", "merged_at": "2021-08-19T08:03...
true
973,632,645
2,814
Bump tqdm version
closed
[]
2021-08-18T12:51:29
2021-08-18T13:44:11
2021-08-18T13:39:50
The recently released tqdm 4.62.1 includes a fix for PermissionError on Windows (submitted by me in https://github.com/tqdm/tqdm/pull/1207), which means we can remove expensive `gc.collect` calls by bumping tqdm to that version. This PR does exactly that and, additionally, fixes a `disable_tqdm` definition that would p...
mariosasko
https://github.com/huggingface/datasets/pull/2814
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2814", "html_url": "https://github.com/huggingface/datasets/pull/2814", "diff_url": "https://github.com/huggingface/datasets/pull/2814.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2814.patch", "merged_at": "2021-08-18T13:39...
true
973,470,580
2,813
Remove compression from xopen
closed
[ "After discussing with @lhoestq, a reasonable alternative:\r\n- `download_manager.extract(urlpath)` adds prefixes to `urlpath` in the same way as `fsspec` does for protocols, but we implement custom prefixes for all compression formats: \r\n `bz2::http://domain.org/filename.bz2`\r\n- `xopen` parses the `urlpath` a...
2021-08-18T09:35:59
2021-08-23T15:59:14
2021-08-23T15:59:14
We implemented support for streaming with 2 requirements: - transparent use for the end user: just needs to pass the parameter `streaming=True` - no additional work for the contributors: previous loading scripts should also work in streaming mode with no (or minor) changes; and new loading scripts should not involve ...
albertvillanova
https://github.com/huggingface/datasets/issues/2813
null
false
972,936,889
2,812
arXiv Dataset verification problem
open
[]
2021-08-17T18:01:48
2022-01-19T14:15:35
null
## Describe the bug `dataset_infos.json` for `arxiv_dataset` contains a fixed number of training examples, however the data (downloaded from an external source) is updated every week with additional examples. Therefore, loading the dataset without `ignore_verifications=True` results in a verification error.
eladsegal
https://github.com/huggingface/datasets/issues/2812
null
false
972,522,480
2,811
Fix stream oscar
closed
[ "One additional note: if we can try to not change the code of oscar.py too often, I'm sure users that have it in their cache directory will be happy to not have to redownload it every time they update the library ;)\r\n\r\n(since changing the code changes the cache directory of the dataset)", "I don't think this ...
2021-08-17T10:10:59
2021-08-26T10:26:15
2021-08-26T10:26:14
Previously, an additional `open` was added to oscar to make it stream-compatible: 587bbb94e891b22863b312b99696e32708c379f4. This was argued that might be problematic: https://github.com/huggingface/datasets/pull/2786#discussion_r690045921 This PR: - removes that additional `open` - patches `gzip.open` with `xop...
albertvillanova
https://github.com/huggingface/datasets/pull/2811
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2811", "html_url": "https://github.com/huggingface/datasets/pull/2811", "diff_url": "https://github.com/huggingface/datasets/pull/2811.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2811.patch", "merged_at": null }
true
972,040,022
2,810
Add WIT Dataset
closed
[ "Google's version of WIT is now available here: https://huggingface.co/datasets/google/wit" ]
2021-08-16T19:34:09
2022-05-06T12:27:29
2022-05-06T12:26:16
Adds Google's [WIT](https://github.com/google-research-datasets/wit) dataset.
hassiahk
https://github.com/huggingface/datasets/pull/2810
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2810", "html_url": "https://github.com/huggingface/datasets/pull/2810", "diff_url": "https://github.com/huggingface/datasets/pull/2810.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2810.patch", "merged_at": null }
true
971,902,613
2,809
Add Beans Dataset
closed
[]
2021-08-16T16:22:33
2021-08-26T11:42:27
2021-08-26T11:42:27
Adds the [beans](https://github.com/AI-Lab-Makerere/ibean/) image classification dataset.
nateraw
https://github.com/huggingface/datasets/pull/2809
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2809", "html_url": "https://github.com/huggingface/datasets/pull/2809", "diff_url": "https://github.com/huggingface/datasets/pull/2809.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2809.patch", "merged_at": "2021-08-26T11:42...
true
971,882,320
2,808
Enable streaming for Wikipedia corpora
closed
[ "Closing as this has been addressed in https://github.com/huggingface/datasets/pull/5689." ]
2021-08-16T15:59:12
2023-07-20T13:45:30
2023-07-20T13:45:30
**Is your feature request related to a problem? Please describe.** Several of the [Wikipedia corpora](https://huggingface.co/datasets?search=wiki) on the Hub involve quite large files that would be a good candidate for streaming. Currently it is not possible to stream these corpora: ```python from datasets import ...
lewtun
https://github.com/huggingface/datasets/issues/2808
null
false
971,849,863
2,807
Add cats_vs_dogs dataset
closed
[]
2021-08-16T15:21:11
2021-08-30T16:35:25
2021-08-30T16:35:24
Adds Microsoft's [Cats vs. Dogs](https://www.microsoft.com/en-us/download/details.aspx?id=54765) dataset.
nateraw
https://github.com/huggingface/datasets/pull/2807
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2807", "html_url": "https://github.com/huggingface/datasets/pull/2807", "diff_url": "https://github.com/huggingface/datasets/pull/2807.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2807.patch", "merged_at": "2021-08-30T16:35...
true
971,625,449
2,806
Fix streaming tar files from canonical datasets
closed
[ "In case it's relevant for this PR, I'm finding that I cannot stream the `bookcorpus` dataset (using the `master` branch of `datasets`), which is a `.tar.bz2` file:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nbooks_dataset_streamed = load_dataset(\"bookcorpus\", split=\"train\", streaming=True)\r\n...
2021-08-16T11:10:28
2021-10-13T09:04:03
2021-10-13T09:04:02
Previous PR #2800 implemented support to stream remote tar files when passing the parameter `data_files`: they required a glob string `"*"`. However, this glob string creates an error when streaming canonical datasets (with a `join` after the `open`). This PR fixes this issue and allows streaming tar files both f...
albertvillanova
https://github.com/huggingface/datasets/pull/2806
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2806", "html_url": "https://github.com/huggingface/datasets/pull/2806", "diff_url": "https://github.com/huggingface/datasets/pull/2806.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2806.patch", "merged_at": null }
true
971,436,456
2,805
Fix streaming zip files from canonical datasets
closed
[]
2021-08-16T07:11:40
2021-08-16T10:34:00
2021-08-16T10:34:00
Previous PR #2798 fixed streaming remote zip files when passing the parameter `data_files`. However, that broke streaming zip files used in canonical `datasets` scripts, which normally have a subsequent `join()` (patched with `xjoin()`) after the `StreamingDownloadManager.download_and_extract()` is called. This P...
albertvillanova
https://github.com/huggingface/datasets/pull/2805
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2805", "html_url": "https://github.com/huggingface/datasets/pull/2805", "diff_url": "https://github.com/huggingface/datasets/pull/2805.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2805.patch", "merged_at": "2021-08-16T10:34...
true
971,353,437
2,804
Add Food-101
closed
[]
2021-08-16T04:26:15
2021-08-20T14:31:33
2021-08-19T12:48:06
Adds image classification dataset [Food-101](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/).
nateraw
https://github.com/huggingface/datasets/pull/2804
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2804", "html_url": "https://github.com/huggingface/datasets/pull/2804", "diff_url": "https://github.com/huggingface/datasets/pull/2804.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2804.patch", "merged_at": "2021-08-19T12:48...
true
970,858,928
2,803
add stack exchange
closed
[ "Hi ! Merging this one since it's all good :)\r\n\r\nHowever I think it would also be better to actually rename it `the_pile_stack_exchange` to make things clearer and to avoid name collisions in the future. I would like to do the same for `books3` as well.\r\n\r\nIf you don't mind I'll open a PR to do the renaming...
2021-08-14T08:11:02
2021-08-19T10:07:33
2021-08-19T08:07:38
Stack Exchange is part of EleutherAI/The Pile, but AFAIK, The Pile dataset blends all sub-datasets together, thus we are not able to use just one of its sub-datasets from The Pile data. So I created an independent dataset using The Pile preliminary components. I also changed the default `timeout` to 100 seconds instead of 10...
richarddwang
https://github.com/huggingface/datasets/pull/2803
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2803", "html_url": "https://github.com/huggingface/datasets/pull/2803", "diff_url": "https://github.com/huggingface/datasets/pull/2803.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2803.patch", "merged_at": "2021-08-19T08:07...
true
970,848,302
2,802
add openwebtext2
closed
[ "It seems we need to `pip install jsonlines` to pass the checks ?", "Hi ! Do you really need `jsonlines` ? I think it simply uses `json.loads` under the hood.\r\n\r\nCurrently the test are failing because `jsonlines` is not part of the extra requirements `TESTS_REQUIRE` in setup.py\r\n\r\nSo either you can replac...
2021-08-14T07:09:03
2021-08-23T14:06:14
2021-08-23T14:06:14
openwebtext2 is part of EleutherAI/The Pile, but AFAIK, The Pile dataset blends all sub-datasets together, thus we are not able to use just one of its sub-datasets from The Pile data. So I created an independent dataset using The Pile preliminary components. When I was creating the dataset card, I found there is room for cr...
richarddwang
https://github.com/huggingface/datasets/pull/2802
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2802", "html_url": "https://github.com/huggingface/datasets/pull/2802", "diff_url": "https://github.com/huggingface/datasets/pull/2802.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2802.patch", "merged_at": "2021-08-23T14:06...
true
970,844,617
2,801
add books3
closed
[ "> When I was creating dataset card. I found there is room for creating / editing dataset card. I've made it an issue. #2797\r\n\r\nThanks for the message, we'll definitely improve this\r\n\r\n> Also I am wondering whether the import of The Pile dataset is actively undertaken (because I may need it recently)? #1675...
2021-08-14T07:04:25
2021-08-19T16:43:09
2021-08-18T15:36:59
books3 is part of EleutherAI/The Pile, but AFAIK, The Pile dataset blends all sub-datasets together, thus we are not able to use just one of its sub-datasets from The Pile data. So I created an independent dataset using The Pile preliminary components. When I was creating the dataset card, I found there is room for creating...
richarddwang
https://github.com/huggingface/datasets/pull/2801
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2801", "html_url": "https://github.com/huggingface/datasets/pull/2801", "diff_url": "https://github.com/huggingface/datasets/pull/2801.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2801.patch", "merged_at": "2021-08-18T15:36...
true
970,819,988
2,800
Support streaming tar files
closed
[ "Hi ! Why do we need the custom `readline` for exactly ? feel free to add a comment to say why it's needed" ]
2021-08-14T04:40:17
2021-08-26T10:02:30
2021-08-14T04:55:57
This PR adds support to stream tar files by using the `fsspec` tar protocol. It also uses the custom `readline` implemented in PR #2786. The corresponding test is implemented in PR #2786.
albertvillanova
https://github.com/huggingface/datasets/pull/2800
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2800", "html_url": "https://github.com/huggingface/datasets/pull/2800", "diff_url": "https://github.com/huggingface/datasets/pull/2800.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2800.patch", "merged_at": "2021-08-14T04:55...
true
970,507,351
2,799
Loading JSON throws ArrowNotImplementedError
closed
[ "Hi @lewtun, thanks for reporting.\r\n\r\nApparently, `pyarrow.json` tries to cast timestamp-like fields in your JSON file to pyarrow timestamp type, and it fails with `ArrowNotImplementedError`.\r\n\r\nI will investigate if there is a way to tell pyarrow not to try that timestamp casting.", "I think the issue is...
2021-08-13T15:31:48
2022-01-10T18:59:32
2022-01-10T18:59:32
## Describe the bug I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below). Curiously, there is no problem loading the dataset with `pandas` which...
lewtun
https://github.com/huggingface/datasets/issues/2799
null
false
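The pandas route the reporter of issue 2799 above says works, turned into a `Dataset`; a sketch with a hypothetical file name (line-separated JSON, as in the issue).

```python
import pandas as pd
from datasets import Dataset

# Read the line-separated JSON with pandas, then build the Dataset from the
# DataFrame, side-stepping pyarrow's JSON reader.
df = pd.read_json("github-issues.jsonl", lines=True)
ds = Dataset.from_pandas(df)
```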
970,493,126
2,798
Fix streaming zip files
closed
[ "Hi ! I don't fully understand this change @albertvillanova \r\nThe `_extract` method used to return the compound URL that points to the root of the inside of the archive.\r\nThis way users can use the usual os.path.join or other functions to point to the relevant files. I don't see why you're using a glob pattern ...
2021-08-13T15:17:01
2021-08-16T14:16:50
2021-08-13T15:38:28
Currently, streaming remote zip data files gives `FileNotFoundError` message: ```python data_files = f"https://huggingface.co/datasets/albertvillanova/datasets-tests-compression/resolve/main/sample.zip" ds = load_dataset("json", split="train", data_files=data_files, streaming=True) next(iter(ds)) ``` This PR fi...
albertvillanova
https://github.com/huggingface/datasets/pull/2798
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2798", "html_url": "https://github.com/huggingface/datasets/pull/2798", "diff_url": "https://github.com/huggingface/datasets/pull/2798.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2798.patch", "merged_at": "2021-08-13T15:38...
true
970,331,634
2,797
Make creating/editing dataset cards easier, by editing on site and dumping info from test command.
open
[]
2021-08-13T11:54:49
2021-08-14T08:42:09
null
**Is your feature request related to a problem? Please describe.** Creating and editing dataset cards should be easy, but it is not that easy - if someone else knows some information I don't know (bias of dataset, dataset curation, supported dataset, ...), he/she should know the description on hf.co comes from README.md under git...
richarddwang
https://github.com/huggingface/datasets/issues/2797
null
false
970,235,846
2,796
add cedr dataset
closed
[ "> Hi ! Thanks a lot for adding this one :)\r\n> \r\n> Good job with the dataset card and the dataset script !\r\n> \r\n> I left a few suggestions\r\n\r\nThank you very much for your helpful suggestions. I have tried to carry them all out." ]
2021-08-13T09:37:35
2021-08-27T16:01:36
2021-08-27T16:01:36
null
naumov-al
https://github.com/huggingface/datasets/pull/2796
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2796", "html_url": "https://github.com/huggingface/datasets/pull/2796", "diff_url": "https://github.com/huggingface/datasets/pull/2796.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2796.patch", "merged_at": "2021-08-27T16:01...
true
969,728,545
2,794
Warnings and documentation about pickling incorrect
open
[]
2021-08-12T23:09:13
2021-08-12T23:09:31
null
## Describe the bug I have a docs bug and a closely related docs enhancement suggestion! ### Bug The warning and documentation say "either `dill` or `pickle`" for fingerprinting. But it seems that `dill`, which is installed by `datasets` by default, _must_ work, or else the fingerprinting fails. Warning: ...
mbforbes
https://github.com/huggingface/datasets/issues/2794
null
false
968,967,773
2,793
Fix type hint for data_files
closed
[]
2021-08-12T14:42:37
2021-08-12T15:35:29
2021-08-12T15:35:29
Fix type hint for `data_files` in signatures and docstrings.
albertvillanova
https://github.com/huggingface/datasets/pull/2793
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2793", "html_url": "https://github.com/huggingface/datasets/pull/2793", "diff_url": "https://github.com/huggingface/datasets/pull/2793.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2793.patch", "merged_at": "2021-08-12T15:35...
true
968,650,274
2,792
Update: GooAQ - add train/val/test splits
closed
[ "@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_l...
2021-08-12T11:40:18
2021-08-27T15:58:45
2021-08-27T15:58:14
[GooAQ](https://github.com/allenai/gooaq) dataset was recently updated after splits were added for the same. This PR contains new updated GooAQ with train/val/test splits and updated README as well.
bhavitvyamalik
https://github.com/huggingface/datasets/pull/2792
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2792", "html_url": "https://github.com/huggingface/datasets/pull/2792", "diff_url": "https://github.com/huggingface/datasets/pull/2792.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2792.patch", "merged_at": "2021-08-27T15:58...
true
968,360,314
2,791
Fix typo in cnn_dailymail
closed
[]
2021-08-12T08:38:42
2021-08-12T11:17:59
2021-08-12T11:17:59
null
omaralsayed
https://github.com/huggingface/datasets/pull/2791
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2791", "html_url": "https://github.com/huggingface/datasets/pull/2791", "diff_url": "https://github.com/huggingface/datasets/pull/2791.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2791.patch", "merged_at": "2021-08-12T11:17...
true
967,772,181
2,790
Fix typo in test_dataset_common
closed
[]
2021-08-12T01:10:29
2021-08-12T11:31:29
2021-08-12T11:31:29
null
nateraw
https://github.com/huggingface/datasets/pull/2790
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2790", "html_url": "https://github.com/huggingface/datasets/pull/2790", "diff_url": "https://github.com/huggingface/datasets/pull/2790.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2790.patch", "merged_at": "2021-08-12T11:31...
true
967,361,934
2,789
Updated dataset description of DaNE
closed
[ "Thanks for finishing it @albertvillanova " ]
2021-08-11T19:58:48
2021-08-12T16:10:59
2021-08-12T16:06:01
null
KennethEnevoldsen
https://github.com/huggingface/datasets/pull/2789
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2789", "html_url": "https://github.com/huggingface/datasets/pull/2789", "diff_url": "https://github.com/huggingface/datasets/pull/2789.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2789.patch", "merged_at": "2021-08-12T16:06...
true
967,149,389
2,788
How to sample every file in a list of files making up a split in a dataset when loading?
closed
[ "Hi ! This is not possible just with `load_dataset`.\r\n\r\nYou can do something like this instead:\r\n```python\r\nseed=42\r\ndata_files_dict = {\r\n \"train\": [train_file1, train_file2],\r\n \"test\": [test_file1, test_file2],\r\n \"val\": [val_file1, val_file2]\r\n}\r\ndataset = datasets.load_dataset(\...
2021-08-11T17:43:21
2023-07-25T17:40:50
2023-07-25T17:40:50
I am loading a dataset with multiple train, test, and validation files like this: ``` data_files_dict = { "train": [train_file1, train_file2], "test": [test_file1, test_file2], "val": [val_file1, val_file2] } dataset = datasets.load_dataset( "csv", data_files=data_files_dict, split=[...
brijow
https://github.com/huggingface/datasets/issues/2788
null
false
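A sketch of the approach suggested in the comments of the issue above: this is not possible inside `load_dataset` itself, so load all files of each split first, then shuffle and downsample every split afterwards. The file names and the sample size `k` are hypothetical.

```python
import datasets

data_files_dict = {
    "train": ["train_file1.csv", "train_file2.csv"],
    "test": ["test_file1.csv", "test_file2.csv"],
    "val": ["val_file1.csv", "val_file2.csv"],
}
dataset = datasets.load_dataset("csv", data_files=data_files_dict)

# Shuffle each split with a fixed seed and keep the first k rows of each.
k = 1000
sampled = {
    name: split.shuffle(seed=42).select(range(k))
    for name, split in dataset.items()
}
```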
967,018,406
2,787
ConnectionError: Couldn't reach https://raw.githubusercontent.com
closed
[ "the bug code locate in :\r\n if data_args.task_name is not None:\r\n # Downloading and loading a dataset from the hub.\r\n datasets = load_dataset(\"glue\", data_args.task_name, cache_dir=model_args.cache_dir)", "Hi @jinec,\r\n\r\nFrom time to time we get this kind of `ConnectionError` coming fr...
2021-08-11T16:19:01
2023-10-03T12:39:25
2021-08-18T15:09:18
Hello, I am trying to run run_glue.py and it gives me this error - Traceback (most recent call last): File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module> main() File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/...
jinec
https://github.com/huggingface/datasets/issues/2787
null
false
966,282,934
2,786
Support streaming compressed files
closed
[]
2021-08-11T09:02:06
2021-08-17T05:28:39
2021-08-16T06:36:19
Add support to stream compressed files (current options in fsspec): - bz2 - lz4 - xz - zstd cc: @lewtun
albertvillanova
https://github.com/huggingface/datasets/pull/2786
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2786", "html_url": "https://github.com/huggingface/datasets/pull/2786", "diff_url": "https://github.com/huggingface/datasets/pull/2786.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2786.patch", "merged_at": "2021-08-16T06:36...
true
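A minimal sketch of the `fsspec` capability this PR builds on: opening a compressed file as a lazy text stream. The file name is hypothetical; `"xz"` is one of the codecs listed in the PR description (bz2, lz4, xz, zstd).

```python
import fsspec

# fsspec decompresses on the fly, so the archive is never fully materialized.
with fsspec.open("data.jsonl.xz", mode="rt", compression="xz") as f:
    first_record = f.readline()
```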
965,461,382
2,783
Add KS task to SUPERB
closed
[ "thanks a lot for implementing this @anton-l !!\r\n\r\ni won't have time to review this while i'm away, so happy for @albertvillanova and @patrickvonplaten to decide when to merge :)", "@albertvillanova thanks! Everything should be ready now :)", "> The _background_noise_/_silence_ audio files are much longer t...
2021-08-10T22:14:07
2021-08-12T16:45:01
2021-08-11T20:19:17
Add the KS (keyword spotting) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051). - [s3prl instructions](https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/README.md#ks-keyword-spotting) - [s3prl implementation](https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/speech_comma...
anton-l
https://github.com/huggingface/datasets/pull/2783
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2783", "html_url": "https://github.com/huggingface/datasets/pull/2783", "diff_url": "https://github.com/huggingface/datasets/pull/2783.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2783.patch", "merged_at": "2021-08-11T20:19...
true
964,858,439
2,782
Fix renaming of corpus_bleu args
closed
[]
2021-08-10T11:02:34
2021-08-10T11:16:07
2021-08-10T11:16:07
Last `sacrebleu` release (v2.0.0) has renamed `sacrebleu.corpus_bleu` args from `(sys_stream, ref_streams)` to `(hypotheses, references)`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR passes the args without parameter names, s...
albertvillanova
https://github.com/huggingface/datasets/pull/2782
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2782", "html_url": "https://github.com/huggingface/datasets/pull/2782", "diff_url": "https://github.com/huggingface/datasets/pull/2782.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2782.patch", "merged_at": "2021-08-10T11:16...
true
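A sketch of the fix described in the PR above: passing the arguments positionally makes the call work both before and after the rename.

```python
import sacrebleu

hypotheses = ["the cat sat on the mat"]
references = [["the cat is on the mat"]]  # one reference stream

# Positional call: matches (sys_stream, ref_streams) in sacrebleu < 2.0.0 and
# (hypotheses, references) in sacrebleu >= 2.0.0 alike.
score = sacrebleu.corpus_bleu(hypotheses, references)
print(score.score)
```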
964,805,351
2,781
Latest v2.0.0 release of sacrebleu has broken some metrics
closed
[]
2021-08-10T09:59:41
2021-08-10T11:16:07
2021-08-10T11:16:07
## Describe the bug After `sacrebleu` v2.0.0 release (see changes here: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15), some of `datasets` metrics are broken: - Default tokenizer `sacrebleu.DEFAULT_TOKENIZER` no longer exists: - #273...
albertvillanova
https://github.com/huggingface/datasets/issues/2781
null
false
964,794,764
2,780
VIVOS dataset for Vietnamese ASR
closed
[]
2021-08-10T09:47:36
2021-08-12T11:09:30
2021-08-12T11:09:30
null
binh234
https://github.com/huggingface/datasets/pull/2780
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2780", "html_url": "https://github.com/huggingface/datasets/pull/2780", "diff_url": "https://github.com/huggingface/datasets/pull/2780.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2780.patch", "merged_at": "2021-08-12T11:09...
true
964,775,085
2,779
Fix sacrebleu tokenizers
closed
[]
2021-08-10T09:24:27
2021-08-10T11:03:08
2021-08-10T10:57:54
Last `sacrebleu` release (v2.0.0) has removed `sacrebleu.TOKENIZERS`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR makes a hot fix of the bug by using a private function in `sacrebleu`: `sacrebleu.metrics.bleu._get_tokenizer()...
albertvillanova
https://github.com/huggingface/datasets/pull/2779
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2779", "html_url": "https://github.com/huggingface/datasets/pull/2779", "diff_url": "https://github.com/huggingface/datasets/pull/2779.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2779.patch", "merged_at": "2021-08-10T10:57...
true
964,737,422
2,778
Do not pass tokenize to sacrebleu
closed
[]
2021-08-10T08:40:37
2021-08-10T10:03:37
2021-08-10T10:03:37
Last `sacrebleu` release (v2.0.0) has removed `sacrebleu.DEFAULT_TOKENIZER`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR does not pass `tokenize` to `sacrebleu` (note that the user cannot pass it anyway) and `sacrebleu` will ...
albertvillanova
https://github.com/huggingface/datasets/pull/2778
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2778", "html_url": "https://github.com/huggingface/datasets/pull/2778", "diff_url": "https://github.com/huggingface/datasets/pull/2778.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2778.patch", "merged_at": "2021-08-10T10:03...
true
964,696,380
2,777
Use packaging to handle versions
closed
[]
2021-08-10T07:51:39
2021-08-18T13:56:27
2021-08-18T13:56:27
Use packaging module to handle/validate/check versions of Python packages. Related to #2769.
albertvillanova
https://github.com/huggingface/datasets/pull/2777
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2777", "html_url": "https://github.com/huggingface/datasets/pull/2777", "diff_url": "https://github.com/huggingface/datasets/pull/2777.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2777.patch", "merged_at": "2021-08-18T13:56...
true
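A minimal sketch of the approach in the PR above: let `packaging` parse version strings instead of splitting them by hand, so a source build such as `"2.1.0.dev612"` (the case from #2769) still compares correctly.

```python
from packaging import version

installed = version.parse("2.1.0.dev612")   # dev build of 2.1.0
assert installed >= version.parse("1.0.0")  # comparison handles dev suffixes
```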
964,400,596
2,776
document `config.HF_DATASETS_OFFLINE` and precedence
open
[]
2021-08-09T21:23:17
2021-08-09T21:23:17
null
https://github.com/huggingface/datasets/pull/1976 implemented `HF_DATASETS_OFFLINE`, but: 1. `config.HF_DATASETS_OFFLINE` is not documented 2. the precedence is not documented (env, config) I'm thinking it probably should be similar to what it says https://huggingface.co/docs/datasets/loading_datasets.html#from-th...
stas00
https://github.com/huggingface/datasets/issues/2776
null
false
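A hedged sketch of the behavior the issue above asks to document: the environment variable is read into `datasets.config` at import time, so it must be set before `datasets` is imported.

```python
import os

# Set the variable before the import; setting it afterwards has no effect on
# the already-initialized config module.
os.environ["HF_DATASETS_OFFLINE"] = "1"

import datasets

print(datasets.config.HF_DATASETS_OFFLINE)  # expected: True
```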
964,303,626
2,775
`generate_random_fingerprint()` deterministic with 🤗Transformers' `set_seed()`
closed
[ "I dug into what I believe is the root of this issue and added a repro in my comment. If this is better addressed as a cross-team issue, let me know and I can open an issue in the Transformers repo", "Hi !\r\n\r\nIMO we shouldn't try to modify `set_seed` from transformers but maybe make `datasets` have its own RN...
2021-08-09T19:28:51
2024-01-26T15:05:36
2024-01-26T15:05:35
## Describe the bug **Update:** I dug into this to try to reproduce the underlying issue, and I believe it's that `set_seed()` from the `transformers` library makes the "random" fingerprint identical each time. I believe this is still a bug, because `datasets` is used exactly this way in `transformers` after `set_se...
mbforbes
https://github.com/huggingface/datasets/issues/2775
null
false
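A simplified sketch of the root cause described in the issue above, not `datasets`' actual code: a fingerprint drawn from the global random state repeats whenever the global seed is reset, e.g. by transformers' `set_seed()`.

```python
import random

def fake_random_fingerprint() -> str:
    # Stand-in for a fingerprint generator that reads the global random state.
    return f"{random.getrandbits(64):016x}"

random.seed(42)
first = fake_random_fingerprint()
random.seed(42)
second = fake_random_fingerprint()
assert first == second  # identical "random" fingerprints after reseeding
```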
963,932,199
2,774
Prevent .map from using multiprocessing when loading from cache
closed
[ "I'm guessing tests are failling, because this was pushed before https://github.com/huggingface/datasets/pull/2779 was merged? cc @albertvillanova ", "Hi @thomasw21, yes you are right: those failing tests were fixed with #2779.\r\n\r\nWould you mind to merge current upstream master branch and push again?\r\n```\r...
2021-08-09T12:11:38
2021-09-09T10:20:28
2021-09-09T10:20:28
## Context In our setup, we use different machines for training vs preprocessing datasets. Usually we are able to obtain a high number of CPUs for preprocessing, which allows us to use `num_proc`; however, we can't use as many during the training phase. Currently, if we use `num_proc={whatever the preprocessing value was}` we load fr...
thomasw21
https://github.com/huggingface/datasets/pull/2774
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2774", "html_url": "https://github.com/huggingface/datasets/pull/2774", "diff_url": "https://github.com/huggingface/datasets/pull/2774.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2774.patch", "merged_at": "2021-09-09T10:20...
true
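A toy sketch of the scenario motivating the PR above: a dataset preprocessed with a high `num_proc` should later reload from the cache without spawning that many worker processes again.

```python
from datasets import Dataset

def double(example):
    return {"y": example["x"] * 2}

ds = Dataset.from_dict({"x": list(range(1000))})
ds = ds.map(double, num_proc=4)  # preprocessing machine: many CPUs available
ds = ds.map(double, num_proc=4)  # later run: results served from the cache
```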
963,730,497
2,773
Remove dataset_infos.json
closed
[ "This was closed by:\r\n- #4926" ]
2021-08-09T07:43:19
2024-05-04T14:52:10
2024-05-04T14:52:10
**Is your feature request related to a problem? Please describe.** As discussed, there are infos in the `dataset_infos.json` which are redundant and we could have them only in the README file. Others could be migrated to the README, like: "dataset_size", "size_in_bytes", "download_size", "splits.split_name.[num_byt...
albertvillanova
https://github.com/huggingface/datasets/issues/2773
null
false
963,348,834
2,772
Remove returned feature constrain
open
[]
2021-08-08T04:01:30
2021-08-08T08:48:01
null
In the current version, the returned value of the map function has to be a list or ndarray. However, this makes it unsuitable for many tasks. In NLP, many features are sparse, like verb words and noun chunks; if we want to assign different values to different words, this will result in a large sparse matrix if we only score...
PosoSAgapo
https://github.com/huggingface/datasets/issues/2772
null
false
963,257,036
2,771
[WIP][Common Voice 7] Add common voice 7.0
closed
[ "Hi ! I think the name `common_voice_7` is fine :)\r\nMoreover if the dataset_infos.json is missing I'm pretty sure you don't need to specify `ignore_verifications=True`", "Hi, how about to add a new parameter \"version\" in the function load_dataset, something like: \r\n`load_dataset(\"common_voice\", \"lg\", ve...
2021-08-07T16:01:10
2021-12-06T23:24:02
2021-12-06T23:24:02
This PR allows loading the new common voice dataset manually, as explained when doing: ```python from datasets import load_dataset ds = load_dataset("./datasets/datasets/common_voice_7", "ab") ``` => ``` Please follow the manual download instructions: You need t...
patrickvonplaten
https://github.com/huggingface/datasets/pull/2771
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2771", "html_url": "https://github.com/huggingface/datasets/pull/2771", "diff_url": "https://github.com/huggingface/datasets/pull/2771.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2771.patch", "merged_at": null }
true
963,246,512
2,770
Add support for fast tokenizer in BertScore
closed
[]
2021-08-07T15:00:03
2021-08-09T12:34:43
2021-08-09T11:16:25
This PR adds support for a fast tokenizer in BertScore, which has been added recently to the lib. Fixes #2765
mariosasko
https://github.com/huggingface/datasets/pull/2770
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2770", "html_url": "https://github.com/huggingface/datasets/pull/2770", "diff_url": "https://github.com/huggingface/datasets/pull/2770.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2770.patch", "merged_at": "2021-08-09T11:16...
true
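A hedged sketch of the feature the PR above adds; the parameter name `use_fast_tokenizer` is inferred from the PR and may differ across versions.

```python
from datasets import load_metric

bertscore = load_metric("bertscore")
results = bertscore.compute(
    predictions=["hello world"],
    references=["hello world"],
    lang="en",
    use_fast_tokenizer=True,  # assumed parameter name; see the PR above
)
```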
963,240,802
2,769
Allow PyArrow from source
closed
[]
2021-08-07T14:26:44
2021-08-09T15:38:39
2021-08-09T15:38:39
When installing pyarrow from source, the version is: ```python >>> import pyarrow; pyarrow.__version__ '2.1.0.dev612' ``` -> however, this breaks the install check at init of `datasets`. This PR makes sure that everything coming after the last `'.'` is removed.
patrickvonplaten
https://github.com/huggingface/datasets/pull/2769
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2769", "html_url": "https://github.com/huggingface/datasets/pull/2769", "diff_url": "https://github.com/huggingface/datasets/pull/2769.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2769.patch", "merged_at": "2021-08-09T15:38...
true
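A sketch of the string fix described in the PR above, simplified: drop everything after the last `"."` of a source-build version string so the install check sees a plain release number. (The real check presumably applies this only when a suffix is present.)

```python
raw_version = "2.1.0.dev612"      # reported by a pyarrow source build
base_version = raw_version.rsplit(".", 1)[0]
assert base_version == "2.1.0"    # now comparable against the minimum version
```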
963,229,173
2,768
`ArrowInvalid: Added column's length must match table's length.` after using `select`
closed
[ "Hi,\r\n\r\nthe `select` method creates an indices mapping and doesn't modify the underlying PyArrow table by default for better performance. To modify the underlying table after the `select` call, call `flatten_indices` on the dataset object as follows:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds =...
2021-08-07T13:17:29
2021-08-09T11:26:43
2021-08-09T11:26:43
## Describe the bug I would like to add a column to a downsampled dataset. However I get an error message saying the length don't match with the length of the unsampled dataset indicated. I suspect that the dataset size is not updated when calling `select`. ## Steps to reproduce the bug ```python from datasets im...
lvwerra
https://github.com/huggingface/datasets/issues/2768
null
false
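A sketch of the resolution from the discussion of the issue above: `select()` only creates an indices mapping, so materialize it with `flatten_indices()` before adding a column sized to the downsampled dataset.

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(10))})
small = ds.select(range(5)).flatten_indices()        # rewrite the underlying table
small = small.add_column("b", [10, 11, 12, 13, 14])  # lengths now match
print(len(small))  # 5
```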
963,002,120
2,767
equal operation to perform unbatch for huggingface datasets
closed
[ "Hi @lhoestq \r\nMaybe this is clearer to explain like this, currently map function, map one example to \"one\" modified one, lets assume we want to map one example to \"multiple\" examples, in which we do not know in advance how many examples they would be per each entry. I greatly appreciate telling me how I can ...
2021-08-06T19:45:52
2022-03-07T13:58:00
2022-03-07T13:58:00
Hi, I need to use an "unbatch" operation in tensorflow on a huggingface dataset. I could not find this operation; could you kindly direct me on how I can do it? Here is the problem I am trying to solve: I am considering the "record" dataset in SuperGlue and I need to replicate each entry of the dataset for each answer, to ma...
dorooddorood606
https://github.com/huggingface/datasets/issues/2767
null
false
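A sketch of the usual "unbatch" pattern in `datasets`, matching the resolution of the issue above: a batched `map` may return more rows than it receives, replicating each entry once per answer.

```python
from datasets import Dataset

ds = Dataset.from_dict({"query": ["q1", "q2"], "answers": [["a", "b"], ["c"]]})

def unbatch(batch):
    # Emit one output row per (query, answer) pair; the number of output rows
    # need not match the number of input rows when batched=True.
    out = {"query": [], "answer": []}
    for query, answers in zip(batch["query"], batch["answers"]):
        for answer in answers:
            out["query"].append(query)
            out["answer"].append(answer)
    return out

flat = ds.map(unbatch, batched=True, remove_columns=ds.column_names)
print(len(flat))  # 3 rows: (q1, a), (q1, b), (q2, c)
```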
962,994,198
2,766
fix typo (ShuffingConfig -> ShufflingConfig)
closed
[]
2021-08-06T19:31:40
2021-08-10T14:17:03
2021-08-10T14:17:02
pretty straightforward, it should be Shuffling instead of Shuffing
daleevans
https://github.com/huggingface/datasets/pull/2766
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2766", "html_url": "https://github.com/huggingface/datasets/pull/2766", "diff_url": "https://github.com/huggingface/datasets/pull/2766.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2766.patch", "merged_at": "2021-08-10T14:17...
true