Dataset schema (per-column dtype and observed value ranges; ⌀ marks a nullable column):

| column | dtype | observed range / values |
|---|---|---|
| id | int64 | 599M – 3.48B |
| number | int64 | 1 – 7.8k |
| title | string | lengths 1 – 290 |
| state | string | 2 classes |
| comments | list | lengths 0 – 30 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 – 2025-10-05 06:37:50 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 – 2025-10-05 10:32:43 |
| closed_at | timestamp[s] ⌀ | 2020-04-14 12:01:40 – 2025-10-01 13:56:03 |
| body | string ⌀ | lengths 0 – 228k |
| user | string | lengths 3 – 26 |
| html_url | string | lengths 46 – 51 |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |

Records (most recent first):
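The schema above describes one GitHub issue or pull-request record per row. As a minimal, hypothetical sketch of filtering such records (plain Python; the sample values below are invented for illustration, not rows copied from the dump):

```python
# Each record mirrors the column names from the schema above.
records = [
    {
        "number": 2760,
        "title": "Add Nuswide dataset",
        "state": "open",
        "closed_at": None,        # open issues have no close timestamp
        "is_pull_request": False,
    },
    {
        "number": 2758,
        "title": "Raise ManualDownloadError",
        "state": "closed",
        "closed_at": "2021-08-04T11:36:30",
        "is_pull_request": True,  # this row is a pull request, not an issue
    },
]

# Keep plain issues (not PRs) that are still open.
open_issues = [
    r for r in records
    if r["state"] == "open" and not r["is_pull_request"]
]
print([r["number"] for r in open_issues])  # -> [2760]
```

The same predicate (`state == "open"` and `is_pull_request` is false) applies when loading the full dataset with any dataframe or `datasets`-style library.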
#2765 BERTScore Error (issue, closed)
  id: 962861395 | user: gagan3012 | url: https://github.com/huggingface/datasets/issues/2765
  created: 2021-08-06T15:58:57 | updated: 2021-08-09T11:16:25 | closed: 2021-08-09T11:16:25
  comments: [ "Hi,\r\n\r\nThe `use_fast_tokenizer` argument has been recently added to the bert-score lib. I've opened a PR with the fix. In the meantime, you can try to downgrade the version of bert-score with the following command to make the code work:\r\n```\r\npip uninstall bert-score\r\npip install \"bert-score<0.3.10\"\r\...
  body: ## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python predictions = ["hello there", "general kenobi"] references = ["hello there", "general kenobi"] bert = load_metric('bertscore') bert.compute(predictions=predictions, references=references,lang='en') ...

#2764 Add DER metric for SUPERB speaker diarization task (pull request, closed; merged_at: null)
  id: 962554799 | user: albertvillanova | url: https://github.com/huggingface/datasets/pull/2764
  created: 2021-08-06T09:12:36 | updated: 2023-07-11T09:35:23 | closed: 2023-07-11T09:35:23
  comments: [ "Metrics are deprecated in `datasets` and `evaluate` should be used instead: https://github.com/huggingface/evaluate" ]
  body: null

#2763 English wikipedia datasets is not clean (issue, closed)
  id: 961895523 | user: lucadiliello | url: https://github.com/huggingface/datasets/issues/2763
  created: 2021-08-05T14:37:24 | updated: 2023-07-25T17:43:04 | closed: 2023-07-25T17:43:04
  comments: [ "Hi ! Certain users might need these data (for training or simply to explore/index the dataset).\r\n\r\nFeel free to implement a map function that gets rid of these paragraphs and process the wikipedia dataset with it before training" ]
  body: ## Describe the bug Wikipedia english dumps contain many wikipedia paragraphs like "References", "Category:" and "See Also" that should not be used for training. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset w = load_dataset('wikipedia', '20200501.e...

#2762 Add RVL-CDIP dataset (issue, closed)
  id: 961652046 | user: NielsRogge | url: https://github.com/huggingface/datasets/issues/2762
  created: 2021-08-05T09:57:05 | updated: 2022-04-21T17:15:41 | closed: 2022-04-21T17:15:41
  comments: [ "cc @nateraw ", "#self-assign", "[labels_only.tar.gz](https://docs.google.com/uc?authuser=0&id=0B0NKIRwUL9KYcXo3bV9LU0t3SGs&export=download) on the RVL-CDIP website does not work for me.\r\n\r\n> 404. That's an error. The requested URL was not found on this server.\r\n\r\nI contacted the author (Adam Harley) r...
  body: ## Adding a Dataset - **Name:** RVL-CDIP - **Description:** The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The image...

#2761 Error loading C4 realnewslike dataset (issue, closed)
  id: 961568287 | user: danshirron | url: https://github.com/huggingface/datasets/issues/2761
  created: 2021-08-05T08:16:58 | updated: 2021-08-08T19:44:34 | closed: 2021-08-08T19:44:34
  comments: [ "Hi @danshirron, \r\n`c4` was updated few days back by @lhoestq. The new configs are `['en', 'en.noclean', 'en.realnewslike', 'en.webtextlike'].` You'll need to remove any older version of this dataset you previously downloaded and then run `load_dataset` again with new configuration.", "@bhavitvyamalik @lhoestq ...
  body: ## Describe the bug Error loading C4 realnewslike dataset. Validation part mismatch ## Steps to reproduce the bug ```python raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir) ## Expected results success on data loading ## Actual results Downloading: 100%|███████████████████████...

#2760 Add Nuswide dataset (issue, open)
  id: 961372667 | user: shivangibithel | url: https://github.com/huggingface/datasets/issues/2760
  created: 2021-08-05T03:00:41 | updated: 2021-12-08T12:06:23 | closed: null
  comments: []
  body: ## Adding a Dataset - **Name:** *NUSWIDE* - **Description:** *[A Real-World Web Image Dataset from National University of Singapore](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/NUS-WIDE.html)* - **Paper:** *[here](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/nuswide-c...
#2758 Raise ManualDownloadError when loading a dataset that requires previous manual download (pull request, closed; merged 2021-08-04T11:36...
  id: 960206575 | user: albertvillanova | url: https://github.com/huggingface/datasets/pull/2758
  created: 2021-08-04T10:19:55 | updated: 2021-08-04T11:36:30 | closed: 2021-08-04T11:36:30
  comments: []
  body: This PR implements the raising of a `ManualDownloadError` when loading a dataset that requires previous manual download, and this is missing. The `ManualDownloadError` is raised whether the dataset is loaded in normal or streaming mode. Close #2749. cc: @severo

#2757 Unexpected type after `concatenate_datasets` (issue, closed)
  id: 959984081 | user: JulesBelveze | url: https://github.com/huggingface/datasets/issues/2757
  created: 2021-08-04T07:10:39 | updated: 2021-08-04T16:01:24 | closed: 2021-08-04T16:01:23
  comments: [ "Hi @JulesBelveze, thanks for your question.\r\n\r\nNote that 🤗 `datasets` internally store their data in Apache Arrow format.\r\n\r\nHowever, when accessing dataset columns, by default they are returned as native Python objects (lists in this case).\r\n\r\nIf you would like their columns to be returned in a more...
  body: ## Describe the bug I am trying to concatenate two `Dataset` using `concatenate_datasets` but it turns out that after concatenation the features are casted from `torch.Tensor` to `list`. It then leads to a weird tensors when trying to convert it to a `DataLoader`. However, if I use each `Dataset` separately everythi...

#2756 Fix metadata JSON for ubuntu_dialogs_corpus dataset (pull request, closed; merged 2021-08-04T09:43...
  id: 959255646 | user: albertvillanova | url: https://github.com/huggingface/datasets/pull/2756
  created: 2021-08-03T15:48:59 | updated: 2021-08-04T09:43:25 | closed: 2021-08-04T09:43:25
  comments: []
  body: Related to #2743.

#2755 Fix metadata JSON for turkish_movie_sentiment dataset (pull request, closed; merged 2021-08-04T09:06...
  id: 959115888 | user: albertvillanova | url: https://github.com/huggingface/datasets/pull/2755
  created: 2021-08-03T13:25:44 | updated: 2021-08-04T09:06:54 | closed: 2021-08-04T09:06:53
  comments: []
  body: Related to #2743.

#2754 Generate metadata JSON for telugu_books dataset (pull request, closed; merged 2021-08-04T08:49...
  id: 959105577 | user: albertvillanova | url: https://github.com/huggingface/datasets/pull/2754
  created: 2021-08-03T13:14:52 | updated: 2021-08-04T08:49:02 | closed: 2021-08-04T08:49:02
  comments: []
  body: Related to #2743.

#2753 Generate metadata JSON for reclor dataset (pull request, closed; merged 2021-08-04T08:07...
  id: 959036995 | user: albertvillanova | url: https://github.com/huggingface/datasets/pull/2753
  created: 2021-08-03T11:52:29 | updated: 2021-08-04T08:07:15 | closed: 2021-08-04T08:07:15
  comments: []
  body: Related to #2743.

#2752 Generate metadata JSON for lm1b dataset (pull request, closed; merged 2021-08-04T06:40...
  id: 959023608 | user: albertvillanova | url: https://github.com/huggingface/datasets/pull/2752
  created: 2021-08-03T11:34:56 | updated: 2021-08-04T06:40:40 | closed: 2021-08-04T06:40:39
  comments: []
  body: Related to #2743.

#2751 Update metadata for wikihow dataset (pull request, closed; merged 2021-08-03T15:52...
  id: 959021262 | user: albertvillanova | url: https://github.com/huggingface/datasets/pull/2751
  created: 2021-08-03T11:31:57 | updated: 2021-08-03T15:52:09 | closed: 2021-08-03T15:52:09
  comments: []
  body: Update metadata for wikihow dataset: - Remove leading new line character in description and citation - Update metadata JSON - Remove no longer necessary `urls_checksums/checksums.txt` file Related to #2748.
#2750 Second concatenation of datasets produces errors (issue, closed)
  id: 958984730 | user: Aktsvigun | url: https://github.com/huggingface/datasets/issues/2750
  created: 2021-08-03T10:47:04 | updated: 2022-01-19T14:23:43 | closed: 2022-01-19T14:19:05
  comments: [ "@albertvillanova ", "Hi @Aktsvigun, thanks for reporting.\r\n\r\nI'm investigating this.", "Hi @albertvillanova ,\r\nany update on this? Can I probably help in some way?", "Hi @Aktsvigun! We are planning to address this issue before our next release, in a couple of weeks at most. 😅 \r\n\r\nIn the meantime, ...
  body: Hi, I need to concatenate my dataset with others several times, and after I concatenate it for the second time, the features of features (e.g. tags names) are collapsed. This hinders, for instance, the usage of tokenize function with `data.map`. ``` from datasets import load_dataset, concatenate_datasets d...

#2749 Raise a proper exception when trying to stream a dataset that requires to manually download files (issue, closed)
  id: 958968748 | user: severo | url: https://github.com/huggingface/datasets/issues/2749
  created: 2021-08-03T10:26:27 | updated: 2021-08-09T08:53:35 | closed: 2021-08-04T11:36:30
  comments: [ "Hi @severo, thanks for reporting.\r\n\r\nAs discussed, datasets requiring manual download should be:\r\n- programmatically identifiable\r\n- properly handled with a clearer error message when trying to load them with streaming\r\n\r\nIn relation with programmatic identifiability, note that for datasets requir...
  body: ## Describe the bug At least for 'reclor', 'telugu_books', 'turkish_movie_sentiment', 'ubuntu_dialogs_corpus', 'wikihow', trying to `load_dataset` in streaming mode raises a `TypeError` without any detail about why it fails. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = ...

#2748 Generate metadata JSON for wikihow dataset (pull request, closed; merged 2021-08-03T10:17...
  id: 958889041 | user: albertvillanova | url: https://github.com/huggingface/datasets/pull/2748
  created: 2021-08-03T08:55:40 | updated: 2021-08-03T10:17:51 | closed: 2021-08-03T10:17:51
  comments: []
  body: Related to #2743.

#2747 add multi-proc in `to_json` (pull request, closed; merged 2021-09-13T13:56...
  id: 958867627 | user: bhavitvyamalik | url: https://github.com/huggingface/datasets/pull/2747
  created: 2021-08-03T08:30:13 | updated: 2021-10-19T18:24:21 | closed: 2021-09-13T13:56:37
  comments: [ "Thank you for working on this, @bhavitvyamalik \r\n\r\n10% is not solving the issue, we want 5-10x faster on a machine that has lots of resources, but limited processing time.\r\n\r\nSo let's benchmark it on an instance with many more cores, I can test with 12 on my dev box and 40 on JZ. \r\n\r\nCould you please s...
  body: Closes #2663. I've tried adding multiprocessing in `to_json`. Here's some benchmarking I did to compare the timings of the current version (say v1) and the multi-proc version (say v2). I did this with `cpu_count` 4 (2015 Macbook Air) 1. Dataset name: `ascent_kb` - 8.9M samples (all samples were used, reporting this for a si...

#2746 Cannot load `few-nerd` dataset (issue, closed)
  id: 958551619 | user: Mehrad0711 | url: https://github.com/huggingface/datasets/issues/2746
  created: 2021-08-02T22:18:57 | updated: 2021-11-16T08:51:34 | closed: 2021-08-03T19:45:43
  comments: [ "Hi @Mehrad0711,\r\n\r\nI'm afraid there is no \"canonical\" Hugging Face dataset named \"few-nerd\".\r\n\r\nThere are 2 kinds of datasets hosted at the Hugging Face Hub:\r\n- canonical datasets (their identifier contains no slash \"/\"): we, the Hugging Face team, supervise their implementation and we make sure th...
  body: ## Describe the bug Cannot load `few-nerd` dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset('few-nerd', 'supervised') ``` ## Actual results Executing above code will give the following error: ``` Using the latest cached version of the module from /Users...

#2745 added semeval18_emotion_classification dataset (pull request, closed; merged 2021-09-21T09:48...
  id: 958269579 | user: maxpel | url: https://github.com/huggingface/datasets/pull/2745
  created: 2021-08-02T15:39:55 | updated: 2021-10-29T09:22:05 | closed: 2021-09-21T09:48:35
  comments: [ "For training the multilabel classifier, I would combine the labels into a list, for example for the English dataset:\r\n\r\n```\r\ndfpre=pd.read_csv(path+\"2018-E-c-En-train.txt\",sep=\"\\t\")\r\ndfpre['list'] = dfpre[dfpre.columns[2:]].values.tolist()\r\ndf = dfpre[['Tweet', 'list']].copy()\r\ndf.rename(columns={...
  body: I added the dataset of SemEval 2018 Task 1 (Subtask 5) for emotion detection in three languages. ``` datasets-cli test datasets/semeval18_emotion_classification/ --save_infos --all_configs RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_semeval18_emotion_classification ...

#2744 Fix key by recreating metadata JSON for journalists_questions dataset (pull request, closed; merged 2021-08-03T09:25...
  id: 958146637 | user: albertvillanova | url: https://github.com/huggingface/datasets/pull/2744
  created: 2021-08-02T13:27:53 | updated: 2021-08-03T09:25:34 | closed: 2021-08-03T09:25:33
  comments: []
  body: Close #2743.

#2743 Dataset JSON is incorrect (issue, closed)
  id: 958119251 | user: severo | url: https://github.com/huggingface/datasets/issues/2743
  created: 2021-08-02T13:01:26 | updated: 2021-08-03T10:06:57 | closed: 2021-08-03T09:25:33
  comments: [ "As discussed, the metadata JSON files must be regenerated because the keys were not properly generated and they will not be read by the builder:\r\n> Indeed there is some problem/bug while reading the datasets_info.json file: there is a mismatch with the config.name keys in the file...\r\nIn the meanwhile, in orde...
  body: ## Describe the bug The JSON file generated for https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/journalists_questions.py is https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/dataset...
#2742 Improve detection of streamable file types (issue, closed)
  id: 958114064 | user: severo | url: https://github.com/huggingface/datasets/issues/2742
  created: 2021-08-02T12:55:09 | updated: 2021-11-12T17:18:10 | closed: 2021-11-12T17:18:10
  comments: [ "maybe we should rather attempt to download a `Range` from the server and see if it works?" ]
  body: **Is your feature request related to a problem? Please describe.** ```python from datasets import load_dataset_builder from datasets.utils.streaming_download_manager import StreamingDownloadManager builder = load_dataset_builder("journalists_questions", name="plain_text") builder._split_generators(StreamingDownl...

#2741 Add Hypersim dataset (issue, open)
  id: 957979559 | user: osanseviero | url: https://github.com/huggingface/datasets/issues/2741
  created: 2021-08-02T10:06:50 | updated: 2021-12-08T12:06:51 | closed: null
  comments: []
  body: ## Adding a Dataset - **Name:** Hypersim - **Description:** photorealistic synthetic dataset for holistic indoor scene understanding - **Paper:** *link to the dataset paper if available* - **Data:** https://github.com/apple/ml-hypersim Instructions to add a new dataset can be found [here](https://github.com/hugg...

#2740 Update release instructions (pull request, closed; merged 2021-08-02T14:39...
  id: 957911035 | user: albertvillanova | url: https://github.com/huggingface/datasets/pull/2740
  created: 2021-08-02T08:46:00 | updated: 2021-08-02T14:39:56 | closed: 2021-08-02T14:39:56
  comments: []
  body: Update release instructions.

#2739 Pass tokenize to sacrebleu only if explicitly passed by user (pull request, closed; merged 2021-08-03T04:23...
  id: 957751260 | user: albertvillanova | url: https://github.com/huggingface/datasets/pull/2739
  created: 2021-08-02T05:09:05 | updated: 2021-08-03T04:23:37 | closed: 2021-08-03T04:23:37
  comments: []
  body: Next `sacrebleu` release (v2.0.0) will remove `sacrebleu.DEFAULT_TOKENIZER`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR passes `tokenize` to `sacrebleu` only if explicitly passed by the user, otherwise it will not pass it (a...

#2738 Sunbird AI Ugandan low resource language dataset (pull request, closed; merged_at: null)
  id: 957517746 | user: ak3ra | url: https://github.com/huggingface/datasets/pull/2738
  created: 2021-08-01T15:18:00 | updated: 2022-10-03T09:37:30 | closed: 2022-10-03T09:37:30
  comments: [ "Hi @ak3ra , have you had a chance to take my comments into account?\r\n\r\nLet me know if you have questions or if I can help :)", "@lhoestq Working on this, thanks for the detailed review :) ", "Hi ! Cool thanks :)\r\nFeel free to merge master into your branch to fix the CI issues\r\n\r\nLet me know if you ...
  body: Multi-way parallel text corpus of 5 key Ugandan languages for the task of machine translation.

#2737 SacreBLEU update (issue, closed)
  id: 957124881 | user: devrimcavusoglu | url: https://github.com/huggingface/datasets/issues/2737
  created: 2021-07-30T23:53:08 | updated: 2021-09-22T10:47:41 | closed: 2021-08-03T04:23:37
  comments: [ "Hi @devrimcavusoglu, \r\nI tried your code with the latest version of `datasets` and `sacrebleu==1.5.1` and it's running fine after changing one small thing:\r\n```\r\nsacrebleu = datasets.load_metric('sacrebleu')\r\npredictions = [\"It is a guide to action which ensures that the military always obeys the commands of t...
  body: With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken and raises the error: AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER'. This happens since in the new version of sacrebleu there is no `DEFAULT_TOKENIZER`, but sacrebleu.py tries...

#2736 Add Microsoft Building Footprints dataset (issue, open)
  id: 956895199 | user: albertvillanova | url: https://github.com/huggingface/datasets/issues/2736
  created: 2021-07-30T16:17:08 | updated: 2021-12-08T12:09:03 | closed: null
  comments: [ "Motivation: this can be a useful dataset for researchers working on climate change adaptation, urban studies, geography, etc. I'll see if I can figure out how to add it!" ]
  body: ## Adding a Dataset - **Name:** Microsoft Building Footprints - **Description:** With the goal to increase the coverage of building footprint data available as open data for OpenStreetMap and humanitarian efforts, we have released millions of building footprints as open data available to download free of charge. - *...

#2735 Add Open Buildings dataset (issue, open)
  id: 956889365 | user: albertvillanova | url: https://github.com/huggingface/datasets/issues/2735
  created: 2021-07-30T16:08:39 | updated: 2021-07-31T05:01:25 | closed: null
  comments: []
  body: ## Adding a Dataset - **Name:** Open Buildings - **Description:** A dataset of building footprints to support social good applications. Building footprints are useful for a range of important applications, from population estimation, urban planning and humanitarian response, to environmental and climate science....
#2734 Update BibTeX entry (pull request, closed; merged 2021-07-30T15:47...
  id: 956844874 | user: albertvillanova | url: https://github.com/huggingface/datasets/pull/2734
  created: 2021-07-30T15:22:51 | updated: 2021-07-30T15:47:58 | closed: 2021-07-30T15:47:58
  comments: []
  body: Update BibTeX entry.

#2733 Add missing parquet known extension (pull request, closed; merged 2021-07-30T13:24...
  id: 956725476 | user: lhoestq | url: https://github.com/huggingface/datasets/pull/2733
  created: 2021-07-30T13:01:20 | updated: 2021-07-30T13:24:31 | closed: 2021-07-30T13:24:30
  comments: []
  body: This code was failing because the parquet extension wasn't recognized: ```python from datasets import load_dataset base_url = "https://storage.googleapis.com/huggingface-nlp/cache/datasets/wikipedia/20200501.en/1.0.0/" data_files = {"train": base_url + "wikipedia-train.parquet"} wiki = load_dataset("parquet", da...

#2732 Updated TTC4900 Dataset (pull request, closed; merged 2021-07-30T15:58...
  id: 956676360 | user: yavuzKomecoglu | url: https://github.com/huggingface/datasets/pull/2732
  created: 2021-07-30T11:52:14 | updated: 2021-07-30T16:00:51 | closed: 2021-07-30T15:58:14
  comments: [ "@lhoestq, could you please review this PR? [translated from Turkish]", "> Thanks ! This looks all good now :)\r\n\r\nThanks" ]
  body: - The source address of the TTC4900 dataset of [@savasy](https://github.com/savasy) has been updated for direct download. - Updated readme.

#2731 Adding to_tf_dataset method (pull request, closed; merged 2021-09-16T13:50...
  id: 956087452 | user: Rocketknight1 | url: https://github.com/huggingface/datasets/pull/2731
  created: 2021-07-29T18:10:25 | updated: 2021-09-16T13:50:54 | closed: 2021-09-16T13:50:54
  comments: [ "This seems to be working reasonably well in testing, and performance is way better. `tf.py_function` has been dropped for an input generator, but I moved as much of the code as possible outside the generator to allow TF to compile it correctly. I also avoid `tf.RaggedTensor` at all costs, and do the shuffle in the...
  body: Oh my **god** do not merge this yet, it's just a draft. I've added a method (via a mixin) to the `arrow_dataset.Dataset` class that automatically converts our Dataset classes to TF Dataset classes ready for training. It hopefully has most of the features we want, including streaming from disk (no need to load the wh...

#2730 Update CommonVoice with new release (issue, open)
  id: 955987834 | user: yjernite | url: https://github.com/huggingface/datasets/issues/2730
  created: 2021-07-29T15:59:59 | updated: 2021-08-07T16:19:19 | closed: null
  comments: [ "cc @patrickvonplaten?", "Does anybody know if there is a bundled link, which would allow direct data download instead of manual? \r\nSomething similar to: `https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/ab.tar.gz` ? cc @patil-suraj \r\n", "Also see...
  body: ## Adding a Dataset - **Name:** CommonVoice mid-2021 release - **Description:** more data in CommonVoice: Languages that have increased the most by percentage are Thai (almost 20x growth, from 12 hours to 250 hours), Luganda (almost 9x growth, from 8 to 80), Esperanto (7x growth, from 100 to 840), and Tamil (almost 8...

#2729 Fix IndexError while loading Arabic Billion Words dataset (pull request, closed; merged 2021-07-30T13:03...
  id: 955920489 | user: albertvillanova | url: https://github.com/huggingface/datasets/pull/2729
  created: 2021-07-29T14:47:02 | updated: 2021-07-30T13:03:55 | closed: 2021-07-30T13:03:55
  comments: []
  body: Catch `IndexError` and ignore that record. Close #2727.

#2728 Concurrent use of same dataset (already downloaded) (issue, open)
  id: 955892970 | user: PierreColombo | url: https://github.com/huggingface/datasets/issues/2728
  created: 2021-07-29T14:18:38 | updated: 2021-08-02T07:25:57 | closed: null
  comments: [ "Launching simultaneous jobs relying on the same dataset triggers some writing issue. I guess it is unexpected since I only need to load some already downloaded file.", "If I have two jobs that use the same dataset I get:\r\n\r\n\r\n File \"compute_measures.py\", line 181, in <module>\r\n train_loader, val_loade...
  body: ## Describe the bug Launching several jobs at the same time loading the same dataset triggers some errors, see (last comments). ## Steps to reproduce the bug export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets for MODEL in "bert-base-uncased" "roberta-base" "distilbert-base-cased"; do # "bert-base-uncased" ...

#2727 Error in loading the Arabic Billion Words Corpus (issue, closed)
  id: 955812149 | user: M-Salti | url: https://github.com/huggingface/datasets/issues/2727
  created: 2021-07-29T12:53:09 | updated: 2021-07-30T13:03:55 | closed: 2021-07-30T13:03:55
  comments: [ "I modified the dataset loading script to catch the `IndexError` and inspect the records at which the error is happening, and I found this:\r\nFor the `Techreen` config, the error happens in 36 records when trying to find the `Text` or `Dateline` tags. All these 36 records look something like:\r\n```\r\n<Techreen>\...
  body: ## Describe the bug I get `IndexError: list index out of range` when trying to load the `Techreen` and `Almustaqbal` configs of the dataset. ## Steps to reproduce the bug ```python load_dataset("arabic_billion_words", "Techreen") load_dataset("arabic_billion_words", "Almustaqbal") ``` ## Expected results Th...
955,674,388
2,726
Typo fix `tokenize_exemple`
closed
[]
2021-07-29T10:03:37
2021-07-29T12:00:25
2021-07-29T12:00:25
There is a small typo in the main README.md
shabie
https://github.com/huggingface/datasets/pull/2726
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2726", "html_url": "https://github.com/huggingface/datasets/pull/2726", "diff_url": "https://github.com/huggingface/datasets/pull/2726.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2726.patch", "merged_at": "2021-07-29T12:00...
true
955,020,776
2,725
Pass use_auth_token to request_etags
closed
[]
2021-07-28T16:13:29
2021-07-28T16:38:02
2021-07-28T16:38:02
Fix #2724.
albertvillanova
https://github.com/huggingface/datasets/pull/2725
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2725", "html_url": "https://github.com/huggingface/datasets/pull/2725", "diff_url": "https://github.com/huggingface/datasets/pull/2725.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2725.patch", "merged_at": "2021-07-28T16:38...
true
954,919,607
2,724
404 Error when loading remote data files from private repo
closed
[ "I guess the issue is when computing the ETags of the remote files. Indeed `use_auth_token` must be passed to `request_etags` here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/35b5e4bc0cb2ed896e40f3eb2a4aa3de1cb1a6c5/src/datasets/builder.py#L160-L160", "Yes, I remember having properly implemented that: \r...
2021-07-28T14:24:23
2021-07-29T04:58:49
2021-07-28T16:38:01
## Describe the bug When loading remote data files from a private repo, a 404 error is raised. ## Steps to reproduce the bug ```python url = hf_hub_url("lewtun/asr-preds-test", "preds.jsonl", repo_type="dataset") dset = load_dataset("json", data_files=url, use_auth_token=True) # HTTPError: 404 Client Error: Not...
albertvillanova
https://github.com/huggingface/datasets/issues/2724
null
false
954,864,104
2,723
Fix en subset by modifying dataset_info with correct validation infos
closed
[]
2021-07-28T13:36:19
2021-07-28T15:22:23
2021-07-28T15:22:23
- Related to: #2682 We correct the values of `en` subset concerning the expected validation values (both `num_bytes` and `num_examples`. Instead of having: `{"name": "validation", "num_bytes": 828589180707, "num_examples": 364868892, "dataset_name": "c4"}` We replace with correct values: `{"name": "vali...
thomasw21
https://github.com/huggingface/datasets/pull/2723
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2723", "html_url": "https://github.com/huggingface/datasets/pull/2723", "diff_url": "https://github.com/huggingface/datasets/pull/2723.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2723.patch", "merged_at": "2021-07-28T15:22...
true
954,446,053
2,722
Missing cache file
closed
[ "This could be solved by going to the glue/ directory and delete sst2 directory, then load the dataset again will help you redownload the dataset.", "Hi ! Not sure why this file was missing, but yes the way to fix this is to delete the sst2 directory and to reload the dataset" ]
2021-07-28T03:52:07
2022-03-21T08:27:51
2022-03-21T08:27:51
Strangely missing cache file after I restart my program again. `glue_dataset = datasets.load_dataset('glue', 'sst2')` `FileNotFoundError: [Errno 2] No such file or directory: /Users/chris/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96d6053ad/dataset_info.json...
PosoSAgapo
https://github.com/huggingface/datasets/issues/2722
null
false
954,238,230
2,721
Deal with the bad check in test_load.py
closed
[ "Hi ! I did a change for this test already in #2662 :\r\n\r\nhttps://github.com/huggingface/datasets/blob/00686c46b7aaf6bfcd4102cec300a3c031284a5a/tests/test_load.py#L312-L316\r\n\r\n(though I have to change the variable name `m_combined_path` to `m_url` or something)\r\n\r\nI guess it's ok to remove this check for...
2021-07-27T20:23:23
2021-07-28T09:58:34
2021-07-28T08:53:18
This PR removes a check that was added in #2684. My intention with this check was to capture a URL in the error message, but instead, it captures a substring of the previous regex match in the test function. Another option would be to replace this check with: ```python m_paths = re.findall(r"\S*_dummy/_dummy.py\b...
mariosasko
https://github.com/huggingface/datasets/pull/2721
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2721", "html_url": "https://github.com/huggingface/datasets/pull/2721", "diff_url": "https://github.com/huggingface/datasets/pull/2721.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2721.patch", "merged_at": "2021-07-28T08:53...
true
954,024,426
2,720
fix: ๐Ÿ› fix two typos
closed
[]
2021-07-27T15:50:17
2021-07-27T18:38:17
2021-07-27T18:38:16
severo
https://github.com/huggingface/datasets/pull/2720
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2720", "html_url": "https://github.com/huggingface/datasets/pull/2720", "diff_url": "https://github.com/huggingface/datasets/pull/2720.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2720.patch", "merged_at": "2021-07-27T18:38...
true
953,932,416
2,719
Use ETag in streaming mode to detect resource updates
open
[]
2021-07-27T14:17:09
2021-10-22T09:36:08
null
**Is your feature request related to a problem? Please describe.** I want to cache data I generate from processing a dataset I've loaded in streaming mode, but I've currently no way to know if the remote data has been updated or not, thus I don't know when to invalidate my cache. **Describe the solution you'd lik...
severo
https://github.com/huggingface/datasets/issues/2719
null
false
953,360,663
2,718
New documentation structure
closed
[ "I just did some minor changes + added some content in these sections: share, about arrow, about cache\r\n\r\nFeel free to mark this PR as ready for review ! :)", "I just separated the `Share` How-to page into three pages: share, dataset_script and dataset_card.\r\n\r\nThis way in the share page we can explain in...
2021-07-26T23:15:13
2021-09-13T17:20:53
2021-09-13T17:20:52
Organize Datasets documentation into four documentation types to improve clarity and discoverability of content. **Content to add in the very short term (feel free to add anything I'm missing):** - A discussion on why Datasets uses Arrow that includes some context and background about why we use Arrow. Would also b...
stevhliu
https://github.com/huggingface/datasets/pull/2718
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2718", "html_url": "https://github.com/huggingface/datasets/pull/2718", "diff_url": "https://github.com/huggingface/datasets/pull/2718.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2718.patch", "merged_at": "2021-09-13T17:20...
true
952,979,976
2,717
Fix shuffle on IterableDataset that disables batching in case any functions were mapped
closed
[]
2021-07-26T14:42:22
2021-07-26T18:04:14
2021-07-26T16:30:06
Made a very minor change to fix issue #2716: added the missing argument in the constructor call. As discussed in the bug report, the change is made to prevent the `shuffle` method call from resetting the value of the `batched` attribute in `MappedExamplesIterable`. Fix #2716.
amankhandelia
https://github.com/huggingface/datasets/pull/2717
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2717", "html_url": "https://github.com/huggingface/datasets/pull/2717", "diff_url": "https://github.com/huggingface/datasets/pull/2717.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2717.patch", "merged_at": "2021-07-26T16:30...
true
952,902,778
2,716
Calling shuffle on IterableDataset will disable batching in case any functions were mapped
closed
[ "Hi :) Good catch ! Feel free to open a PR if you want to contribute, this would be very welcome ;)", "Have raised the PR [here](https://github.com/huggingface/datasets/pull/2717)", "Fixed by #2717." ]
2021-07-26T13:24:59
2021-07-26T18:04:43
2021-07-26T18:04:43
When using a dataset in streaming mode, if one applies the `shuffle` method on the dataset and a `map` method for which `batched=True`, then the batching operation will not happen; instead `batched` will be set to `False`. I did RCA on the dataset codebase; the problem is emerging from [this line of code](https://github.com/h...
amankhandelia
https://github.com/huggingface/datasets/issues/2716
null
false
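The root cause described in this record — a wrapper iterable re-instantiated without forwarding the `batched` flag — can be illustrated with a minimal mock. The class below is a simplified stand-in for the library's internal `MappedExamplesIterable`, not the actual implementation:

```python
class MappedExamplesIterable:
    """Simplified stand-in for the internal streaming-map wrapper."""

    def __init__(self, iterable, function, batched=False):
        self.iterable = iterable
        self.function = function
        self.batched = batched  # must survive any re-wrapping

    def shuffle(self):
        # The bug: re-instantiating without `batched` silently reset it to False.
        # The fix (PR #2717) is to forward the attribute explicitly, as done here.
        return MappedExamplesIterable(self.iterable, self.function, batched=self.batched)


ex = MappedExamplesIterable(range(10), lambda batch: batch, batched=True)
print(ex.shuffle().batched)  # True after the fix
```

The one-line fix is simply passing `batched=self.batched` in the constructor call inside `shuffle`.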
952,845,229
2,715
Update PAN-X data URL in XTREME dataset
closed
[ "Merging since the CI is just about missing infos in the dataset card" ]
2021-07-26T12:21:17
2021-07-26T13:27:59
2021-07-26T13:27:59
Related to #2710, #2691.
albertvillanova
https://github.com/huggingface/datasets/pull/2715
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2715", "html_url": "https://github.com/huggingface/datasets/pull/2715", "diff_url": "https://github.com/huggingface/datasets/pull/2715.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2715.patch", "merged_at": "2021-07-26T13:27...
true
952,580,820
2,714
add more precise information for size
open
[ "We already have this information in the dataset_infos.json files of each dataset.\r\nMaybe we can parse these files in the backend to return their content with the endpoint at huggingface.co/api/datasets\r\n\r\nFor now if you want to access this info you have to load the json for each dataset. For example:\r\n- fo...
2021-07-26T07:11:03
2021-07-26T09:16:25
null
For the import into ELG, we would like a more precise description of the size of the dataset, instead of the current size categories. The size can be expressed in bytes, or any other preferred size unit. As suggested in the slack channel, perhaps this could be computed with a regex for existing datasets.
pennyl67
https://github.com/huggingface/datasets/issues/2714
null
false
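As the comment in this thread suggests, the exact size is already available in each dataset's `dataset_infos.json`. A minimal sketch of summing the per-split byte and example counts, using a made-up metadata snippet shaped like those files:

```python
# Hypothetical excerpt of a dataset_infos.json "splits" section
infos = {
    "splits": {
        "train": {"num_bytes": 828589180707, "num_examples": 364868892},
        "validation": {"num_bytes": 825767266, "num_examples": 364608},
    }
}

# Total the per-split values to get a precise dataset size in bytes
total_bytes = sum(split["num_bytes"] for split in infos["splits"].values())
total_examples = sum(split["num_examples"] for split in infos["splits"].values())
print(total_bytes, total_examples)
```

The same aggregation could be done server-side so the API at huggingface.co/api/datasets exposes the exact size alongside the size categories.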
952,515,256
2,713
Enumerate all ner_tags values in WNUT 17 dataset
closed
[]
2021-07-26T05:22:16
2021-07-26T09:30:55
2021-07-26T09:30:55
This PR does: - Enumerate all ner_tags in dataset card Data Fields section - Add all metadata tags to dataset card Close #2709.
albertvillanova
https://github.com/huggingface/datasets/pull/2713
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2713", "html_url": "https://github.com/huggingface/datasets/pull/2713", "diff_url": "https://github.com/huggingface/datasets/pull/2713.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2713.patch", "merged_at": "2021-07-26T09:30...
true
951,723,326
2,710
Update WikiANN data URL
closed
[ "We have to update the URL in the XTREME benchmark as well:\r\n\r\nhttps://github.com/huggingface/datasets/blob/0dfc639cec450ed8762a997789a2ed63e63cdcf2/datasets/xtreme/xtreme.py#L411-L411\r\n\r\n" ]
2021-07-23T16:29:21
2021-07-26T09:34:23
2021-07-26T09:34:23
WikiANN data source URL is no longer accessible: 404 error from Dropbox. We have decided to host it at Hugging Face. This PR updates the data source URL, the metadata JSON file and the dataset card. Close #2691.
albertvillanova
https://github.com/huggingface/datasets/pull/2710
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2710", "html_url": "https://github.com/huggingface/datasets/pull/2710", "diff_url": "https://github.com/huggingface/datasets/pull/2710.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2710.patch", "merged_at": "2021-07-26T09:34...
true
951,534,757
2,709
Missing documentation for wnut_17 (ner_tags)
closed
[ "Hi @maxpel, thanks for reporting this issue.\r\n\r\nIndeed, the documentation in the dataset card is not complete. I'm opening a Pull Request to fix it.\r\n\r\nAs the paper explains, there are 6 entity types and we have ordered them alphabetically: `corporation`, `creative-work`, `group`, `location`, `person` and ...
2021-07-23T12:25:32
2021-07-26T09:30:55
2021-07-26T09:30:55
On the info page of the wnut_17 data set (https://huggingface.co/datasets/wnut_17), the model output of ner-tags is only documented for these 5 cases: `ner_tags: a list of classification labels, with possible values including O (0), B-corporation (1), I-corporation (2), B-creative-work (3), I-creative-work (4).` ...
maxpel
https://github.com/huggingface/datasets/issues/2709
null
false
951,092,660
2,708
QASC: incomplete training set
closed
[ "Hi @danyaljj, thanks for reporting.\r\n\r\nUnfortunately, I have not been able to reproduce your problem. My train split has 8134 examples:\r\n```ipython\r\nIn [10]: ds[\"train\"]\r\nOut[10]:\r\nDataset({\r\n features: ['id', 'question', 'choices', 'answerKey', 'fact1', 'fact2', 'combinedfact', 'formatted_quest...
2021-07-22T21:59:44
2021-07-23T13:30:07
2021-07-23T13:30:07
## Describe the bug The training instances are not loaded properly. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("qasc", script_version='1.10.2') def load_instances(split): instances = dataset[split] print(f"split: {split} - size: {len(instanc...
danyaljj
https://github.com/huggingface/datasets/issues/2708
null
false
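The sanity check run in the comments — comparing the loaded split sizes against the expected counts — can be sketched generically. The helper below mimics the spirit of the library's `NonMatchingSplitsSizesError` check; the train count is the one reported in this thread:

```python
def check_split_sizes(actual, expected):
    """Raise if any split's example count differs from the expected one."""
    mismatched = {
        name: {"expected": expected[name], "actual": actual.get(name)}
        for name in expected
        if actual.get(name) != expected[name]
    }
    if mismatched:
        raise ValueError(f"Non-matching split sizes: {mismatched}")


# The comment above reports 8134 training examples for qasc
check_split_sizes(actual={"train": 8134}, expected={"train": 8134})
print("ok")
```

When a mismatch like the one in this issue occurs, the raised error shows both counts side by side, which makes it easy to spot a truncated download.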
950,812,945
2,707
404 Not Found Error when loading LAMA dataset
closed
[ "Hi @dwil2444! I was able to reproduce your error when I downgraded to v1.1.2. Updating to the latest version of Datasets fixed the error for me :)", "Hi @dwil2444, thanks for reporting.\r\n\r\nCould you please confirm which `datasets` version you were using and if the problem persists after you update it to the ...
2021-07-22T15:52:33
2021-07-26T14:29:07
2021-07-26T14:29:07
The [LAMA](https://huggingface.co/datasets/viewer/?dataset=lama) probing dataset is not available for download: Steps to Reproduce: 1. `from datasets import load_dataset` 2. `dataset = load_dataset('lama', 'trex')`. Results: `FileNotFoundError: Couldn't find file locally at lama/lama.py, or remotely ...
dwil2444
https://github.com/huggingface/datasets/issues/2707
null
false
950,606,561
2,706
Update BibTeX entry
closed
[]
2021-07-22T12:29:29
2021-07-22T12:43:00
2021-07-22T12:43:00
Update BibTeX entry.
albertvillanova
https://github.com/huggingface/datasets/pull/2706
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2706", "html_url": "https://github.com/huggingface/datasets/pull/2706", "diff_url": "https://github.com/huggingface/datasets/pull/2706.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2706.patch", "merged_at": "2021-07-22T12:43...
true
950,488,583
2,705
404 not found error on loading WIKIANN dataset
closed
[ "Hi @ronbutan, thanks for reporting.\r\n\r\nYou are right: we have recently found that the link to the original PAN-X dataset (also called WikiANN), hosted at Dropbox, is no longer working.\r\n\r\nWe have opened an issue in the GitHub repository of the original dataset (afshinrahimi/mmner#4) and we have also contac...
2021-07-22T09:55:50
2021-07-23T08:07:32
2021-07-23T08:07:32
## Describe the bug Unable to retrieve the wikiann English dataset ## Steps to reproduce the bug ```python from datasets import list_datasets, load_dataset, list_metrics, load_metric WIKIANN = load_dataset("wikiann","en") ``` ## Expected results Colab notebook should display successful download status ## Act...
ronbutan
https://github.com/huggingface/datasets/issues/2705
null
false
950,483,980
2,704
Fix pick default config name message
closed
[]
2021-07-22T09:49:43
2021-07-22T10:02:41
2021-07-22T10:02:40
The error message to tell which config name to load is not displayed. This is because in the code it was considering the config kwargs to be non-empty, which is a special case for custom configs created on the fly. It appears after this change: https://github.com/huggingface/datasets/pull/2659 I fixed that by ma...
lhoestq
https://github.com/huggingface/datasets/pull/2704
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2704", "html_url": "https://github.com/huggingface/datasets/pull/2704", "diff_url": "https://github.com/huggingface/datasets/pull/2704.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2704.patch", "merged_at": "2021-07-22T10:02...
true
950,482,284
2,703
Bad message when config name is missing
closed
[]
2021-07-22T09:47:23
2021-07-22T10:02:40
2021-07-22T10:02:40
When loading a dataset that have several configurations, we expect to see an error message if the user doesn't specify a config name. However in `datasets` 1.10.0 and 1.10.1 it doesn't show the right message: ```python import datasets datasets.load_dataset("glue") ``` raises ```python AttributeError: 'Bui...
lhoestq
https://github.com/huggingface/datasets/issues/2703
null
false
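The intended behaviour restored by the fix above — an explicit error listing the available configs when none is specified — can be sketched like this. The function is a simplified stand-in, not the library's actual code path:

```python
def resolve_config_name(name, builder_configs, config_kwargs):
    """Pick a config name, or fail loudly when the choice is ambiguous."""
    if name is not None:
        return name
    if config_kwargs:
        # Custom config created on the fly: the special case noted in the PR,
        # where kwargs define the config and no name is required
        return None
    if len(builder_configs) == 1:
        return builder_configs[0]
    raise ValueError(
        "Config name is missing.\n"
        f"Please pick one among the available configs: {builder_configs}"
    )


try:
    resolve_config_name(None, ["cola", "sst2", "mrpc"], {})
except ValueError as err:
    print(err)
```

The regression came from treating `config_kwargs` as non-empty even when no custom kwargs were passed, which skipped the `ValueError` branch entirely.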
950,448,159
2,702
Update BibTeX entry
closed
[]
2021-07-22T09:04:39
2021-07-22T09:17:39
2021-07-22T09:17:38
Update BibTeX entry.
albertvillanova
https://github.com/huggingface/datasets/pull/2702
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2702", "html_url": "https://github.com/huggingface/datasets/pull/2702", "diff_url": "https://github.com/huggingface/datasets/pull/2702.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2702.patch", "merged_at": "2021-07-22T09:17...
true
950,422,403
2,701
Fix download_mode docstrings
closed
[]
2021-07-22T08:30:25
2021-07-22T09:33:31
2021-07-22T09:33:31
Fix `download_mode` docstrings.
albertvillanova
https://github.com/huggingface/datasets/pull/2701
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2701", "html_url": "https://github.com/huggingface/datasets/pull/2701", "diff_url": "https://github.com/huggingface/datasets/pull/2701.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2701.patch", "merged_at": "2021-07-22T09:33...
true
950,276,325
2,700
from datasets import Dataset is failing
closed
[ "Hi @kswamy15, thanks for reporting.\r\n\r\nWe are fixing this critical issue and making an urgent patch release of the `datasets` library today.\r\n\r\nIn the meantime, you can circumvent this issue by updating the `tqdm` library: `!pip install -U tqdm`" ]
2021-07-22T03:51:23
2021-07-22T07:23:45
2021-07-22T07:09:07
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import Dataset ``` ## Expected results A clear and concise description of the expected results. ## Actual results Specify the actual results or...
kswamy15
https://github.com/huggingface/datasets/issues/2700
null
false
950,221,226
2,699
cannot combine splits merging and streaming?
open
[ "Hi ! That's missing indeed. We'll try to implement this for the next version :)\r\n\r\nI guess we just need to implement #2564 first, and then we should be able to add support for splits combinations", "is there an update on this? ran into the same issue on 2.17.1.\r\n\r\nOn a similar note, the keyword `split=\"...
2021-07-22T01:13:25
2024-04-08T13:26:46
null
this does not work: `dataset = datasets.load_dataset('mc4','iw',split='train+validation',streaming=True)` with error: `ValueError: Bad split: train+validation. Available splits: ['train', 'validation']` these work: `dataset = datasets.load_dataset('mc4','iw',split='train+validation')` `dataset = datasets.load_d...
eyaler
https://github.com/huggingface/datasets/issues/2699
null
false
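Until split combinations are supported in streaming mode, one possible workaround is to load each split separately and chain them manually. This is sketched here with plain generators standing in for the streaming datasets returned by `load_dataset(..., streaming=True)`:

```python
from itertools import chain, islice

# Stand-ins for load_dataset('mc4', 'iw', split='train', streaming=True)
# and its 'validation' counterpart
train_stream = ({"text": f"train-{i}"} for i in range(3))
validation_stream = ({"text": f"valid-{i}"} for i in range(2))

# Iterate over train first, then validation, without materializing anything
combined = chain(train_stream, validation_stream)
print([ex["text"] for ex in islice(combined, 5)])
# → ['train-0', 'train-1', 'train-2', 'valid-0', 'valid-1']
```

Note this yields the splits sequentially; interleaving them would need a different strategy (e.g. round-robin over the two iterators).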
950,159,867
2,698
Ignore empty batch when writing
closed
[]
2021-07-21T22:35:30
2021-07-26T14:56:03
2021-07-26T13:25:26
This prevents a schema update with unknown column types, as reported in #2644. This is my first attempt at fixing the issue. I tested the following: - First batch returned by a batched map operation is empty. - An intermediate batch is empty. - `python -m unittest tests.test_arrow_writer` passes. However, `ar...
pcuenca
https://github.com/huggingface/datasets/pull/2698
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2698", "html_url": "https://github.com/huggingface/datasets/pull/2698", "diff_url": "https://github.com/huggingface/datasets/pull/2698.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2698.patch", "merged_at": "2021-07-26T13:25...
true
950,021,623
2,697
Fix import on Colab
closed
[ "@lhoestq @albertvillanova - It might be a good idea to have a patch release after this gets merged (presumably tomorrow morning when you're around). The Colab issue linked to this PR is a pretty big blocker. " ]
2021-07-21T19:03:38
2021-07-22T07:09:08
2021-07-22T07:09:07
Fix #2695, fix #2700.
nateraw
https://github.com/huggingface/datasets/pull/2697
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2697", "html_url": "https://github.com/huggingface/datasets/pull/2697", "diff_url": "https://github.com/huggingface/datasets/pull/2697.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2697.patch", "merged_at": "2021-07-22T07:09...
true
949,901,726
2,696
Add support for disable_progress_bar on Windows
closed
[ "The CI failure seems unrelated to this PR (probably has something to do with Transformers)." ]
2021-07-21T16:34:53
2021-07-26T13:31:14
2021-07-26T09:38:37
This PR is a continuation of #2667 and adds support for `utils.disable_progress_bar()` on Windows when using multiprocessing. This [answer](https://stackoverflow.com/a/6596695/14095927) on SO explains nicely why the current approach (with calling `utils.is_progress_bar_enabled()` inside `Dataset._map_single`) would ...
mariosasko
https://github.com/huggingface/datasets/pull/2696
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2696", "html_url": "https://github.com/huggingface/datasets/pull/2696", "diff_url": "https://github.com/huggingface/datasets/pull/2696.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2696.patch", "merged_at": "2021-07-26T09:38...
true
949,864,823
2,695
Cannot import load_dataset on Colab
closed
[ "I'm facing the same issue on Colab today too.\r\n\r\n```\r\nModuleNotFoundError Traceback (most recent call last)\r\n<ipython-input-4-5833ac0f5437> in <module>()\r\n 3 \r\n 4 from ray import tune\r\n----> 5 from datasets import DatasetDict, Dataset\r\n 6 from datasets import lo...
2021-07-21T15:52:51
2021-07-22T07:26:25
2021-07-22T07:09:07
## Describe the bug Got a "tqdm concurrent module not found" error when importing load_dataset from datasets. ## Steps to reproduce the bug Here is a [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error On colab: ```python !pip install dataset...
bayartsogt-ya
https://github.com/huggingface/datasets/issues/2695
null
false
949,844,722
2,694
fix: ๐Ÿ› change string format to allow copy/paste to work in bash
closed
[]
2021-07-21T15:30:40
2021-07-22T10:41:47
2021-07-22T10:41:47
Before: copy/paste resulted in an error because the square bracket characters `[]` are special characters in bash
severo
https://github.com/huggingface/datasets/pull/2694
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2694", "html_url": "https://github.com/huggingface/datasets/pull/2694", "diff_url": "https://github.com/huggingface/datasets/pull/2694.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2694.patch", "merged_at": "2021-07-22T10:41...
true
949,797,014
2,693
Fix OSCAR Esperanto
closed
[]
2021-07-21T14:43:50
2021-07-21T14:53:52
2021-07-21T14:53:51
The Esperanto part (original) of OSCAR has the wrong number of examples: ```python from datasets import load_dataset raw_datasets = load_dataset("oscar", "unshuffled_original_eo") ``` raises ```python NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=314188336, num_examples=121171, da...
lhoestq
https://github.com/huggingface/datasets/pull/2693
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2693", "html_url": "https://github.com/huggingface/datasets/pull/2693", "diff_url": "https://github.com/huggingface/datasets/pull/2693.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2693.patch", "merged_at": "2021-07-21T14:53...
true
949,765,484
2,692
Update BibTeX entry
closed
[]
2021-07-21T14:23:35
2021-07-21T15:31:41
2021-07-21T15:31:40
Update BibTeX entry
albertvillanova
https://github.com/huggingface/datasets/pull/2692
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2692", "html_url": "https://github.com/huggingface/datasets/pull/2692", "diff_url": "https://github.com/huggingface/datasets/pull/2692.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2692.patch", "merged_at": "2021-07-21T15:31...
true
949,758,379
2,691
xtreme / pan-x cannot be downloaded
closed
[ "Hi @severo, thanks for reporting.\r\n\r\nHowever I have not been able to reproduce this issue. Could you please confirm if the problem persists for you?\r\n\r\nMaybe Dropbox (where the data source is hosted) was temporarily unavailable when you tried.", "Hmmm, the file (https://www.dropbox.com/s/dl/12h3qqog6q4bj...
2021-07-21T14:18:05
2021-07-26T09:34:22
2021-07-26T09:34:22
## Describe the bug Dataset xtreme / pan-x cannot be loaded Seems related to https://github.com/huggingface/datasets/pull/2326 ## Steps to reproduce the bug ```python dataset = load_dataset("xtreme", "PAN-X.fr") ``` ## Expected results Load the dataset ## Actual results ``` FileNotFoundError:...
severo
https://github.com/huggingface/datasets/issues/2691
null
false
949,574,500
2,690
Docs details
closed
[ "Thanks for all the comments and for the corrections in the docs !\r\n\r\nAbout all the points you mentioned:\r\n\r\n> * the code samples assume the expected libraries have already been installed. Maybe add a section at start, or add it to every code sample. Something like `pip install datasets transformers torch ...
2021-07-21T10:43:14
2021-07-27T18:40:54
2021-07-27T18:40:54
Some comments here: - the code samples assume the expected libraries have already been installed. Maybe add a section at start, or add it to every code sample. Something like `pip install datasets transformers torch 'datasets[streaming]'` (maybe just link to https://huggingface.co/docs/datasets/installation.html + ...
severo
https://github.com/huggingface/datasets/pull/2690
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2690", "html_url": "https://github.com/huggingface/datasets/pull/2690", "diff_url": "https://github.com/huggingface/datasets/pull/2690.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2690.patch", "merged_at": "2021-07-27T18:40...
true
949,447,104
2,689
cannot save the dataset to disk after rename_column
closed
[ "Hi ! That's because you are trying to overwrite a file that is already open and being used.\r\nIndeed `foo/dataset.arrow` is open and used by your `dataset` object.\r\n\r\nWhen you do `rename_column`, the resulting dataset reads the data from the same arrow file.\r\nIn other cases like when using `map` on the othe...
2021-07-21T08:13:40
2025-02-11T23:23:17
2021-07-21T13:11:04
## Describe the bug If you use `rename_column` and do no other modification, you will be unable to save the dataset using `save_to_disk` ## Steps to reproduce the bug ```python # Sample code to reproduce the bug In [1]: from datasets import Dataset, load_from_disk In [5]: dataset=Dataset.from_dict({'foo': [0]})...
PaulLerner
https://github.com/huggingface/datasets/issues/2689
null
false
949,182,074
2,688
hebrew language codes he and iw should be treated as aliases
closed
[ "Hi @eyaler, thanks for reporting.\r\n\r\nWhile you are true with respect the Hebrew language tag (\"iw\" is deprecated and \"he\" is the preferred value), in the \"mc4\" dataset (which is a derived dataset) we have kept the language tags present in the original dataset: [Google C4](https://www.tensorflow.org/datas...
2021-07-20T23:13:52
2021-07-21T16:34:53
2021-07-21T16:34:53
https://huggingface.co/datasets/mc4 is not listed when searching for Hebrew datasets (he), as it uses the older language code iw, preventing discoverability.
eyaler
https://github.com/huggingface/datasets/issues/2688
null
false
948,890,481
2,687
Minor documentation fix
closed
[]
2021-07-20T17:43:23
2021-07-21T13:04:55
2021-07-21T13:04:55
Currently, [Writing a dataset loading script](https://huggingface.co/docs/datasets/add_dataset.html) page has a small error. A link to `matinf` dataset in [_Dataset scripts of reference_](https://huggingface.co/docs/datasets/add_dataset.html#dataset-scripts-of-reference) section actually leads to `xsquad`, instead. Thi...
slowwavesleep
https://github.com/huggingface/datasets/pull/2687
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2687", "html_url": "https://github.com/huggingface/datasets/pull/2687", "diff_url": "https://github.com/huggingface/datasets/pull/2687.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2687.patch", "merged_at": "2021-07-21T13:04...
true
948,811,669
2,686
Fix bad config ids that name cache directories
closed
[]
2021-07-20T16:00:45
2021-07-20T16:27:15
2021-07-20T16:27:15
`data_dir=None` was considered a dataset config parameter, hence creating a special config_id for every dataset being loaded. Since the config_id is used to name the cache directories, this led to datasets being regenerated for users. I fixed this by ignoring the value of `data_dir` when it's `None` when computing...
lhoestq
https://github.com/huggingface/datasets/pull/2686
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2686", "html_url": "https://github.com/huggingface/datasets/pull/2686", "diff_url": "https://github.com/huggingface/datasets/pull/2686.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2686.patch", "merged_at": "2021-07-20T16:27...
true
948,791,572
2,685
Fix Blog Authorship Corpus dataset
closed
[ "Normally, I'm expecting errors from the validation of the README file... 😅 ", "That is:\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_cards.py::test_changed_dataset_card[blog_authorship_corpus]\r\n==== 1 failed, 3182 passed, 2763 skipped,...
2021-07-20T15:44:50
2021-07-21T13:11:58
2021-07-21T13:11:58
This PR: - Update the JSON metadata file, which previously was raising a `NonMatchingSplitsSizesError` - Fix the codec of the data files (`latin_1` instead of `utf-8`), which previously was raising `UnicodeDecodeError` for some files Close #2679.
albertvillanova
https://github.com/huggingface/datasets/pull/2685
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2685", "html_url": "https://github.com/huggingface/datasets/pull/2685", "diff_url": "https://github.com/huggingface/datasets/pull/2685.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2685.patch", "merged_at": "2021-07-21T13:11...
true
948,771,753
2,684
Print absolute local paths in load_dataset error messages
closed
[]
2021-07-20T15:28:28
2021-07-22T20:48:19
2021-07-22T14:01:10
Use absolute local paths in the error messages of `load_dataset` as per @stas00's suggestion in https://github.com/huggingface/datasets/pull/2500#issuecomment-874891223
mariosasko
https://github.com/huggingface/datasets/pull/2684
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2684", "html_url": "https://github.com/huggingface/datasets/pull/2684", "diff_url": "https://github.com/huggingface/datasets/pull/2684.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2684.patch", "merged_at": "2021-07-22T14:01...
true
948,721,379
2,683
Cache directories changed due to recent changes in how config kwargs are handled
closed
[]
2021-07-20T14:37:57
2021-07-20T16:27:15
2021-07-20T16:27:15
Since #2659 I can see weird cache directory names with hashes in the config id, even though no additional config kwargs are passed. For example: ```python from datasets import load_dataset_builder c4_builder = load_dataset_builder("c4", "en") print(c4_builder.cache_dir) # /Users/quentinlhoest/.cache/huggingfac...
lhoestq
https://github.com/huggingface/datasets/issues/2683
null
false
948,713,137
2,682
Fix c4 expected files
closed
[]
2021-07-20T14:29:31
2021-07-20T14:38:11
2021-07-20T14:38:10
Some files were not registered in the list of expected files to download Fix https://github.com/huggingface/datasets/issues/2677
lhoestq
https://github.com/huggingface/datasets/pull/2682
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2682", "html_url": "https://github.com/huggingface/datasets/pull/2682", "diff_url": "https://github.com/huggingface/datasets/pull/2682.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2682.patch", "merged_at": "2021-07-20T14:38...
true
948,708,645
2,681
5 duplicate datasets
closed
[ "Yes this was documented in the PR that added this hf->paperswithcode mapping (https://github.com/huggingface/datasets/pull/2404) and AFAICT those are slightly distinct datasets so I think it's a wontfix\r\n\r\nFor context on the paperswithcode mapping you can also refer to https://github.com/huggingface/huggingfac...
2021-07-20T14:25:00
2021-07-20T15:44:17
2021-07-20T15:44:17
## Describe the bug In 5 cases, I could find a dataset on Paperswithcode which references two Hugging Face datasets as dataset loaders. They are: - https://paperswithcode.com/dataset/multinli -> https://huggingface.co/datasets/multi_nli and https://huggingface.co/datasets/multi_nli_mismatch <img width="838...
severo
https://github.com/huggingface/datasets/issues/2681
null
false
948,649,716
2,680
feat: 🎸 add paperswithcode id for qasper dataset
closed
[]
2021-07-20T13:22:29
2021-07-20T14:04:10
2021-07-20T14:04:10
The reverse reference exists on paperswithcode: https://paperswithcode.com/dataset/qasper
severo
https://github.com/huggingface/datasets/pull/2680
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2680", "html_url": "https://github.com/huggingface/datasets/pull/2680", "diff_url": "https://github.com/huggingface/datasets/pull/2680.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2680.patch", "merged_at": "2021-07-20T14:04...
true
948,506,638
2,679
Cannot load the blog_authorship_corpus due to codec errors
closed
[ "Hi @izaskr, thanks for reporting.\r\n\r\nHowever the traceback you joined does not correspond to the codec error message: it is about other error `NonMatchingSplitsSizesError`. Maybe you missed some important part of your traceback...\r\n\r\nI'm going to have a look at the dataset anyway...", "Hi @izaskr, thanks...
2021-07-20T10:13:20
2021-07-21T17:02:21
2021-07-21T13:11:58
## Describe the bug A codec error is raised while loading the blog_authorship_corpus. ## Steps to reproduce the bug ``` from datasets import load_dataset raw_datasets = load_dataset("blog_authorship_corpus") ``` ## Expected results Loading the dataset without errors. ## Actual results An error simila...
izaskr
https://github.com/huggingface/datasets/issues/2679
null
false
948,471,222
2,678
Import Error in Kaggle notebook
closed
[ "This looks like an issue with PyArrow. Did you try reinstalling it ?", "@lhoestq I did, and then let pip handle the installation in `pip import datasets`. I also tried using conda but it gives the same error.\r\n\r\nEdit: pyarrow version on kaggle is 4.0.0, it gets replaced with 4.0.1. So, I don't think uninstal...
2021-07-20T09:28:38
2021-07-21T13:59:26
2021-07-21T13:03:02
## Describe the bug Not able to import datasets library in kaggle notebooks ## Steps to reproduce the bug ```python !pip install datasets import datasets ``` ## Expected results No such error ## Actual results ``` ImportError Traceback (most recent call last) <ipython-inp...
prikmm
https://github.com/huggingface/datasets/issues/2678
null
false
948,429,788
2,677
Error when downloading C4
closed
[ "Hi Thanks for reporting !\r\nIt looks like these files are not correctly reported in the list of expected files to download, let me fix that ;)", "Alright this is fixed now. We'll do a new release soon to make the fix available.\r\n\r\nIn the meantime feel free to simply pass `ignore_verifications=True` to `load...
2021-07-20T08:37:30
2021-07-20T14:41:31
2021-07-20T14:38:10
Hi, I am trying to download the `en` corpus from the C4 dataset. However, I get an error caused by the validation files download (see image). My code is very primitive: `datasets.load_dataset('c4', 'en')` Is this a bug, or do I have some configuration missing on my server? Thanks! <img width="1014" alt="Screenshot 2...
Aktsvigun
https://github.com/huggingface/datasets/issues/2677
null
false
947,734,909
2,676
Increase json reader block_size automatically
closed
[]
2021-07-19T14:51:14
2021-07-19T17:51:39
2021-07-19T17:51:38
Currently some files can't be read with the default parameters of the JSON lines reader. For example this one: https://huggingface.co/datasets/thomwolf/codeparrot/resolve/main/file-000000000006.json.gz raises a pyarrow error: ```python ArrowInvalid: straddling object straddles two block boundaries (try to increa...
lhoestq
https://github.com/huggingface/datasets/pull/2676
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2676", "html_url": "https://github.com/huggingface/datasets/pull/2676", "diff_url": "https://github.com/huggingface/datasets/pull/2676.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2676.patch", "merged_at": "2021-07-19T17:51...
true
947,657,732
2,675
Parallelize ETag requests
closed
[]
2021-07-19T13:30:42
2021-07-19T19:33:25
2021-07-19T19:33:25
Since https://github.com/huggingface/datasets/pull/2628 we use the ETag of the remote data files to compute the directory in the cache where a dataset is saved. This is useful in order to reload the dataset from the cache only if the remote files haven't changed. In this PR I made the ETag requests parallel using multi...
lhoestq
https://github.com/huggingface/datasets/pull/2675
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2675", "html_url": "https://github.com/huggingface/datasets/pull/2675", "diff_url": "https://github.com/huggingface/datasets/pull/2675.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2675.patch", "merged_at": "2021-07-19T19:33...
true
947,338,202
2,674
Fix sacrebleu parameter name
closed
[]
2021-07-19T07:07:26
2021-07-19T08:07:03
2021-07-19T08:07:03
DONE: - Fix parameter name: `smooth` to `smooth_method`. - Improve kwargs description. - Align docs on using a metric. - Add an example of passing additional arguments when using metrics. Related to #2669.
albertvillanova
https://github.com/huggingface/datasets/pull/2674
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2674", "html_url": "https://github.com/huggingface/datasets/pull/2674", "diff_url": "https://github.com/huggingface/datasets/pull/2674.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2674.patch", "merged_at": "2021-07-19T08:07...
true
947,300,008
2,673
Fix potential DuplicatedKeysError in SQuAD
closed
[]
2021-07-19T06:08:00
2021-07-19T07:08:03
2021-07-19T07:08:03
DONE: - Fix potential DuplicatedKeysError by ensuring keys are unique. - Align examples in the docs with the SQuAD code. We should promote as a good practice that keys should be programmatically generated as unique, instead of read from data (which might not be unique).
albertvillanova
https://github.com/huggingface/datasets/pull/2673
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2673", "html_url": "https://github.com/huggingface/datasets/pull/2673", "diff_url": "https://github.com/huggingface/datasets/pull/2673.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2673.patch", "merged_at": "2021-07-19T07:08...
true
947,294,605
2,672
Fix potential DuplicatedKeysError in LibriSpeech
closed
[]
2021-07-19T06:00:49
2021-07-19T06:28:57
2021-07-19T06:28:56
DONE: - Fix unnecessary path join. - Fix potential DuplicatedKeysError by ensuring keys are unique. We should promote as a good practice that keys should be programmatically generated as unique, instead of read from data (which might not be unique).
albertvillanova
https://github.com/huggingface/datasets/pull/2672
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2672", "html_url": "https://github.com/huggingface/datasets/pull/2672", "diff_url": "https://github.com/huggingface/datasets/pull/2672.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2672.patch", "merged_at": "2021-07-19T06:28...
true
947,273,875
2,671
Mesinesp development and training data sets have been added.
closed
[]
2021-07-19T05:14:38
2021-07-19T07:32:28
2021-07-19T06:45:50
https://zenodo.org/search?page=1&size=20&q=mesinesp. Mesinesp has medical, semantically indexed records in Spanish. Indexing is done using DeCS codes, a sort of Spanish equivalent of MeSH terms. The Mesinesp (Spanish BioASQ track, see https://temu.bsc.es/mesinesp) development set has a total of 750 records. The Mesinesp ...
aslihanuysall
https://github.com/huggingface/datasets/pull/2671
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2671", "html_url": "https://github.com/huggingface/datasets/pull/2671", "diff_url": "https://github.com/huggingface/datasets/pull/2671.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2671.patch", "merged_at": null }
true
947,120,709
2,670
Using sharding to parallelize indexing
open
[]
2021-07-18T21:26:26
2021-10-07T13:33:25
null
**Is your feature request related to a problem? Please describe.** Creating an elasticsearch index on a large dataset can take quite a long time and cannot be parallelized across shards (the index creations collide). **Describe the solution you'd like** When working on dataset shards, if an index already exists, its mapping ...
ggdupont
https://github.com/huggingface/datasets/issues/2670
null
false
946,982,998
2,669
Metric kwargs are not passed to underlying external metric f1_score
closed
[]
2021-07-18T08:32:31
2021-07-18T18:36:05
2021-07-18T11:19:04
## Describe the bug When I want to use F1 score with average="min", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) throws an error telling me so. ## Steps to...
BramVanroy
https://github.com/huggingface/datasets/issues/2669
null
false
946,867,622
2,668
Add Russian SuperGLUE
closed
[]
2021-07-17T17:41:28
2021-07-29T11:50:31
2021-07-29T11:50:31
Hi, This adds the [Russian SuperGLUE](https://russiansuperglue.com/) dataset. For the most part I reused the code for the original SuperGLUE, although there are some relatively minor differences in the structure that I accounted for.
slowwavesleep
https://github.com/huggingface/datasets/pull/2668
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2668", "html_url": "https://github.com/huggingface/datasets/pull/2668", "diff_url": "https://github.com/huggingface/datasets/pull/2668.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2668.patch", "merged_at": "2021-07-29T11:50...
true
946,861,908
2,667
Use tqdm from tqdm_utils
closed
[]
2021-07-17T17:06:35
2021-07-19T17:39:10
2021-07-19T17:32:00
This PR replaces `tqdm` from the `tqdm` lib with `tqdm` from `datasets.utils.tqdm_utils`. With this change, it's possible to disable progress bars just by calling `disable_progress_bar`. Note this doesn't work on Windows when using multiprocessing due to how global variables are shared between processes. Currently, the...
mariosasko
https://github.com/huggingface/datasets/pull/2667
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2667", "html_url": "https://github.com/huggingface/datasets/pull/2667", "diff_url": "https://github.com/huggingface/datasets/pull/2667.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2667.patch", "merged_at": "2021-07-19T17:32...
true
946,825,140
2,666
Adds CodeClippy dataset [WIP]
closed
[]
2021-07-17T13:32:04
2023-07-26T23:06:01
2022-10-03T09:37:35
CodeClippy is an open-source code dataset scraped from GitHub during the flax-jax-community-week https://the-eye.eu/public/AI/training_data/code_clippy_data/
arampacha
https://github.com/huggingface/datasets/pull/2666
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2666", "html_url": "https://github.com/huggingface/datasets/pull/2666", "diff_url": "https://github.com/huggingface/datasets/pull/2666.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2666.patch", "merged_at": null }
true
946,822,036
2,665
Adds APPS dataset to the hub [WIP]
closed
[]
2021-07-17T13:13:17
2022-10-03T09:38:10
2022-10-03T09:38:10
A loading script for [APPS dataset](https://github.com/hendrycks/apps)
arampacha
https://github.com/huggingface/datasets/pull/2665
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2665", "html_url": "https://github.com/huggingface/datasets/pull/2665", "diff_url": "https://github.com/huggingface/datasets/pull/2665.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2665.patch", "merged_at": null }
true
946,552,273
2,663
[`to_json`] add multi-proc sharding support
closed
[]
2021-07-16T19:41:50
2021-09-13T13:56:37
2021-09-13T13:56:37
As discussed on Slack, it appears that `to_json` is quite slow on huge datasets like OSCAR. I implemented sharded saving, which is much, much faster - but the tqdm bars all overwrite each other, making it hard to make sense of the progress, so ideally this multi-proc support could be implemented internally i...
stas00
https://github.com/huggingface/datasets/issues/2663
null
false
946,470,815
2,662
Load Dataset from the Hub (NO DATASET SCRIPT)
closed
[]
2021-07-16T17:21:58
2021-08-25T14:53:01
2021-08-25T14:18:08
## Load the data from any Dataset repository on the Hub This PR adds support for loading datasets from any dataset repository on the hub, without requiring any dataset script. As a user it's now possible to create a repo and upload some csv/json/text/parquet files, and then be able to load the data in one line. H...
lhoestq
https://github.com/huggingface/datasets/pull/2662
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2662", "html_url": "https://github.com/huggingface/datasets/pull/2662", "diff_url": "https://github.com/huggingface/datasets/pull/2662.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2662.patch", "merged_at": "2021-08-25T14:18...
true