Dataset schema (13 columns; observed ranges and class counts as reported by the viewer):

| column          | type         | observed range / values                    |
| --------------- | ------------ | ------------------------------------------ |
| id              | int64        | 599M to 3.48B                              |
| number          | int64        | 1 to 7.8k                                  |
| title           | string       | lengths 1 to 290                           |
| state           | string       | 2 values                                   |
| comments        | list         | lengths 0 to 30                            |
| created_at      | timestamp[s] | 2020-04-14 10:18:02 to 2025-10-05 06:37:50 |
| updated_at      | timestamp[s] | 2020-04-27 16:04:17 to 2025-10-05 10:32:43 |
| closed_at       | timestamp[s] | 2020-04-14 12:01:40 to 2025-10-01 13:56:03 |
| body            | string       | lengths 0 to 228k                          |
| user            | string       | lengths 3 to 26                            |
| html_url        | string       | lengths 46 to 51                           |
| pull_request    | dict         |                                            |
| is_pull_request | bool         | 2 classes                                  |
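One row of this dataset corresponds to a single GitHub issue or pull request. A minimal sketch of handling records with this shape in plain Python, using the first row of the dump (PR #2043) as sample data; the `time_to_close` helper is illustrative, not part of any library:

```python
from datetime import datetime

# One record shaped like the schema above, taken from the first row of the dump.
record = {
    "id": 830279098,
    "number": 2043,
    "title": "Support pickle protocol for dataset splits defined as ReadInstruction",
    "state": "closed",
    "comments": [],
    "created_at": datetime(2021, 3, 12, 16, 35, 11),
    "updated_at": datetime(2021, 3, 16, 14, 25, 38),
    "closed_at": datetime(2021, 3, 16, 14, 5, 5),
    "body": "Fixes #2022 (+ some style fixes)",
    "user": "mariosasko",
    "html_url": "https://github.com/huggingface/datasets/pull/2043",
    "pull_request": {"url": "https://api.github.com/repos/huggingface/datasets/pulls/2043"},
    "is_pull_request": True,
}

def time_to_close(rec):
    """Return how long the issue/PR stayed open, or None if it is still open."""
    if rec["closed_at"] is None:  # open items have a null closed_at
        return None
    return rec["closed_at"] - rec["created_at"]

print(record["is_pull_request"])  # True
print(time_to_close(record))      # 3 days, 21:29:54
```

Note that `closed_at` must be treated as optional: open items (e.g. #1994 and #1992 below) carry a null there, and plain issues carry a null `pull_request`.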
id: 830,279,098
number: 2,043
title: Support pickle protocol for dataset splits defined as ReadInstruction
state: closed
comments: []
created_at: 2021-03-12T16:35:11
updated_at: 2021-03-16T14:25:38
closed_at: 2021-03-16T14:05:05
body: Fixes #2022 (+ some style fixes)
user: mariosasko
html_url: https://github.com/huggingface/datasets/pull/2043
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2043", "html_url": "https://github.com/huggingface/datasets/pull/2043", "diff_url": "https://github.com/huggingface/datasets/pull/2043.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2043.patch", "merged_at": "2021-03-16T14:05...
is_pull_request: true

id: 830,190,276
number: 2,042
title: Fix arrow memory checks issue in tests
state: closed
comments: []
created_at: 2021-03-12T14:49:52
updated_at: 2021-03-12T15:04:23
closed_at: 2021-03-12T15:04:22
body: The tests currently fail on `master` because the arrow memory verification doesn't return the expected memory evolution when loading an arrow table in memory. From my experiments, the tests fail only when the full test suite is ran. This made me think that maybe some arrow objects from other tests were not freeing th...
user: lhoestq
html_url: https://github.com/huggingface/datasets/pull/2042
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2042", "html_url": "https://github.com/huggingface/datasets/pull/2042", "diff_url": "https://github.com/huggingface/datasets/pull/2042.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2042.patch", "merged_at": "2021-03-12T15:04...
is_pull_request: true

id: 830,180,803
number: 2,041
title: Doc2dial update data_infos and data_loaders
state: closed
comments: []
created_at: 2021-03-12T14:39:29
updated_at: 2021-03-16T11:09:20
closed_at: 2021-03-16T11:09:20
body:
user: songfeng
html_url: https://github.com/huggingface/datasets/pull/2041
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2041", "html_url": "https://github.com/huggingface/datasets/pull/2041", "diff_url": "https://github.com/huggingface/datasets/pull/2041.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2041.patch", "merged_at": "2021-03-16T11:09...
is_pull_request: true

id: 830,169,387
number: 2,040
title: ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk
state: closed
comments: []
created_at: 2021-03-12T14:27:00
updated_at: 2021-08-04T18:00:43
closed_at: 2021-08-04T18:00:43
body: Hi there, I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects): ```python concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']]) ``` Yie...
user: simonschoe
html_url: https://github.com/huggingface/datasets/issues/2040
pull_request: null
is_pull_request: false
id: 830,047,652
number: 2,039
title: Doc2dial rc
state: closed
comments: []
created_at: 2021-03-12T11:56:28
updated_at: 2021-03-12T15:32:36
closed_at: 2021-03-12T15:32:36
body: Added fix to handle the last turn that is a user turn.
user: songfeng
html_url: https://github.com/huggingface/datasets/pull/2039
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2039", "html_url": "https://github.com/huggingface/datasets/pull/2039", "diff_url": "https://github.com/huggingface/datasets/pull/2039.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2039.patch", "merged_at": null }
is_pull_request: true

id: 830,036,875
number: 2,038
title: outdated dataset_infos.json might fail verifications
state: closed
comments: []
created_at: 2021-03-12T11:41:54
updated_at: 2021-03-16T16:27:40
closed_at: 2021-03-16T16:27:40
body: The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It would fail data_loader when verifying download checksum etc.. Could you please update this file or point me how to update this file? Thank you.
user: songfeng
html_url: https://github.com/huggingface/datasets/issues/2038
pull_request: null
is_pull_request: false

id: 829,919,685
number: 2,037
title: Fix: Wikipedia - save memory by replacing root.clear with elem.clear
state: closed
comments: []
created_at: 2021-03-12T09:22:00
updated_at: 2021-03-23T06:08:16
closed_at: 2021-03-16T11:01:22
body: see: https://github.com/huggingface/datasets/issues/2031 What I did: - replace root.clear with elem.clear - remove lines to get root element - $ make style - $ make test - some tests required some pip packages, I installed them. test results on origin/master and my branch are same. I think it's not related...
user: miyamonz
html_url: https://github.com/huggingface/datasets/pull/2037
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2037", "html_url": "https://github.com/huggingface/datasets/pull/2037", "diff_url": "https://github.com/huggingface/datasets/pull/2037.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2037.patch", "merged_at": "2021-03-16T11:01...
is_pull_request: true

id: 829,909,258
number: 2,036
title: Cannot load wikitext
state: closed
comments: []
created_at: 2021-03-12T09:09:39
updated_at: 2021-03-15T08:45:02
closed_at: 2021-03-15T08:44:44
body: when I execute these codes ``` >>> from datasets import load_dataset >>> test_dataset = load_dataset("wikitext") ``` I got an error,any help? ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.p...
user: Gpwner
html_url: https://github.com/huggingface/datasets/issues/2036
pull_request: null
is_pull_request: false
id: 829,475,544
number: 2,035
title: wiki40b/wikipedia for almost all languages cannot be downloaded
state: closed
comments: []
created_at: 2021-03-11T19:54:54
updated_at: 2024-03-15T16:09:49
closed_at: 2024-03-15T16:09:48
body: Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error. I rea...
user: dorost1234
html_url: https://github.com/huggingface/datasets/issues/2035
pull_request: null
is_pull_request: false

id: 829,381,388
number: 2,034
title: Fix typo
state: closed
comments: []
created_at: 2021-03-11T17:46:13
updated_at: 2021-03-11T18:06:25
closed_at: 2021-03-11T18:06:25
body: Change `ENV_XDG_CACHE_HOME ` to `XDG_CACHE_HOME `
user: pcyin
html_url: https://github.com/huggingface/datasets/pull/2034
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2034", "html_url": "https://github.com/huggingface/datasets/pull/2034", "diff_url": "https://github.com/huggingface/datasets/pull/2034.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2034.patch", "merged_at": "2021-03-11T18:06...
is_pull_request: true

id: 829,295,339
number: 2,033
title: Raise an error for outdated sacrebleu versions
state: closed
comments: []
created_at: 2021-03-11T16:08:00
updated_at: 2021-03-11T17:58:12
closed_at: 2021-03-11T17:58:12
body: The `sacrebleu` metric seem to only work for sacrecleu>=1.4.12 For example using sacrebleu==1.2.10, an error is raised (from metric/sacrebleu/sacrebleu.py): ```python def _compute( self, predictions, references, smooth_method="exp", smooth_value=None, force...
user: lhoestq
html_url: https://github.com/huggingface/datasets/pull/2033
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2033", "html_url": "https://github.com/huggingface/datasets/pull/2033", "diff_url": "https://github.com/huggingface/datasets/pull/2033.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2033.patch", "merged_at": "2021-03-11T17:58...
is_pull_request: true

id: 829,250,912
number: 2,032
title: Use Arrow filtering instead of writing a new arrow file for Dataset.filter
state: closed
comments: []
created_at: 2021-03-11T15:18:50
updated_at: 2024-01-19T13:26:32
closed_at: 2024-01-19T13:26:32
body: Currently the filter method reads the dataset batch by batch to write a new, filtered, arrow file on disk. Therefore all the reading + writing can take some time. Using a mask directly on the arrow table doesn't do any read or write operation therefore it's significantly quicker. I think there are two cases: - i...
user: lhoestq
html_url: https://github.com/huggingface/datasets/issues/2032
pull_request: null
is_pull_request: false
id: 829,122,778
number: 2,031
title: wikipedia.py generator that extracts XML doesn't release memory
state: closed
comments: []
created_at: 2021-03-11T12:51:24
updated_at: 2021-03-22T08:33:52
closed_at: 2021-03-22T08:33:52
body: I tried downloading Japanese wikipedia, but it always failed because of out of memory maybe. I found that the generator function that extracts XML data in wikipedia.py doesn't release memory in the loop. https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikip...
user: miyamonz
html_url: https://github.com/huggingface/datasets/issues/2031
pull_request: null
is_pull_request: false

id: 829,110,803
number: 2,030
title: Implement Dataset from text
state: closed
comments: []
created_at: 2021-03-11T12:34:50
updated_at: 2021-03-18T13:29:29
closed_at: 2021-03-18T13:29:29
body: Implement `Dataset.from_text`. Analogue to #1943, #1946.
user: albertvillanova
html_url: https://github.com/huggingface/datasets/pull/2030
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2030", "html_url": "https://github.com/huggingface/datasets/pull/2030", "diff_url": "https://github.com/huggingface/datasets/pull/2030.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2030.patch", "merged_at": "2021-03-18T13:29...
is_pull_request: true

id: 829,097,290
number: 2,029
title: Loading a faiss index KeyError
state: closed
comments: []
created_at: 2021-03-11T12:16:13
updated_at: 2021-03-12T00:21:09
closed_at: 2021-03-12T00:21:09
body: I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation. The basic steps are: 1. Create a dataset (dataset1) 2. Create an embeddings column using DPR 3. Add a faiss index to the dataset 4. Save faiss index to a file 5. Create a new dataset (d...
user: nbroad1881
html_url: https://github.com/huggingface/datasets/issues/2029
pull_request: null
is_pull_request: false

id: 828,721,393
number: 2,028
title: Adding PersiNLU reading-comprehension
state: closed
comments: []
created_at: 2021-03-11T04:41:13
updated_at: 2021-03-15T09:39:57
closed_at: 2021-03-15T09:39:57
body:
user: danyaljj
html_url: https://github.com/huggingface/datasets/pull/2028
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2028", "html_url": "https://github.com/huggingface/datasets/pull/2028", "diff_url": "https://github.com/huggingface/datasets/pull/2028.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2028.patch", "merged_at": "2021-03-15T09:39...
is_pull_request: true
id: 828,490,444
number: 2,027
title: Update format columns in Dataset.rename_columns
state: closed
comments: []
created_at: 2021-03-10T23:50:59
updated_at: 2021-03-11T14:38:40
closed_at: 2021-03-11T14:38:40
body: Fixes #2026
user: mariosasko
html_url: https://github.com/huggingface/datasets/pull/2027
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2027", "html_url": "https://github.com/huggingface/datasets/pull/2027", "diff_url": "https://github.com/huggingface/datasets/pull/2027.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2027.patch", "merged_at": "2021-03-11T14:38...
is_pull_request: true

id: 828,194,467
number: 2,026
title: KeyError on using map after renaming a column
state: closed
comments: []
created_at: 2021-03-10T18:54:17
updated_at: 2021-03-11T14:39:34
closed_at: 2021-03-11T14:38:40
body: Hi, I'm trying to use `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying `prepare_train_features` function. Here is what I try: ```python transform = Compose([ToPILImage(),...
user: gchhablani
html_url: https://github.com/huggingface/datasets/issues/2026
pull_request: null
is_pull_request: false

id: 828,047,476
number: 2,025
title: [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset
state: closed
comments: []
created_at: 2021-03-10T17:00:47
updated_at: 2021-03-30T14:46:53
closed_at: 2021-03-26T16:51:59
body: ## Intro Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files). This assumption is used for pickling for example: - in-memory dataset can just be pick...
user: lhoestq
html_url: https://github.com/huggingface/datasets/pull/2025
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2025", "html_url": "https://github.com/huggingface/datasets/pull/2025", "diff_url": "https://github.com/huggingface/datasets/pull/2025.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2025.patch", "merged_at": "2021-03-26T16:51...
is_pull_request: true

id: 827,842,962
number: 2,024
title: Remove print statement from mnist.py
state: closed
comments: []
created_at: 2021-03-10T14:39:58
updated_at: 2021-03-11T18:03:52
closed_at: 2021-03-11T18:03:51
body:
user: gchhablani
html_url: https://github.com/huggingface/datasets/pull/2024
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2024", "html_url": "https://github.com/huggingface/datasets/pull/2024", "diff_url": "https://github.com/huggingface/datasets/pull/2024.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2024.patch", "merged_at": null }
is_pull_request: true
id: 827,819,608
number: 2,023
title: Add Romanian to XQuAD
state: closed
comments: []
created_at: 2021-03-10T14:24:32
updated_at: 2021-03-15T10:08:17
closed_at: 2021-03-15T10:08:17
body: On Jan 18, XQuAD was updated with a new Romanian validation file ([xquad commit link](https://github.com/deepmind/xquad/commit/60cac411649156efb6aab9dd4c9cde787a2c0345))
user: M-Salti
html_url: https://github.com/huggingface/datasets/pull/2023
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2023", "html_url": "https://github.com/huggingface/datasets/pull/2023", "diff_url": "https://github.com/huggingface/datasets/pull/2023.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2023.patch", "merged_at": "2021-03-15T10:08...
is_pull_request: true

id: 827,435,033
number: 2,022
title: ValueError when rename_column on splitted dataset
state: closed
comments: []
created_at: 2021-03-10T09:40:38
updated_at: 2025-02-05T13:36:07
closed_at: 2021-03-16T14:05:05
body: Hi there, I am loading `.tsv` file via `load_dataset` and subsequently split the rows into training and test set via the `ReadInstruction` API like so: ```python split = { 'train': ReadInstruction('train', to=90, unit='%'), 'test': ReadInstruction('train', from_=-10, unit='%') } dataset = load_datase...
user: simonschoe
html_url: https://github.com/huggingface/datasets/issues/2022
pull_request: null
is_pull_request: false

id: 826,988,016
number: 2,021
title: Interactively doing save_to_disk and load_from_disk corrupts the datasets object?
state: closed
comments: []
created_at: 2021-03-10T02:48:34
updated_at: 2021-03-13T10:07:41
closed_at: 2021-03-13T10:07:41
body: dataset_info.json file saved after using save_to_disk gets corrupted as follows. ![image](https://user-images.githubusercontent.com/16892570/110568474-ed969880-81b7-11eb-832f-2e5129656016.png) Is there a way to disable the cache that will save to /tmp/huggiface/datastes ? I have a feeling there is a seri...
user: shamanez
html_url: https://github.com/huggingface/datasets/issues/2021
pull_request: null
is_pull_request: false

id: 826,961,126
number: 2,020
title: Remove unnecessary docstart check in conll-like datasets
state: closed
comments: []
created_at: 2021-03-10T02:20:16
updated_at: 2021-03-11T13:33:37
closed_at: 2021-03-11T13:33:37
body: Related to this PR: #1998 Additionally, this PR adds the docstart note to the conll2002 dataset card ([link](https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/ned.train) to the raw data with `DOCSTART` lines).
user: mariosasko
html_url: https://github.com/huggingface/datasets/pull/2020
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2020", "html_url": "https://github.com/huggingface/datasets/pull/2020", "diff_url": "https://github.com/huggingface/datasets/pull/2020.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2020.patch", "merged_at": "2021-03-11T13:33...
is_pull_request: true
id: 826,625,706
number: 2,019
title: Replace print with logging in dataset scripts
state: closed
comments: []
created_at: 2021-03-09T20:59:34
updated_at: 2021-03-12T10:09:01
closed_at: 2021-03-11T16:14:19
body: Replaces `print(...)` in the dataset scripts with the library logger.
user: mariosasko
html_url: https://github.com/huggingface/datasets/pull/2019
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2019", "html_url": "https://github.com/huggingface/datasets/pull/2019", "diff_url": "https://github.com/huggingface/datasets/pull/2019.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2019.patch", "merged_at": "2021-03-11T16:14...
is_pull_request: true

id: 826,473,764
number: 2,018
title: Md gender card update
state: closed
comments: []
created_at: 2021-03-09T18:57:20
updated_at: 2021-03-12T17:31:00
closed_at: 2021-03-12T17:31:00
body: I updated the descriptions of the datasets as they appear in the HF repo and the descriptions of the source datasets according to what I could find from the paper and the references. I'm still a little unclear about some of the fields of the different configs, and there was little info on the word list and name list. I...
user: mcmillanmajora
html_url: https://github.com/huggingface/datasets/pull/2018
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2018", "html_url": "https://github.com/huggingface/datasets/pull/2018", "diff_url": "https://github.com/huggingface/datasets/pull/2018.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2018.patch", "merged_at": "2021-03-12T17:31...
is_pull_request: true

id: 826,428,578
number: 2,017
title: Add TF-based Features to handle different modes of data
state: closed
comments: []
created_at: 2021-03-09T18:29:52
updated_at: 2021-03-17T12:32:08
closed_at: 2021-03-17T12:32:07
body: Hi, I am creating this draft PR to work on add features similar to [TF datasets](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/core/features). I'll be starting with `Tensor` and `FeatureConnector` classes, and build upon them to add other features as well. This is a work in progress.
user: gchhablani
html_url: https://github.com/huggingface/datasets/pull/2017
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2017", "html_url": "https://github.com/huggingface/datasets/pull/2017", "diff_url": "https://github.com/huggingface/datasets/pull/2017.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2017.patch", "merged_at": null }
is_pull_request: true

id: 825,965,493
number: 2,016
title: Not all languages have 2 digit codes.
state: closed
comments: []
created_at: 2021-03-09T13:53:39
updated_at: 2021-03-11T18:01:03
closed_at: 2021-03-11T18:01:03
body: .
user: asiddhant
html_url: https://github.com/huggingface/datasets/pull/2016
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2016", "html_url": "https://github.com/huggingface/datasets/pull/2016", "diff_url": "https://github.com/huggingface/datasets/pull/2016.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2016.patch", "merged_at": "2021-03-11T18:01...
is_pull_request: true
id: 825,942,108
number: 2,015
title: Fix ipython function creation in tests
state: closed
comments: []
created_at: 2021-03-09T13:36:59
updated_at: 2021-03-09T14:06:04
closed_at: 2021-03-09T14:06:03
body: The test at `tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function` was failing in python 3.8 because the ipython function was not properly created. Fix #2010
user: lhoestq
html_url: https://github.com/huggingface/datasets/pull/2015
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2015", "html_url": "https://github.com/huggingface/datasets/pull/2015", "diff_url": "https://github.com/huggingface/datasets/pull/2015.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2015.patch", "merged_at": "2021-03-09T14:06...
is_pull_request: true

id: 825,916,531
number: 2,014
title: more explicit method parameters
state: closed
comments: []
created_at: 2021-03-09T13:18:29
updated_at: 2021-03-10T10:08:37
closed_at: 2021-03-10T10:08:36
body: re: #2009 not super convinced this is better, and while I usually fight against kwargs here it seems to me that it better conveys the relationship to the `_split_generator` method.
user: theo-m
html_url: https://github.com/huggingface/datasets/pull/2014
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2014", "html_url": "https://github.com/huggingface/datasets/pull/2014", "diff_url": "https://github.com/huggingface/datasets/pull/2014.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2014.patch", "merged_at": "2021-03-10T10:08...
is_pull_request: true

id: 825,694,305
number: 2,013
title: Add Cryptonite dataset
state: closed
comments: []
created_at: 2021-03-09T10:32:11
updated_at: 2021-03-09T19:27:07
closed_at: 2021-03-09T19:27:06
body: cc @aviaefrat who's the original author of the dataset & paper, see https://github.com/aviaefrat/cryptonite
user: theo-m
html_url: https://github.com/huggingface/datasets/pull/2013
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2013", "html_url": "https://github.com/huggingface/datasets/pull/2013", "diff_url": "https://github.com/huggingface/datasets/pull/2013.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2013.patch", "merged_at": "2021-03-09T19:27...
is_pull_request: true

id: 825,634,064
number: 2,012
title: No upstream branch
state: closed
comments: []
created_at: 2021-03-09T09:48:55
updated_at: 2021-03-09T11:33:31
closed_at: 2021-03-09T11:33:31
body: Feels like the documentation on adding a new dataset is outdated? https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L49-L54 There is no upstream branch on remote.
user: theo-m
html_url: https://github.com/huggingface/datasets/issues/2012
pull_request: null
is_pull_request: false
id: 825,621,952
number: 2,011
title: Add RoSent Dataset
state: closed
comments: []
created_at: 2021-03-09T09:40:08
updated_at: 2021-03-11T18:00:52
closed_at: 2021-03-11T18:00:52
body: This PR adds a Romanian sentiment analysis dataset. This PR also closes pending PR #1529. I had to add an `original_id` feature because the dataset files have repeated IDs. I can remove them if needed. I have also added `id` which is unique. Let me know in case of any issues.
user: gchhablani
html_url: https://github.com/huggingface/datasets/pull/2011
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2011", "html_url": "https://github.com/huggingface/datasets/pull/2011", "diff_url": "https://github.com/huggingface/datasets/pull/2011.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2011.patch", "merged_at": "2021-03-11T18:00...
is_pull_request: true

id: 825,567,635
number: 2,010
title: Local testing fails
state: closed
comments: []
created_at: 2021-03-09T09:01:38
updated_at: 2021-03-09T14:06:03
closed_at: 2021-03-09T14:06:03
body: I'm following the CI setup as described in https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19 in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4 and getting ``` FAILED...
user: theo-m
html_url: https://github.com/huggingface/datasets/issues/2010
pull_request: null
is_pull_request: false

id: 825,541,366
number: 2,009
title: Ambiguous documentation
state: closed
comments: []
created_at: 2021-03-09T08:42:11
updated_at: 2021-03-12T15:01:34
closed_at: 2021-03-12T15:01:34
body: https://github.com/huggingface/datasets/blob/2ac9a0d24a091989f869af55f9f6411b37ff5188/templates/new_dataset_script.py#L156-L158 Looking at the template, I find this documentation line to be confusing, the method parameters don't include the `gen_kwargs` so I'm unclear where they're coming from. Happy to push a PR...
user: theo-m
html_url: https://github.com/huggingface/datasets/issues/2009
pull_request: null
is_pull_request: false

id: 825,153,804
number: 2,008
title: Fix various typos/grammer in the docs
state: closed
comments: []
created_at: 2021-03-09T01:39:28
updated_at: 2021-03-15T18:42:49
closed_at: 2021-03-09T10:21:32
body: This PR: * fixes various typos/grammer I came across while reading the docs * adds the "Install with conda" installation instructions Closes #1959
user: mariosasko
html_url: https://github.com/huggingface/datasets/pull/2008
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2008", "html_url": "https://github.com/huggingface/datasets/pull/2008", "diff_url": "https://github.com/huggingface/datasets/pull/2008.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2008.patch", "merged_at": "2021-03-09T10:21...
is_pull_request: true
id: 824,518,158
number: 2,007
title: How to not load huggingface datasets into memory
state: closed
comments: []
created_at: 2021-03-08T12:35:26
updated_at: 2021-08-04T18:02:25
closed_at: 2021-08-04T18:02:25
body: Hi I am running this example from transformers library version 4.3.3: (Here is the full documentation https://github.com/huggingface/transformers/issues/8771 but the running command should work out of the box) USE_TF=0 deepspeed run_seq2seq.py --model_name_or_path google/mt5-base --dataset_name wmt16 --dataset_...
user: dorost1234
html_url: https://github.com/huggingface/datasets/issues/2007
pull_request: null
is_pull_request: false

id: 824,457,794
number: 2,006
title: Don't gitignore dvc.lock
state: closed
comments: []
created_at: 2021-03-08T11:13:08
updated_at: 2021-03-08T11:28:35
closed_at: 2021-03-08T11:28:34
body: The benchmarks runs are [failing](https://github.com/huggingface/datasets/runs/2055534629?check_suite_focus=true) because of ``` ERROR: 'dvc.lock' is git-ignored. ``` I removed the dvc.lock file from the gitignore to fix that
user: lhoestq
html_url: https://github.com/huggingface/datasets/pull/2006
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2006", "html_url": "https://github.com/huggingface/datasets/pull/2006", "diff_url": "https://github.com/huggingface/datasets/pull/2006.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2006.patch", "merged_at": "2021-03-08T11:28...
is_pull_request: true

id: 824,275,035
number: 2,005
title: Setting to torch format not working with torchvision and MNIST
state: closed
comments: []
created_at: 2021-03-08T07:38:11
updated_at: 2021-03-09T17:58:13
closed_at: 2021-03-09T17:58:13
body: Hi I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```python def prepare_features(examples): images = [] labe...
user: gchhablani
html_url: https://github.com/huggingface/datasets/issues/2005
pull_request: null
is_pull_request: false

id: 824,080,760
number: 2,004
title: LaRoSeDa
state: closed
comments: []
created_at: 2021-03-08T01:06:32
updated_at: 2021-03-17T10:43:20
closed_at: 2021-03-17T10:43:20
body: Add LaRoSeDa to huggingface datasets.
user: MihaelaGaman
html_url: https://github.com/huggingface/datasets/pull/2004
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2004", "html_url": "https://github.com/huggingface/datasets/pull/2004", "diff_url": "https://github.com/huggingface/datasets/pull/2004.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2004.patch", "merged_at": "2021-03-17T10:43...
is_pull_request: true
id: 824,034,678
number: 2,003
title: Messages are being printed to the `stdout`
state: closed
comments: []
created_at: 2021-03-07T22:09:34
updated_at: 2023-07-25T16:35:21
closed_at: 2023-07-25T16:35:21
body: In this code segment, we can see some messages are being printed to the `stdout`. https://github.com/huggingface/datasets/blob/7e60bb509b595e8edc60a87f32b2bacfc065d607/src/datasets/builder.py#L545-L554 According to the comment, it is done intentionally, but I don't really understand why don't we log it with a higher ...
user: mahnerak
html_url: https://github.com/huggingface/datasets/issues/2003
pull_request: null
is_pull_request: false

id: 823,955,744
number: 2,002
title: MOROCO
state: closed
comments: []
created_at: 2021-03-07T16:22:17
updated_at: 2021-03-19T09:52:06
closed_at: 2021-03-19T09:52:06
body: Add MOROCO to huggingface datasets.
user: MihaelaGaman
html_url: https://github.com/huggingface/datasets/pull/2002
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/2002", "html_url": "https://github.com/huggingface/datasets/pull/2002", "diff_url": "https://github.com/huggingface/datasets/pull/2002.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2002.patch", "merged_at": "2021-03-19T09:52...
is_pull_request: true

id: 823,946,706
number: 2,001
title: Empty evidence document ("provenance") in KILT ELI5 dataset
state: closed
comments: []
created_at: 2021-03-07T15:41:35
updated_at: 2022-12-19T19:25:14
closed_at: 2021-03-17T05:51:01
body: In the original KILT benchmark(https://github.com/facebookresearch/KILT), all samples has its evidence document (i.e. wikipedia page id) for prediction. For example, a sample in ELI5 dataset has the format including provenance (=evidence document) like this `{"id": "1kiwfx", "input": "In Trading Places (1983...
user: donggyukimc
html_url: https://github.com/huggingface/datasets/issues/2001
pull_request: null
is_pull_request: false

id: 823,899,910
number: 2,000
title: Windows Permission Error (most recent version of datasets)
state: closed
comments: []
created_at: 2021-03-07T11:55:28
updated_at: 2021-03-09T12:42:57
closed_at: 2021-03-09T12:42:57
body: Hi everyone, Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py , only I want to load the data from three local three-column tsv-files (id\ttokens\tpos_tags\n). I am...
user: itsLuisa
html_url: https://github.com/huggingface/datasets/issues/2000
pull_request: null
is_pull_request: false
id: 823,753,591
number: 1,999
title: Add FashionMNIST dataset
state: closed
comments: []
created_at: 2021-03-06T21:36:57
updated_at: 2021-03-09T09:52:11
closed_at: 2021-03-09T09:52:11
body: This PR adds [FashionMNIST](https://github.com/zalandoresearch/fashion-mnist) dataset.
user: gchhablani
html_url: https://github.com/huggingface/datasets/pull/1999
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1999", "html_url": "https://github.com/huggingface/datasets/pull/1999", "diff_url": "https://github.com/huggingface/datasets/pull/1999.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1999.patch", "merged_at": "2021-03-09T09:52...
is_pull_request: true

id: 823,723,960
number: 1,998
title: Add -DOCSTART- note to dataset card of conll-like datasets
state: closed
comments: []
created_at: 2021-03-06T19:08:29
updated_at: 2021-03-11T02:20:07
closed_at: 2021-03-11T02:20:07
body: Closes #1983
user: mariosasko
html_url: https://github.com/huggingface/datasets/pull/1998
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1998", "html_url": "https://github.com/huggingface/datasets/pull/1998", "diff_url": "https://github.com/huggingface/datasets/pull/1998.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1998.patch", "merged_at": null }
is_pull_request: true

id: 823,679,465
number: 1,997
title: from datasets import MoleculeDataset, GEOMDataset
state: closed
comments: []
created_at: 2021-03-06T15:50:19
updated_at: 2021-03-06T16:13:26
closed_at: 2021-03-06T16:13:26
body: I met the ImportError: cannot import name 'MoleculeDataset' from 'datasets'. Have anyone met the similar issues? Thanks!
user: futianfan
html_url: https://github.com/huggingface/datasets/issues/1997
pull_request: null
is_pull_request: false

id: 823,573,410
number: 1,996
title: Error when exploring `arabic_speech_corpus`
state: closed
comments: []
created_at: 2021-03-06T05:55:20
updated_at: 2022-10-05T13:24:26
closed_at: 2022-10-05T13:24:26
body: Navigate to https://huggingface.co/datasets/viewer/?dataset=arabic_speech_corpus Error: ``` ImportError: To be able to use this dataset, you need to install the following dependencies['soundfile'] using 'pip install soundfile' for instance' Traceback: File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/p...
user: elgeish
html_url: https://github.com/huggingface/datasets/issues/1996
pull_request: null
is_pull_request: false
id: 822,878,431
number: 1,995
title: [Timit_asr] Make sure not only the first sample is used
state: closed
comments: []
created_at: 2021-03-05T08:42:51
updated_at: 2021-06-30T06:25:53
closed_at: 2021-03-05T08:58:59
body: When playing around with timit I noticed that only the first sample is used for all indices. I corrected this typo so that the dataset is correctly loaded.
user: patrickvonplaten
html_url: https://github.com/huggingface/datasets/pull/1995
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1995", "html_url": "https://github.com/huggingface/datasets/pull/1995", "diff_url": "https://github.com/huggingface/datasets/pull/1995.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1995.patch", "merged_at": "2021-03-05T08:58...
is_pull_request: true

id: 822,871,238
number: 1,994
title: not being able to get wikipedia es language
state: open
comments: []
created_at: 2021-03-05T08:31:48
updated_at: 2021-03-11T20:46:21
closed_at: null
body: Hi I am trying to run a code with wikipedia of config 20200501.es, getting: Traceback (most recent call last): File "run_mlm_t5.py", line 608, in <module> main() File "run_mlm_t5.py", line 359, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/dara/libs...
user: dorost1234
html_url: https://github.com/huggingface/datasets/issues/1994
pull_request: null
is_pull_request: false

id: 822,758,387
number: 1,993
title: How to load a dataset with load_from disk and save it again after doing transformations without changing the original?
state: closed
comments: []
created_at: 2021-03-05T05:25:50
updated_at: 2021-03-22T04:05:50
closed_at: 2021-03-22T04:05:50
body: I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object and add new elements and save it in a different place. When I save the dataset with **save_to_disk**, the original da...
user: shamanez
html_url: https://github.com/huggingface/datasets/issues/1993
pull_request: null
is_pull_request: false

id: 822,672,238
number: 1,992
title: `datasets.map` multi processing much slower than single processing
state: open
comments: []
created_at: 2021-03-05T02:10:02
updated_at: 2024-06-08T20:18:03
closed_at: null
body: Hi, thank you for the great library. I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G. My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentences, and `map` converts a sentence into a list of integers, using a tok...
user: hwijeen
html_url: https://github.com/huggingface/datasets/issues/1992
pull_request: null
is_pull_request: false
id: 822,554,473
number: 1,991
title: Adding the conllpp dataset
state: closed
comments: []
created_at: 2021-03-04T22:19:43
updated_at: 2021-03-17T10:37:39
closed_at: 2021-03-17T10:37:39
body: Adding the conllpp dataset, is a revision from https://github.com/huggingface/datasets/pull/1910.
user: ZihanWangKi
html_url: https://github.com/huggingface/datasets/pull/1991
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1991", "html_url": "https://github.com/huggingface/datasets/pull/1991", "diff_url": "https://github.com/huggingface/datasets/pull/1991.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1991.patch", "merged_at": "2021-03-17T10:37...
is_pull_request: true

id: 822,384,502
number: 1,990
title: OSError: Memory mapping file failed: Cannot allocate memory
state: closed
comments: []
created_at: 2021-03-04T18:21:58
updated_at: 2021-08-04T18:04:25
closed_at: 2021-08-04T18:04:25
body: Hi, I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py ``` python run_mlm.py --model_name_or_path bert-base-multi...
user: dorost1234
html_url: https://github.com/huggingface/datasets/issues/1990
pull_request: null
is_pull_request: false

id: 822,328,147
number: 1,989
title: Question/problem with dataset labels
state: closed
comments: []
created_at: 2021-03-04T17:06:53
updated_at: 2023-07-24T14:39:33
closed_at: 2023-07-24T14:39:33
body: Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: ``` File "../../../models/tr-4.3.2/run_...
user: ioana-blue
html_url: https://github.com/huggingface/datasets/issues/1989
pull_request: null
is_pull_request: false

id: 822,324,605
number: 1,988
title: Readme.md is misleading about kinds of datasets?
state: closed
comments: []
created_at: 2021-03-04T17:04:20
updated_at: 2021-08-04T18:05:23
closed_at: 2021-08-04T18:05:23
body: Hi! At the README.MD, you say: "efficient data pre-processing: simple, fast and reproducible data pre-processing for the above public datasets as well as your own local datasets in CSV/JSON/text. " But here: https://github.com/huggingface/datasets/blob/master/templates/new_dataset_script.py#L82-L117 You menti...
user: surak
html_url: https://github.com/huggingface/datasets/issues/1988
pull_request: null
is_pull_request: false
822,308,956
1,987
wmt15 is broken
closed
[]
2021-03-04T16:46:25
2022-10-05T13:12:26
2022-10-05T13:12:26
While testing the hotfix, I tried a random other wmt release and found wmt15 to be broken: ``` python -c 'from datasets import load_dataset; load_dataset("wmt15", "de-en")' Downloading: 2.91kB [00:00, 818kB/s] Downloading: 3.02kB [00:00, 897kB/s] Downloading: 41.1kB [00:00, 19.1MB/s] Downloading and preparing da...
stas00
https://github.com/huggingface/datasets/issues/1987
null
false
822,176,290
1,986
wmt datasets fail to load
closed
[]
2021-03-04T14:18:55
2021-03-04T14:31:07
2021-03-04T14:31:07
~\.cache\huggingface\modules\datasets_modules\datasets\wmt14\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\wmt_utils.py in _split_generators(self, dl_manager) 758 # Extract manually downloaded files. 759 manual_files = dl_manager.extract(manual_paths_dict) --> 760 e...
sabania
https://github.com/huggingface/datasets/issues/1986
null
false
822,170,651
1,985
Optimize int precision
closed
[]
2021-03-04T14:12:23
2021-03-22T12:04:40
2021-03-16T09:44:00
Optimize int precision to reduce dataset file size. Close #1973, close #1825, close #861.
albertvillanova
https://github.com/huggingface/datasets/pull/1985
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1985", "html_url": "https://github.com/huggingface/datasets/pull/1985", "diff_url": "https://github.com/huggingface/datasets/pull/1985.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1985.patch", "merged_at": "2021-03-16T09:44...
true
821,816,588
1,984
Add tests for WMT datasets
closed
[]
2021-03-04T06:46:42
2022-11-04T14:19:16
2022-11-04T14:19:16
As requested in #1981, we need tests for WMT datasets, using dummy data.
albertvillanova
https://github.com/huggingface/datasets/issues/1984
null
false
821,746,008
1,983
The size of CoNLL-2003 is not consistent with the official release.
closed
[]
2021-03-04T04:41:34
2022-10-05T13:13:26
2022-10-05T13:13:26
Thanks for sharing the dataset! But when I use conll-2003, I have some questions. The statistics of conll-2003 in this repo are: \#train 14041 \#dev 3250 \#test 3453 While the official statistics are: \#train 14987 \#dev 3466 \#test 3684 Looking forward to your reply~
h-peng17
https://github.com/huggingface/datasets/issues/1983
null
false
821,448,791
1,982
Fix NestedDataStructure.data for empty dict
closed
[]
2021-03-03T20:16:51
2021-03-04T16:46:04
2021-03-03T22:48:36
Fix #1981
albertvillanova
https://github.com/huggingface/datasets/pull/1982
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1982", "html_url": "https://github.com/huggingface/datasets/pull/1982", "diff_url": "https://github.com/huggingface/datasets/pull/1982.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1982.patch", "merged_at": "2021-03-03T22:48...
true
821,411,109
1,981
wmt datasets fail to load
closed
[]
2021-03-03T19:21:39
2021-03-04T14:16:47
2021-03-03T22:48:36
on master: ``` python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")' Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/de-en/1.0.0/43e717d978d226150...
stas00
https://github.com/huggingface/datasets/issues/1981
null
false
821,312,810
1,980
Loading all answers from drop
closed
[]
2021-03-03T17:13:07
2021-03-15T11:27:26
2021-03-15T11:27:26
Hello all, I propose this change to the DROP loading script so that all answers are loaded no matter their type. Currently, only "span" answers are loaded, which excludes a significant amount of answers from drop (i.e. "number" and "date"). I updated the script with the version I use for my work. However, I could...
KaijuML
https://github.com/huggingface/datasets/pull/1980
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1980", "html_url": "https://github.com/huggingface/datasets/pull/1980", "diff_url": "https://github.com/huggingface/datasets/pull/1980.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1980.patch", "merged_at": "2021-03-15T11:27...
true
820,977,853
1,979
Add article_id and process test set template for semeval 2020 task 11…
closed
[]
2021-03-03T10:34:32
2021-03-13T10:59:40
2021-03-12T13:10:50
… dataset - `article_id` is needed to create the submission file for the task at https://propaganda.qcri.org/semeval2020-task11/ - The `technique classification` task provides the span indices in a template for the test set that is necessary to complete the task. This PR implements processing of that template for t...
hemildesai
https://github.com/huggingface/datasets/pull/1979
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1979", "html_url": "https://github.com/huggingface/datasets/pull/1979", "diff_url": "https://github.com/huggingface/datasets/pull/1979.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1979.patch", "merged_at": "2021-03-12T13:10...
true
820,956,806
1,978
Adding ro sts dataset
closed
[]
2021-03-03T10:08:53
2021-03-05T10:00:14
2021-03-05T09:33:55
Adding [RO-STS](https://github.com/dumitrescustefan/RO-STS) dataset
lorinczb
https://github.com/huggingface/datasets/pull/1978
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1978", "html_url": "https://github.com/huggingface/datasets/pull/1978", "diff_url": "https://github.com/huggingface/datasets/pull/1978.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1978.patch", "merged_at": "2021-03-05T09:33...
true
820,312,022
1,977
ModuleNotFoundError: No module named 'apache_beam' for wikipedia datasets
open
[]
2021-03-02T19:21:28
2021-03-03T10:17:40
null
Hi I am trying to run run_mlm.py code [1] of huggingface with following "wikipedia"/ "20200501.aa" dataset: `python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.aa --do_train --do_eval --output_dir /tmp/test-mlm --max_seq_l...
dorost1234
https://github.com/huggingface/datasets/issues/1977
null
false
820,228,538
1,976
Add datasets full offline mode with HF_DATASETS_OFFLINE
closed
[]
2021-03-02T17:26:59
2021-03-03T15:45:31
2021-03-03T15:45:30
Add the HF_DATASETS_OFFLINE environment variable for users who want to use `datasets` offline without having to wait for the network timeouts/retries to happen. This was requested in https://github.com/huggingface/datasets/issues/1939 cc @stas00
lhoestq
https://github.com/huggingface/datasets/pull/1976
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1976", "html_url": "https://github.com/huggingface/datasets/pull/1976", "diff_url": "https://github.com/huggingface/datasets/pull/1976.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1976.patch", "merged_at": "2021-03-03T15:45...
true
820,205,485
1,975
Fix flake8
closed
[]
2021-03-02T16:59:13
2021-03-04T10:43:22
2021-03-04T10:43:22
Fix flake8 style.
albertvillanova
https://github.com/huggingface/datasets/pull/1975
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1975", "html_url": "https://github.com/huggingface/datasets/pull/1975", "diff_url": "https://github.com/huggingface/datasets/pull/1975.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1975.patch", "merged_at": "2021-03-04T10:43...
true
820,122,223
1,974
feat(docs): navigate with left/right arrow keys
closed
[]
2021-03-02T15:24:50
2021-03-04T10:44:12
2021-03-04T10:42:48
Enables docs navigation with left/right arrow keys. It can be useful for the ones who navigate with keyboard a lot. More info : https://github.com/sphinx-doc/sphinx/pull/2064 You can try here : https://29353-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html
ydcjeff
https://github.com/huggingface/datasets/pull/1974
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1974", "html_url": "https://github.com/huggingface/datasets/pull/1974", "diff_url": "https://github.com/huggingface/datasets/pull/1974.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1974.patch", "merged_at": "2021-03-04T10:42...
true
820,077,312
1,973
Question: what gets stored in the datasets cache and why is it so huge?
closed
[]
2021-03-02T14:35:53
2021-03-30T14:03:59
2021-03-16T09:44:00
I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is stored in there and why is it so large? I don't think I noticed this problem before; it seems to be related to the new version of the datasets library. Any in...
ioana-blue
https://github.com/huggingface/datasets/issues/1973
null
false
819,752,761
1,972
'Dataset' object has no attribute 'rename_column'
closed
[]
2021-03-02T08:01:49
2022-06-01T16:08:47
2022-06-01T16:08:47
'Dataset' object has no attribute 'rename_column'
farooqzaman1
https://github.com/huggingface/datasets/issues/1972
null
false
819,714,231
1,971
Fix ArrowWriter closes stream at exit
closed
[]
2021-03-02T07:12:34
2021-03-10T16:36:57
2021-03-10T16:36:57
Current implementation of ArrowWriter does not properly release the `stream` resource (by closing it) if its `finalize()` method is not called and/or an Exception is raised before/during the call to its `finalize()` method. Therefore, ArrowWriter should be used as a context manager that properly closes its `stream` ...
albertvillanova
https://github.com/huggingface/datasets/pull/1971
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1971", "html_url": "https://github.com/huggingface/datasets/pull/1971", "diff_url": "https://github.com/huggingface/datasets/pull/1971.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1971.patch", "merged_at": "2021-03-10T16:36...
true
819,500,620
1,970
Fixing the URL filtering for bad MLSUM examples in GEM
closed
[]
2021-03-02T01:22:58
2021-03-02T03:19:06
2021-03-02T02:01:33
This updates the code and metadata to use the updated `gem_mlsum_bad_ids_fixed.json` file provided by @juand-r cc @sebastianGehrmann
yjernite
https://github.com/huggingface/datasets/pull/1970
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1970", "html_url": "https://github.com/huggingface/datasets/pull/1970", "diff_url": "https://github.com/huggingface/datasets/pull/1970.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1970.patch", "merged_at": "2021-03-02T02:01...
true
819,129,568
1,967
Add Turkish News Category Dataset - 270K - Lite Version
closed
[]
2021-03-01T18:21:59
2021-03-02T17:25:00
2021-03-02T17:25:00
This PR adds the Turkish News Categories Dataset (270K - Lite Version) dataset which is a text classification dataset by me, @basakbuluz and @serdarakyol. This dataset contains the same news from the current [interpress_news_category_tr dataset](https://huggingface.co/datasets/interpress_news_category_tr) but contains...
yavuzKomecoglu
https://github.com/huggingface/datasets/pull/1967
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1967", "html_url": "https://github.com/huggingface/datasets/pull/1967", "diff_url": "https://github.com/huggingface/datasets/pull/1967.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1967.patch", "merged_at": "2021-03-02T17:25...
true
819,101,253
1,966
Fix metrics collision in separate multiprocessed experiments
closed
[]
2021-03-01T17:45:18
2021-03-02T13:05:45
2021-03-02T13:05:44
As noticed in #1942 , there's a issue with locks if you run multiple separate evaluation experiments in a multiprocessed setup. Indeed there is a time span in Metric._finalize() where the process 0 loses its lock before re-acquiring it. This is bad since the lock of the process 0 tells the other process that the cor...
lhoestq
https://github.com/huggingface/datasets/pull/1966
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1966", "html_url": "https://github.com/huggingface/datasets/pull/1966", "diff_url": "https://github.com/huggingface/datasets/pull/1966.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1966.patch", "merged_at": "2021-03-02T13:05...
true
818,833,460
1,965
Can we parallelized the add_faiss_index process over dataset shards ?
closed
[]
2021-03-01T12:47:34
2021-03-04T19:40:56
2021-03-04T19:40:42
I am thinking of making the **add_faiss_index** process faster. What if we run the add_faiss_index process on separate dataset shards and then combine them (dataset.concatenate) before saving the faiss.index file? I feel theoretically this will reduce the accuracy of retrieval since it affects the indexing process...
shamanez
https://github.com/huggingface/datasets/issues/1965
null
false
818,624,864
1,964
Datasets.py function load_dataset does not match squad dataset
closed
[]
2021-03-01T08:41:31
2022-10-05T13:09:47
2022-10-05T13:09:47
### 1 When I try to train lxmert,and follow the code in README that --dataset name: ```shell python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_len...
LeopoldACC
https://github.com/huggingface/datasets/issues/1964
null
false
818,289,967
1,963
bug in SNLI dataset
closed
[]
2021-02-28T19:36:20
2022-10-05T13:13:46
2022-10-05T13:13:46
Hi There is label of -1 in train set of SNLI dataset, please find the code below: ``` import numpy as np import datasets data = datasets.load_dataset("snli")["train"] labels = [] for d in data: labels.append(d["label"]) print(np.unique(labels)) ``` and results: `[-1 0 1 2]` version of datas...
dorost1234
https://github.com/huggingface/datasets/issues/1963
null
false
818,089,156
1,962
Fix unused arguments
closed
[]
2021-02-28T02:47:07
2021-03-11T02:18:17
2021-03-03T16:37:50
Noticed some args in the codebase are not used, so managed to find all such occurrences with Pylance and fix them.
mariosasko
https://github.com/huggingface/datasets/pull/1962
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1962", "html_url": "https://github.com/huggingface/datasets/pull/1962", "diff_url": "https://github.com/huggingface/datasets/pull/1962.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1962.patch", "merged_at": "2021-03-03T16:37...
true
818,077,947
1,961
Add sst dataset
closed
[]
2021-02-28T02:08:29
2021-03-04T10:38:53
2021-03-04T10:38:53
Related to #1934: Add the Stanford Sentiment Treebank dataset.
patpizio
https://github.com/huggingface/datasets/pull/1961
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1961", "html_url": "https://github.com/huggingface/datasets/pull/1961", "diff_url": "https://github.com/huggingface/datasets/pull/1961.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1961.patch", "merged_at": "2021-03-04T10:38...
true
818,073,154
1,960
Allow stateful function in dataset.map
closed
[]
2021-02-28T01:29:05
2021-03-23T15:26:49
2021-03-23T15:26:49
Removes the "test type" section in Dataset.map which would modify the state of the stateful function. Now, the return type of the map function is inferred after processing the first example. Fixes #1940 @lhoestq Not very happy with the usage of `nonlocal`. Would like to hear your opinion on this.
mariosasko
https://github.com/huggingface/datasets/pull/1960
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1960", "html_url": "https://github.com/huggingface/datasets/pull/1960", "diff_url": "https://github.com/huggingface/datasets/pull/1960.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1960.patch", "merged_at": "2021-03-23T15:26...
true
818,055,644
1,959
Bug in skip_rows argument of load_dataset function ?
closed
[]
2021-02-27T23:32:54
2021-03-09T10:21:32
2021-03-09T10:21:32
Hello everyone, I'm quite new to Git, so sorry in advance if I'm breaking some ground rules of issue posting... :/ I tried to use the load_dataset function, from the Huggingface datasets library, on a csv file using the skip_rows argument described on the Huggingface page to skip the first row containing column names `t...
LedaguenelArthur
https://github.com/huggingface/datasets/issues/1959
null
false
818,037,548
1,958
XSum dataset download link broken
closed
[]
2021-02-27T21:47:56
2021-02-27T21:50:16
2021-02-27T21:50:16
I did ``` from datasets import load_dataset dataset = load_dataset("xsum") ``` This returns `ConnectionError: Couldn't reach http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz`
himat
https://github.com/huggingface/datasets/issues/1958
null
false
818,013,741
1,956
[distributed env] potentially unsafe parallel execution
closed
[]
2021-02-27T20:38:45
2021-03-01T17:24:42
2021-03-01T17:24:42
``` metric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank) ``` presumes that there is only one set of parallel processes running - and will intermittently fail if you have multiple sets running as they will surely overwrite each other. Similar to https://github.com/huggingface/datasets/issu...
stas00
https://github.com/huggingface/datasets/issues/1956
null
false
818,010,664
1,955
typos + grammar
closed
[]
2021-02-27T20:21:43
2021-03-01T17:20:38
2021-03-01T14:43:19
This PR proposes a few typo + grammar fixes, and rewrites some sentences in an attempt to improve readability. N.B. When referring to the library `datasets` in the docs it is typically used as a singular, and it definitely is a singular when written as "`datasets` library", that is "`datasets` library is ..." and no...
stas00
https://github.com/huggingface/datasets/pull/1955
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1955", "html_url": "https://github.com/huggingface/datasets/pull/1955", "diff_url": "https://github.com/huggingface/datasets/pull/1955.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1955.patch", "merged_at": "2021-03-01T14:43...
true
817,565,563
1,954
add a new column
closed
[]
2021-02-26T18:17:27
2021-04-29T14:50:43
2021-04-29T14:50:43
Hi, I'd need to add a new column to the dataset; I was wondering how this can be done? Thanks @lhoestq
dorost1234
https://github.com/huggingface/datasets/issues/1954
null
false
817,498,869
1,953
Documentation for to_csv, to_pandas and to_dict
closed
[]
2021-02-26T16:35:49
2021-03-01T14:03:48
2021-03-01T14:03:47
I added these methods to the documentation with a small paragraph. I also fixed some formatting issues in the docstrings
lhoestq
https://github.com/huggingface/datasets/pull/1953
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1953", "html_url": "https://github.com/huggingface/datasets/pull/1953", "diff_url": "https://github.com/huggingface/datasets/pull/1953.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1953.patch", "merged_at": "2021-03-01T14:03...
true
817,428,160
1,952
Handle timeouts
closed
[]
2021-02-26T15:02:07
2021-03-01T14:29:24
2021-03-01T14:29:24
As noticed in https://github.com/huggingface/datasets/issues/1939, timeouts were not properly handled when loading a dataset. This caused the connection to hang indefinitely when working in a firewalled environment cc @stas00 I added a default timeout, and included an option to our offline environment for tests to...
lhoestq
https://github.com/huggingface/datasets/pull/1952
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1952", "html_url": "https://github.com/huggingface/datasets/pull/1952", "diff_url": "https://github.com/huggingface/datasets/pull/1952.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1952.patch", "merged_at": "2021-03-01T14:29...
true
817,423,573
1,951
Add cross-platform support for datasets-cli
closed
[]
2021-02-26T14:56:25
2021-03-11T02:18:26
2021-02-26T15:30:26
One thing I've noticed while going through the codebase is the usage of `scripts` in `setup.py`. This [answer](https://stackoverflow.com/a/28119736/14095927) on SO explains it nicely why it's better to use `entry_points` instead of `scripts`. To add cross-platform support to the CLI, this PR replaces `scripts` with `en...
mariosasko
https://github.com/huggingface/datasets/pull/1951
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1951", "html_url": "https://github.com/huggingface/datasets/pull/1951", "diff_url": "https://github.com/huggingface/datasets/pull/1951.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1951.patch", "merged_at": "2021-02-26T15:30...
true
817,295,235
1,950
updated multi_nli dataset with missing fields
closed
[]
2021-02-26T11:54:36
2021-03-01T11:08:30
2021-03-01T11:08:29
1) updated fields which were missing earlier 2) added tags to README 3) updated a few fields of README 4) new dataset_infos.json and dummy files
bhavitvyamalik
https://github.com/huggingface/datasets/pull/1950
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1950", "html_url": "https://github.com/huggingface/datasets/pull/1950", "diff_url": "https://github.com/huggingface/datasets/pull/1950.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1950.patch", "merged_at": "2021-03-01T11:08...
true
816,986,936
1,949
Enable Fast Filtering using Arrow Dataset
open
[]
2021-02-26T02:53:37
2021-02-26T19:18:29
null
Hi @lhoestq, As mentioned in Issue #1796, I would love to work on enabling fast filtering/mapping. Can you please share the expectations? It would be great if you could point me to the relevant methods/files involved. Or the docs or maybe an overview of `arrow_dataset.py`. I only ask this because I am having trouble...
gchhablani
https://github.com/huggingface/datasets/issues/1949
null
false
816,689,329
1,948
dataset loading logger level
closed
[]
2021-02-25T18:33:37
2023-07-12T17:19:30
2023-07-12T17:19:30
on master I get this with `--dataset_name wmt16 --dataset_config ro-en`: ``` WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-2e01bead8cf42e26.arrow WARNING:datasets.arr...
stas00
https://github.com/huggingface/datasets/issues/1948
null
false
816,590,299
1,947
Update documentation with not in place transforms and update DatasetDict
closed
[]
2021-02-25T16:23:18
2021-03-01T14:36:54
2021-03-01T14:36:53
In #1883 were added the not in-place transforms `flatten`, `remove_columns`, `rename_column` and `cast`. I added them to the documentation and added a paragraph on how to use them You can preview the documentation [here](https://28862-250213286-gh.circle-artifacts.com/0/docs/_build/html/processing.html#renaming-r...
lhoestq
https://github.com/huggingface/datasets/pull/1947
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1947", "html_url": "https://github.com/huggingface/datasets/pull/1947", "diff_url": "https://github.com/huggingface/datasets/pull/1947.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1947.patch", "merged_at": "2021-03-01T14:36...
true
816,526,294
1,946
Implement Dataset from CSV
closed
[]
2021-02-25T15:10:13
2021-03-12T09:42:48
2021-03-12T09:42:48
Implement `Dataset.from_csv`. Analogue to #1943. If finally, the scripts should be used instead, at least we can reuse the tests here.
albertvillanova
https://github.com/huggingface/datasets/pull/1946
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1946", "html_url": "https://github.com/huggingface/datasets/pull/1946", "diff_url": "https://github.com/huggingface/datasets/pull/1946.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1946.patch", "merged_at": "2021-03-12T09:42...
true
816,421,966
1,945
AttributeError: 'DatasetDict' object has no attribute 'concatenate_datasets'
closed
[]
2021-02-25T13:09:45
2021-02-25T13:20:35
2021-02-25T13:20:26
Hi, I am trying to concatenate a list of Hugging Face datasets as: ` train_dataset = datasets.concatenate_datasets(train_datasets) ` Here is the `train_datasets` when I print: ``` [Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows...
dorost1234
https://github.com/huggingface/datasets/issues/1945
null
false
816,267,216
1,944
Add Turkish News Category Dataset (270K - Lite Version)
closed
[]
2021-02-25T09:45:22
2021-03-02T17:46:41
2021-03-01T18:23:21
This PR adds the Turkish News Categories Dataset (270K - Lite Version) dataset which is a text classification dataset by me, @basakbuluz and @serdarakyol. This dataset contains the same news from the current [interpress_news_category_tr dataset](https://huggingface.co/datasets/interpress_news_category_tr) but contai...
yavuzKomecoglu
https://github.com/huggingface/datasets/pull/1944
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1944", "html_url": "https://github.com/huggingface/datasets/pull/1944", "diff_url": "https://github.com/huggingface/datasets/pull/1944.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1944.patch", "merged_at": null }
true
816,160,453
1,943
Implement Dataset from JSON and JSON Lines
closed
[]
2021-02-25T07:17:33
2021-03-18T09:42:08
2021-03-18T09:42:08
Implement `Dataset.from_jsonl`.
albertvillanova
https://github.com/huggingface/datasets/pull/1943
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1943", "html_url": "https://github.com/huggingface/datasets/pull/1943", "diff_url": "https://github.com/huggingface/datasets/pull/1943.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1943.patch", "merged_at": "2021-03-18T09:42...
true
816,037,520
1,942
[experiment] missing default_experiment-1-0.arrow
closed
[]
2021-02-25T03:02:15
2022-10-05T13:08:45
2022-10-05T13:08:45
the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/...
stas00
https://github.com/huggingface/datasets/issues/1942
null
false
815,985,167
1,941
Loading of FAISS index fails for index_name = 'exact'
closed
[]
2021-02-25T01:30:54
2021-02-25T14:28:46
2021-02-25T14:28:46
Hi, It looks like loading of FAISS index now fails when using index_name = 'exact'. For example, from the RAG [model card](https://huggingface.co/facebook/rag-token-nq?fbclid=IwAR3bTfhls5U_t9DqsX2Vzb7NhtRHxJxfQ-uwFT7VuCPMZUM2AdAlKF_qkI8#usage). Running `transformers==4.3.2` and datasets installed from source o...
mkserge
https://github.com/huggingface/datasets/issues/1941
null
false