Schema (one row per GitHub issue or pull request; ranges give the min and max observed values):

  column           type          range / values
  id               int64         599M to 3.29B
  url              string        lengths 58 to 61
  html_url         string        lengths 46 to 51
  number           int64         1 to 7.72k
  title            string        lengths 1 to 290
  state            string        2 values
  comments         int64         0 to 70
  created_at       timestamp[s]  2020-04-14 10:18:02 to 2025-08-05 09:28:51
  updated_at       timestamp[s]  2020-04-27 16:04:17 to 2025-08-05 11:39:56
  closed_at        timestamp[s]  2020-04-14 12:01:40 to 2025-08-01 05:15:45
  user_login       string        lengths 3 to 26
  labels           list          lengths 0 to 4
  body             string        lengths 0 to 228k
  is_pull_request  bool          2 classes
#2485 Implement layered building [issue, open]
  id: 919,099,218 | comments: 0 | user: albertvillanova | labels: ["enhancement"]
  created: 2021-06-11T18:54:25 | updated: 2021-06-11T18:54:25 | closed: null
  html: https://github.com/huggingface/datasets/issues/2485
  api: https://api.github.com/repos/huggingface/datasets/issues/2485
  body: As discussed with @stas00 and @lhoestq (see also here https://github.com/huggingface/datasets/issues/2481#issuecomment-859712190): > My suggestion for this would be to have this enabled by default. > > Plus I don't know if there should be a dedicated issue to that is another functionality. But I propose layered b...

#2484 Implement loading a dataset builder [issue, closed]
  id: 919,092,635 | comments: 1 | user: albertvillanova | labels: ["enhancement"]
  created: 2021-06-11T18:47:22 | updated: 2021-07-05T10:45:57 | closed: 2021-07-05T10:45:57
  html: https://github.com/huggingface/datasets/issues/2484
  api: https://api.github.com/repos/huggingface/datasets/issues/2484
  body: As discussed with @stas00 and @lhoestq, this would allow things like: ```python from datasets import load_dataset_builder dataset_name = "openwebtext" builder = load_dataset_builder(dataset_name) print(builder.cache_dir) ```

#2483 Use gc.collect only when needed to avoid slow downs [pull request, closed]
  id: 918,871,712 | comments: 2 | user: lhoestq | labels: []
  created: 2021-06-11T15:09:30 | updated: 2021-06-18T19:25:06 | closed: 2021-06-11T15:31:36
  html: https://github.com/huggingface/datasets/pull/2483
  api: https://api.github.com/repos/huggingface/datasets/issues/2483
  body: In https://github.com/huggingface/datasets/commit/42320a110d9d072703814e1f630a0d90d626a1e6 we added a call to gc.collect to resolve some issues on windows (see https://github.com/huggingface/datasets/pull/2482) However calling gc.collect too often causes significant slow downs (the CI run time doubled). So I just m...

#2482 Allow to use tqdm>=4.50.0 [pull request, closed]
  id: 918,846,027 | comments: 0 | user: lhoestq | labels: []
  created: 2021-06-11T14:49:21 | updated: 2021-06-11T15:11:51 | closed: 2021-06-11T15:11:50
  html: https://github.com/huggingface/datasets/pull/2482
  api: https://api.github.com/repos/huggingface/datasets/issues/2482
  body: We used to have permission errors on windows whith the latest versions of tqdm (see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/6365/workflows/24f7c960-3176-43a5-9652-7830a23a981e/jobs/39232)) They were due to open arrow files not properly closed by pyarrow. Since https://github.com/huggin...

#2481 Delete extracted files to save disk space [issue, closed]
  id: 918,680,168 | comments: 1 | user: albertvillanova | labels: ["enhancement"]
  created: 2021-06-11T12:21:52 | updated: 2021-07-19T09:08:18 | closed: 2021-07-19T09:08:18
  html: https://github.com/huggingface/datasets/issues/2481
  api: https://api.github.com/repos/huggingface/datasets/issues/2481
  body: As discussed with @stas00 and @lhoestq, allowing the deletion of extracted files would save a great amount of disk space to typical user.

#2480 Set download/extracted paths configurable [issue, open]
  id: 918,678,578 | comments: 1 | user: albertvillanova | labels: ["enhancement"]
  created: 2021-06-11T12:20:24 | updated: 2021-06-15T14:23:49 | closed: null
  html: https://github.com/huggingface/datasets/issues/2480
  api: https://api.github.com/repos/huggingface/datasets/issues/2480
  body: As discussed with @stas00 and @lhoestq, setting these paths configurable may allow to overcome disk space limitation on different partitions/drives. TODO: - [x] Set configurable extracted datasets path: #2487 - [x] Set configurable downloaded datasets path: #2488 - [ ] Set configurable "incomplete" datasets path?

#2479 ❌ load_datasets ❌ [pull request, closed]
  id: 918,672,431 | comments: 0 | user: julien-c | labels: []
  created: 2021-06-11T12:14:36 | updated: 2021-06-11T14:46:25 | closed: 2021-06-11T14:46:25
  html: https://github.com/huggingface/datasets/pull/2479
  api: https://api.github.com/repos/huggingface/datasets/issues/2479
  body: (empty)

#2478 Create release script [issue, open]
  id: 918,507,510 | comments: 1 | user: albertvillanova | labels: ["enhancement"]
  created: 2021-06-11T09:38:02 | updated: 2023-07-20T13:22:23 | closed: null
  html: https://github.com/huggingface/datasets/issues/2478
  api: https://api.github.com/repos/huggingface/datasets/issues/2478
  body: Create a script so that releases can be done automatically (as done in `transformers`).

#2477 Fix docs custom stable version [pull request, closed]
  id: 918,334,431 | comments: 4 | user: albertvillanova | labels: []
  created: 2021-06-11T07:26:03 | updated: 2021-06-14T09:14:20 | closed: 2021-06-14T08:20:18
  html: https://github.com/huggingface/datasets/pull/2477
  api: https://api.github.com/repos/huggingface/datasets/issues/2477
  body: Currently docs default version is 1.5.0. This PR fixes this and sets the latest version instead.

#2476 Add TimeDial [pull request, closed]
  id: 917,686,662 | comments: 1 | user: bhavitvyamalik | labels: []
  created: 2021-06-10T18:33:07 | updated: 2021-07-30T12:57:54 | closed: 2021-07-30T12:57:54
  html: https://github.com/huggingface/datasets/pull/2476
  api: https://api.github.com/repos/huggingface/datasets/issues/2476
  body: Dataset: https://github.com/google-research-datasets/TimeDial To-Do: Update README.md and add YAML tags

#2475 Issue in timit_asr database [issue, closed]
  id: 917,650,882 | comments: 2 | user: hrahamim | labels: ["bug"]
  created: 2021-06-10T18:05:29 | updated: 2021-06-13T08:13:50 | closed: 2021-06-13T08:13:13
  html: https://github.com/huggingface/datasets/issues/2475
  api: https://api.github.com/repos/huggingface/datasets/issues/2475
  body: ## Describe the bug I am trying to load the timit_asr dataset however only the first record is shown (duplicated over all the rows). I am using the next code line dataset = load_dataset(“timit_asr”, split=“test”).shuffle().select(range(10)) The above code result with the same sentence duplicated ten times. It al...

#2474 cache_dir parameter for load_from_disk ? [issue, closed]
  id: 917,622,055 | comments: 4 | user: chbensch | labels: ["enhancement"]
  created: 2021-06-10T17:39:36 | updated: 2022-02-16T14:55:01 | closed: 2022-02-16T14:55:00
  html: https://github.com/huggingface/datasets/issues/2474
  api: https://api.github.com/repos/huggingface/datasets/issues/2474
  body: **Is your feature request related to a problem? Please describe.** When using Google Colab big datasets can be an issue, as they won't fit on the VM's disk. Therefore mounting google drive could be a possible solution. Unfortunatly when loading my own dataset by using the _load_from_disk_ function, the data gets cache...
#2473 Add Disfl-QA [pull request, closed]
  id: 917,538,629 | comments: 2 | user: bhavitvyamalik | labels: []
  created: 2021-06-10T16:18:00 | updated: 2021-07-29T11:56:19 | closed: 2021-07-29T11:56:18
  html: https://github.com/huggingface/datasets/pull/2473
  api: https://api.github.com/repos/huggingface/datasets/issues/2473
  body: Dataset: https://github.com/google-research-datasets/disfl-qa To-Do: Update README.md and add YAML tags

#2472 Fix automatic generation of Zenodo DOI [issue, closed]
  id: 917,463,821 | comments: 4 | user: albertvillanova | labels: ["bug"]
  created: 2021-06-10T15:15:46 | updated: 2021-06-14T16:49:42 | closed: 2021-06-14T16:49:42
  html: https://github.com/huggingface/datasets/issues/2472
  api: https://api.github.com/repos/huggingface/datasets/issues/2472
  body: After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as "Received", instead of in green as "Published". I have contacted Zenodo support to fix this issue. TODO: - [x] Check with Zenodo to fix the issue - [x] Check BibTeX entry is right

#2471 Fix PermissionError on Windows when using tqdm >=4.50.0 [issue, closed]
  id: 917,067,165 | comments: 0 | user: albertvillanova | labels: ["bug"]
  created: 2021-06-10T08:31:49 | updated: 2021-06-11T15:11:50 | closed: 2021-06-11T15:11:50
  html: https://github.com/huggingface/datasets/issues/2471
  api: https://api.github.com/repos/huggingface/datasets/issues/2471
  body: See: https://app.circleci.com/pipelines/github/huggingface/datasets/235/workflows/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369/jobs/1111 ``` PermissionError: [WinError 32] The process cannot access the file because it is being used by another process ```

#2470 Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`. [issue, closed]
  id: 916,724,260 | comments: 6 | user: mbforbes | labels: ["bug"]
  created: 2021-06-09T22:40:22 | updated: 2021-07-01T09:34:54 | closed: 2021-07-01T09:11:13
  html: https://github.com/huggingface/datasets/issues/2470
  api: https://api.github.com/repos/huggingface/datasets/issues/2470
  body: ## Describe the bug Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`. I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any ti...

#2469 Bump tqdm version [pull request, closed]
  id: 916,440,418 | comments: 2 | user: lewtun | labels: []
  created: 2021-06-09T17:24:40 | updated: 2021-06-11T15:03:42 | closed: 2021-06-11T15:03:36
  html: https://github.com/huggingface/datasets/pull/2469
  api: https://api.github.com/repos/huggingface/datasets/issues/2469
  body: (empty)

#2468 Implement ClassLabel encoding in JSON loader [pull request, closed]
  id: 916,427,320 | comments: 1 | user: albertvillanova | labels: []
  created: 2021-06-09T17:08:54 | updated: 2021-06-28T15:39:54 | closed: 2021-06-28T15:05:35
  html: https://github.com/huggingface/datasets/pull/2468
  api: https://api.github.com/repos/huggingface/datasets/issues/2468
  body: Close #2365.

#2466 change udpos features structure [pull request, closed]
  id: 915,914,098 | comments: 2 | user: cosmeowpawlitan | labels: []
  created: 2021-06-09T08:03:31 | updated: 2021-06-18T11:55:09 | closed: 2021-06-16T10:41:37
  html: https://github.com/huggingface/datasets/pull/2466
  api: https://api.github.com/repos/huggingface/datasets/issues/2466
  body: The structure is change such that each example is a sentence The change is done for issues: #2061 #2444 Close #2061 , close #2444.

#2465 adding masahaner dataset [pull request, closed]
  id: 915,525,071 | comments: 3 | user: dadelani | labels: []
  created: 2021-06-08T21:20:25 | updated: 2021-06-14T14:59:05 | closed: 2021-06-14T14:59:05
  html: https://github.com/huggingface/datasets/pull/2465
  api: https://api.github.com/repos/huggingface/datasets/issues/2465
  body: Adding Masakhane dataset https://github.com/masakhane-io/masakhane-ner @lhoestq , can you please review

#2464 fix: adjusting indexing for the labels. [pull request, closed]
  id: 915,485,601 | comments: 1 | user: drugilsberg | labels: []
  created: 2021-06-08T20:47:25 | updated: 2021-06-09T10:15:46 | closed: 2021-06-09T09:10:28
  html: https://github.com/huggingface/datasets/pull/2464
  api: https://api.github.com/repos/huggingface/datasets/issues/2464
  body: The labels index were mismatching the actual ones used in the dataset. Specifically `0` is used for `SUPPORTS` and `1` is used for `REFUTES` After this change, the `README.md` now reflects the content of `dataset_infos.json`. Signed-off-by: Matteo Manica <drugilsberg@gmail.com>

#2463 Fix proto_qa download link [pull request, closed]
  id: 915,454,788 | comments: 0 | user: mariosasko | labels: []
  created: 2021-06-08T20:23:16 | updated: 2021-06-10T12:49:56 | closed: 2021-06-10T08:31:10
  html: https://github.com/huggingface/datasets/pull/2463
  api: https://api.github.com/repos/huggingface/datasets/issues/2463
  body: Fixes #2459 Instead of updating the path, this PR fixes a commit hash as suggested by @lhoestq.

#2462 Merge DatasetDict and Dataset [issue, open]
  id: 915,384,613 | comments: 2 | user: albertvillanova | labels: ["enhancement", "generic discussion"]
  created: 2021-06-08T19:22:04 | updated: 2023-08-16T09:34:34 | closed: null
  html: https://github.com/huggingface/datasets/issues/2462
  api: https://api.github.com/repos/huggingface/datasets/issues/2462
  body: As discussed in #2424 and #2437 (please see there for detailed conversation): - It would be desirable to improve UX with respect the confusion between DatasetDict and Dataset. - The difference between Dataset and DatasetDict is an additional abstraction complexity that confuses "typical" end users. - A user expects...

#2461 Support sliced list arrays in cast [pull request, closed]
  id: 915,286,150 | comments: 0 | user: lhoestq | labels: []
  created: 2021-06-08T17:38:47 | updated: 2021-06-08T17:56:24 | closed: 2021-06-08T17:56:23
  html: https://github.com/huggingface/datasets/pull/2461
  api: https://api.github.com/repos/huggingface/datasets/issues/2461
  body: There is this issue in pyarrow: ```python import pyarrow as pa arr = pa.array([[i * 10] for i in range(4)]) arr.cast(pa.list_(pa.int32())) # works arr = arr.slice(1) arr.cast(pa.list_(pa.int32())) # fails # ArrowNotImplementedError("Casting sliced lists (non-zero offset) not yet implemented") ``` Howev...
#2460 Revert default in-memory for small datasets [pull request, closed]
  id: 915,268,536 | comments: 1 | user: albertvillanova | labels: ["enhancement"]
  created: 2021-06-08T17:14:23 | updated: 2021-06-08T18:04:14 | closed: 2021-06-08T17:55:43
  html: https://github.com/huggingface/datasets/pull/2460
  api: https://api.github.com/repos/huggingface/datasets/issues/2460
  body: Close #2458

#2459 `Proto_qa` hosting seems to be broken [issue, closed]
  id: 915,222,015 | comments: 1 | user: VictorSanh | labels: ["bug"]
  created: 2021-06-08T16:16:32 | updated: 2021-06-10T08:31:09 | closed: 2021-06-10T08:31:09
  html: https://github.com/huggingface/datasets/issues/2459
  api: https://api.github.com/repos/huggingface/datasets/issues/2459
  body: ## Describe the bug The hosting (on Github) of the `proto_qa` dataset seems broken. I haven't investigated more yet, just flagging it for now. @zaidalyafeai if you want to dive into it, I think it's just a matter of changing the links in `proto_qa.py` ## Steps to reproduce the bug ```python from datasets impo...

#2458 Revert default in-memory for small datasets [issue, closed]
  id: 915,199,693 | comments: 1 | user: albertvillanova | labels: ["enhancement"]
  created: 2021-06-08T15:51:41 | updated: 2021-06-08T18:57:11 | closed: 2021-06-08T17:55:43
  html: https://github.com/huggingface/datasets/issues/2458
  api: https://api.github.com/repos/huggingface/datasets/issues/2458
  body: Users are reporting issues and confusion about setting default in-memory to True for small datasets. We see 2 clear use cases of Datasets: - the "canonical" way, where you can work with very large datasets, as they are memory-mapped and cached (after every transformation) - some edge cases (speed benchmarks, inter...

#2457 Add align_labels_with_mapping function [pull request, closed]
  id: 915,079,441 | comments: 5 | user: lewtun | labels: []
  created: 2021-06-08T13:54:00 | updated: 2022-01-12T08:57:41 | closed: 2021-06-17T09:56:52
  html: https://github.com/huggingface/datasets/pull/2457
  api: https://api.github.com/repos/huggingface/datasets/issues/2457
  body: This PR adds a helper function to align the `label2id` mapping between a `datasets.Dataset` and a classifier (e.g. a transformer with a `PretrainedConfig.label2id` dict), with the alignment performed on the dataset itself. This will help us with the Hub evaluation, where we won't know in advance whether a model that...

#2456 Fix cross-reference typos in documentation [pull request, closed]
  id: 914,709,293 | comments: 0 | user: albertvillanova | labels: []
  created: 2021-06-08T09:45:14 | updated: 2021-06-08T17:41:37 | closed: 2021-06-08T17:41:36
  html: https://github.com/huggingface/datasets/pull/2456
  api: https://api.github.com/repos/huggingface/datasets/issues/2456
  body: Fix some minor typos in docs that avoid the creation of cross-reference links.

#2455 Update version in xor_tydi_qa.py [pull request, closed]
  id: 914,177,468 | comments: 1 | user: changjonathanc | labels: []
  created: 2021-06-08T02:23:45 | updated: 2021-06-14T15:35:25 | closed: 2021-06-14T15:35:25
  html: https://github.com/huggingface/datasets/pull/2455
  api: https://api.github.com/repos/huggingface/datasets/issues/2455
  body: Fix #2449 @lhoestq Should I revert to the old `dummy/1.0.0` or delete it and keep only `dummy/1.1.0`?

#2454 Rename config and environment variable for in memory max size [pull request, closed]
  id: 913,883,631 | comments: 1 | user: albertvillanova | labels: []
  created: 2021-06-07T19:21:08 | updated: 2021-06-07T20:43:46 | closed: 2021-06-07T20:43:46
  html: https://github.com/huggingface/datasets/pull/2454
  api: https://api.github.com/repos/huggingface/datasets/issues/2454
  body: As discussed in #2409, both config and environment variable have been renamed. cc: @stas00, huggingface/transformers#12056

#2453 Keep original features order [pull request, closed]
  id: 913,729,258 | comments: 5 | user: albertvillanova | labels: []
  created: 2021-06-07T16:26:38 | updated: 2021-06-15T18:05:36 | closed: 2021-06-15T15:43:48
  html: https://github.com/huggingface/datasets/pull/2453
  api: https://api.github.com/repos/huggingface/datasets/issues/2453
  body: When loading a Dataset from a JSON file whose column names are not sorted alphabetically, we should get the same column name order, whether we pass features (in the same order as in the file) or not. I found this issue while working on #2366.

#2452 MRPC test set differences between torch and tensorflow datasets [issue, closed]
  id: 913,603,877 | comments: 1 | user: FredericOdermatt | labels: ["bug"]
  created: 2021-06-07T14:20:26 | updated: 2021-06-07T14:34:32 | closed: 2021-06-07T14:34:32
  html: https://github.com/huggingface/datasets/issues/2452
  api: https://api.github.com/repos/huggingface/datasets/issues/2452
  body: ## Describe the bug When using `load_dataset("glue", "mrpc")` to load the MRPC dataset, the test set includes the labels. When using `tensorflow_datasets.load('glue/{}'.format('mrpc'))` to load the dataset the test set does not contain the labels. There should be consistency between torch and tensorflow ways of import...

#2451 Mention that there are no answers in adversarial_qa test set [pull request, closed]
  id: 913,263,340 | comments: 0 | user: lhoestq | labels: []
  created: 2021-06-07T08:13:57 | updated: 2021-06-07T08:34:14 | closed: 2021-06-07T08:34:13
  html: https://github.com/huggingface/datasets/pull/2451
  api: https://api.github.com/repos/huggingface/datasets/issues/2451
  body: As mention in issue https://github.com/huggingface/datasets/issues/2447, there are no answers in the test set

#2450 BLUE file not found [issue, closed]
  id: 912,890,291 | comments: 2 | user: mirfan899 | labels: []
  created: 2021-06-06T17:01:54 | updated: 2021-06-07T10:46:15 | closed: 2021-06-07T10:46:15
  html: https://github.com/huggingface/datasets/issues/2450
  api: https://api.github.com/repos/huggingface/datasets/issues/2450
  body: Hi, I'm having the following issue when I try to load the `blue` metric. ```shell import datasets metric = datasets.load_metric('blue') Traceback (most recent call last): File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 320, in prepare_module local...

#2449 Update `xor_tydi_qa` url to v1.1 [pull request, closed]
  id: 912,751,752 | comments: 6 | user: changjonathanc | labels: []
  created: 2021-06-06T09:44:58 | updated: 2021-06-07T15:16:21 | closed: 2021-06-07T08:31:04
  html: https://github.com/huggingface/datasets/pull/2449
  api: https://api.github.com/repos/huggingface/datasets/issues/2449
  body: The dataset is updated and the old url no longer works. So I updated it. I faced a bug while trying to fix this. Documenting the solution here. Maybe we can add it to the doc (`CONTRIBUTING.md` and `ADD_NEW_DATASET.md`). > And to make the command work without the ExpectedMoreDownloadedFiles error, you just need to ...
#2448 Fix flores download link [pull request, closed]
  id: 912,360,109 | comments: 0 | user: mariosasko | labels: []
  created: 2021-06-05T17:30:24 | updated: 2021-06-08T20:02:58 | closed: 2021-06-07T08:18:25
  html: https://github.com/huggingface/datasets/pull/2448
  api: https://api.github.com/repos/huggingface/datasets/issues/2448
  body: (empty)

#2447 dataset adversarial_qa has no answers in the "test" set [issue, closed]
  id: 912,299,527 | comments: 2 | user: bjascob | labels: ["bug"]
  created: 2021-06-05T14:57:38 | updated: 2021-06-07T11:13:07 | closed: 2021-06-07T11:13:07
  html: https://github.com/huggingface/datasets/issues/2447
  api: https://api.github.com/repos/huggingface/datasets/issues/2447
  body: ## Describe the bug When loading the adversarial_qa dataset the 'test' portion has no answers. Only the 'train' and 'validation' portions do. This occurs with all four of the configs ('adversarialQA', 'dbidaf', 'dbert', 'droberta') ## Steps to reproduce the bug ``` from datasets import load_dataset examples ...

#2446 `yelp_polarity` is broken [issue, closed]
  id: 911,635,399 | comments: 2 | user: JetRunner | labels: []
  created: 2021-06-04T15:44:29 | updated: 2021-06-04T18:56:47 | closed: 2021-06-04T18:56:47
  html: https://github.com/huggingface/datasets/issues/2446
  api: https://api.github.com/repos/huggingface/datasets/issues/2446
  body: ![image](https://user-images.githubusercontent.com/22514219/120828150-c4a35b00-c58e-11eb-8083-a537cee4dbb3.png)

#2445 Fix broken URLs for bn_hate_speech and covid_tweets_japanese [pull request, closed]
  id: 911,577,578 | comments: 2 | user: lewtun | labels: []
  created: 2021-06-04T14:53:35 | updated: 2021-06-04T17:39:46 | closed: 2021-06-04T17:39:45
  html: https://github.com/huggingface/datasets/pull/2445
  api: https://api.github.com/repos/huggingface/datasets/issues/2445
  body: Closes #2388

#2444 Sentence Boundaries missing in Dataset: xtreme / udpos [issue, closed]
  id: 911,297,139 | comments: 2 | user: cosmeowpawlitan | labels: ["bug"]
  created: 2021-06-04T09:10:26 | updated: 2021-06-18T11:53:43 | closed: 2021-06-18T11:53:43
  html: https://github.com/huggingface/datasets/issues/2444
  api: https://api.github.com/repos/huggingface/datasets/issues/2444
  body: I was browsing through annotation guidelines, as suggested by the datasets introduction. The guidlines saids "There must be exactly one blank line after every sentence, including the last sentence in the file. Empty sentences are not allowed." in the [Sentence Boundaries and Comments section](https://universaldepend...

#2443 Some tests hang on Windows [issue, closed]
  id: 909,983,574 | comments: 3 | user: mariosasko | labels: ["bug"]
  created: 2021-06-03T00:27:30 | updated: 2021-06-28T08:47:39 | closed: 2021-06-28T08:47:39
  html: https://github.com/huggingface/datasets/issues/2443
  api: https://api.github.com/repos/huggingface/datasets/issues/2443
  body: Currently, several tests hang on Windows if the max path limit of 260 characters is not disabled. This happens due to the changes introduced by #2223 that cause an infinite loop in `WindowsFileLock` described in #2220. This can be very tricky to debug, so I think now is a good time to address these issues/PRs. IMO thr...

#2442 add english language tags for ~100 datasets [pull request, closed]
  id: 909,677,029 | comments: 1 | user: VictorSanh | labels: []
  created: 2021-06-02T16:24:56 | updated: 2021-06-04T09:51:40 | closed: 2021-06-04T09:51:39
  html: https://github.com/huggingface/datasets/pull/2442
  api: https://api.github.com/repos/huggingface/datasets/issues/2442
  body: As discussed on Slack, I have manually checked for ~100 datasets that they have at least one subset in English. This information was missing so adding into the READMEs. Note that I didn't check all the subsets so it's possible that some of the datasets have subsets in other languages than English...

#2441 DuplicatedKeysError on personal dataset [issue, closed]
  id: 908,554,713 | comments: 2 | user: lucaguarro | labels: ["bug"]
  created: 2021-06-01T17:59:41 | updated: 2021-06-04T23:50:03 | closed: 2021-06-04T23:50:03
  html: https://github.com/huggingface/datasets/issues/2441
  api: https://api.github.com/repos/huggingface/datasets/issues/2441
  body: ## Describe the bug Ever since today, I have been getting a DuplicatedKeysError while trying to load my dataset from my own script. Error returned when running this line: `dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_trimmed_and_nered/goodreadsnered.py')` Note ...

#2440 Remove `extended` field from dataset tagger [issue, closed]
  id: 908,521,954 | comments: 4 | user: lewtun | labels: ["bug"]
  created: 2021-06-01T17:18:42 | updated: 2021-06-09T09:06:31 | closed: 2021-06-09T09:06:30
  html: https://github.com/huggingface/datasets/issues/2440
  api: https://api.github.com/repos/huggingface/datasets/issues/2440
  body: ## Describe the bug While working on #2435 I used the [dataset tagger](https://huggingface.co/datasets/tagging/) to generate the missing tags for the YAML metadata of each README.md file. However, it seems that our CI raises an error when the `extended` field is included: ``` dataset_name = 'arcd' @pytest.m...

#2439 Better error message when trying to access elements of a DatasetDict without specifying the split [pull request, closed]
  id: 908,511,983 | comments: 0 | user: lhoestq | labels: []
  created: 2021-06-01T17:04:32 | updated: 2021-06-15T16:03:23 | closed: 2021-06-07T08:54:35
  html: https://github.com/huggingface/datasets/pull/2439
  api: https://api.github.com/repos/huggingface/datasets/issues/2439
  body: As mentioned in #2437 it'd be nice to have an indication to the users when they try to access an element of a DatasetDict without specifying the split name. cc @thomwolf

#2438 Fix NQ features loading: reorder fields of features to match nested fields order in arrow data [pull request, closed]
  id: 908,461,914 | comments: 0 | user: lhoestq | labels: []
  created: 2021-06-01T16:09:30 | updated: 2021-06-04T09:02:31 | closed: 2021-06-04T09:02:31
  html: https://github.com/huggingface/datasets/pull/2438
  api: https://api.github.com/repos/huggingface/datasets/issues/2438
  body: As mentioned in #2401, there is an issue when loading the features of `natural_questions` since the order of the nested fields in the features don't match. The order is important since it matters for the underlying arrow schema. To fix that I re-order the features based on the arrow schema: ```python inferred_fe...

#2437 Better error message when using the wrong load_from_disk [pull request, closed]
  id: 908,108,882 | comments: 9 | user: lhoestq | labels: []
  created: 2021-06-01T09:43:22 | updated: 2021-06-08T18:03:50 | closed: 2021-06-08T18:03:50
  html: https://github.com/huggingface/datasets/pull/2437
  api: https://api.github.com/repos/huggingface/datasets/issues/2437
  body: As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one.
#2436 Update DatasetMetadata and ReadMe [pull request, closed]
  id: 908,100,211 | comments: 0 | user: gchhablani | labels: []
  created: 2021-06-01T09:32:37 | updated: 2021-06-14T13:23:27 | closed: 2021-06-14T13:23:26
  html: https://github.com/huggingface/datasets/pull/2436
  api: https://api.github.com/repos/huggingface/datasets/issues/2436
  body: This PR contains the changes discussed in #2395. **Edit**: In addition to those changes, I'll be updating the `ReadMe` as follows: Currently, `Section` has separate parsing and validation error lists. In `.validate()`, we add these lists to the final lists and throw errors. One way to make `ReadMe` consistent...

#2435 Insert Extractive QA templates for SQuAD-like datasets [pull request, closed]
  id: 907,505,531 | comments: 3 | user: lewtun | labels: []
  created: 2021-05-31T14:09:11 | updated: 2021-06-03T14:34:30 | closed: 2021-06-03T14:32:27
  html: https://github.com/huggingface/datasets/pull/2435
  api: https://api.github.com/repos/huggingface/datasets/issues/2435
  body: This PR adds task templates for 9 SQuAD-like templates with the following properties: * 1 config * A schema that matches the `squad` one (i.e. same column names, especially for the nested `answers` column because the current implementation does not support casting with mismatched columns. see #2434) * Less than 20...

#2434 Extend QuestionAnsweringExtractive template to handle nested columns [issue, closed]
  id: 907,503,557 | comments: 2 | user: lewtun | labels: ["enhancement"]
  created: 2021-05-31T14:06:51 | updated: 2022-10-05T17:06:28 | closed: 2022-10-05T17:06:28
  html: https://github.com/huggingface/datasets/issues/2434
  api: https://api.github.com/repos/huggingface/datasets/issues/2434
  body: Currently the `QuestionAnsweringExtractive` task template and `preprare_for_task` only support "flat" features. We should extend the functionality to cover QA datasets like: * `iapp_wiki_qa_squad` * `parsinlu_reading_comprehension` where the nested features differ with those from `squad` and trigger an `ArrowNot...

#2433 Fix DuplicatedKeysError in adversarial_qa [pull request, closed]
  id: 907,488,711 | comments: 0 | user: mariosasko | labels: []
  created: 2021-05-31T13:48:47 | updated: 2021-06-01T08:52:11 | closed: 2021-06-01T08:52:11
  html: https://github.com/huggingface/datasets/pull/2433
  api: https://api.github.com/repos/huggingface/datasets/issues/2433
  body: Fixes #2431

#2432 Fix CI six installation on linux [pull request, closed]
  id: 907,462,881 | comments: 0 | user: lhoestq | labels: []
  created: 2021-05-31T13:15:36 | updated: 2021-05-31T13:17:07 | closed: 2021-05-31T13:17:06
  html: https://github.com/huggingface/datasets/pull/2432
  api: https://api.github.com/repos/huggingface/datasets/issues/2432
  body: For some reason we end up with this error in the linux CI when running pip install .[tests] ``` pip._vendor.resolvelib.resolvers.InconsistentCandidate: Provided candidate AlreadyInstalledCandidate(six 1.16.0 (/usr/local/lib/python3.6/site-packages)) does not satisfy SpecifierRequirement('six>1.9'), SpecifierRequireme...

#2431 DuplicatedKeysError when trying to load adversarial_qa [issue, closed]
  id: 907,413,691 | comments: 1 | user: hanss0n | labels: ["bug"]
  created: 2021-05-31T12:11:19 | updated: 2021-06-01T08:54:03 | closed: 2021-06-01T08:52:11
  html: https://github.com/huggingface/datasets/issues/2431
  api: https://api.github.com/repos/huggingface/datasets/issues/2431
  body: ## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python dataset = load_dataset('adversarial_qa', 'adversarialQA') ``` ## Expected results The dataset should be loaded into memory ## Actual results >DuplicatedKeysError: FAILURE TO GENERATE DATASET ...

#2430 Add version-specific BibTeX [pull request, closed]
  id: 907,322,595 | comments: 4 | user: albertvillanova | labels: []
  created: 2021-05-31T10:05:42 | updated: 2021-06-08T07:53:22 | closed: 2021-06-08T07:53:22
  html: https://github.com/huggingface/datasets/pull/2430
  api: https://api.github.com/repos/huggingface/datasets/issues/2430
  body: As pointed out by @lhoestq in #2411, after the creation of the Zenodo DOI for Datasets, a new BibTeX entry is created with each release. This PR adds a version-specific BibTeX entry, besides the existing one which is generic for the project. See version-specific BibTeX entry here: https://zenodo.org/record/481776...

#2429 Rename QuestionAnswering template to QuestionAnsweringExtractive [pull request, closed]
  id: 907,321,665 | comments: 1 | user: lewtun | labels: []
  created: 2021-05-31T10:04:42 | updated: 2021-05-31T15:57:26 | closed: 2021-05-31T15:57:24
  html: https://github.com/huggingface/datasets/pull/2429
  api: https://api.github.com/repos/huggingface/datasets/issues/2429
  body: Following the discussion with @thomwolf in #2255, this PR renames the QA template to distinguish extractive vs abstractive QA. The abstractive template will be added in a future PR.

#2428 Add copyright info for wiki_lingua dataset [pull request, closed]
  id: 907,169,746 | comments: 3 | user: PhilipMay | labels: []
  created: 2021-05-31T07:22:52 | updated: 2021-06-04T10:22:33 | closed: 2021-06-04T10:22:33
  html: https://github.com/huggingface/datasets/pull/2428
  api: https://api.github.com/repos/huggingface/datasets/issues/2428
  body: (empty)

#2427 Add copyright info to MLSUM dataset [pull request, closed]
  id: 907,162,923 | comments: 2 | user: PhilipMay | labels: []
  created: 2021-05-31T07:15:57 | updated: 2021-06-04T09:53:50 | closed: 2021-06-04T09:53:50
  html: https://github.com/huggingface/datasets/pull/2427
  api: https://api.github.com/repos/huggingface/datasets/issues/2427
  body: (empty)

#2426 Saving Graph/Structured Data in Datasets [issue, closed]
  id: 906,473,546 | comments: 6 | user: gsh199449 | labels: ["enhancement"]
  created: 2021-05-29T13:35:21 | updated: 2021-06-02T01:21:03 | closed: 2021-06-02T01:21:03
  html: https://github.com/huggingface/datasets/issues/2426
  api: https://api.github.com/repos/huggingface/datasets/issues/2426
  body: Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python value type when inferring an Arrow data ty...

#2425 Fix Docstring Mistake: dataset vs. metric [pull request, closed]
  id: 906,385,457 | comments: 4 | user: PhilipMay | labels: []
  created: 2021-05-29T06:09:53 | updated: 2021-06-01T08:18:04 | closed: 2021-06-01T08:18:04
  html: https://github.com/huggingface/datasets/pull/2425
  api: https://api.github.com/repos/huggingface/datasets/issues/2425
  body: PR to fix #2412
#2424 load_from_disk and save_to_disk are not compatible with each other [issue, closed]
  id: 906,193,679 | comments: 6 | user: roholazandie | labels: []
  created: 2021-05-28T23:07:10 | updated: 2021-06-08T19:22:32 | closed: 2021-06-08T19:22:32
  html: https://github.com/huggingface/datasets/issues/2424
  api: https://api.github.com/repos/huggingface/datasets/issues/2424
  body: ## Describe the bug load_from_disk and save_to_disk are not compatible. When I use save_to_disk to save a dataset to disk it works perfectly but given the same directory load_from_disk throws an error that it can't find state.json. looks like the load_from_disk only works on one split ## Steps to reproduce the bug ...

#2423 add `desc` in `map` for `DatasetDict` object [pull request, closed]
  id: 905,935,753 | comments: 3 | user: bhavitvyamalik | labels: []
  created: 2021-05-28T19:28:44 | updated: 2021-05-31T14:51:23 | closed: 2021-05-31T13:08:04
  html: https://github.com/huggingface/datasets/pull/2423
  api: https://api.github.com/repos/huggingface/datasets/issues/2423
  body: `desc` in `map` currently only works with `Dataset` objects. This PR adds support for `DatasetDict` objects as well

#2422 Fix save_to_disk nested features order in dataset_info.json [pull request, closed]
  id: 905,568,548 | comments: 0 | user: lhoestq | labels: []
  created: 2021-05-28T15:03:28 | updated: 2021-05-28T15:26:57 | closed: 2021-05-28T15:26:56
  html: https://github.com/huggingface/datasets/pull/2422
  api: https://api.github.com/repos/huggingface/datasets/issues/2422
  body: Fix issue https://github.com/huggingface/datasets/issues/2267 The order of the nested features matters (pyarrow limitation), but the save_to_disk method was saving the features types as JSON with `sort_keys=True`, which was breaking the order of the nested features.

#2421 doc: fix typo HF_MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES [pull request, closed]
  id: 905,549,756 | comments: 0 | user: borisdayma | labels: []
  created: 2021-05-28T14:52:10 | updated: 2021-06-04T09:52:45 | closed: 2021-06-04T09:52:45
  html: https://github.com/huggingface/datasets/pull/2421
  api: https://api.github.com/repos/huggingface/datasets/issues/2421
  body: MAX_MEMORY_DATASET_SIZE_IN_BYTES should be HF_MAX_MEMORY_DATASET_SIZE_IN_BYTES

#2420 Updated Dataset Description [pull request, closed]
  id: 904,821,772 | comments: 0 | user: binny-mathew | labels: []
  created: 2021-05-28T07:10:51 | updated: 2021-06-10T12:11:35 | closed: 2021-06-10T12:11:35
  html: https://github.com/huggingface/datasets/pull/2420
  api: https://api.github.com/repos/huggingface/datasets/issues/2420
  body: Added Point of contact information and several other details about the dataset.

#2419 adds license information for DailyDialog. [pull request, closed]
  id: 904,347,339 | comments: 5 | user: aditya2211 | labels: []
  created: 2021-05-27T23:03:42 | updated: 2021-05-31T13:16:52 | closed: 2021-05-31T13:16:52
  html: https://github.com/huggingface/datasets/pull/2419
  api: https://api.github.com/repos/huggingface/datasets/issues/2419
  body: (empty)

#2418 add utf-8 while reading README [pull request, closed]
  id: 904,051,497 | comments: 2 | user: bhavitvyamalik | labels: []
  created: 2021-05-27T18:12:28 | updated: 2021-06-04T09:55:01 | closed: 2021-06-04T09:55:00
  html: https://github.com/huggingface/datasets/pull/2418
  api: https://api.github.com/repos/huggingface/datasets/issues/2418
  body: It was causing tests to fail in Windows (see #2416). In Windows, the default encoding is CP1252 which is unable to decode the character byte 0x9d

#2417 Make datasets PEP-561 compliant [pull request, closed]
  id: 903,956,071 | comments: 1 | user: SBrandeis | labels: []
  created: 2021-05-27T16:16:17 | updated: 2021-05-28T13:10:10 | closed: 2021-05-28T13:09:16
  html: https://github.com/huggingface/datasets/pull/2417
  api: https://api.github.com/repos/huggingface/datasets/issues/2417
  body: Allows to type-check datasets with `mypy` when imported as a third-party library PEP-561: https://www.python.org/dev/peps/pep-0561 MyPy doc on the subject: https://mypy.readthedocs.io/en/stable/installed_packages.html

#2416 Add KLUE dataset [pull request, closed]
  id: 903,932,299 | comments: 7 | user: jungwhank | labels: []
  created: 2021-05-27T15:49:51 | updated: 2021-06-09T15:00:02 | closed: 2021-06-04T17:45:15
  html: https://github.com/huggingface/datasets/pull/2416
  api: https://api.github.com/repos/huggingface/datasets/issues/2416
  body: Add `KLUE (Korean Language Understanding Evaluation)` dataset released recently from [paper](https://arxiv.org/abs/2105.09680), [github](https://github.com/KLUE-benchmark/KLUE) and [webpage](https://klue-benchmark.com/tasks). Please let me know if there's anything missing in the code or README. Thanks!

#2415 Cached dataset not loaded [issue, closed]
  id: 903,923,097 | comments: 5 | user: borisdayma | labels: ["bug"]
  created: 2021-05-27T15:40:06 | updated: 2021-06-02T13:15:47 | closed: 2021-06-02T13:15:47
  html: https://github.com/huggingface/datasets/issues/2415
  api: https://api.github.com/repos/huggingface/datasets/issues/2415
  body: ## Describe the bug I have a large dataset (common_voice, english) where I use several map and filter functions. Sometimes my cached datasets after specific functions are not loaded. I always use the same arguments, same functions, no seed… ## Steps to reproduce the bug ```python def filter_by_duration(batch): ...

#2414 Update README.md [pull request, closed]
  id: 903,877,096 | comments: 2 | user: cryoff | labels: []
  created: 2021-05-27T14:53:19 | updated: 2021-06-28T13:46:14 | closed: 2021-06-28T13:04:56
  html: https://github.com/huggingface/datasets/pull/2414
  api: https://api.github.com/repos/huggingface/datasets/issues/2414
  body: Provides description of data instances and dataset features

#2413 AttributeError: 'DatasetInfo' object has no attribute 'task_templates' [issue, closed]
  id: 903,777,557 | comments: 1 | user: jungwhank | labels: ["bug"]
  created: 2021-05-27T13:44:28 | updated: 2021-06-01T01:05:47 | closed: 2021-06-01T01:05:47
  html: https://github.com/huggingface/datasets/issues/2413
  api: https://api.github.com/repos/huggingface/datasets/issues/2413
  body: ## Describe the bug Hello, I'm trying to add dataset and contribute, but test keep fail with below cli. ` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<my_dataset>` ## Steps to reproduce the bug It seems like a bug when I see an error with the existing dataset,...
false
903,769,151
https://api.github.com/repos/huggingface/datasets/issues/2412
https://github.com/huggingface/datasets/issues/2412
2,412
Docstring mistake: dataset vs. metric
closed
1
2021-05-27T13:39:11
2021-06-01T08:18:04
2021-06-01T08:18:04
PhilipMay
[]
This: https://github.com/huggingface/datasets/blob/d95b95f8cf3cb0cff5f77a675139b584dcfcf719/src/datasets/load.py#L582 Should better be something like: `a metric identifier on HuggingFace AWS bucket (list all available metrics and ids with ``datasets.list_metrics()``)` I can provide a PR l8er...
false
903,671,778
https://api.github.com/repos/huggingface/datasets/issues/2411
https://github.com/huggingface/datasets/pull/2411
2,411
Add DOI badge to README
closed
0
2021-05-27T12:36:47
2021-05-27T13:42:54
2021-05-27T13:42:54
albertvillanova
[]
Once published the latest release, the DOI badge has been automatically generated by Zenodo.
true
903,613,676
https://api.github.com/repos/huggingface/datasets/issues/2410
https://github.com/huggingface/datasets/pull/2410
2,410
fix #2391 add original answers in kilt-TriviaQA
closed
5
2021-05-27T11:54:29
2021-06-15T12:35:57
2021-06-14T17:29:10
PaulLerner
[]
cc @yjernite is it ok like this?
true
903,441,398
https://api.github.com/repos/huggingface/datasets/issues/2409
https://github.com/huggingface/datasets/pull/2409
2,409
Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
closed
14
2021-05-27T09:07:00
2021-06-08T16:00:55
2021-05-27T09:33:41
lhoestq
[]
As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_
true
903,422,648
https://api.github.com/repos/huggingface/datasets/issues/2408
https://github.com/huggingface/datasets/pull/2408
2,408
Fix head_qa keys
closed
0
2021-05-27T08:50:19
2021-05-27T09:05:37
2021-05-27T09:05:36
lhoestq
[]
There were duplicate in the keys, as mentioned in #2382
true
903,111,755
https://api.github.com/repos/huggingface/datasets/issues/2407
https://github.com/huggingface/datasets/issues/2407
2,407
.map() function got an unexpected keyword argument 'cache_file_name'
closed
3
2021-05-27T01:54:26
2021-05-27T13:46:40
2021-05-27T13:46:40
cindyxinyiwang
[ "bug" ]
## Describe the bug I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map() function, it throws an error that ".map() function got an unexpected...
false
902,643,844
https://api.github.com/repos/huggingface/datasets/issues/2406
https://github.com/huggingface/datasets/issues/2406
2,406
Add guide on using task templates to documentation
closed
0
2021-05-26T16:28:26
2022-10-05T17:07:00
2022-10-05T17:07:00
lewtun
[ "enhancement" ]
Once we have a stable API on the text classification and question answering task templates, add a guide on how to use them in the documentation.
false
901,227,658
https://api.github.com/repos/huggingface/datasets/issues/2405
https://github.com/huggingface/datasets/pull/2405
2,405
Add dataset tags
closed
1
2021-05-25T18:57:29
2021-05-26T16:54:16
2021-05-26T16:40:07
OyvindTafjord
[]
The dataset tags were provided by Peter Clark following the guide.
true
901,179,832
https://api.github.com/repos/huggingface/datasets/issues/2404
https://github.com/huggingface/datasets/pull/2404
2,404
Paperswithcode dataset mapping
closed
2
2021-05-25T18:14:26
2021-05-26T11:21:25
2021-05-26T11:17:18
julien-c
[]
This is a continuation of https://github.com/huggingface/huggingface_hub/pull/43, encoded directly inside dataset cards. As discussed: - `paperswithcode_id: null` when the dataset doesn't exist on paperswithcode's side. - I've added this new key at the end of the yaml instead of ordering all keys alphabetically as...
true
900,059,014
https://api.github.com/repos/huggingface/datasets/issues/2403
https://github.com/huggingface/datasets/pull/2403
2,403
Free datasets with cache file in temp dir on exit
closed
0
2021-05-24T22:15:11
2021-05-26T17:25:19
2021-05-26T16:39:29
mariosasko
[]
This PR properly cleans up the memory-mapped tables that reference the cache files inside the temp dir. Since the built-in `_finalizer` of `TemporaryDirectory` can't be modified, this PR defines its own `TemporaryDirectory` class that accepts a custom clean-up function. Fixes #2402
true
900,025,329
https://api.github.com/repos/huggingface/datasets/issues/2402
https://github.com/huggingface/datasets/issues/2402
2,402
PermissionError on Windows when using temp dir for caching
closed
0
2021-05-24T21:22:59
2021-05-26T16:39:29
2021-05-26T16:39:29
mariosasko
[ "bug" ]
Currently, the following code raises a PermissionError on master if working on Windows: ```python # run as a script or call exit() in REPL to initiate the temp dir cleanup from datasets import * d = load_dataset("sst", split="train", keep_in_memory=False) set_caching_enabled(False) d.map(lambda ex: ex) ``` ...
false
899,910,521
https://api.github.com/repos/huggingface/datasets/issues/2401
https://github.com/huggingface/datasets/issues/2401
2,401
load_dataset('natural_questions') fails with "ValueError: External features info don't match the dataset"
closed
4
2021-05-24T18:38:53
2021-06-09T09:07:25
2021-06-09T09:07:25
jonrbates
[ "bug" ]
## Describe the bug load_dataset('natural_questions') throws ValueError ## Steps to reproduce the bug ```python from datasets import load_dataset datasets = load_dataset('natural_questions', split='validation[:10]') ``` ## Expected results Call to load_dataset returns data. ## Actual results ``` Using ...
false
899,867,212
https://api.github.com/repos/huggingface/datasets/issues/2400
https://github.com/huggingface/datasets/issues/2400
2,400
Concatenate several datasets with removed columns is not working.
closed
2
2021-05-24T17:40:15
2021-05-25T05:52:01
2021-05-25T05:51:59
philschmid
[ "bug" ]
## Describe the bug You can't concatenate datasets when you removed columns before. ## Steps to reproduce the bug ```python from datasets import load_dataset, concatenate_datasets wikiann= load_dataset("wikiann","en") wikiann["train"] = wikiann["train"].remove_columns(["langs","spans"]) wikiann["test"] =...
false
899,853,610
https://api.github.com/repos/huggingface/datasets/issues/2399
https://github.com/huggingface/datasets/pull/2399
2,399
Add env variable for MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
closed
5
2021-05-24T17:19:15
2021-05-27T09:07:15
2021-05-26T16:07:54
albertvillanova
[]
Add env variable for `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES`. This will allow to turn off default behavior: loading in memory (and not caching) small datasets. Fix #2387.
true
899,511,837
https://api.github.com/repos/huggingface/datasets/issues/2398
https://github.com/huggingface/datasets/issues/2398
2,398
News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs
closed
1
2021-05-24T10:03:34
2022-10-05T17:13:49
2022-10-05T17:13:49
anassalamah
[ "bug" ]
I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi. ``` train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]') val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]') # filtering out examples that a...
false
899,427,378
https://api.github.com/repos/huggingface/datasets/issues/2397
https://github.com/huggingface/datasets/pull/2397
2,397
Fix number of classes in indic_glue sna.bn dataset
closed
2
2021-05-24T08:18:55
2021-05-25T16:32:16
2021-05-25T16:32:16
albertvillanova
[]
As read in the [paper](https://www.aclweb.org/anthology/2020.findings-emnlp.445.pdf), Table 11.
true
899,016,308
https://api.github.com/repos/huggingface/datasets/issues/2396
https://github.com/huggingface/datasets/issues/2396
2,396
strange datasets from OSCAR corpus
open
2
2021-05-23T13:06:02
2021-06-17T13:54:37
null
cosmeowpawlitan
[ "bug" ]
![image](https://user-images.githubusercontent.com/50871412/119260850-4f876b80-bc07-11eb-8894-124302600643.png) ![image](https://user-images.githubusercontent.com/50871412/119260875-675eef80-bc07-11eb-9da4-ee27567054ac.png) From the [official site ](https://oscar-corpus.com/), the Yue Chinese dataset should have 2.2K...
false
898,762,730
https://api.github.com/repos/huggingface/datasets/issues/2395
https://github.com/huggingface/datasets/pull/2395
2,395
`pretty_name` for dataset in YAML tags
closed
19
2021-05-22T09:24:45
2022-09-23T13:29:14
2022-09-23T13:29:13
bhavitvyamalik
[ "dataset contribution" ]
I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good. If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in t...
true
898,156,795
https://api.github.com/repos/huggingface/datasets/issues/2392
https://github.com/huggingface/datasets/pull/2392
2,392
Update text classification template labels in DatasetInfo __post_init__
closed
6
2021-05-21T15:29:41
2021-05-28T11:37:35
2021-05-28T11:37:32
lewtun
[]
This PR implements the idea discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`. The main reason for doing so is so avoid duplicating the label definitions in both `DatasetInfo.features` and `DatasetInfo.task_templates`. To avoid storing state in `Dataset...
true
898,128,099
https://api.github.com/repos/huggingface/datasets/issues/2391
https://github.com/huggingface/datasets/issues/2391
2,391
Missing original answers in kilt-TriviaQA
closed
2
2021-05-21T14:57:07
2021-06-14T17:29:11
2021-06-14T17:29:11
PaulLerner
[ "bug" ]
I previously opened an issue at https://github.com/facebookresearch/KILT/issues/42 but from the answer of @fabiopetroni it seems that the problem comes from HF-datasets ## Describe the bug The `answer` field in kilt-TriviaQA, e.g. `kilt_tasks['train_triviaqa'][0]['output']['answer']` contains a list of alternative ...
false
897,903,642
https://api.github.com/repos/huggingface/datasets/issues/2390
https://github.com/huggingface/datasets/pull/2390
2,390
Add check for task templates on dataset load
closed
1
2021-05-21T10:16:57
2021-05-21T15:49:09
2021-05-21T15:49:06
lewtun
[]
This PR adds a check that the features of a dataset match the schema of each compatible task template.
true
897,822,270
https://api.github.com/repos/huggingface/datasets/issues/2389
https://github.com/huggingface/datasets/pull/2389
2,389
Insert task templates for text classification
closed
6
2021-05-21T08:36:26
2021-05-28T15:28:58
2021-05-28T15:26:28
lewtun
[]
This PR inserts text-classification templates for datasets with the following properties: * Only one config * At most two features of `(Value, ClassLabel)` type Note that this misses datasets like `sentiment140` which only has `Value` type features - these will be handled in a separate PR
true
897,767,470
https://api.github.com/repos/huggingface/datasets/issues/2388
https://github.com/huggingface/datasets/issues/2388
2,388
Incorrect URLs for some datasets
closed
0
2021-05-21T07:22:35
2021-06-04T17:39:45
2021-06-04T17:39:45
lewtun
[ "bug" ]
## Describe the bug It seems that the URLs for the following datasets are invalid: - [ ] `bn_hate_speech` has been renamed: https://github.com/rezacsedu/Bengali-Hate-Speech-Dataset/commit/c67ecfc4184911e12814f6b36901f9828df8a63a - [ ] `covid_tweets_japanese` has been renamed: http://www.db.info.gifu-u.ac.jp/covi...
false
897,566,666
https://api.github.com/repos/huggingface/datasets/issues/2387
https://github.com/huggingface/datasets/issues/2387
2,387
datasets 1.6 ignores cache
closed
13
2021-05-21T00:12:58
2021-05-26T16:07:54
2021-05-26T16:07:54
stas00
[ "bug" ]
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612 Quoting @VictorSanh: > > I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335): > > > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/...
false
897,560,049
https://api.github.com/repos/huggingface/datasets/issues/2386
https://github.com/huggingface/datasets/issues/2386
2,386
Accessing Arrow dataset cache_files
closed
1
2021-05-20T23:57:43
2021-05-21T19:18:03
2021-05-21T19:18:03
Mehrad0711
[ "bug" ]
## Describe the bug In datasets 1.5.0 the following code snippet would have printed the cache_files: ``` train_data = load_dataset('conll2003', split='train', cache_dir='data') print(train_data.cache_files[0]['filename']) ``` However, in the newest release (1.6.1), it prints an empty list. I also tried l...
false
897,206,823
https://api.github.com/repos/huggingface/datasets/issues/2385
https://github.com/huggingface/datasets/pull/2385
2,385
update citations
closed
0
2021-05-20T17:54:08
2021-05-21T12:38:18
2021-05-21T12:38:18
adeepH
[]
To update citations for [Offenseval_dravidiain](https://huggingface.co/datasets/offenseval_dravidian)
true
896,866,461
https://api.github.com/repos/huggingface/datasets/issues/2384
https://github.com/huggingface/datasets/pull/2384
2,384
Add args description to DatasetInfo
closed
2
2021-05-20T13:53:10
2021-05-22T09:26:16
2021-05-22T09:26:14
lewtun
[]
Closes #2354 I am not sure what `post_processed` and `post_processing_size` correspond to, so have left them empty for now. I also took a guess at some of the other fields like `dataset_size` vs `size_in_bytes`, so might have misunderstood their meaning.
true
895,779,723
https://api.github.com/repos/huggingface/datasets/issues/2383
https://github.com/huggingface/datasets/pull/2383
2,383
Improve example in rounding docs
closed
0
2021-05-19T18:59:23
2021-05-21T12:53:22
2021-05-21T12:36:29
mariosasko
[]
Improves the example in the rounding subsection of the Split API docs. With this change, it should more clear what's the difference between the `closest` and the `pct1_dropremainder` rounding.
true