Column schema:

- id: int64 (599M to 3.29B)
- url: string (58 to 61 chars)
- html_url: string (46 to 51 chars)
- number: int64 (1 to 7.72k)
- title: string (1 to 290 chars)
- state: string (2 classes)
- comments: int64 (0 to 70)
- created_at: timestamp[s] (2020-04-14 10:18:02 to 2025-08-05 09:28:51)
- updated_at: timestamp[s] (2020-04-27 16:04:17 to 2025-08-05 11:39:56)
- closed_at: timestamp[s] (2020-04-14 12:01:40 to 2025-08-01 05:15:45)
- user_login: string (3 to 26 chars)
- labels: list (0 to 4 items)
- body: string (0 to 228k chars)
- is_pull_request: bool (2 classes)
#555 Upgrade pip in benchmark github action
pull request · closed · 0 comments · by lhoestq · id 690,197,725
created 2020-09-01T14:37:26 · updated 2020-09-01T15:26:16 · closed 2020-09-01T15:26:15
https://github.com/huggingface/datasets/pull/555 · API: https://api.github.com/repos/huggingface/datasets/issues/555
It looks like it fixes the `import nlp` issue we have

#554 nlp downloads to its module path
issue · closed · 8 comments · by danieldk · id 690,173,214
created 2020-09-01T14:06:14 · updated 2020-09-11T06:19:24 · closed 2020-09-11T06:19:24
https://github.com/huggingface/datasets/issues/554 · API: https://api.github.com/repos/huggingface/datasets/issues/554
I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems: ```>>> import nlp >>> squad_dataset = nlp.load_dataset('squad') ...

#553 [Fix GitHub Actions] test adding tmate
pull request · closed · 0 comments · by thomwolf · id 690,143,182
created 2020-09-01T13:28:03 · updated 2021-05-05T18:24:38 · closed 2020-09-03T09:01:13
https://github.com/huggingface/datasets/pull/553 · API: https://api.github.com/repos/huggingface/datasets/issues/553

#552 Add multiprocessing
pull request · closed · 10 comments · by lhoestq · id 690,079,429
created 2020-09-01T11:56:17 · updated 2020-09-22T15:11:56 · closed 2020-09-02T10:01:25
https://github.com/huggingface/datasets/pull/552 · API: https://api.github.com/repos/huggingface/datasets/issues/552
Adding multiprocessing to `.map` It works in 3 steps: - shard the dataset in `num_proc` shards - spawn one process per shard and call `map` on them - concatenate the resulting datasets Example of usage: ```python from nlp import load_dataset dataset = load_dataset("squad", split="train") def function...
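The shard / map-per-process / concatenate scheme described in PR #552 can be sketched with the standard library alone. This is a hypothetical illustration of the pattern, not the library's actual implementation; `shard`, `process_shard`, and `parallel_map` are made-up names:

```python
# Minimal sketch of the three steps PR #552 describes:
# shard -> map each shard in its own process -> concatenate the results.
from multiprocessing import Pool

def shard(data, num_proc):
    # Split `data` into num_proc contiguous, near-equal shards.
    k, m = divmod(len(data), num_proc)
    return [data[i * k + min(i, m):(i + 1) * k + min(i + 1, m)] for i in range(num_proc)]

def process_shard(examples):
    # Stand-in for the user function that `map` applies to every example.
    return [x * 2 for x in examples]

def parallel_map(data, num_proc=4):
    # One worker process per shard, then flatten the per-shard outputs.
    with Pool(num_proc) as pool:
        results = pool.map(process_shard, shard(data, num_proc))
    return [x for part in results for x in part]

if __name__ == "__main__":
    print(parallel_map(list(range(10))))
```

In the real library the same idea is exposed through the `num_proc` argument of `Dataset.map`.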
#551 added HANS dataset
pull request · closed · 0 comments · by TevenLeScao · id 690,034,762
created 2020-09-01T10:42:02 · updated 2020-09-01T12:17:10 · closed 2020-09-01T12:17:10
https://github.com/huggingface/datasets/pull/551 · API: https://api.github.com/repos/huggingface/datasets/issues/551
Adds the [HANS](https://github.com/tommccoy1/hans) dataset to evaluate NLI systems.

#550 [BUGFIX] Solving mismatched checksum issue for the LinCE dataset (#539)
pull request · closed · 2 comments · by gaguilar · id 689,775,914
created 2020-09-01T03:27:03 · updated 2020-09-03T09:06:01 · closed 2020-09-03T09:06:01
https://github.com/huggingface/datasets/pull/550 · API: https://api.github.com/repos/huggingface/datasets/issues/550
Hi, I have added the updated `dataset_infos.json` file for the LinCE benchmark. This update is to fix the mismatched checksum bug #539 for one of the datasets in the LinCE benchmark. To update the file, I run this command from the nlp root directory: ``` python nlp-cli test ./datasets/lince --save_infos --all_co...

#549 Fix bleurt logging import
pull request · closed · 2 comments · by jbragg · id 689,766,465
created 2020-09-01T03:01:25 · updated 2020-09-03T18:04:46 · closed 2020-09-03T09:04:20
https://github.com/huggingface/datasets/pull/549 · API: https://api.github.com/repos/huggingface/datasets/issues/549
Bleurt started throwing an error in some code we have. This looks like the fix but... It's also unnerving that even a prebuilt docker image with pinned versions can be working 1 day and then fail the next (especially for production systems). Any way for us to pin your metrics code so that they are guaranteed not...

#548 [Breaking] Switch text loading to multi-threaded PyArrow loading
pull request · closed · 5 comments · by thomwolf · id 689,285,996
created 2020-08-31T15:15:41 · updated 2020-09-08T10:19:58 · closed 2020-09-08T10:19:57
https://github.com/huggingface/datasets/pull/548 · API: https://api.github.com/repos/huggingface/datasets/issues/548
Test if we can get better performances for large-scale text datasets by using multi-threaded text file loading based on Apache Arrow multi-threaded CSV loader. If it works ok, it would fix #546. **Breaking change**: The text lines now do not include final line-breaks anymore.

#547 [Distributed] Making loading distributed datasets a bit safer
pull request · closed · 0 comments · by thomwolf · id 689,268,589
created 2020-08-31T14:51:34 · updated 2020-08-31T15:16:30 · closed 2020-08-31T15:16:29
https://github.com/huggingface/datasets/pull/547 · API: https://api.github.com/repos/huggingface/datasets/issues/547
Add some file-locks during dataset loading

#546 Very slow data loading on large dataset
issue · closed · 28 comments · by agemagician · id 689,186,526
created 2020-08-31T12:57:23 · updated 2024-01-02T20:26:24 · closed 2020-09-08T10:19:57
https://github.com/huggingface/datasets/issues/546 · API: https://api.github.com/repos/huggingface/datasets/issues/546
I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread during the data loading step. ``` train_fil...

#545 New release coming up for this library
issue · closed · 1 comment · by thomwolf · id 689,138,878
created 2020-08-31T11:37:38 · updated 2021-01-13T10:59:04 · closed 2021-01-13T10:59:04
https://github.com/huggingface/datasets/issues/545 · API: https://api.github.com/repos/huggingface/datasets/issues/545
Hi all, A few words on the roadmap for this library. The next release will be a big one and is planed at the end of this week. In addition to the support for indexed datasets (useful for non-parametric models like REALM, RAG, DPR, knn-LM and many other fast dataset retrieval technics), it will: - have support f...

#544 [Distributed] Fix load_dataset error when multiprocessing + add test
pull request · closed · 0 comments · by thomwolf · id 689,062,519
created 2020-08-31T09:30:10 · updated 2020-08-31T11:15:11 · closed 2020-08-31T11:15:10
https://github.com/huggingface/datasets/pull/544 · API: https://api.github.com/repos/huggingface/datasets/issues/544
Fix #543 + add test

#543 nlp.load_dataset is not safe for multi processes when loading from local files
issue · closed · 1 comment · by luyug · id 688,644,407
created 2020-08-30T03:20:34 · updated 2020-08-31T11:15:10 · closed 2020-08-31T11:15:10
https://github.com/huggingface/datasets/issues/543 · API: https://api.github.com/repos/huggingface/datasets/issues/543
Loading from local files, e.g., `dataset = nlp.load_dataset('csv', data_files=['file_1.csv', 'file_2.csv'])` concurrently from multiple processes, will raise `FileExistsError` from builder's line 430, https://github.com/huggingface/nlp/blob/6655008c738cb613c522deb3bd18e35a67b2a7e5/src/nlp/builder.py#L423-L438 Likel...
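The race reported in #543 is the classic non-atomic "check then create" on a shared cache directory. The sketch below is a hypothetical simplification (`unsafe_prepare` and `safe_prepare` are made-up names, not the builder's real code) showing why the error occurs and the idempotent form that avoids it:

```python
# Two processes can both pass the exists() check before either creates the
# directory; the loser then crashes with FileExistsError.
import os
import tempfile

def unsafe_prepare(cache_dir):
    if not os.path.exists(cache_dir):  # both processes may pass this check...
        os.makedirs(cache_dir)         # ...and the second one raises FileExistsError

def safe_prepare(cache_dir):
    # Idempotent creation is safe to call concurrently from many processes.
    os.makedirs(cache_dir, exist_ok=True)

with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "cache")
    safe_prepare(target)
    safe_prepare(target)  # second call is a no-op instead of an error
```

PR #547 addresses the same class of problem differently, by taking file locks around dataset loading so that only one process prepares the cache at a time.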
#542 Add TensorFlow example
pull request · closed · 0 comments · by jplu · id 688,555,036
created 2020-08-29T15:39:27 · updated 2020-08-31T09:49:20 · closed 2020-08-31T09:49:19
https://github.com/huggingface/datasets/pull/542 · API: https://api.github.com/repos/huggingface/datasets/issues/542
Update the Quick Tour documentation in order to add the TensorFlow equivalent source code for the classification example. Now it is possible to select either the code in PyTorch or in TensorFlow in the Quick tour.

#541 Best practices for training tokenizers with nlp
issue · closed · 1 comment · by moskomule · id 688,521,224
created 2020-08-29T12:06:49 · updated 2022-10-04T17:28:04 · closed 2022-10-04T17:28:04
https://github.com/huggingface/datasets/issues/541 · API: https://api.github.com/repos/huggingface/datasets/issues/541
Hi, thank you for developing this library. What do you think are the best practices for training tokenizers using `nlp`? In the document and examples, I could only find pre-trained tokenizers used.

#540 [BUGFIX] Fix Race Dataset Checksum bug
pull request · closed · 4 comments · by abarbosa94 · id 688,475,884
created 2020-08-29T07:00:10 · updated 2020-09-18T11:42:20 · closed 2020-09-18T11:42:20
https://github.com/huggingface/datasets/pull/540 · API: https://api.github.com/repos/huggingface/datasets/issues/540
In #537 I noticed that there was a bug in checksum checking when I have tried to download the race dataset. The reason for this is that the current preprocessing was just considering the `high school` data and it was ignoring the `middle` one. This PR just fixes it :) Moreover, I have added some descriptions.

#539 [Dataset] `NonMatchingChecksumError` due to an update in the LinCE benchmark data
issue · closed · 3 comments · by gaguilar · id 688,323,602
created 2020-08-28T19:55:51 · updated 2020-09-03T16:34:02 · closed 2020-09-03T16:34:01
https://github.com/huggingface/datasets/issues/539 · API: https://api.github.com/repos/huggingface/datasets/issues/539
Hi, There is a `NonMatchingChecksumError` error for the `lid_msaea` (language identification for Modern Standard Arabic - Egyptian Arabic) dataset from the LinCE benchmark due to a minor update on that dataset. How can I update the checksum of the library to solve this issue? The error is below and it also appea...

#538 [logging] Add centralized logging - Bump-up cache loads to warnings
pull request · closed · 0 comments · by thomwolf · id 688,015,912
created 2020-08-28T11:42:29 · updated 2020-08-31T11:42:51 · closed 2020-08-31T11:42:51
https://github.com/huggingface/datasets/pull/538 · API: https://api.github.com/repos/huggingface/datasets/issues/538
Add a `nlp.logging` module to set the global logging level easily. The verbosity level also controls the tqdm bars (disabled when set higher than INFO). You can use: ``` nlp.logging.set_verbosity(verbosity: int) nlp.logging.set_verbosity_info() nlp.logging.set_verbosity_warning() nlp.logging.set_verbosity_debug...
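A centralized verbosity helper like the one PR #538 describes can be built on the standard `logging` module. This is a minimal sketch modeled on the function names quoted in the PR body, not the library's actual implementation:

```python
# Sketch of a centralized logging module: one library-level logger whose
# level is adjusted through small helper functions.
import logging

_LIB_LOGGER = logging.getLogger("nlp")  # hypothetical library logger name

def set_verbosity(verbosity: int) -> None:
    # Single choke point: every helper below routes through this call.
    _LIB_LOGGER.setLevel(verbosity)

def set_verbosity_info() -> None:
    set_verbosity(logging.INFO)

def set_verbosity_warning() -> None:
    set_verbosity(logging.WARNING)

def set_verbosity_debug() -> None:
    set_verbosity(logging.DEBUG)
```

Because child loggers (e.g. `nlp.builder`) inherit the parent's level, setting the level once on the top-level logger controls the whole library's output.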
#537 [Dataset] RACE dataset Checksums error
issue · closed · 9 comments · by abarbosa94 · id 687,614,699 · labels: dataset bug
created 2020-08-27T23:58:16 · updated 2020-09-18T12:07:04 · closed 2020-09-18T12:07:04
https://github.com/huggingface/datasets/issues/537 · API: https://api.github.com/repos/huggingface/datasets/issues/537
Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps: ``` dataset = nlp.load_dataset("race") len(dataset["train"]), len(dataset["validation"]) ``` But then I got the following error: ``` ----------------------------------...

#536 Fingerprint
pull request · closed · 1 comment · by lhoestq · id 687,378,332
created 2020-08-27T16:27:09 · updated 2020-08-31T14:20:40 · closed 2020-08-31T14:20:39
https://github.com/huggingface/datasets/pull/536 · API: https://api.github.com/repos/huggingface/datasets/issues/536
This PR is a continuation of #513 , in which many in-place functions were introduced or updated (cast_, flatten_) etc. However the caching didn't handle these changes. Indeed the caching took into account only the previous cache file name of the table, and not the possible in-place transforms of the table. To fix t...

#535 Benchmarks
pull request · closed · 0 comments · by thomwolf · id 686,238,315
created 2020-08-26T11:21:26 · updated 2020-08-27T08:40:00 · closed 2020-08-27T08:39:59
https://github.com/huggingface/datasets/pull/535 · API: https://api.github.com/repos/huggingface/datasets/issues/535
Adding some benchmarks with DVC/CML To add a new tracked benchmark: - create a new python benchmarking script in `./benchmarks/`. The script can use the utilities in `./benchmarks/utils.py` and should output a JSON file with results in `./benchmarks/results/`. - add a new pipeline stage in [dvc.yaml](./dvc.yaml) w...

#534 `list_datasets()` is broken.
issue · closed · 3 comments · by ashutosh-dwivedi-e3502 · id 686,115,912
created 2020-08-26T08:19:01 · updated 2020-08-27T06:31:11 · closed 2020-08-27T06:31:11
https://github.com/huggingface/datasets/issues/534 · API: https://api.github.com/repos/huggingface/datasets/issues/534
version = '0.4.0' `list_datasets()` is broken. It results in the following error : ``` In [3]: nlp.list_datasets() Out[3]: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) ~/.virtualenvs/san-lgUCsFg_/lib/py...

#533 Fix ArrayXD for pyarrow 0.17.1 by using non fixed length list arrays
pull request · closed · 0 comments · by lhoestq · id 685,585,914
created 2020-08-25T15:32:44 · updated 2020-08-26T08:02:24 · closed 2020-08-26T08:02:23
https://github.com/huggingface/datasets/pull/533 · API: https://api.github.com/repos/huggingface/datasets/issues/533
It should fix the CI problems in #513

#532 File exists error when used with TPU
issue · open · 21 comments · by go-inoue · id 685,540,614
created 2020-08-25T14:36:38 · updated 2020-09-01T12:14:56 · not closed
https://github.com/huggingface/datasets/issues/532 · API: https://api.github.com/repos/huggingface/datasets/issues/532
Hi, I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8). I modified [line 131 in the original `run_language_modeling.py`](https://github.com/...

#531 add concatenate_datasets to the docs
pull request · closed · 0 comments · by lhoestq · id 685,291,036
created 2020-08-25T08:40:05 · updated 2020-08-25T09:02:20 · closed 2020-08-25T09:02:19
https://github.com/huggingface/datasets/pull/531 · API: https://api.github.com/repos/huggingface/datasets/issues/531

#530 use ragged tensor by default
pull request · closed · 4 comments · by lhoestq · id 684,825,612
created 2020-08-24T17:06:15 · updated 2021-10-22T19:38:40 · closed 2020-08-24T19:22:25
https://github.com/huggingface/datasets/pull/530 · API: https://api.github.com/repos/huggingface/datasets/issues/530
I think it's better if it's clear whether the returned tensor is ragged or not when the type is set to tensorflow. Previously it was a tensor (not ragged) if numpy could stack the output (which can change depending on the batch of example you take), which make things difficult to handle, as it may sometimes return a r...

#529 Add MLSUM
pull request · closed · 3 comments · by RachelKer · id 684,797,157
created 2020-08-24T16:18:35 · updated 2020-08-26T08:04:11 · closed 2020-08-26T08:04:11
https://github.com/huggingface/datasets/pull/529 · API: https://api.github.com/repos/huggingface/datasets/issues/529
Hello (again :) !), So, I started a new branch because of a [rebase issue](https://github.com/huggingface/nlp/pull/463), sorry for the mess. However, the command `pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_mlsum` still fails because there is no default language dataset : the s...

#528 fix missing variable names in docs
pull request · closed · 1 comment · by lhoestq · id 684,673,673
created 2020-08-24T13:31:48 · updated 2020-08-25T09:04:04 · closed 2020-08-25T09:04:03
https://github.com/huggingface/datasets/pull/528 · API: https://api.github.com/repos/huggingface/datasets/issues/528
fix #524

#527 Fix config used for slow test on real dataset
pull request · closed · 0 comments · by lhoestq · id 684,632,930
created 2020-08-24T12:39:34 · updated 2020-08-25T09:20:45 · closed 2020-08-25T09:20:44
https://github.com/huggingface/datasets/pull/527 · API: https://api.github.com/repos/huggingface/datasets/issues/527
As noticed in #470, #474, #476, #504 , the slow test `test_load_real_dataset` couldn't run on datasets that require config parameters. To fix that I replaced it with one test with the first config of BUILDER_CONFIGS `test_load_real_dataset`, and another test that runs all of the configs in BUILDER_CONFIGS `test_load...

#526 Returning None instead of "python" if dataset is unformatted
pull request · closed · 2 comments · by TevenLeScao · id 684,615,455
created 2020-08-24T12:10:35 · updated 2020-08-24T12:50:43 · closed 2020-08-24T12:50:42
https://github.com/huggingface/datasets/pull/526 · API: https://api.github.com/repos/huggingface/datasets/issues/526
Following the discussion on Slack, this small fix ensures that calling `dataset.set_format(type=dataset.format["type"])` works properly. Slightly breaking as calling `dataset.format` when the dataset is unformatted will return `None` instead of `python`.
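The round-trip that PR #526 makes work can be modeled with a tiny stand-in for the dataset's format state. This is a hypothetical sketch (`FormatState` is a made-up class, not the real `Dataset`), showing why returning `None` for an unformatted dataset makes `set_format(type=ds.format["type"])` a safe no-op:

```python
# Minimal model of the format state described in PR #526.
class FormatState:
    def __init__(self):
        # Unformatted datasets now report None instead of "python".
        self._type = None

    @property
    def format(self):
        return {"type": self._type}

    def set_format(self, type=None):
        # Accepting None means the round-trip below is always valid,
        # even when the dataset has never been formatted.
        self._type = type

ds = FormatState()
ds.set_format(type=ds.format["type"])  # no-op round-trip on an unformatted dataset
ds.set_format(type="numpy")            # normal use: pick an output format
```

Had the unformatted state been reported as the string `"python"`, the round-trip would silently switch an unformatted dataset into an explicit format, which is the inconsistency the PR removes.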
#525 wmt download speed example
issue · closed · 8 comments · by sshleifer · id 683,875,483
created 2020-08-21T23:29:06 · updated 2022-10-04T17:45:39 · closed 2022-10-04T17:45:39
https://github.com/huggingface/datasets/issues/525 · API: https://api.github.com/repos/huggingface/datasets/issues/525
Continuing from the slack 1.0 roadmap thread w @lhoestq , I realized the slow downloads is only a thing sometimes. Here are a few examples, I suspect there are multiple issues. All commands were run from the same gcp us-central-1f machine. ``` import nlp nlp.load_dataset('wmt16', 'de-en') ``` Downloads at 49.1 K...

#524 Some docs are missing parameter names
issue · closed · 1 comment · by jarednielsen · id 683,686,359
created 2020-08-21T16:47:34 · updated 2020-08-25T09:04:03 · closed 2020-08-25T09:04:03
https://github.com/huggingface/datasets/issues/524 · API: https://api.github.com/repos/huggingface/datasets/issues/524
See https://huggingface.co/nlp/master/package_reference/main_classes.html#nlp.Dataset.map. I believe this is because the parameter names are enclosed in backticks in the docstrings, maybe it's an old docstring format that doesn't work with the current Sphinx version.

#523 Speed up Tokenization by optimizing cast_to_python_objects
pull request · closed · 1 comment · by lhoestq · id 682,573,232
created 2020-08-20T09:42:02 · updated 2020-08-24T08:54:15 · closed 2020-08-24T08:54:14
https://github.com/huggingface/datasets/pull/523 · API: https://api.github.com/repos/huggingface/datasets/issues/523
I changed how `cast_to_python_objects` works to make it faster. It is used to cast numpy/pytorch/tensorflow/pandas objects to python lists, and it works recursively. To avoid iterating over possibly long lists, it first checks if the first element that is not None has to be casted. If the first element needs to be...

#522 dictionnary typo in docs
issue · closed · 1 comment · by yonigottesman · id 682,478,833
created 2020-08-20T07:11:05 · updated 2020-08-20T07:52:14 · closed 2020-08-20T07:52:13
https://github.com/huggingface/datasets/issues/522 · API: https://api.github.com/repos/huggingface/datasets/issues/522
Many places dictionary is spelled dictionnary, not sure if its on purpose or not. Fixed in this pr: https://github.com/huggingface/nlp/pull/521

#521 Fix dictionnary (dictionary) typo
pull request · closed · 1 comment · by yonigottesman · id 682,477,648
created 2020-08-20T07:09:02 · updated 2020-08-20T07:52:04 · closed 2020-08-20T07:52:04
https://github.com/huggingface/datasets/pull/521 · API: https://api.github.com/repos/huggingface/datasets/issues/521
This error happens many times I'm thinking maybe its spelled like this on purpose?

#520 Transform references for sacrebleu
pull request · closed · 1 comment · by jbragg · id 682,264,839
created 2020-08-20T00:26:55 · updated 2020-08-20T09:30:54 · closed 2020-08-20T09:30:53
https://github.com/huggingface/datasets/pull/520 · API: https://api.github.com/repos/huggingface/datasets/issues/520
Currently it is impossible to use sacrebleu when len(predictions) != the number of references per prediction (very uncommon), due to a strange format expected by sacrebleu. If one passes in the data to `nlp.metric.compute()` in sacrebleu format, `nlp` throws an error due to mismatching lengths between predictions and r...

#519 [BUG] Metrics throwing new error on master since 0.4.0
issue · closed · 2 comments · by jbragg · id 682,193,882
created 2020-08-19T21:29:15 · updated 2022-06-02T16:41:01 · closed 2020-08-19T22:04:40
https://github.com/huggingface/datasets/issues/519 · API: https://api.github.com/repos/huggingface/datasets/issues/519
The following error occurs when passing in references of type `List[List[str]]` to metrics like bleu. Wasn't happening on 0.4.0 but happening now on master. ``` File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 226, in compute self.add_batch(predictions=predictions, references=references) ...

#518 [METRICS, breaking] Refactor caching behavior, pickle/cloudpickle metrics and dataset, add tests on metrics
pull request · closed · 2 comments · by thomwolf · id 682,131,165
created 2020-08-19T19:43:08 · updated 2020-08-24T16:01:40 · closed 2020-08-24T16:01:39
https://github.com/huggingface/datasets/pull/518 · API: https://api.github.com/repos/huggingface/datasets/issues/518
Move the acquisition of the filelock at a later stage during metrics processing so it can be pickled/cloudpickled after instantiation. Also add some tests on pickling, concurrent but separate metric instances and concurrent and distributed metric instances. Changes significantly the caching behavior for the metri...

#517 add MLDoc dataset
issue · open · 2 comments · by jxmorris12 · id 681,896,944 · labels: dataset request
created 2020-08-19T14:41:59 · updated 2021-08-03T05:59:33 · not closed
https://github.com/huggingface/datasets/issues/517 · API: https://api.github.com/repos/huggingface/datasets/issues/517
Hi, I am recommending that someone add MLDoc, a multilingual news topic classification dataset. - Here's a link to the Github: https://github.com/facebookresearch/MLDoc - and the paper: http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf Looks like the dataset contains news stories in multiple languages...

#516 [Breaking] Rename formated to formatted
pull request · closed · 0 comments · by lhoestq · id 681,846,032
created 2020-08-19T13:35:23 · updated 2020-08-20T08:41:17 · closed 2020-08-20T08:41:16
https://github.com/huggingface/datasets/pull/516 · API: https://api.github.com/repos/huggingface/datasets/issues/516
`formated` is not correct but `formatted` is

#515 Fix batched map for formatted dataset
pull request · closed · 0 comments · by lhoestq · id 681,845,619
created 2020-08-19T13:34:50 · updated 2020-08-20T20:30:43 · closed 2020-08-20T20:30:42
https://github.com/huggingface/datasets/pull/515 · API: https://api.github.com/repos/huggingface/datasets/issues/515
If you had a dataset formatted as numpy for example, and tried to do a batched map, then it would crash because one of the elements from the inputs was missing for unchanged columns (ex: batch of length 999 instead of 1000). The happened during the creation of the `pa.Table`, since columns had different lengths.

#514 dataset.shuffle(keep_in_memory=True) is never allowed
issue · closed · 10 comments · by vegarab · id 681,256,348 · labels: good first issue, hacktoberfest
created 2020-08-18T18:47:40 · updated 2022-10-10T12:21:58 · closed 2022-10-10T12:21:58
https://github.com/huggingface/datasets/issues/514 · API: https://api.github.com/repos/huggingface/datasets/issues/514
As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)` The commit added the lines ```python # lines 994-996 in src/nlp/arrow_dataset.py assert ( not keep_in_memory or cache_file_name is None ), "Please use either...
#513 [speedup] Use indices mappings instead of deepcopy for all the samples reordering methods
pull request · closed · 4 comments · by thomwolf · id 681,215,612
created 2020-08-18T17:36:02 · updated 2020-08-28T08:41:51 · closed 2020-08-28T08:41:50
https://github.com/huggingface/datasets/pull/513 · API: https://api.github.com/repos/huggingface/datasets/issues/513
Use an indices mapping instead of rewriting the dataset for all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`). Added a `flatten_indices` method which copy the dataset to a new table to remove the indices mapping with tests. All the samples re-ordering/selecti...

#512 Delete CONTRIBUTING.md
pull request · closed · 2 comments · by ChenZehong13 · id 681,137,164
created 2020-08-18T15:33:25 · updated 2020-08-18T15:48:21 · closed 2020-08-18T15:39:07
https://github.com/huggingface/datasets/pull/512 · API: https://api.github.com/repos/huggingface/datasets/issues/512

#511 dataset.shuffle() and select() resets format. Intended?
issue · closed · 5 comments · by vegarab · id 681,055,553
created 2020-08-18T13:46:01 · updated 2020-09-14T08:45:38 · closed 2020-09-14T08:45:38
https://github.com/huggingface/datasets/issues/511 · API: https://api.github.com/repos/huggingface/datasets/issues/511
Calling `dataset.shuffle()` or `dataset.select()` on a dataset resets its format set by `dataset.set_format()`. Is this intended or an oversight? When working on quite large datasets that require a lot of preprocessing I find it convenient to save the processed dataset to file using `torch.save("dataset.pt")`. Later...

#510 Version of numpy to use the library
issue · closed · 2 comments · by isspek · id 680,823,644
created 2020-08-18T08:59:13 · updated 2020-08-19T18:35:56 · closed 2020-08-19T18:35:56
https://github.com/huggingface/datasets/issues/510 · API: https://api.github.com/repos/huggingface/datasets/issues/510
Thank you so much for your excellent work! I would like to use nlp library in my project. While importing nlp, I am receiving the following error `AttributeError: module 'numpy.random' has no attribute 'Generator'` Numpy version in my project is 1.16.0. May I learn which numpy version is used for the nlp library. Th...

#509 Converting TensorFlow dataset example
issue · closed · 2 comments · by saareliad · id 679,711,585
created 2020-08-16T08:05:20 · updated 2021-08-03T06:01:18 · closed 2021-08-03T06:01:17
https://github.com/huggingface/datasets/issues/509 · API: https://api.github.com/repos/huggingface/datasets/issues/509
Hi, I want to use TensorFlow datasets with this repo, I noticed you made some conversion script, can you give a simple example of using it? Thanks

#508 TypeError: Receiver() takes no arguments
issue · closed · 5 comments · by sebastiantomac · id 679,705,734
created 2020-08-16T07:18:16 · updated 2020-09-01T14:53:33 · closed 2020-09-01T14:49:03
https://github.com/huggingface/datasets/issues/508 · API: https://api.github.com/repos/huggingface/datasets/issues/508
I am trying to load a wikipedia data set ``` import nlp from nlp import load_dataset dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner') #dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner') ``` Th...

#507 Errors when I use
issue · closed · 1 comment · by mchari · id 679,400,683
created 2020-08-14T21:03:57 · updated 2020-08-14T21:39:10 · closed 2020-08-14T21:39:10
https://github.com/huggingface/datasets/issues/507 · API: https://api.github.com/repos/huggingface/datasets/issues/507
I tried the following example code from https://huggingface.co/deepset/roberta-base-squad2 and got errors I am using **transformers 3.0.2** code . from transformers.pipelines import pipeline from transformers.modeling_auto import AutoModelForQuestionAnswering from transformers.tokenization_auto import AutoToke...

#506 fix dataset.map for function without outputs
pull request · closed · 0 comments · by lhoestq · id 679,164,788
created 2020-08-14T13:40:22 · updated 2020-08-17T11:24:39 · closed 2020-08-17T11:24:38
https://github.com/huggingface/datasets/pull/506 · API: https://api.github.com/repos/huggingface/datasets/issues/506
As noticed in #505 , giving a function that doesn't return anything in `.map` raises an error because of an unreferenced variable. I fixed that and added tests. Thanks @avloss for reporting

#505 tmp_file referenced before assignment
pull request · closed · 2 comments · by avloss · id 678,791,400
created 2020-08-13T23:27:33 · updated 2020-08-14T13:42:46 · closed 2020-08-14T13:42:46
https://github.com/huggingface/datasets/pull/505 · API: https://api.github.com/repos/huggingface/datasets/issues/505
Just learning about this library - so might've not set up all the flags correctly, but was getting this error about "tmp_file".

#504 Added downloading to Hyperpartisan news detection
pull request · closed · 2 comments · by ghomasHudson · id 678,756,211
created 2020-08-13T21:53:46 · updated 2020-08-27T08:18:41 · closed 2020-08-27T08:18:41
https://github.com/huggingface/datasets/pull/504 · API: https://api.github.com/repos/huggingface/datasets/issues/504
Following the discussion on Slack and #349, I've updated the hyperpartisan dataset to pull directly from Zenodo rather than manual install, which should make this dataset much more accessible. Many thanks to @johanneskiesel ! Currently doesn't pass `test_load_real_dataset` - I'm using `self.config.name` which is `de...

#503 CompGuessWhat?! 0.2.0
pull request · closed · 20 comments · by aleSuglia · id 678,726,538
created 2020-08-13T20:51:26 · updated 2020-10-21T06:54:29 · closed 2020-10-21T06:54:29
https://github.com/huggingface/datasets/pull/503 · API: https://api.github.com/repos/huggingface/datasets/issues/503
We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset.

#502 Fix tokenizers caching
pull request · closed · 1 comment · by lhoestq · id 678,546,070
created 2020-08-13T15:53:37 · updated 2020-08-19T13:37:19 · closed 2020-08-19T13:37:18
https://github.com/huggingface/datasets/pull/502 · API: https://api.github.com/repos/huggingface/datasets/issues/502
I've found some cases where the caching didn't work properly for tokenizers: 1. if a tokenizer has a regex pattern, then the caching would be inconsistent across sessions 2. if a tokenizer has a cache attribute that changes after some calls, the the caching would not work after cache updates 3. if a tokenizer is u...

#501 Caching doesn't work for map (non-deterministic)
issue · closed · 4 comments · by wulu473 · id 677,952,893
created 2020-08-12T20:20:07 · updated 2022-08-08T11:02:23 · closed 2020-08-24T16:34:35
https://github.com/huggingface/datasets/issues/501 · API: https://api.github.com/repos/huggingface/datasets/issues/501
The caching functionality doesn't work reliably when tokenizing a dataset. Here's a small example to reproduce it. ```python import nlp import transformers def main(): ds = nlp.load_dataset("reddit", split="train[:500]") tokenizer = transformers.AutoTokenizer.from_pretrained("gpt2") def conv...
#500 Use hnsw in wiki_dpr
pull request · closed · 0 comments · by lhoestq · id 677,841,708
created 2020-08-12T16:58:07 · updated 2020-08-20T07:59:19 · closed 2020-08-20T07:59:18
https://github.com/huggingface/datasets/pull/500 · API: https://api.github.com/repos/huggingface/datasets/issues/500
The HNSW faiss index is much faster that regular Flat index.

#499 Narrativeqa (with full text)
pull request · closed · 9 comments · by ghomasHudson · id 677,709,938
created 2020-08-12T13:49:43 · updated 2020-12-09T11:21:02 · closed 2020-12-09T11:21:02
https://github.com/huggingface/datasets/pull/499 · API: https://api.github.com/repos/huggingface/datasets/issues/499
Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset. Few notes: - Had some encoding issues using the default `open` so am using `open(encoding="latin-1"...` which seems to fix it. Looks fine. - Can't get the dummy data to work. Currently putting stuff at: ...

#498 dont use beam fs to save info for local cache dir
pull request · closed · 0 comments · by lhoestq · id 677,597,479
created 2020-08-12T11:00:00 · updated 2020-08-14T13:17:21 · closed 2020-08-14T13:17:20
https://github.com/huggingface/datasets/pull/498 · API: https://api.github.com/repos/huggingface/datasets/issues/498
If the cache dir is local, then we shouldn't use beam's filesystem to save the dataset info Fix #490

#497 skip header in PAWS-X
pull request · closed · 0 comments · by lhoestq · id 677,057,116
created 2020-08-11T17:26:25 · updated 2020-08-19T09:50:02 · closed 2020-08-19T09:50:01
https://github.com/huggingface/datasets/pull/497 · API: https://api.github.com/repos/huggingface/datasets/issues/497
This should fix #485 I also updated the `dataset_infos.json` file that is used to verify the integrity of the generated splits (the number of examples was reduced by one). Note that there are new fields in `dataset_infos.json` introduced in the latest release 0.4.0 corresponding to post processing info. I remove...

#496 fix bad type in overflow check
pull request · closed · 0 comments · by lhoestq · id 677,016,998
created 2020-08-11T16:24:58 · updated 2020-08-14T13:29:35 · closed 2020-08-14T13:29:34
https://github.com/huggingface/datasets/pull/496 · API: https://api.github.com/repos/huggingface/datasets/issues/496
When writing an arrow file and inferring the features, the overflow check could fail if the first example had a `null` field. This is because we were not using the inferred features to do this check, and we could end up with arrays that don't match because of a type mismatch (`null` vs `string` for example). This s...

#495 stack vectors in pytorch and tensorflow
pull request · closed · 0 comments · by lhoestq · id 676,959,289
created 2020-08-11T15:12:53 · updated 2020-08-12T09:30:49 · closed 2020-08-12T09:30:48
https://github.com/huggingface/datasets/pull/495 · API: https://api.github.com/repos/huggingface/datasets/issues/495
When the format of a dataset is set to pytorch or tensorflow, and if the dataset has vectors in it, they were not stacked together as tensors when calling `dataset[i:i + batch_size][column]` or `dataset[column]`. I added support for stacked tensors for both pytorch and tensorflow. For ragged tensors, they are stack...

#494 Fix numpy stacking
pull request · closed · 1 comment · by lhoestq · id 676,886,955
created 2020-08-11T13:40:30 · updated 2020-08-11T14:56:50 · closed 2020-08-11T13:49:52
https://github.com/huggingface/datasets/pull/494 · API: https://api.github.com/repos/huggingface/datasets/issues/494
When getting items using a column name as a key, numpy arrays were not stacked. I fixed that and added some tests. There is another issue that still needs to be fixed though: when getting items using a column name as a key, pytorch tensors are not stacked (it outputs a list of tensors). This PR should help with the...

#493 Fix wmt zh-en url
pull request · closed · 1 comment · by sshleifer · id 676,527,351
created 2020-08-11T02:14:52 · updated 2020-08-11T02:22:28 · closed 2020-08-11T02:22:12
https://github.com/huggingface/datasets/pull/493 · API: https://api.github.com/repos/huggingface/datasets/issues/493
I verified that ``` wget https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-zh.tar.gz.00 ``` runs in 2 minutes.

#492 nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema
issue · closed · 7 comments · by jarednielsen · id 676,495,064
created 2020-08-11T00:27:46 · updated 2020-08-26T16:17:19 · closed 2020-08-26T16:17:19
https://github.com/huggingface/datasets/issues/492 · API: https://api.github.com/repos/huggingface/datasets/issues/492
Here's the code I'm trying to run: ```python dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir) dset_wikipedia.drop(columns=["title"]) dset_wikipedia.features.pop("title") dset_books = nlp.load_dataset("bookcorpus", split="train", cache_dir=args.cache_dir) dse...

#491 No 0.4.0 release on GitHub
issue · closed · 2 comments · by jarednielsen · id 676,486,275
created 2020-08-10T23:59:57 · updated 2020-08-11T16:50:07 · closed 2020-08-11T16:50:07
https://github.com/huggingface/datasets/issues/491 · API: https://api.github.com/repos/huggingface/datasets/issues/491
0.4.0 was released on PyPi, but not on GitHub. This means [the documentation](https://huggingface.co/nlp/) is still displaying from 0.3.0, and that there's no tag to easily clone the 0.4.0 version of the repo.

#490 Loading preprocessed Wikipedia dataset requires apache_beam
issue · closed · 0 comments · by jarednielsen · id 676,482,242
created 2020-08-10T23:46:50 · updated 2020-08-14T13:17:20 · closed 2020-08-14T13:17:20
https://github.com/huggingface/datasets/issues/490 · API: https://api.github.com/repos/huggingface/datasets/issues/490
Running `nlp.load_dataset("wikipedia", "20200501.en", split="train", dir="/tmp/wikipedia")` gives an error if apache_beam is not installed, stemming from https://github.com/huggingface/nlp/blob/38eb2413de54ee804b0be81781bd65ac4a748ced/src/nlp/builder.py#L981-L988 This succeeded without the dependency in ve...

#489 ug
issue · closed · 2 comments · by timothyjlaurent · id 676,456,257
created 2020-08-10T22:33:03 · updated 2020-08-10T22:55:14 · closed 2020-08-10T22:33:40
https://github.com/huggingface/datasets/issues/489 · API: https://api.github.com/repos/huggingface/datasets/issues/489

#488 issues with downloading datasets for wmt16 and wmt19
issue · closed · 3 comments · by stas00 · id 676,299,993
created 2020-08-10T17:32:51 · updated 2022-10-04T17:46:59 · closed 2022-10-04T17:46:58
https://github.com/huggingface/datasets/issues/488 · API: https://api.github.com/repos/huggingface/datasets/issues/488
I have encountered multiple issues while trying to: ``` import nlp dataset = nlp.load_dataset('wmt16', 'ru-en') metric = nlp.load_metric('wmt16') ``` 1. I had to do `pip install -e ".[dev]" ` on master, currently released nlp didn't work (sorry, didn't save the error) - I went back to the released version and no...

#487 Fix elasticsearch result ids returning as strings
pull request · id 676,143,029
https://github.com/huggingface/datasets/pull/487 · API: https://api.github.com/repos/huggingface/datasets/issues/487
closed
1
2020-08-10T13:37:11
2020-08-31T10:42:46
2020-08-31T10:42:46
sai-prasanna
[]
I am using the latest elasticsearch binary and master of nlp. For me elasticsearch searches failed because the resultant "id_" returned for searches are strings, but our library assumes them to be integers.
true
675,649,034
https://api.github.com/repos/huggingface/datasets/issues/486
https://github.com/huggingface/datasets/issues/486
486
Bookcorpus data contains pretokenized text
closed
8
2020-08-09T06:53:24
2022-10-04T17:44:33
2022-10-04T17:44:33
orsharir
[]
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways incompatible with how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end q...
false
675,595,393
https://api.github.com/repos/huggingface/datasets/issues/485
https://github.com/huggingface/datasets/issues/485
485
PAWS dataset first item is header
closed
0
2020-08-08T22:05:25
2020-08-19T09:50:01
2020-08-19T09:50:01
jxmorris12
[]
``` import nlp dataset = nlp.load_dataset('xtreme', 'PAWS-X.en') dataset['test'][0] ``` prints the following ``` {'label': 'label', 'sentence1': 'sentence1', 'sentence2': 'sentence2'} ``` dataset['test'][0] should probably be the first item in the dataset, not just a dictionary mapping the column names t...
false
675,088,983
https://api.github.com/repos/huggingface/datasets/issues/484
https://github.com/huggingface/datasets/pull/484
484
update mirror for RT dataset
closed
4
2020-08-07T15:25:45
2020-08-24T13:33:37
2020-08-24T13:33:37
jxmorris12
[]
true
675,080,694
https://api.github.com/repos/huggingface/datasets/issues/483
https://github.com/huggingface/datasets/issues/483
483
rotten tomatoes movie review dataset taken down
closed
3
2020-08-07T15:12:01
2020-09-08T09:36:34
2020-09-08T09:36:33
jxmorris12
[]
In an interesting twist of events, the individual who created the movie review dataset seems to have left Cornell, and their webpage has been removed, along with the movie review dataset (http://www.cs.cornell.edu/people/pabo/movie-review-data/rt-polaritydata.tar.gz). It's not downloadable anymore.
false
674,851,147
https://api.github.com/repos/huggingface/datasets/issues/482
https://github.com/huggingface/datasets/issues/482
482
Bugs : dataset.map() is frozen on ELI5
closed
8
2020-08-07T08:23:35
2023-04-06T09:39:59
2020-08-11T23:55:15
ratthachat
[]
Hi Huggingface Team! Thank you guys once again for this amazing repo. I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) However, when I run `dataset.map()` on ELI5 to prepare `input_text, ta...
false
674,567,389
https://api.github.com/repos/huggingface/datasets/issues/481
https://github.com/huggingface/datasets/pull/481
481
Apply utf-8 encoding to all datasets
closed
6
2020-08-06T20:02:09
2020-08-20T08:16:08
2020-08-20T08:16:08
lewtun
[]
## Description This PR applies utf-8 encoding for all instances of `with open(...) as f` to all Python files in `datasets/`. As suggested by @thomwolf in #468 , we use regular expressions and the following function ```python def apply_encoding_on_file_open(filepath: str): """Apply UTF-8 encoding for all insta...
true
674,245,959
https://api.github.com/repos/huggingface/datasets/issues/480
https://github.com/huggingface/datasets/pull/480
480
Column indexing hotfix
closed
2
2020-08-06T11:37:05
2023-09-24T09:49:33
2020-08-12T08:36:10
TevenLeScao
[]
As observed for example in #469, currently `__getitem__` does not convert the data to the dataset format when indexing by column. This is a hotfix that imitates the functional 0.3.0 code. In the future it'd probably be nice to have a test there.
true
673,905,407
https://api.github.com/repos/huggingface/datasets/issues/479
https://github.com/huggingface/datasets/pull/479
479
add METEOR metric
closed
5
2020-08-05T23:13:00
2020-08-19T13:39:09
2020-08-19T13:39:09
vegarab
[]
Added the METEOR metric. Can be used like this: ```python import nlp meteor = nlp.load_metric('metrics/meteor') meteor.compute(["some string", "some string"], ["some string", "some similar string"]) # {'meteor': 0.6411637931034483} meteor.add("some string", "some string") meteor.add("some string", "some simila...
true
673,178,317
https://api.github.com/repos/huggingface/datasets/issues/478
https://github.com/huggingface/datasets/issues/478
478
Export TFRecord to GCP bucket
closed
1
2020-08-05T01:08:32
2020-08-05T01:21:37
2020-08-05T01:21:36
astariul
[]
Previously, I was writing TFRecords manually to GCP bucket with : `with tf.io.TFRecordWriter('gs://my_bucket/x.tfrecord')` Since `0.4.0` is out with the `export()` function, I tried it. But it seems TFRecords cannot be directly written to GCP bucket. `dataset.export('local.tfrecord')` works fine, but `dataset....
false
673,142,143
https://api.github.com/repos/huggingface/datasets/issues/477
https://github.com/huggingface/datasets/issues/477
477
Overview.ipynb throws exceptions with nlp 0.4.0
closed
3
2020-08-04T23:18:15
2021-08-03T06:02:15
2021-08-03T06:02:15
mandy-li
[]
with nlp 0.4.0, the TensorFlow example in Overview.ipynb throws the following exceptions: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-5-48907f2ad433> in <module> ----> 1 features = {x: trai...
false
672,991,854
https://api.github.com/repos/huggingface/datasets/issues/476
https://github.com/huggingface/datasets/pull/476
476
CheckList
closed
2
2020-08-04T18:32:05
2022-10-03T09:43:37
2022-10-03T09:43:37
marcotcr
[ "dataset contribution" ]
Sorry for the large pull request. - Added checklists as datasets. I can't run `test_load_real_dataset` (see #474), but I can load the datasets successfully as shown in the example notebook - Added a checklist wrapper
true
672,884,595
https://api.github.com/repos/huggingface/datasets/issues/475
https://github.com/huggingface/datasets/pull/475
475
misc. bugs and quality of life
closed
2
2020-08-04T15:32:29
2020-08-17T21:14:08
2020-08-17T21:14:07
joeddav
[]
A few misc. bugs and QOL improvements that I've come across in using the library. Let me know if you don't like any of them and I can adjust/remove them. 1. Printing datasets without a description field throws an error when formatting the `single_line_description`. This fixes that, and also adds some formatting to t...
true
672,407,330
https://api.github.com/repos/huggingface/datasets/issues/474
https://github.com/huggingface/datasets/issues/474
474
test_load_real_dataset when config has BUILDER_CONFIGS that matter
closed
2
2020-08-03T23:46:36
2020-09-07T14:53:13
2020-09-07T14:53:13
marcotcr
[]
If a dataset has custom `BUILDER_CONFIGS` with non-keyword arguments (or keyword arguments with non-default values), the config is not loaded during the test, which causes an error. I think the problem is that `test_load_real_dataset` calls `load_dataset` with `data_dir=temp_data_dir` ([here](https://github.com/huggingfa...
false
672,007,247
https://api.github.com/repos/huggingface/datasets/issues/473
https://github.com/huggingface/datasets/pull/473
473
add DoQA dataset (ACL 2020)
closed
0
2020-08-03T11:26:52
2020-09-10T17:19:11
2020-09-03T11:44:15
mariamabarham
[]
add DoQA dataset (ACL 2020) http://ixa.eus/node/12931
true
672,000,745
https://api.github.com/repos/huggingface/datasets/issues/472
https://github.com/huggingface/datasets/pull/472
472
add crd3 dataset
closed
1
2020-08-03T11:15:02
2020-08-03T11:22:10
2020-08-03T11:22:09
mariamabarham
[]
opening new PR for CRD3 dataset (ACL2020) to fix the circle CI problems
true
671,996,423
https://api.github.com/repos/huggingface/datasets/issues/471
https://github.com/huggingface/datasets/pull/471
471
add reuters21578 dataset
closed
0
2020-08-03T11:07:14
2022-08-04T08:39:11
2020-09-03T09:58:50
mariamabarham
[]
new PR to add the reuters21578 dataset and fix the circle CI problems. Fix partially: - #353 Subsequent PR after: - #449
true
671,952,276
https://api.github.com/repos/huggingface/datasets/issues/470
https://github.com/huggingface/datasets/pull/470
470
Adding IWSLT 2017 dataset.
closed
6
2020-08-03T09:52:39
2020-09-07T12:33:30
2020-09-07T12:33:30
Narsil
[]
Created a [IWSLT 2017](https://sites.google.com/site/iwsltevaluation2017/TED-tasks) dataset script for the *multilingual data*. ``` Bilingual data: {Arabic, German, French, Japanese, Korean, Chinese} <-> English Multilingual data: German, English, Italian, Dutch, Romanian. (Any pair) ``` I'm unsure how to h...
true
671,876,963
https://api.github.com/repos/huggingface/datasets/issues/469
https://github.com/huggingface/datasets/issues/469
469
invalid data type 'str' at _convert_outputs in arrow_dataset.py
closed
9
2020-08-03T07:48:29
2023-07-20T15:54:17
2023-07-20T15:54:17
Murgates
[]
I'm trying to build a multi-label text classifier model using the Transformers lib. I'm using Transformers NLP to load the dataset; when calling the trainer.train() method, it throws the following error File "C:\***\arrow_dataset.py", line 343, in _convert_outputs v = command(v) TypeError: new(): invalid data type ...
false
671,622,441
https://api.github.com/repos/huggingface/datasets/issues/468
https://github.com/huggingface/datasets/issues/468
468
UnicodeDecodeError while loading PAN-X task of XTREME dataset
closed
5
2020-08-02T14:05:10
2020-08-20T08:16:08
2020-08-20T08:16:08
lewtun
[]
Hi 🤗 team! ## Description of the problem I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset the XTREME dataset: ``` --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) <ipython-inp...
false
671,580,010
https://api.github.com/repos/huggingface/datasets/issues/467
https://github.com/huggingface/datasets/pull/467
467
DOCS: Fix typo
closed
1
2020-08-02T08:59:37
2020-08-02T13:52:27
2020-08-02T09:18:54
bharatr21
[]
Fix typo from dictionnary -> dictionary
true
670,766,891
https://api.github.com/repos/huggingface/datasets/issues/466
https://github.com/huggingface/datasets/pull/466
466
[METRICS] Various improvements on metrics
closed
2
2020-08-01T11:03:45
2020-08-17T15:15:00
2020-08-17T15:14:59
thomwolf
[]
- Disallow the use of positional arguments to avoid `predictions` vs `references` mistakes - Allow to directly feed numpy/pytorch/tensorflow/pandas objects in metrics
true
669,889,779
https://api.github.com/repos/huggingface/datasets/issues/465
https://github.com/huggingface/datasets/pull/465
465
Keep features after transform
closed
3
2020-07-31T14:43:21
2020-07-31T18:27:33
2020-07-31T18:27:32
lhoestq
[]
When applying a transform like `map`, some features were lost (and inferred features were used). It was the case for ClassLabel, Translation, etc. To fix that, I did some modifications in the `ArrowWriter`: - added the `update_features` parameter. When it's `True`, then the features specified by the user (if any...
true
669,767,381
https://api.github.com/repos/huggingface/datasets/issues/464
https://github.com/huggingface/datasets/pull/464
464
Add rename, remove and cast in-place operations
closed
0
2020-07-31T12:30:21
2020-07-31T15:50:02
2020-07-31T15:50:00
thomwolf
[]
Add a bunch of in-place operation leveraging the Arrow back-end to rename and remove columns and cast to new features without using the more expensive `map` method. These methods are added to `Dataset` as well as `DatasetDict`. Added tests for these new methods and add the methods to the doc. Naming follows th...
true
669,735,455
https://api.github.com/repos/huggingface/datasets/issues/463
https://github.com/huggingface/datasets/pull/463
463
Add dataset/mlsum
closed
3
2020-07-31T11:50:52
2020-08-24T14:54:42
2020-08-24T14:54:42
RachelKer
[]
New pull request that should correct the previous errors. The load_real_data test still fails because it is looking for a default dataset URL that does not exist; this does not happen when loading the dataset with load_dataset
true
669,715,547
https://api.github.com/repos/huggingface/datasets/issues/462
https://github.com/huggingface/datasets/pull/462
462
add DoQA (ACL 2020) dataset
closed
0
2020-07-31T11:25:56
2023-09-24T09:48:42
2020-08-03T11:28:27
mariamabarham
[]
adds DoQA (ACL 2020) dataset
true
669,703,508
https://api.github.com/repos/huggingface/datasets/issues/461
https://github.com/huggingface/datasets/pull/461
461
Doqa
closed
0
2020-07-31T11:11:12
2023-09-24T09:48:40
2020-07-31T11:13:15
mariamabarham
[]
add DoQA (ACL 2020) dataset
true
669,585,256
https://api.github.com/repos/huggingface/datasets/issues/460
https://github.com/huggingface/datasets/pull/460
460
Fix KeyboardInterrupt in map and bad indices in select
closed
2
2020-07-31T08:57:15
2020-07-31T11:32:19
2020-07-31T11:32:18
lhoestq
[]
If you interrupted a map function while it was writing, the cached file was not discarded. Therefore the next time you called map, it was loading an incomplete arrow file. We had the same issue with select if there was a bad index at one point. To fix that, I used temporary files that are renamed once everything...
true
669,545,437
https://api.github.com/repos/huggingface/datasets/issues/459
https://github.com/huggingface/datasets/pull/459
459
[Breaking] Update Dataset and DatasetDict API
closed
0
2020-07-31T08:11:33
2020-08-26T08:28:36
2020-08-26T08:28:35
thomwolf
[]
This PR contains a few breaking changes so it's probably good to keep it for the next (major) release: - rename the `flatten`, `drop` and `dictionary_encode_column` methods in `flatten_`, `drop_` and `dictionary_encode_column_` to indicate that these methods have in-place effects as discussed in #166. From now on we s...
true
668,972,666
https://api.github.com/repos/huggingface/datasets/issues/458
https://github.com/huggingface/datasets/pull/458
458
Install CoVal metric from github
closed
0
2020-07-30T16:59:25
2020-07-31T13:56:33
2020-07-31T13:56:33
yjernite
[]
Changed the import statements in `coval.py` to direct the user to install the original package from github if it's not already installed (the warning will only display properly after merging [PR455](https://github.com/huggingface/nlp/pull/455)) Also changed the function call to use named rather than positional argum...
true
668,898,386
https://api.github.com/repos/huggingface/datasets/issues/457
https://github.com/huggingface/datasets/pull/457
457
add set_format to DatasetDict + tests
closed
0
2020-07-30T15:53:20
2020-07-30T17:34:36
2020-07-30T17:34:34
thomwolf
[]
Add `set_format`, `formated_as`, and `reset_format` to `DatasetDict`. Add tests for these for `Dataset` and `DatasetDict`. Fix some bugs uncovered by the tests for `pandas` formatting.
true
668,723,785
https://api.github.com/repos/huggingface/datasets/issues/456
https://github.com/huggingface/datasets/pull/456
456
add crd3(ACL 2020) dataset
closed
0
2020-07-30T13:28:35
2023-09-24T09:48:47
2020-08-03T11:28:52
mariamabarham
[]
This PR adds the **Critical Role Dungeons and Dragons Dataset** published at ACL 2020
true