Schema:

| Column | Type | Range / values |
|---|---|---|
| id | int64 | 599M to 3.29B |
| url | string | 58 to 61 chars |
| html_url | string | 46 to 51 chars |
| number | int64 | 1 to 7.72k |
| title | string | 1 to 290 chars |
| state | string | 2 classes |
| comments | int64 | 0 to 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 to 2025-08-01 05:15:45 |
| user_login | string | 3 to 26 chars |
| labels | list | 0 to 4 items |
| body | string | 0 to 228k chars |
| is_pull_request | bool | 2 classes |
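The schema above can be modeled as a small Python structure for quick exploration. This is a hedged sketch, not the dataset's actual loading code: `IssueRecord` is a hypothetical class of my own, and the two sample rows are copied from the records below (with bodies shortened).

```python
from dataclasses import dataclass, field
from typing import Optional

# Minimal, illustrative model of one row of the schema above.
# The real dataset stores created_at/updated_at/closed_at as Arrow
# timestamp[s] values; plain strings are used here for simplicity.
@dataclass
class IssueRecord:
    id: int
    url: str
    html_url: str
    number: int
    title: str
    state: str                 # 2 classes: "open" / "closed"
    comments: int
    created_at: str
    updated_at: str
    closed_at: Optional[str]   # None for issues that are still open
    user_login: str
    labels: list = field(default_factory=list)
    body: str = ""
    is_pull_request: bool = False

rows = [
    IssueRecord(728_211_373,
                "https://api.github.com/repos/huggingface/datasets/issues/756",
                "https://github.com/huggingface/datasets/pull/756", 756,
                "Start community-provided dataset docs", "closed", 1,
                "2020-10-23T13:17:41", "2020-10-26T12:55:20", "2020-10-26T12:55:19",
                "sshleifer", [], "Continuation of #736 with clean fork.", True),
    IssueRecord(724_703_980,
                "https://api.github.com/repos/huggingface/datasets/issues/743",
                "https://github.com/huggingface/datasets/issues/743", 743,
                "load_dataset for CSV files not working", "open", 23,
                "2020-10-19T14:53:51", "2025-04-24T06:35:25", None,
                "iliemihai", [], "", False),
]

# The is_pull_request flag separates pull requests from plain issues.
pulls = [r for r in rows if r.is_pull_request]
issues = [r for r in rows if not r.is_pull_request]
```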
#756 · Start community-provided dataset docs (pull request, closed)
sshleifer · 1 comment · no labels · id 728,211,373
created 2020-10-23T13:17:41 · updated 2020-10-26T12:55:20 · closed 2020-10-26T12:55:19
https://api.github.com/repos/huggingface/datasets/issues/756 · https://github.com/huggingface/datasets/pull/756
Body: Continuation of #736 with clean fork. #### Old description This is what I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. It was pretty easy because all I had to do was make properly formatted directories and change URLs. In slack @thomwolf called it a user-...

#755 · Start community-provided dataset docs V2 (pull request, closed)
sshleifer · 0 comments · no labels · id 728,203,821
created 2020-10-23T13:07:30 · updated 2020-10-23T13:15:37 · closed 2020-10-23T13:15:37
https://api.github.com/repos/huggingface/datasets/issues/755 · https://github.com/huggingface/datasets/pull/755

#754 · Use full released xsum dataset (pull request, closed)
jbragg · 3 comments · no labels · id 727,863,105
created 2020-10-23T03:29:49 · updated 2021-01-01T03:11:56 · closed 2020-10-26T12:56:58
https://api.github.com/repos/huggingface/datasets/issues/754 · https://github.com/huggingface/datasets/pull/754
Body: #672 Fix xsum to expand coverage and include IDs Code based on parser from older version of `datasets/xsum/xsum.py` @lhoestq

#753 · Fix doc links to viewer (pull request, closed)
Pierrci · 0 comments · no labels · id 727,434,935
created 2020-10-22T14:20:16 · updated 2020-10-23T08:42:11 · closed 2020-10-23T08:42:11
https://api.github.com/repos/huggingface/datasets/issues/753 · https://github.com/huggingface/datasets/pull/753
Body: It seems #733 forgot some links in the doc :)

#752 · Clicking on a metric in the search page points to datasets page giving "Missing dataset" warning (issue, closed)
ogabrielluiz · 2 comments · no labels · id 726,917,801
created 2020-10-21T22:56:23 · updated 2020-10-22T16:19:42 · closed 2020-10-22T16:19:42
https://api.github.com/repos/huggingface/datasets/issues/752 · https://github.com/huggingface/datasets/issues/752
Body: Hi! Sorry if this isn't the right place to talk about the website, I just didn't exactly where to write this. Searching a metric in https://huggingface.co/metrics gives the right results but clicking on a metric (E.g ROUGE) points to https://huggingface.co/datasets/rouge. Clicking on a metric without searching point...

#751 · Error loading ms_marco v2.1 using load_dataset() (issue, closed)
JainSahit · 3 comments · no labels · id 726,820,191
created 2020-10-21T19:54:43 · updated 2020-11-05T01:31:57 · closed 2020-11-05T01:31:57
https://api.github.com/repos/huggingface/datasets/issues/751 · https://github.com/huggingface/datasets/issues/751
Body: Code: `dataset = load_dataset('ms_marco', 'v2.1')` Error: ``` `--------------------------------------------------------------------------- JSONDecodeError Traceback (most recent call last) <ipython-input-16-34378c057212> in <module>() 9 10 # Downloading and loading a data...

#750 · load_dataset doesn't include `features` in its hash (issue, closed)
sgugger · 0 comments · no labels · id 726,589,446
created 2020-10-21T15:16:41 · updated 2020-10-29T09:36:01 · closed 2020-10-29T09:36:01
https://api.github.com/repos/huggingface/datasets/issues/750 · https://github.com/huggingface/datasets/issues/750
Body: It looks like the function `load_dataset` does not include what's passed in the `features` argument when creating a hash for a given dataset. As a result, if a user includes new features from an already downloaded dataset, those are ignored. Example: some models on the hub have a different ordering for the labels t...
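Issue #750 above reports that the cache hash computed by `load_dataset` ignored the `features` argument, so two loads differing only in label ordering could share a cache entry. A minimal stdlib sketch of a fingerprint that does account for `features` (a hypothetical helper of mine, not the library's actual implementation):

```python
import hashlib
import json

def cache_fingerprint(path, data_files, features=None):
    """Hypothetical cache key: hash every argument that affects the
    resulting dataset, including `features` (the part issue #750 says
    was missing from the real hash)."""
    payload = json.dumps(
        {"path": path, "data_files": data_files, "features": features},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Two loads that differ only in label ordering must not share a cache entry.
a = cache_fingerprint("csv", ["data.csv"], {"label": ["neg", "pos"]})
b = cache_fingerprint("csv", ["data.csv"], {"label": ["pos", "neg"]})
```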
#749 · [XGLUE] Adding new dataset (issue, closed)
patrickvonplaten · 15 comments · labels: dataset request · id 726,366,062
created 2020-10-21T10:51:36 · updated 2022-09-30T11:35:30 · closed 2021-01-06T10:02:55
https://api.github.com/repos/huggingface/datasets/issues/749 · https://github.com/huggingface/datasets/issues/749
Body: XGLUE is a multilingual GLUE like dataset propesed in this [paper](https://arxiv.org/pdf/2004.01401.pdf). I'm planning on adding the dataset to the library myself in a couple of weeks. Also tagging @JetRunner @qiweizhen in case I need some guidance

#748 · New version of CompGuessWhat?! with refined annotations (pull request, closed)
aleSuglia · 1 comment · no labels · id 726,196,589
created 2020-10-21T06:55:41 · updated 2020-10-21T08:52:42 · closed 2020-10-21T08:46:19
https://api.github.com/repos/huggingface/datasets/issues/748 · https://github.com/huggingface/datasets/pull/748
Body: This pull request introduces a few fixes to the annotations for VisualGenome in the CompGuessWhat?! original split.

#747 · Add Quail question answering dataset (pull request, closed)
sai-prasanna · 0 comments · no labels · id 725,884,704
created 2020-10-20T19:33:14 · updated 2020-10-21T08:35:15 · closed 2020-10-21T08:35:15
https://api.github.com/repos/huggingface/datasets/issues/747 · https://github.com/huggingface/datasets/pull/747
Body: QuAIL is a multi-domain RC dataset featuring news, blogs, fiction and user stories. Each domain is represented by 200 texts, which gives us a 4-way data split. The texts are 300-350 word excerpts from CC-licensed texts that were hand-picked so as to make sense to human readers without larger context. Domain diversit...

#746 · dataset(ngt): add ngt dataset initial loading script (pull request, closed)
AmitMY · 0 comments · no labels · id 725,627,235
created 2020-10-20T14:04:58 · updated 2021-03-23T06:19:38 · closed 2021-03-23T06:19:38
https://api.github.com/repos/huggingface/datasets/issues/746 · https://github.com/huggingface/datasets/pull/746
Body: Currently only making the paths to the annotation ELAN (eaf) file and videos available. This is the first accessible way to download this dataset, which is not manual file-by-file. Only downloading the necessary files, the annotation files are very small, 20MB for all of them, but the video files are large, 100GB i...

#745 · Fix emotion description (pull request, closed)
lewtun · 1 comment · no labels · id 725,589,352
created 2020-10-20T13:28:39 · updated 2021-04-22T14:47:31 · closed 2020-10-21T08:38:27
https://api.github.com/repos/huggingface/datasets/issues/745 · https://github.com/huggingface/datasets/pull/745
Body: Fixes the description of the emotion dataset to reflect the class names observed in the data, not the ones described in the paper. I also took the liberty to make use of `ClassLabel` for the emotion labels.

#744 · Dataset Explorer Doesn't Work for squad_es and squad_it (issue, closed)
gaotongxiao · 1 comment · labels: nlp-viewer · id 724,918,448
created 2020-10-19T19:34:12 · updated 2020-10-26T16:36:17 · closed 2020-10-26T16:36:17
https://api.github.com/repos/huggingface/datasets/issues/744 · https://github.com/huggingface/datasets/issues/744
Body: https://huggingface.co/nlp/viewer/?dataset=squad_es https://huggingface.co/nlp/viewer/?dataset=squad_it Both pages show "OSError: [Errno 28] No space left on device".

#743 · load_dataset for CSV files not working (issue, open)
iliemihai · 23 comments · no labels · id 724,703,980
created 2020-10-19T14:53:51 · updated 2025-04-24T06:35:25 · closed: null
https://api.github.com/repos/huggingface/datasets/issues/743 · https://github.com/huggingface/datasets/issues/743
Body: Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets. ` from datasets import load_dataset ` ` dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master") ` Displayed error: ` ... ArrowInva...
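Issue #743 above is about tab-delimited CSV loading. The `ArrowInvalid` in the report comes from the pyarrow reader underneath `load_dataset`; as a stdlib-only stand-in, the same delimiter and column-name handling can be sketched with Python's `csv` module (the sample data and column names here are hypothetical, mirroring the call in the issue):

```python
import csv
import io

# Tab-separated sample standing in for ./sample_data.csv from the report.
sample = "My title\tSome text about something\nOther title\tMore text\n"

# Stand-in for load_dataset("csv", delimiter="\t",
# column_names=["title", "text"]): parse with an explicit delimiter and
# attach the column names ourselves, since the file has no header row.
column_names = ["title", "text"]
reader = csv.reader(io.StringIO(sample), delimiter="\t")
records = [dict(zip(column_names, row)) for row in reader]
```

A mismatch between `delimiter` and the file's actual separator is one common way to end up with rows of the wrong width, which is the kind of inconsistency the Arrow reader rejects.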
#742 · Add OCNLI, a new CLUE dataset (pull request, closed)
JetRunner · 1 comment · no labels · id 724,509,974
created 2020-10-19T11:06:33 · updated 2020-10-22T16:19:49 · closed 2020-10-22T16:19:48
https://api.github.com/repos/huggingface/datasets/issues/742 · https://github.com/huggingface/datasets/pull/742
Body: OCNLI stands for Original Chinese Natural Language Inference. It is a corpus for Chinese Natural Language Inference, collected following closely the procedures of MNLI, but with enhanced strategies aiming for more challenging inference pairs. We want to emphasize we did not use hu...

#741 · Creating dataset consumes too much memory (issue, closed)
AmitMY · 20 comments · no labels · id 723,924,275
created 2020-10-18T06:07:06 · updated 2022-02-15T17:03:10 · closed 2022-02-15T17:03:10
https://api.github.com/repos/huggingface/datasets/issues/741 · https://github.com/huggingface/datasets/issues/741
Body: Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue. Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400): ```python def _generate_examples(self, base_path, split): """ Yields examp...
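Issue #741 above concerns memory blowing up while a dataset of large image sequences is being built. One mitigation pattern (sketched here with hypothetical helpers, not the library's real writer) is to yield examples one at a time and flush them in small batches, so only one batch is ever held in memory, in the spirit of a small `writer_batch_size`:

```python
def generate_examples():
    # Stand-in for a _generate_examples that yields large items one at a
    # time instead of accumulating them all in a list first.
    for i in range(10):
        yield i, {"frames": [0] * 4}  # imagine a 260x210x3 image sequence here

def write_in_batches(examples, batch_size=3):
    """Hypothetical writer: flush every `batch_size` examples, so peak
    memory is bounded by one batch rather than the whole split."""
    flushed_sizes, batch = [], []
    for key, example in examples:
        batch.append((key, example))
        if len(batch) >= batch_size:
            flushed_sizes.append(len(batch))  # a real writer would write to disk here
            batch = []
    if batch:
        flushed_sizes.append(len(batch))
    return flushed_sizes

sizes = write_in_batches(generate_examples())
```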
#740 · Fix TREC urls (pull request, closed)
lhoestq · 0 comments · no labels · id 723,047,958
created 2020-10-16T09:11:28 · updated 2020-10-19T08:54:37 · closed 2020-10-19T08:54:36
https://api.github.com/repos/huggingface/datasets/issues/740 · https://github.com/huggingface/datasets/pull/740
Body: The old TREC urls are now redirections. I updated the urls to the new ones, since we don't support redirections for downloads. Fix #737

#739 · Add wiki dpr multiset embeddings (pull request, closed)
lhoestq · 3 comments · no labels · id 723,044,066
created 2020-10-16T09:05:49 · updated 2020-11-26T14:02:50 · closed 2020-11-26T14:02:49
https://api.github.com/repos/huggingface/datasets/issues/739 · https://github.com/huggingface/datasets/pull/739
Body: There are two DPR encoders, one trained on Natural Questions and one trained on a multiset/hybrid dataset. Previously only the embeddings from the encoder trained on NQ were available. I'm adding the ones from the encoder trained on the multiset/hybrid dataset. In the configuration you can now specify `embeddings_nam...

#738 · Replace seqeval code with original classification_report for simplicity (pull request, closed)
Hironsan · 3 comments · no labels · id 723,033,923
created 2020-10-16T08:51:45 · updated 2021-01-21T16:07:15 · closed 2020-10-19T10:31:12
https://api.github.com/repos/huggingface/datasets/issues/738 · https://github.com/huggingface/datasets/pull/738
Body: Recently, the original seqeval has enabled us to get per type scores and overall scores as a dictionary. This PR replaces the current code with the original function(`classification_report`) to simplify it. Also, the original code has been updated to fix #352. - Related issue: https://github.com/chakki-works/seq...

#737 · Trec Dataset Connection Error (issue, closed)
aychang95 · 1 comment · no labels · id 722,463,923
created 2020-10-15T15:57:53 · updated 2020-10-19T08:54:36 · closed 2020-10-19T08:54:36
https://api.github.com/repos/huggingface/datasets/issues/737 · https://github.com/huggingface/datasets/issues/737
Body: **Datasets Version:** 1.1.2 **Python Version:** 3.6/3.7 **Code:** ```python from datasets import load_dataset load_dataset("trec") ``` **Expected behavior:** Download Trec dataset and load Dataset object **Current Behavior:** Get a connection error saying it couldn't reach http://cogcomp.org/Data/...

#736 · Start community-provided dataset docs (pull request, closed)
sshleifer · 5 comments · no labels · id 722,348,191
created 2020-10-15T13:41:39 · updated 2020-10-23T13:15:28 · closed 2020-10-23T13:15:28
https://api.github.com/repos/huggingface/datasets/issues/736 · https://github.com/huggingface/datasets/pull/736
Body: This is one I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. It was pretty easy because all I had to do was make properly formatted directories and change URLs. + In slack @thomwolf called it a `user-namespace` dataset, but the docs call it `community dataset`...

#735 · Throw error when an unexpected key is used in data_files (issue, closed)
BramVanroy · 1 comment · no labels · id 722,225,270
created 2020-10-15T10:55:27 · updated 2020-10-30T13:23:52 · closed 2020-10-30T13:23:52
https://api.github.com/repos/huggingface/datasets/issues/735 · https://github.com/huggingface/datasets/issues/735
Body: I have found that only "train", "validation" and "test" are valid keys in the `data_files` argument. When you use any other ones, those attached files are silently ignored - leading to unexpected behaviour for the users. So the following, unintuitively, returns only one key (namely `train`). ```python datasets =...
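Issue #735 above asks for a loud failure instead of silently dropping unknown `data_files` keys. A minimal sketch of such a guard (a hypothetical helper, not the check the library eventually added):

```python
VALID_SPLITS = {"train", "validation", "test"}

def check_data_files(data_files):
    """Hypothetical guard for issue #735: raise on unknown split keys
    instead of silently ignoring the files attached to them."""
    unexpected = set(data_files) - VALID_SPLITS
    if unexpected:
        raise ValueError(
            f"Unexpected keys in data_files: {sorted(unexpected)}; "
            f"expected a subset of {sorted(VALID_SPLITS)}"
        )

check_data_files({"train": "a.csv", "test": "b.csv"})  # accepted silently
try:
    check_data_files({"train": "a.csv", "eval": "b.csv"})
except ValueError as exc:
    message = str(exc)
```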
#734 · Fix GLUE metric description (pull request, closed)
sgugger · 0 comments · no labels · id 721,767,848
created 2020-10-14T20:44:14 · updated 2020-10-15T09:27:43 · closed 2020-10-15T09:27:42
https://api.github.com/repos/huggingface/datasets/issues/734 · https://github.com/huggingface/datasets/pull/734
Body: Small typo: the description says translation instead of prediction.

#733 · Update link to dataset viewer (pull request, closed)
negedng · 0 comments · no labels · id 721,366,744
created 2020-10-14T11:13:23 · updated 2020-10-14T14:07:31 · closed 2020-10-14T14:07:31
https://api.github.com/repos/huggingface/datasets/issues/733 · https://github.com/huggingface/datasets/pull/733
Body: Change 404 error links in quick tour to working ones

#732 · dataset(wlasl): initial loading script (pull request, closed)
AmitMY · 2 comments · no labels · id 721,359,448
created 2020-10-14T11:01:42 · updated 2021-03-23T06:19:43 · closed 2021-03-23T06:19:43
https://api.github.com/repos/huggingface/datasets/issues/732 · https://github.com/huggingface/datasets/pull/732
Body: takes like 9-10 hours to download all of the videos for the dataset, but it does finish :)

#731 · dataset(aslg_pc12): initial loading script (pull request, closed)
AmitMY · 3 comments · no labels · id 721,142,985
created 2020-10-14T05:14:37 · updated 2020-10-28T15:27:06 · closed 2020-10-28T15:27:06
https://api.github.com/repos/huggingface/datasets/issues/731 · https://github.com/huggingface/datasets/pull/731
Body: This contains the only current public part of this corpus. The rest of the corpus is not yet been made public, but this sample is still being used by researchers.

#730 · Possible caching bug (issue, closed)
ArneBinder · 7 comments · labels: bug · id 721,073,812
created 2020-10-14T02:02:34 · updated 2022-11-22T01:45:54 · closed 2020-10-29T09:36:01
https://api.github.com/repos/huggingface/datasets/issues/730 · https://github.com/huggingface/datasets/issues/730
Body: The following code with `test1.txt` containing just "🤗🤗🤗": ``` dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1") print(dataset[0]) dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8") print(dataset[0]) ``` produc...
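The caching bug in issue #730 above hinges on the same bytes decoding to different strings under different encodings, so a cache keyed only on the file path returns stale rows when `encoding` changes between calls. A self-contained demonstration with the "🤗🤗🤗" content from the report:

```python
# The file from issue #730 contains "🤗🤗🤗", i.e. these UTF-8 bytes on disk:
raw = "🤗🤗🤗".encode("utf-8")

# Decoding the identical bytes with different encodings yields different
# strings, which is why the dataset cache must include the `encoding`
# argument in its key, not just the file path.
as_utf8 = raw.decode("utf-8")
as_latin1 = raw.decode("latin_1")  # mojibake: each byte becomes one char
```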
#729 · Better error message when one forgets to call `add_batch` before `compute` (issue, closed)
sgugger · 0 comments · no labels · id 719,558,876
created 2020-10-12T17:59:22 · updated 2020-10-29T15:18:24 · closed 2020-10-29T15:18:24
https://api.github.com/repos/huggingface/datasets/issues/729 · https://github.com/huggingface/datasets/issues/729
Body: When using metrics, if for some reason a user forgets to call `add_batch` to a metric before `compute` (with no arguments), the error message is a bit cryptic and could probably be made clearer. ## Reproducer ```python import datasets import torch from datasets import Metric class GatherMetric(Metric): ...
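Issue #729 above asks for a clearer error when `compute()` is called before any `add_batch()`. A toy stand-in for a `datasets` Metric (hypothetical class, not the library's implementation) showing the kind of message being requested:

```python
class MiniMetric:
    """Toy stand-in for a datasets Metric, illustrating the clearer
    error message requested in issue #729."""

    def __init__(self):
        self.preds, self.refs = [], []

    def add_batch(self, predictions, references):
        self.preds.extend(predictions)
        self.refs.extend(references)

    def compute(self):
        if not self.preds:
            raise ValueError(
                "No batches were added: call add_batch(predictions=..., "
                "references=...) before compute()."
            )
        correct = sum(p == r for p, r in zip(self.preds, self.refs))
        return {"accuracy": correct / len(self.preds)}

metric = MiniMetric()
try:
    metric.compute()  # forgot add_batch -> explicit, actionable error
except ValueError as exc:
    err = str(exc)

metric.add_batch(predictions=[1, 0, 1], references=[1, 1, 1])
result = metric.compute()
```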
#728 · Passing `cache_dir` to a metric does not work (issue, closed)
sgugger · 0 comments · no labels · id 719,555,780
created 2020-10-12T17:55:14 · updated 2020-10-29T09:34:42 · closed 2020-10-29T09:34:42
https://api.github.com/repos/huggingface/datasets/issues/728 · https://github.com/huggingface/datasets/issues/728
Body: When passing `cache_dir` to a custom metric, the folder is concatenated to itself at some point and this results in a FileNotFoundError: ## Reproducer ```python import datasets import torch from datasets import Metric class GatherMetric(Metric): def _info(self): return datasets.MetricInfo( ...

#727 · Parallel downloads progress bar flickers (issue, open)
lhoestq · 0 comments · no labels · id 719,386,366
created 2020-10-12T13:36:05 · updated 2020-10-12T13:36:05 · closed: null
https://api.github.com/repos/huggingface/datasets/issues/727 · https://github.com/huggingface/datasets/issues/727
Body: When there are parallel downloads using the download manager, the tqdm progress bar flickers since all the progress bars are on the same line. To fix that we could simply specify `position=i` for i=0 to n the number of files to download when instantiating the tqdm progress bar. Another way would be to have one "...

#726 · "Checksums didn't match for dataset source files" error while loading openwebtext dataset (issue, closed)
SparkJiao · 8 comments · no labels · id 719,313,754
created 2020-10-12T11:45:10 · updated 2022-02-17T17:53:54 · closed 2022-02-15T10:38:57
https://api.github.com/repos/huggingface/datasets/issues/726 · https://github.com/huggingface/datasets/issues/726
Body: Hi, I have encountered this problem during loading the openwebtext dataset: ``` >>> dataset = load_dataset('openwebtext') Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/op...

#725 · pretty print dataset objects (pull request, closed)
stas00 · 2 comments · no labels · id 718,985,641
created 2020-10-12T02:03:46 · updated 2020-10-23T16:24:35 · closed 2020-10-23T09:00:46
https://api.github.com/repos/huggingface/datasets/issues/725 · https://github.com/huggingface/datasets/pull/725
Body: Currently, if I do: ``` from datasets import load_dataset load_dataset("wikihow", 'all', data_dir="/hf/pegasus-datasets/wikihow/") ``` I get: ``` DatasetDict({'train': Dataset(features: {'text': Value(dtype='string', id=None), 'headline': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None...

#724 · need to redirect /nlp to /datasets and remove outdated info (issue, closed)
stas00 · 4 comments · no labels · id 718,947,700
created 2020-10-11T23:12:12 · updated 2020-10-14T17:00:12 · closed 2020-10-14T17:00:12
https://api.github.com/repos/huggingface/datasets/issues/724 · https://github.com/huggingface/datasets/issues/724
Body: It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all should probably redirect to: https://huggingface.co/datasets/wikihow also for some reason the new information is slightly borked. If you look at the old one it was nicely formatted and had t...

#723 · Adding pseudo-labels to datasets (issue, closed)
sshleifer · 8 comments · no labels · id 718,926,723
created 2020-10-11T21:05:45 · updated 2021-08-03T05:11:51 · closed 2021-08-03T05:11:51
https://api.github.com/repos/huggingface/datasets/issues/723 · https://github.com/huggingface/datasets/issues/723
Body: I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo. Since pseudo-labels are just a large model's generations on an existing dataset, what is ...

#722 · datasets(RWTH-PHOENIX-Weather 2014 T): add initial loading script (pull request, closed)
AmitMY · 3 comments · labels: dataset contribution · id 718,689,117
created 2020-10-10T19:44:08 · updated 2022-09-30T14:53:37 · closed 2022-09-30T14:53:37
https://api.github.com/repos/huggingface/datasets/issues/722 · https://github.com/huggingface/datasets/pull/722
Body: This is the first sign language dataset in this repo as far as I know. Following an old issue I opened https://github.com/huggingface/datasets/issues/302. I added the dataset official REAMDE file, but I see it's not very standard, so it can be removed.

#721 · feat(dl_manager): add support for ftp downloads (issue, closed)
AmitMY · 11 comments · no labels · id 718,647,147
created 2020-10-10T15:50:20 · updated 2022-02-15T10:44:44 · closed 2022-02-15T10:44:43
https://api.github.com/repos/huggingface/datasets/issues/721 · https://github.com/huggingface/datasets/issues/721
Body: I am working on a new dataset (#302) and encounter a problem downloading it. ```python # This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/ _URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz" dl_manager.do...

#720 · OSError: Cannot find data file when not using the dummy dataset in RAG (issue, closed)
josemlopez · 3 comments · no labels · id 716,581,266
created 2020-10-07T14:27:13 · updated 2020-12-23T14:04:31 · closed 2020-12-23T14:04:31
https://api.github.com/repos/huggingface/datasets/issues/720 · https://github.com/huggingface/datasets/issues/720
Body: ## Environment info transformers version: 3.3.1 Platform: Linux-4.19 Python version: 3.7.7 PyTorch version (GPU?): 1.6.0 Tensorflow version (GPU?): No Using GPU in script?: Yes Using distributed or parallel set-up in script?: No ## To reproduce Steps to reproduce the behaviour...

#719 · Fix train_test_split output format (pull request, closed)
lhoestq · 0 comments · no labels · id 716,492,263
created 2020-10-07T12:39:01 · updated 2020-10-07T13:38:08 · closed 2020-10-07T13:38:06
https://api.github.com/repos/huggingface/datasets/issues/719 · https://github.com/huggingface/datasets/pull/719
Body: There was an issue in the `transmit_format` wrapper that returned bad formats when using train_test_split. This was due to `column_names` being handled as a List[str] instead of Dict[str, List[str]] when the dataset transform (train_test_split) returns a DatasetDict (one set of column names per split). This should ...

#718 · Don't use tqdm 4.50.0 (pull request, closed)
lhoestq · 0 comments · no labels · id 715,694,709
created 2020-10-06T13:45:53 · updated 2020-10-06T13:49:24 · closed 2020-10-06T13:49:22
https://api.github.com/repos/huggingface/datasets/issues/718 · https://github.com/huggingface/datasets/pull/718
Body: tqdm 4.50.0 introduced permission errors on windows see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/235/workflows/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369/jobs/1111) for the error details. For now I just added `<4.50.0` in the setup.py Hopefully we can find what's wrong with this version soon

#717 · Fixes #712 Error in the Overview.ipynb notebook (pull request, closed)
subhrm · 0 comments · no labels · id 714,959,268
created 2020-10-05T15:50:41 · updated 2020-10-06T06:31:43 · closed 2020-10-05T16:25:41
https://api.github.com/repos/huggingface/datasets/issues/717 · https://github.com/huggingface/datasets/pull/717
Body: Fixes #712 Error in the Overview.ipynb notebook by adding `with_details=True` parameter to `list_datasets` function in Cell 3 of **overview** notebook

#716 · Fixes #712 Attribute error in cell 3 of the overview notebook (pull request, closed)
subhrm · 1 comment · no labels · id 714,952,888
created 2020-10-05T15:42:09 · updated 2020-10-05T15:46:38 · closed 2020-10-05T15:46:32
https://api.github.com/repos/huggingface/datasets/issues/716 · https://github.com/huggingface/datasets/pull/716
Body: Fixes the Attribute error in cell 3 of the overview notebook

#715 · Use python read for text dataset (pull request, closed)
lhoestq · 7 comments · no labels · id 714,690,192
created 2020-10-05T09:47:55 · updated 2020-10-05T13:13:18 · closed 2020-10-05T13:13:17
https://api.github.com/repos/huggingface/datasets/issues/715 · https://github.com/huggingface/datasets/pull/715
Body: As mentioned in #622 the pandas reader used for text dataset doesn't work properly when there are \r characters in the text file. Instead I switched to pure python using `open` and `read`. From my benchmark on a 100MB text file, it's the same speed as the previous pandas reader.
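PRs #713 and #715 above both deal with text files containing carriage returns (`\r`) that tripped up the pandas-based reader. In pure Python, `str.splitlines()` treats `\r`, `\r\n`, and `\n` uniformly, which is one reason switching to a plain `open`/`read` approach sidesteps the problem:

```python
# A text blob mixing all three common line-ending conventions, like the
# files that broke the pandas reader in #713/#715.
raw = "line one\rline two\r\nline three\nline four"

# splitlines() recognizes \r, \r\n, and \n as line boundaries alike,
# so every convention yields the same logical rows.
lines = raw.splitlines()
```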
#714 · Add the official dependabot implementation (pull request, closed)
ALazyMeme · 0 comments · no labels · id 714,487,881
created 2020-10-05T03:49:45 · updated 2020-10-12T11:49:21 · closed 2020-10-12T11:49:21
https://api.github.com/repos/huggingface/datasets/issues/714 · https://github.com/huggingface/datasets/pull/714
Body: This will keep dependencies up to date. This will require a pr label `dependencies` being created in order to function correctly.

#713 · Fix reading text files with carriage return symbols (pull request, closed)
mozharovsky · 1 comment · no labels · id 714,475,732
created 2020-10-05T03:07:03 · updated 2020-10-09T05:58:25 · closed 2020-10-05T13:49:29
https://api.github.com/repos/huggingface/datasets/issues/713 · https://github.com/huggingface/datasets/pull/713
Body: The new pandas-based text reader isn't able to work properly with files that contain carriage return symbols (`\r`). It fails with the following error message: ``` ... File "pandas/_libs/parsers.pyx", line 847, in pandas._libs.parsers.TextReader.read File "pandas/_libs/parsers.pyx", line 874, in pandas._l...

#712 · Error in the notebooks/Overview.ipynb notebook (issue, closed)
subhrm · 2 comments · no labels · id 714,242,316
created 2020-10-04T05:58:31 · updated 2020-10-05T16:25:40 · closed 2020-10-05T16:25:40
https://api.github.com/repos/huggingface/datasets/issues/712 · https://github.com/huggingface/datasets/issues/712
Body: Hi, I got the following error in **cell number 3** while exploring the **Overview.ipynb** notebook in google colab. I used the [link ](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) provided in the main README file to open it in colab. ```python # You can acc...

#711 · New Update bertscore.py (pull request, closed)
DayasagarRSalian · 0 comments · no labels · id 714,236,408
created 2020-10-04T05:13:09 · updated 2020-10-05T16:26:51 · closed 2020-10-05T16:26:51
https://api.github.com/repos/huggingface/datasets/issues/711 · https://github.com/huggingface/datasets/pull/711

#710 · fix README typos/ consistency (pull request, closed)
discdiver · 0 comments · no labels · id 714,186,999
created 2020-10-03T22:20:56 · updated 2020-10-17T09:52:45 · closed 2020-10-17T09:52:45
https://api.github.com/repos/huggingface/datasets/issues/710 · https://github.com/huggingface/datasets/pull/710

#709 · How to use similarity settings other then "BM25" in Elasticsearch index ? (issue, closed)
nsankar · 1 comment · no labels · id 714,067,902
created 2020-10-03T11:18:49 · updated 2022-10-04T17:19:37 · closed 2022-10-04T17:19:37
https://api.github.com/repos/huggingface/datasets/issues/709 · https://github.com/huggingface/datasets/issues/709
Body: **QUESTION : How should we use other similarity algorithms supported by Elasticsearch other than "BM25" ?** **ES Reference** https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html **HF doc reference:** https://huggingface.co/docs/datasets/faiss_and_ea.html **context :** =...

#708 · Datasets performance slow? - 6.4x slower than in memory dataset (issue, closed)
eugeneware · 10 comments · no labels · id 714,020,953
created 2020-10-03T06:44:07 · updated 2021-02-12T14:13:28 · closed 2021-02-12T14:13:28
https://api.github.com/repos/huggingface/datasets/issues/708 · https://github.com/huggingface/datasets/issues/708
Body: I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower....

#707 · Requirements should specify pyarrow<1 (issue, closed)
mathcass · 7 comments · no labels · id 713,954,666
created 2020-10-02T23:39:39 · updated 2020-12-04T08:22:39 · closed 2020-10-04T20:50:28
https://api.github.com/repos/huggingface/datasets/issues/707 · https://github.com/huggingface/datasets/issues/707
Body: I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error, ``` module 'pyarrow' has no attribute 'PyExtensionType' ``` I traced it back to datasets having installed PyArrow 1.0.1 but there's not pinni...

#706 · Fix config creation for data files with NamedSplit (pull request, closed)
lhoestq · 0 comments · no labels · id 713,721,959
created 2020-10-02T15:46:49 · updated 2020-10-05T08:15:00 · closed 2020-10-05T08:14:59
https://api.github.com/repos/huggingface/datasets/issues/706 · https://github.com/huggingface/datasets/pull/706
Body: During config creation, we need to iterate through the data files of all the splits to compute a hash. To make sure the hash is unique given a certain combination of files/splits, we sort the split names. However the `NamedSplit` objects can't be passed to `sorted` and currently it raises an error: we need to sort th...
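PR #706 above (and the matching report in #705 just below) is about `sorted` failing on `NamedSplit` objects, which define no ordering. The fix described in the PR body, sorting by the string representation instead, can be sketched with a toy class (hypothetical, not the library's `NamedSplit`):

```python
class NamedSplit:
    """Toy version of a split object that, like the one in issue #705,
    defines no ordering of its own."""

    def __init__(self, name):
        self.name = name

    def __str__(self):
        return self.name

splits = [NamedSplit("test"), NamedSplit("train"), NamedSplit("validation")]

# sorted(splits) raises TypeError: '<' not supported between instances
# of 'NamedSplit' and 'NamedSplit'.
try:
    sorted(splits)
    raised = False
except TypeError:
    raised = True

# Sorting by str(split) works and keeps the config hash deterministic.
ordered = [str(s) for s in sorted(splits, key=str)]
```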
#705 · TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit' (issue, closed)
pvcastro · 2 comments · no labels · id 713,709,100
created 2020-10-02T15:27:55 · updated 2020-10-05T08:14:59 · closed 2020-10-05T08:14:59
https://api.github.com/repos/huggingface/datasets/issues/705 · https://github.com/huggingface/datasets/issues/705
Body: ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 (installed from master) - `datasets` version: 1.0.2 (installed as a dependency from transformers) ...

#704 · Fix remote tests for new datasets (pull request, closed)
lhoestq · 0 comments · no labels · id 713,572,556
created 2020-10-02T12:08:04 · updated 2020-10-02T12:12:02 · closed 2020-10-02T12:12:01
https://api.github.com/repos/huggingface/datasets/issues/704 · https://github.com/huggingface/datasets/pull/704
Body: When adding a new dataset, the remote tests fail because they try to get the new dataset from the master branch (i.e., where the dataset doesn't exist yet) To fix that I reverted to the use of the HF API that fetch the available datasets on S3 that is synced with the master branch

#703 · Add hotpot QA (pull request, closed)
ghomasHudson · 5 comments · no labels · id 713,559,718
created 2020-10-02T11:44:28 · updated 2020-10-02T12:54:41 · closed 2020-10-02T12:54:41
https://api.github.com/repos/huggingface/datasets/issues/703 · https://github.com/huggingface/datasets/pull/703
Body: Added the [HotpotQA](https://github.com/hotpotqa/hotpot) multi-hop question answering dataset.

#702 · Complete rouge kwargs (pull request, closed)
lhoestq · 0 comments · no labels · id 713,499,628
created 2020-10-02T09:59:01 · updated 2020-10-02T10:11:04 · closed 2020-10-02T10:11:03
https://api.github.com/repos/huggingface/datasets/issues/702 · https://github.com/huggingface/datasets/pull/702
Body: In #701 we noticed that some kwargs were missing for rouge

#701 · Add rouge 2 and rouge Lsum to rouge metric outputs (pull request, closed)
lhoestq · 1 comment · no labels · id 713,485,757
created 2020-10-02T09:35:46 · updated 2020-10-02T09:55:14 · closed 2020-10-02T09:52:18
https://api.github.com/repos/huggingface/datasets/issues/701 · https://github.com/huggingface/datasets/pull/701
Body: Continuation of #700 Rouge 2 and Rouge Lsum were missing in Rouge's outputs. Rouge Lsum is also useful to evaluate Rouge L for sentences with `\n` Fix #617

#700 · Add rouge-2 in rouge_types for metric calculation (pull request, closed)
Shashi456 · 13 comments · no labels · id 713,450,295
created 2020-10-02T08:36:45 · updated 2020-10-02T11:08:49 · closed 2020-10-02T09:59:05
https://api.github.com/repos/huggingface/datasets/issues/700 · https://github.com/huggingface/datasets/pull/700
Body: The description of the ROUGE metric says, ``` _KWARGS_DESCRIPTION = """ Calculates average rouge scores for a list of hypotheses and references Args: predictions: list of predictions to score. Each predictions should be a string with tokens separated by spaces. references: list of reference for ...
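PRs #700 to #702 above add ROUGE-2 to the metric's outputs. As a rough intuition for what ROUGE-2 measures, here is a deliberately minimal bigram-overlap F1 in pure Python. This is a simplified sketch: the real metric (computed via the `rouge_score` package) applies stemming, handles repeated n-grams more carefully, and reports low/mid/high precision/recall estimates.

```python
def rouge2_f1(prediction, reference):
    """Minimal ROUGE-2 sketch: bigram-overlap F1 on whitespace tokens.
    Simplistic on purpose; not a drop-in for the real metric."""
    def bigrams(text):
        toks = text.split()
        return [tuple(toks[i:i + 2]) for i in range(len(toks) - 1)]

    pred, ref = bigrams(prediction), bigrams(reference)
    if not pred or not ref:
        return 0.0
    overlap = sum(1 for b in pred if b in ref)  # naive multiset handling
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

score = rouge2_f1("the cat sat on the mat", "the cat sat on a mat")
```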
#699 · XNLI dataset is not loading (issue, closed)
imadarsh1001 · 3 comments · no labels · id 713,395,642
created 2020-10-02T06:53:16 · updated 2020-10-03T17:45:52 · closed 2020-10-03T17:43:37
https://api.github.com/repos/huggingface/datasets/issues/699 · https://github.com/huggingface/datasets/issues/699
Body: `dataset = datasets.load_dataset(path='xnli')` showing below error ``` /opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 36 if len(bad_urls) > 0: 37 error_msg = "Checksums didn't match" + for_verifi...

#697 · Update README.md (pull request, closed)
bishug · 0 comments · no labels · id 712,979,029
created 2020-10-01T16:02:42 · updated 2020-10-01T16:12:00 · closed 2020-10-01T16:12:00
https://api.github.com/repos/huggingface/datasets/issues/697 · https://github.com/huggingface/datasets/pull/697
Body: Hey I was just telling my subscribers to check out your repositories Thank you

#696 · Elasticsearch index docs (pull request, closed)
lhoestq · 0 comments · no labels · id 712,942,977
created 2020-10-01T15:18:58 · updated 2020-10-02T07:48:19 · closed 2020-10-02T07:48:18
https://api.github.com/repos/huggingface/datasets/issues/696 · https://github.com/huggingface/datasets/pull/696
Body: I added the docs for ES indexes. I also added a `load_elasticsearch_index` method to load an index that has already been built. I checked the tests for the ES index and we have tests that mock ElasticSearch. I think this is good for now but at some point it would be cool to have an end-to-end test with a real ES...

#695 · Update XNLI download link (pull request, closed)
lhoestq · 0 comments · no labels · id 712,843,949
created 2020-10-01T13:27:22 · updated 2020-10-01T14:01:15 · closed 2020-10-01T14:01:14
https://api.github.com/repos/huggingface/datasets/issues/695 · https://github.com/huggingface/datasets/pull/695
Body: The old link isn't working anymore. I updated it with the new official link. Fix #690

#694 · Use GitHub instead of aws in remote dataset tests (pull request, closed)
lhoestq · 0 comments · no labels · id 712,827,751
created 2020-10-01T13:07:50 · updated 2020-10-02T07:47:28 · closed 2020-10-02T07:47:27
https://api.github.com/repos/huggingface/datasets/issues/694 · https://github.com/huggingface/datasets/pull/694
Body: Recently we switched from aws s3 to github to download dataset scripts. However in the tests, the dummy data were still downloaded from s3. So I changed that to download them from github instead, in the MockDownloadManager. Moreover I noticed that `anli`'s dummy data were quite heavy (18MB compressed, i.e. the ent...

#693 · Rachel ker add dataset/mlsum (pull request, closed)
pdhg · 1 comment · no labels · id 712,822,200
created 2020-10-01T13:01:10 · updated 2023-09-24T09:48:23 · closed 2020-10-01T17:01:13
https://api.github.com/repos/huggingface/datasets/issues/693 · https://github.com/huggingface/datasets/pull/693
Body: .

#692 · Update README.md (pull request, closed)
mayank1897 · 4 comments · no labels · id 712,818,968
created 2020-10-01T12:57:22 · updated 2020-10-02T11:01:59 · closed 2020-10-02T11:01:59
https://api.github.com/repos/huggingface/datasets/issues/692 · https://github.com/huggingface/datasets/pull/692

#691 · Add UI filter to filter datasets based on task (issue, closed)
praateekmahajan · 1 comment · labels: enhancement · id 712,389,499
created 2020-10-01T00:56:18 · updated 2022-02-15T10:46:50 · closed 2022-02-15T10:46:50
https://api.github.com/repos/huggingface/datasets/issues/691 · https://github.com/huggingface/datasets/issues/691
Body: This is great work, so huge shoutout to contributors and huggingface. The [/nlp/viewer](https://huggingface.co/nlp/viewer/) is great and the [/datasets](https://huggingface.co/datasets) page is great. I was wondering if in both or either places we can have a filter that selects if a dataset is good for the following...

#690 · XNLI dataset: NonMatchingChecksumError (issue, closed)
xiey1 · 5 comments · no labels · id 712,150,321
created 2020-09-30T17:50:03 · updated 2020-10-01T17:15:08 · closed 2020-10-01T14:01:14
https://api.github.com/repos/huggingface/datasets/issues/690 · https://github.com/huggingface/datasets/issues/690
Body: Hi, I tried to download "xnli" dataset in colab using `xnli = load_dataset(path='xnli')` but got 'NonMatchingChecksumError' error `NonMatchingChecksumError Traceback (most recent call last) <ipython-input-27-a87bedc82eeb> in <module>() ----> 1 xnli = load_dataset(path='xnli') 3 frames /usr...
712,095,262
https://api.github.com/repos/huggingface/datasets/issues/689
https://github.com/huggingface/datasets/pull/689
689
Switch to pandas reader for text dataset
closed
1
2020-09-30T16:28:12
2020-09-30T16:45:32
2020-09-30T16:45:31
lhoestq
[]
Following the discussion in #622 , it appears that there's no appropriate way to use the pyarrow csv reader to read text files because of the separator. In this PR I switched to pandas to read the file. Moreover pandas allows reading the file by chunk, which means that you can build the arrow dataset from a text...
true
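The chunk-by-chunk reading that this PR relies on can be sketched in plain Python. This is a hypothetical `read_in_batches` helper illustrating the idea, not the actual `datasets` implementation (which goes through pandas):

```python
from itertools import islice

def read_in_batches(path, batch_size=1000):
    """Yield lists of stripped lines, at most batch_size lines at a time,
    so the whole file never has to fit in memory at once."""
    with open(path, encoding="utf-8") as f:
        while True:
            batch = list(islice(f, batch_size))
            if not batch:
                break
            yield [line.rstrip("\n") for line in batch]
```

Each yielded batch can then be converted to an arrow table independently, which is what makes chunked construction memory-friendly.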
711,804,828
https://api.github.com/repos/huggingface/datasets/issues/688
https://github.com/huggingface/datasets/pull/688
688
Disable tokenizers parallelism in multiprocessed map
closed
0
2020-09-30T09:53:34
2020-10-01T08:45:46
2020-10-01T08:45:45
lhoestq
[]
It was reported in #620 that using multiprocessing with a tokenizer shows this message: ``` The current process just got forked. Disabling parallelism to avoid deadlocks... To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false) ``` This message is shown when TOKENIZERS_PARALLELISM is...
true
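The fix amounts to setting the `TOKENIZERS_PARALLELISM` environment variable before worker processes fork. A minimal sketch of that idea (the function name is hypothetical; the actual PR does this inside the multiprocessed `map` machinery):

```python
import os

def disable_tokenizers_parallelism():
    """Silence the tokenizers fork warning by disabling its parallelism,
    but only when the user hasn't explicitly chosen a value."""
    if "TOKENIZERS_PARALLELISM" not in os.environ:
        os.environ["TOKENIZERS_PARALLELISM"] = "false"
```

Respecting a pre-existing value matters: users who deliberately set `TOKENIZERS_PARALLELISM=true` should not have their choice overridden.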
711,664,810
https://api.github.com/repos/huggingface/datasets/issues/687
https://github.com/huggingface/datasets/issues/687
687
`ArrowInvalid` occurs while running `Dataset.map()` function
closed
2
2020-09-30T06:16:50
2020-09-30T09:53:03
2020-09-30T09:53:03
peinan
[]
It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error. Code: ```python # train_ds = Dataset(features: { # 'title': Value(dtype='string', id=None), # 'score': Value(dtype='float64', id=Non...
false
711,385,739
https://api.github.com/repos/huggingface/datasets/issues/686
https://github.com/huggingface/datasets/issues/686
686
Dataset browser url is still https://huggingface.co/nlp/viewer/
closed
2
2020-09-29T19:21:52
2021-01-08T18:29:26
2021-01-08T18:29:26
jarednielsen
[]
Might be worth updating to https://huggingface.co/datasets/viewer/
false
711,182,185
https://api.github.com/repos/huggingface/datasets/issues/685
https://github.com/huggingface/datasets/pull/685
685
Add features parameter to CSV
closed
0
2020-09-29T14:43:36
2020-09-30T08:39:56
2020-09-30T08:39:54
lhoestq
[]
Add support for the `features` parameter when loading a csv dataset: ```python from datasets import load_dataset, Features features = Features({...}) csv_dataset = load_dataset("csv", data_files=["path/to/my/file.csv"], features=features) ``` I added tests to make sure that it is also compatible with the ca...
true
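The `features` argument maps each column name to a type, and every parsed value gets cast accordingly. A rough stdlib analogue of that behavior, assuming features are plain Python callables rather than `datasets.Features` objects (a sketch, not the real loader):

```python
import csv
import io

def load_csv_with_features(text, features):
    """Parse CSV text and cast each column's values with the callable
    registered for it in `features` (e.g. int, float, str)."""
    reader = csv.DictReader(io.StringIO(text))
    return [
        {col: features[col](val) for col, val in row.items()}
        for row in reader
    ]
```

Without an explicit schema, every CSV value is just a string; the casting step is what gives the dataset typed columns.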
711,080,947
https://api.github.com/repos/huggingface/datasets/issues/684
https://github.com/huggingface/datasets/pull/684
684
Fix column order issue in cast
closed
0
2020-09-29T12:49:13
2020-09-29T15:56:46
2020-09-29T15:56:45
lhoestq
[]
Previously, the order of the columns in the features passed to `cast_` mattered. However even though features passed to `cast_` had the same order as the dataset features, it could fail because the schema that was built was always in alphabetical order. This issue was reported by @lewtun in #623 To fix that I fi...
true
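The underlying bug is a positional comparison of two schemas that may list the same columns in different orders. An order-insensitive comparison avoids it; here is a minimal sketch with features modeled as `{column: dtype}` dicts (an illustration, not the arrow schema code):

```python
def schemas_match(features_a, features_b):
    """Compare two {column: dtype} mappings without caring about
    the order in which columns were declared."""
    return sorted(features_a.items()) == sorted(features_b.items())
```

A positional comparison of `list(features.items())` would reject these as different, which is exactly the failure mode reported in #623.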
710,942,704
https://api.github.com/repos/huggingface/datasets/issues/683
https://github.com/huggingface/datasets/pull/683
683
Fix wrong delimiter in text dataset
closed
0
2020-09-29T09:43:24
2021-05-05T18:24:31
2020-09-29T09:44:06
lhoestq
[]
The delimiter is set to the bell character as it is usually found nowhere in text files. However in the text dataset the delimiter was set to `\b`, which is backspace in python, while the bell character is `\a`. I replaced `\b` with `\a`. Hopefully it fixes issues mentioned by some users in #622
true
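The mix-up in this PR is between two escape sequences that are easy to confuse:

```python
# "\a" is the bell character (BEL, ASCII 7): it virtually never appears in
# text files, which is what makes it a safe delimiter choice.
# "\b" is backspace (BS, ASCII 8): a different control character entirely.
bell = "\a"
backspace = "\b"
print(ord(bell), ord(backspace))  # 7 8
```

Using backspace as the delimiter would break on any file that happens to contain it, and more importantly it was simply not the character the comment claimed it was.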
710,325,399
https://api.github.com/repos/huggingface/datasets/issues/682
https://github.com/huggingface/datasets/pull/682
682
Update navbar chapter titles color
closed
0
2020-09-28T14:35:17
2020-09-28T17:30:13
2020-09-28T17:30:12
lhoestq
[]
Consistency with the color change that was done in transformers at https://github.com/huggingface/transformers/pull/7423 It makes the background-color of the chapter titles in the docs navbar darker, to differentiate them from the inner sections. see changes [here](https://691-250213286-gh.circle-artifacts.com/0/do...
true
710,075,721
https://api.github.com/repos/huggingface/datasets/issues/681
https://github.com/huggingface/datasets/pull/681
681
Adding missing @property (+2 small flake8 fixes).
closed
0
2020-09-28T08:53:53
2020-09-28T10:26:13
2020-09-28T10:26:09
Narsil
[]
Fixes #678
true
710,066,138
https://api.github.com/repos/huggingface/datasets/issues/680
https://github.com/huggingface/datasets/pull/680
680
Fix bug related to boolean in GAP dataset.
closed
2
2020-09-28T08:39:39
2020-09-29T15:54:47
2020-09-29T15:54:47
otakumesi
[]
### Why I did The value in `row["A-coref"]` and `row["B-coref"]` is `'TRUE'` or `'FALSE'`. Since this type is `string`, `bool('FALSE')` evaluates to `True` in Python, so both rows were being transformed into `True`. I fixed this problem. ### What I did I modified `bool(row["A-coref"])` and `bool(row["B-cor...
true
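The pitfall here is that `bool()` on any non-empty string is `True`, regardless of what the string says. The fix is to compare against the literal text instead; a minimal sketch of such a parser (the helper name is hypothetical):

```python
def parse_bool(value):
    """Parse 'TRUE'/'FALSE' strings from the GAP TSV into real booleans.
    bool("FALSE") is True because the string is non-empty, so a string
    comparison is required instead."""
    return value.upper() == "TRUE"
```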
710,065,838
https://api.github.com/repos/huggingface/datasets/issues/679
https://github.com/huggingface/datasets/pull/679
679
Fix negative ids when slicing with an array
closed
0
2020-09-28T08:39:08
2020-09-28T14:42:20
2020-09-28T14:42:19
lhoestq
[]
```python from datasets import Dataset d = Dataset.from_dict({"a": range(10)}) print(d[[0, -1]]) # OverflowError ``` raises an error because of the negative id. This PR fixes that. Fix #668
true
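The fix boils down to normalizing negative indices before handing them to arrow, the same way Python lists resolve them. A standalone sketch of that normalization (not the actual `datasets` code):

```python
def normalize_index(i, length):
    """Map a possibly-negative index into the 0..length-1 range,
    mirroring Python list indexing semantics."""
    if i < 0:
        i += length
    if not 0 <= i < length:
        raise IndexError(f"index out of range for length {length}")
    return i
```

With this applied to each element, a request like `d[[0, -1]]` becomes `d[[0, 9]]` for a 10-row dataset, and the overflow disappears.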
710,060,497
https://api.github.com/repos/huggingface/datasets/issues/678
https://github.com/huggingface/datasets/issues/678
678
The download instructions for c4 datasets are not contained in the error message
closed
2
2020-09-28T08:30:54
2020-09-28T10:26:09
2020-09-28T10:26:09
Narsil
[]
The manual download instructions are not clear ```The dataset c4 with config en requires manual data. Please follow the manual download instructions: <bound method C4.manual_download_instructions of <datasets_modules.datasets.c4.830b0c218bd41fed439812c8dd19dbd4767d2a3faa385eb695cf8666c982b1b3.c4.C4 object at 0x7ff...
false
710,055,239
https://api.github.com/repos/huggingface/datasets/issues/677
https://github.com/huggingface/datasets/pull/677
677
Move cache dir root creation in builder's init
closed
0
2020-09-28T08:22:46
2020-09-28T14:42:43
2020-09-28T14:42:42
lhoestq
[]
We use lock files in the builder initialization but sometimes the cache directory where they're supposed to go had not been created. To fix that I moved the creation of the builder's cache dir root into the builder's init. Fix #671
true
710,014,319
https://api.github.com/repos/huggingface/datasets/issues/676
https://github.com/huggingface/datasets/issues/676
676
train_test_split returns empty dataset item
closed
4
2020-09-28T07:19:33
2020-10-07T13:46:33
2020-10-07T13:38:06
mojave-pku
[]
I try to split my dataset by `train_test_split`, but after that the item in `train` and `test` `Dataset` is empty. The codes: ``` yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp') print(yelp_data[0]) yelp_data = yelp_data.train_test_split(test_size=0.1) print(yelp_data) pri...
false
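For reference, the expected behavior of `train_test_split` can be sketched in plain Python: shuffle, then slice into two non-empty parts. This is an illustration of the semantics, not the library's implementation (which returns `Dataset` objects, not lists):

```python
import random

def train_test_split(items, test_size=0.1, seed=42):
    """Shuffle a sequence and split it into (train, test) lists,
    with round(len * test_size) items going to the test split."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_test = int(len(items) * test_size)
    return items[n_test:], items[:n_test]
```

The key property the reporter was missing is that both splits should contain real, non-empty items covering the original data exactly once.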
709,818,725
https://api.github.com/repos/huggingface/datasets/issues/675
https://github.com/huggingface/datasets/issues/675
675
Add custom dataset to NLP?
closed
2
2020-09-27T21:22:50
2020-10-20T09:08:49
2020-10-20T09:08:49
timpal0l
[]
Is it possible to add a custom dataset such as a .csv to the NLP library? Thanks.
false
709,661,006
https://api.github.com/repos/huggingface/datasets/issues/674
https://github.com/huggingface/datasets/issues/674
674
load_dataset() won't download in Windows
closed
3
2020-09-27T03:56:25
2020-10-05T08:28:18
2020-10-05T08:28:18
ThisDavehead
[]
I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've wa...
false
709,603,989
https://api.github.com/repos/huggingface/datasets/issues/673
https://github.com/huggingface/datasets/issues/673
673
blog_authorship_corpus crashed
closed
1
2020-09-26T20:15:28
2022-02-15T10:47:58
2022-02-15T10:47:58
Moshiii
[ "nlp-viewer" ]
This is just to report that When I pick blog_authorship_corpus in https://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus I get this: ![image](https://user-images.githubusercontent.com/7553188/94349542-4364f300-0013-11eb-897d-b25660a449f0.png)
false
709,575,527
https://api.github.com/repos/huggingface/datasets/issues/672
https://github.com/huggingface/datasets/issues/672
672
Questions about XSUM
closed
14
2020-09-26T17:16:24
2022-10-04T17:30:17
2022-10-04T17:30:17
danyaljj
[]
Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu...
false
709,093,151
https://api.github.com/repos/huggingface/datasets/issues/671
https://github.com/huggingface/datasets/issues/671
671
[BUG] No such file or directory
closed
0
2020-09-25T16:38:54
2020-09-28T14:42:42
2020-09-28T14:42:42
jbragg
[]
This happens when both 1. Huggingface datasets cache dir does not exist 2. Try to load a local dataset script builder.py throws an error when trying to create a filelock in a directory (cache/datasets) that does not exist https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L177 Tested o...
false
709,061,231
https://api.github.com/repos/huggingface/datasets/issues/670
https://github.com/huggingface/datasets/pull/670
670
Fix SQuAD metric kwargs description
closed
0
2020-09-25T16:08:57
2020-09-29T15:57:39
2020-09-29T15:57:38
lhoestq
[]
The `answer_start` field was missing in the kwargs docstring. This should fix #657 FYI another fix was proposed by @tshrjn in #658, which suggests removing this field. However IMO `answer_start` is useful to match the squad dataset format for consistency, even though it is not used in the metric computation. I th...
true
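To make the format concrete: references carry both `text` and `answer_start`, even though only `text` matters for scoring. A minimal exact-match computation over that format (a sketch of the idea, not the official SQuAD metric code, which also normalizes answers):

```python
def exact_match(predictions, references):
    """Percentage of predictions whose text exactly matches any gold answer.
    Note that answers carry an unused answer_start, kept only so the input
    mirrors the squad dataset format."""
    hits = 0
    for pred, ref in zip(predictions, references):
        hits += pred["prediction_text"] in ref["answers"]["text"]
    return 100.0 * hits / len(predictions)
```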
708,857,595
https://api.github.com/repos/huggingface/datasets/issues/669
https://github.com/huggingface/datasets/issues/669
669
How to skip a example when running dataset.map
closed
3
2020-09-25T11:17:53
2022-06-17T21:45:03
2020-10-05T16:28:13
xixiaoyao
[]
In my processing function, I process examples and detect some invalid ones that I do not want added to the train dataset. However, I did not find how to skip these recognized invalid examples when doing dataset.map.
false
708,310,956
https://api.github.com/repos/huggingface/datasets/issues/668
https://github.com/huggingface/datasets/issues/668
668
OverflowError when slicing with an array containing negative ids
closed
0
2020-09-24T16:27:14
2020-09-28T14:42:19
2020-09-28T14:42:19
lhoestq
[]
```python from datasets import Dataset d = Dataset.from_dict({"a": range(10)}) print(d[0]) # {'a': 0} print(d[-1]) # {'a': 9} print(d[[0, -1]]) # OverflowError ``` results in ``` --------------------------------------------------------------------------- OverflowError ...
false
708,258,392
https://api.github.com/repos/huggingface/datasets/issues/667
https://github.com/huggingface/datasets/issues/667
667
Loss not decrease with Datasets and Transformers
closed
2
2020-09-24T15:14:43
2021-01-01T20:01:25
2021-01-01T20:01:25
wangcongcong123
[]
Hi, The following script is used to fine-tune a BertForSequenceClassification model on SST2. The script is adapted from [this colab](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) that presents an example of fine-tuning BertForQuestionAnswering using squad data...
false
707,608,578
https://api.github.com/repos/huggingface/datasets/issues/666
https://github.com/huggingface/datasets/issues/666
666
Does both 'bookcorpus' and 'wikipedia' belong to the same datasets which Google used for pretraining BERT?
closed
1
2020-09-23T19:02:25
2020-10-27T15:19:25
2020-10-27T15:19:25
wahab4114
[]
false
707,037,738
https://api.github.com/repos/huggingface/datasets/issues/665
https://github.com/huggingface/datasets/issues/665
665
running dataset.map raises TypeError: can't pickle Tokenizer objects
closed
8
2020-09-23T04:28:14
2020-10-08T09:32:16
2020-10-08T09:32:16
xixiaoyao
[]
I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`. ``` def convert_to_features(example): # Tokenize contexts and questions (as pairs of inputs) input_pairs = [example['question'], example['context']] encodings = tokenizer.encode...
false
707,017,791
https://api.github.com/repos/huggingface/datasets/issues/664
https://github.com/huggingface/datasets/issues/664
664
load_dataset from local squad.py, raise error: TypeError: 'NoneType' object is not callable
closed
4
2020-09-23T03:53:36
2023-04-17T09:31:20
2020-10-20T09:06:13
xixiaoyao
[]
version: 1.0.2 ``` train_dataset = datasets.load_dataset('squad') ``` The above code can works. However, when I download the squad.py from your server, and saved as `my_squad.py` to local. I run followings raise errors. ``` train_dataset = datasets.load_dataset('./my_squad.py') ...
false
706,732,636
https://api.github.com/repos/huggingface/datasets/issues/663
https://github.com/huggingface/datasets/pull/663
663
Created dataset card snli.md
closed
11
2020-09-22T22:29:37
2020-10-13T17:05:20
2020-10-12T20:26:52
mcmillanmajora
[ "Dataset discussion" ]
First draft of a dataset card using the SNLI corpus as an example. This is mostly based on the [Google Doc draft](https://docs.google.com/document/d/1dKPGP-dA2W0QoTRGfqQ5eBp0CeSsTy7g2yM8RseHtos/edit), but I added a few sections and moved some things around. - I moved **Who Was Involved** to follow **Language**, ...
true
706,689,866
https://api.github.com/repos/huggingface/datasets/issues/662
https://github.com/huggingface/datasets/pull/662
662
Created dataset card snli.md
closed
1
2020-09-22T21:00:17
2023-09-24T09:50:16
2020-09-22T21:26:21
mcmillanmajora
[ "Dataset discussion" ]
First draft of a dataset card using the SNLI corpus as an example
true
706,465,936
https://api.github.com/repos/huggingface/datasets/issues/661
https://github.com/huggingface/datasets/pull/661
661
Replace pa.OSFile by open
closed
0
2020-09-22T15:05:59
2021-05-05T18:24:36
2020-09-22T15:15:25
lhoestq
[]
It should fix #643
true
706,324,032
https://api.github.com/repos/huggingface/datasets/issues/660
https://github.com/huggingface/datasets/pull/660
660
add openwebtext
closed
3
2020-09-22T12:05:22
2020-10-06T09:20:10
2020-09-28T09:07:26
richarddwang
[]
This adds [The OpenWebText Corpus](https://skylion007.github.io/OpenWebTextCorpus/), which is a clean and large text corpus for nlp pretraining. It is an open source effort to reproduce OpenAI’s WebText dataset used by GPT-2, and it is also needed to reproduce ELECTRA. It solves #132 . ### Besides dataset buildin...
true
706,231,506
https://api.github.com/repos/huggingface/datasets/issues/659
https://github.com/huggingface/datasets/pull/659
659
Keep new columns in transmit format
closed
0
2020-09-22T09:47:23
2020-09-22T10:07:22
2020-09-22T10:07:20
lhoestq
[]
When a dataset is formatted with a list of columns that `__getitem__` should return, calling `map` to add new columns doesn't add the new columns to this list. This caused `KeyError` issues in #620. I changed the logic to add those new columns to the list that `__getitem__` should return.
true
706,206,247
https://api.github.com/repos/huggingface/datasets/issues/658
https://github.com/huggingface/datasets/pull/658
658
Fix squad metric's Features
closed
1
2020-09-22T09:09:52
2020-09-29T15:58:30
2020-09-29T15:58:30
tshrjn
[]
Resolves issue [657](https://github.com/huggingface/datasets/issues/657).
true
706,204,383
https://api.github.com/repos/huggingface/datasets/issues/657
https://github.com/huggingface/datasets/issues/657
657
Squad Metric Description & Feature Mismatch
closed
2
2020-09-22T09:07:00
2020-10-13T02:16:56
2020-09-29T15:57:38
tshrjn
[]
The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also not used in the evaluation.
false
705,736,319
https://api.github.com/repos/huggingface/datasets/issues/656
https://github.com/huggingface/datasets/pull/656
656
Use multiprocess from pathos for multiprocessing
closed
4
2020-09-21T16:12:19
2020-09-28T14:45:40
2020-09-28T14:45:39
lhoestq
[]
[Multiprocess](https://github.com/uqfoundation/multiprocess) (from the [pathos](https://github.com/uqfoundation/pathos) project) allows the use of lambda functions in multiprocessed map. It was suggested to use it by @kandorm. We're already using dill, which is its only dependency.
true
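The limitation this PR works around is that the stdlib `pickle` serializes functions by reference, so lambdas (which have no importable name) cannot be sent to worker processes; `multiprocess` avoids this by serializing with dill instead. A small stdlib-only probe of the pickling behavior:

```python
import pickle

def is_picklable(obj):
    """Return True if obj survives a stdlib pickle round-trip.
    Lambdas typically fail because pickle stores functions by
    qualified name, and '<lambda>' cannot be looked back up."""
    try:
        pickle.loads(pickle.dumps(obj))
        return True
    except Exception:
        return False
```

With `multiprocess`, the equivalent round-trip on a lambda succeeds because dill serializes the function's code object rather than its name.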