Column schema of the records listed below:

| Column | Dtype | Range / values |
|---|---|---|
| `id` | int64 | 599M – 3.29B |
| `url` | string | lengths 58–61 |
| `html_url` | string | lengths 46–51 |
| `number` | int64 | 1 – 7.72k |
| `title` | string | lengths 1–290 |
| `state` | string | 2 classes |
| `comments` | int64 | 0 – 70 |
| `created_at` | timestamp[s] | 2020-04-14 10:18:02 – 2025-08-05 09:28:51 |
| `updated_at` | timestamp[s] | 2020-04-27 16:04:17 – 2025-08-05 11:39:56 |
| `closed_at` | timestamp[s] | 2020-04-14 12:01:40 – 2025-08-01 05:15:45 |
| `user_login` | string | lengths 3–26 |
| `labels` | list | lengths 0–4 |
| `body` | string | lengths 0–228k |
| `is_pull_request` | bool | 2 classes |
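
For orientation, here is a minimal sketch of loading and inspecting rows with this schema via the `datasets` library; the repository id below is a hypothetical placeholder, since the preview does not name the dataset:

```python
from datasets import load_dataset

# Hypothetical repository id: the preview does not name the dataset.
issues = load_dataset("my-org/github-issues", split="train")

print(issues.features["created_at"])  # timestamp[s]
print(issues[0]["number"], issues[0]["title"])

# Example: keep only open issues, excluding pull requests.
open_issues = issues.filter(
    lambda row: row["state"] == "open" and not row["is_pull_request"]
)
```
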
**[#1970: Fixing the URL filtering for bad MLSUM examples in GEM](https://github.com/huggingface/datasets/pull/1970)**
pull request · closed · 0 comments · yjernite · created 2021-03-02T01:22:58 · updated 2021-03-02T03:19:06 · closed 2021-03-02T02:01:33 · id 819,500,620 · [API](https://api.github.com/repos/huggingface/datasets/issues/1970)
Body: This updates the code and metadata to use the updated `gem_mlsum_bad_ids_fixed.json` file provided by @juand-r cc @sebastianGehrmann

**[#1967: Add Turkish News Category Dataset - 270K - Lite Version](https://github.com/huggingface/datasets/pull/1967)**
pull request · closed · 1 comment · yavuzKomecoglu · created 2021-03-01T18:21:59 · updated 2021-03-02T17:25:00 · closed 2021-03-02T17:25:00 · id 819,129,568 · [API](https://api.github.com/repos/huggingface/datasets/issues/1967)
Body: This PR adds the Turkish News Categories Dataset (270K - Lite Version) dataset which is a text classification dataset by me, @basakbuluz and @serdarakyol. This dataset contains the same news from the current [interpress_news_category_tr dataset](https://huggingface.co/datasets/interpress_news_category_tr) but contains...

**[#1966: Fix metrics collision in separate multiprocessed experiments](https://github.com/huggingface/datasets/pull/1966)**
pull request · closed · 1 comment · lhoestq · created 2021-03-01T17:45:18 · updated 2021-03-02T13:05:45 · closed 2021-03-02T13:05:44 · id 819,101,253 · [API](https://api.github.com/repos/huggingface/datasets/issues/1966)
Body: As noticed in #1942 , there's a issue with locks if you run multiple separate evaluation experiments in a multiprocessed setup. Indeed there is a time span in Metric._finalize() where the process 0 loses its lock before re-acquiring it. This is bad since the lock of the process 0 tells the other process that the cor...

**[#1965: Can we parallelized the add_faiss_index process over dataset shards ?](https://github.com/huggingface/datasets/issues/1965)**
issue · closed · 3 comments · shamanez · created 2021-03-01T12:47:34 · updated 2021-03-04T19:40:56 · closed 2021-03-04T19:40:42 · id 818,833,460 · [API](https://api.github.com/repos/huggingface/datasets/issues/1965)
Body: I am thinking of making the **add_faiss_index** process faster. What if we run the add_faiss_index process on separate dataset shards and then combine them before (dataset.concatenate) saving the faiss.index file ? I feel theoretically this will reduce the accuracy of retrieval since it affects the indexing process...

**[#1964: Datasets.py function load_dataset does not match squad dataset](https://github.com/huggingface/datasets/issues/1964)**
issue · closed · 6 comments · LeopoldACC · created 2021-03-01T08:41:31 · updated 2022-10-05T13:09:47 · closed 2022-10-05T13:09:47 · id 818,624,864 · [API](https://api.github.com/repos/huggingface/datasets/issues/1964)
Body: ### 1 When I try to train lxmert,and follow the code in README that --dataset name: ```shell python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_len...

**[#1963: bug in SNLI dataset](https://github.com/huggingface/datasets/issues/1963)**
issue · closed · 1 comment · dorost1234 · created 2021-02-28T19:36:20 · updated 2022-10-05T13:13:46 · closed 2022-10-05T13:13:46 · id 818,289,967 · [API](https://api.github.com/repos/huggingface/datasets/issues/1963)
Body: Hi There is label of -1 in train set of SNLI dataset, please find the code below: ``` import numpy as np import datasets data = datasets.load_dataset("snli")["train"] labels = [] for d in data: labels.append(d["label"]) print(np.unique(labels)) ``` and results: `[-1 0 1 2]` version of datas...
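
For context on the report above: in SNLI, `-1` is the conventional label for examples with no gold consensus. A minimal sketch of dropping those rows with the standard `datasets` API (not taken from the issue thread):

```python
from datasets import load_dataset

snli_train = load_dataset("snli", split="train")

# -1 marks examples without a gold consensus label; drop them before training.
snli_train = snli_train.filter(lambda example: example["label"] != -1)
print(sorted(set(snli_train["label"])))  # [0, 1, 2]
```
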
**[#1962: Fix unused arguments](https://github.com/huggingface/datasets/pull/1962)**
pull request · closed · 3 comments · mariosasko · created 2021-02-28T02:47:07 · updated 2021-03-11T02:18:17 · closed 2021-03-03T16:37:50 · id 818,089,156 · [API](https://api.github.com/repos/huggingface/datasets/issues/1962)
Body: Noticed some args in the codebase are not used, so managed to find all such occurrences with Pylance and fix them.

**[#1961: Add sst dataset](https://github.com/huggingface/datasets/pull/1961)**
pull request · closed · 0 comments · patpizio · created 2021-02-28T02:08:29 · updated 2021-03-04T10:38:53 · closed 2021-03-04T10:38:53 · id 818,077,947 · [API](https://api.github.com/repos/huggingface/datasets/issues/1961)
Body: Related to #1934—Add the Stanford Sentiment Treebank dataset.

**[#1960: Allow stateful function in dataset.map](https://github.com/huggingface/datasets/pull/1960)**
pull request · closed · 3 comments · mariosasko · created 2021-02-28T01:29:05 · updated 2021-03-23T15:26:49 · closed 2021-03-23T15:26:49 · id 818,073,154 · [API](https://api.github.com/repos/huggingface/datasets/issues/1960)
Body: Removes the "test type" section in Dataset.map which would modify the state of the stateful function. Now, the return type of the map function is inferred after processing the first example. Fixes #1940 @lhoestq Not very happy with the usage of `nonlocal`. Would like to hear your opinion on this.

**[#1959: Bug in skip_rows argument of load_dataset function ?](https://github.com/huggingface/datasets/issues/1959)**
issue · closed · 1 comment · LedaguenelArthur · created 2021-02-27T23:32:54 · updated 2021-03-09T10:21:32 · closed 2021-03-09T10:21:32 · id 818,055,644 · [API](https://api.github.com/repos/huggingface/datasets/issues/1959)
Body: Hello everyone, I'm quite new to Git so sorry in advance if I'm breaking some ground rules of issues posting... :/ I tried to use the load_dataset function, from Huggingface datasets library, on a csv file using the skip_rows argument described on Huggingface page to skip the first row containing column names `t...

**[#1958: XSum dataset download link broken](https://github.com/huggingface/datasets/issues/1958)**
issue · closed · 1 comment · himat · created 2021-02-27T21:47:56 · updated 2021-02-27T21:50:16 · closed 2021-02-27T21:50:16 · id 818,037,548 · [API](https://api.github.com/repos/huggingface/datasets/issues/1958)
Body: I did ``` from datasets import load_dataset dataset = load_dataset("xsum") ``` This returns `ConnectionError: Couldn't reach http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz`

**[#1956: [distributed env] potentially unsafe parallel execution](https://github.com/huggingface/datasets/issues/1956)**
issue · closed · 2 comments · stas00 · created 2021-02-27T20:38:45 · updated 2021-03-01T17:24:42 · closed 2021-03-01T17:24:42 · id 818,013,741 · [API](https://api.github.com/repos/huggingface/datasets/issues/1956)
Body: ``` metric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank) ``` presumes that there is only one set of parallel processes running - and will intermittently fail if you have multiple sets running as they will surely overwrite each other. Similar to https://github.com/huggingface/datasets/issu...
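
For the scenario described above, `load_metric` also accepts an `experiment_id` so that concurrent experiments use distinct cache and lock files; a minimal sketch with illustrative values:

```python
from datasets import load_metric

num_process, rank = 2, 0  # illustrative values

# A distinct experiment_id per experiment keeps the cache/lock files of
# concurrently running evaluations from overwriting each other.
metric = load_metric(
    "glue", "mrpc",
    experiment_id="experiment_a",  # illustrative value
    num_process=num_process,
    process_id=rank,
)
```
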
**[#1955: typos + grammar](https://github.com/huggingface/datasets/pull/1955)**
pull request · closed · 0 comments · stas00 · created 2021-02-27T20:21:43 · updated 2021-03-01T17:20:38 · closed 2021-03-01T14:43:19 · id 818,010,664 · [API](https://api.github.com/repos/huggingface/datasets/issues/1955)
Body: This PR proposes a few typo + grammar fixes, and rewrites some sentences in an attempt to improve readability. N.B. When referring to the library `datasets` in the docs it is typically used as a singular, and it definitely is a singular when written as "`datasets` library", that is "`datasets` library is ..." and no...

**[#1954: add a new column](https://github.com/huggingface/datasets/issues/1954)**
issue · closed · 2 comments · dorost1234 · created 2021-02-26T18:17:27 · updated 2021-04-29T14:50:43 · closed 2021-04-29T14:50:43 · id 817,565,563 · [API](https://api.github.com/repos/huggingface/datasets/issues/1954)
Body: Hi I'd need to add a new column to the dataset, I was wondering how this can be done? thanks @lhoestq
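
Two standard ways to do what the question above asks, sketched against a reasonably recent version of the `datasets` API:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["foo", "bar", "baz"]})

# Option 1: attach a precomputed column.
ds = ds.add_column("length", [len(t) for t in ds["text"]])

# Option 2: derive the new column row by row with map.
ds = ds.map(lambda row: {"upper": row["text"].upper()})
```
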
**[#1953: Documentation for to_csv, to_pandas and to_dict](https://github.com/huggingface/datasets/pull/1953)**
pull request · closed · 0 comments · lhoestq · created 2021-02-26T16:35:49 · updated 2021-03-01T14:03:48 · closed 2021-03-01T14:03:47 · id 817,498,869 · [API](https://api.github.com/repos/huggingface/datasets/issues/1953)
Body: I added these methods to the documentation with a small paragraph. I also fixed some formatting issues in the docstrings

**[#1952: Handle timeouts](https://github.com/huggingface/datasets/pull/1952)**
pull request · closed · 4 comments · lhoestq · created 2021-02-26T15:02:07 · updated 2021-03-01T14:29:24 · closed 2021-03-01T14:29:24 · id 817,428,160 · [API](https://api.github.com/repos/huggingface/datasets/issues/1952)
Body: As noticed in https://github.com/huggingface/datasets/issues/1939, timeouts were not properly handled when loading a dataset. This caused the connection to hang indefinitely when working in a firewalled environment cc @stas00 I added a default timeout, and included an option to our offline environment for tests to...

**[#1951: Add cross-platform support for datasets-cli](https://github.com/huggingface/datasets/pull/1951)**
pull request · closed · 1 comment · mariosasko · created 2021-02-26T14:56:25 · updated 2021-03-11T02:18:26 · closed 2021-02-26T15:30:26 · id 817,423,573 · [API](https://api.github.com/repos/huggingface/datasets/issues/1951)
Body: One thing I've noticed while going through the codebase is the usage of `scripts` in `setup.py`. This [answer](https://stackoverflow.com/a/28119736/14095927) on SO explains it nicely why it's better to use `entry_points` instead of `scripts`. To add cross-platform support to the CLI, this PR replaces `scripts` with `en...

**[#1950: updated multi_nli dataset with missing fields](https://github.com/huggingface/datasets/pull/1950)**
pull request · closed · 0 comments · bhavitvyamalik · created 2021-02-26T11:54:36 · updated 2021-03-01T11:08:30 · closed 2021-03-01T11:08:29 · id 817,295,235 · [API](https://api.github.com/repos/huggingface/datasets/issues/1950)
Body: 1) updated fields which were missing earlier 2) added tags to README 3) updated a few fields of README 4) new dataset_infos.json and dummy files

**[#1949: Enable Fast Filtering using Arrow Dataset](https://github.com/huggingface/datasets/issues/1949)**
issue · open · 2 comments · gchhablani · created 2021-02-26T02:53:37 · updated 2021-02-26T19:18:29 · id 816,986,936 · [API](https://api.github.com/repos/huggingface/datasets/issues/1949)
Body: Hi @lhoestq, As mentioned in Issue #1796, I would love to work on enabling fast filtering/mapping. Can you please share the expectations? It would be great if you could point me to the relevant methods/files involved. Or the docs or maybe an overview of `arrow_dataset.py`. I only ask this because I am having trouble...

**[#1948: dataset loading logger level](https://github.com/huggingface/datasets/issues/1948)**
issue · closed · 3 comments · stas00 · created 2021-02-25T18:33:37 · updated 2023-07-12T17:19:30 · closed 2023-07-12T17:19:30 · id 816,689,329 · [API](https://api.github.com/repos/huggingface/datasets/issues/1948)
Body: on master I get this with `--dataset_name wmt16 --dataset_config ro-en`: ``` WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-2e01bead8cf42e26.arrow WARNING:datasets.arr...
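
Related to the report above: callers can adjust the library logger's verbosity with the standard `datasets.logging` helpers (this is the available knob, not the fix the issue asked for):

```python
import datasets

# Show only errors, silencing the cache-reuse warnings quoted above.
datasets.logging.set_verbosity_error()

# Restore the library default.
datasets.logging.set_verbosity_warning()
```
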
**[#1947: Update documentation with not in place transforms and update DatasetDict](https://github.com/huggingface/datasets/pull/1947)**
pull request · closed · 0 comments · lhoestq · created 2021-02-25T16:23:18 · updated 2021-03-01T14:36:54 · closed 2021-03-01T14:36:53 · id 816,590,299 · [API](https://api.github.com/repos/huggingface/datasets/issues/1947)
Body: In #1883 were added the not in-place transforms `flatten`, `remove_columns`, `rename_column` and `cast`. I added them to the documentation and added a paragraph on how to use them You can preview the documentation [here](https://28862-250213286-gh.circle-artifacts.com/0/docs/_build/html/processing.html#renaming-r...

**[#1946: Implement Dataset from CSV](https://github.com/huggingface/datasets/pull/1946)**
pull request · closed · 3 comments · albertvillanova · created 2021-02-25T15:10:13 · updated 2021-03-12T09:42:48 · closed 2021-03-12T09:42:48 · id 816,526,294 · [API](https://api.github.com/repos/huggingface/datasets/issues/1946)
Body: Implement `Dataset.from_csv`. Analogue to #1943. If finally, the scripts should be used instead, at least we can reuse the tests here.

**[#1945: AttributeError: 'DatasetDict' object has no attribute 'concatenate_datasets'](https://github.com/huggingface/datasets/issues/1945)**
issue · closed · 1 comment · dorost1234 · created 2021-02-25T13:09:45 · updated 2021-02-25T13:20:35 · closed 2021-02-25T13:20:26 · id 816,421,966 · [API](https://api.github.com/repos/huggingface/datasets/issues/1945)
Body: Hi I am trying to concatenate a list of huggingface datastes as: ` train_dataset = datasets.concatenate_datasets(train_datasets) ` Here is the `train_datasets` when I print: ``` [Dataset({ features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'], num_rows...
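
The error above comes from calling `concatenate_datasets` as a method: it is a module-level function that takes a list of `Dataset` objects. A minimal sketch of the working call:

```python
from datasets import Dataset, concatenate_datasets

ds1 = Dataset.from_dict({"x": [1, 2]})
ds2 = Dataset.from_dict({"x": [3, 4]})

# concatenate_datasets is a function in the datasets module,
# not a method on Dataset or DatasetDict.
combined = concatenate_datasets([ds1, ds2])
print(combined.num_rows)  # 4
```
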
**[#1944: Add Turkish News Category Dataset (270K - Lite Version)](https://github.com/huggingface/datasets/pull/1944)**
pull request · closed · 2 comments · yavuzKomecoglu · created 2021-02-25T09:45:22 · updated 2021-03-02T17:46:41 · closed 2021-03-01T18:23:21 · id 816,267,216 · [API](https://api.github.com/repos/huggingface/datasets/issues/1944)
Body: This PR adds the Turkish News Categories Dataset (270K - Lite Version) dataset which is a text classification dataset by me, @basakbuluz and @serdarakyol. This dataset contains the same news from the current [interpress_news_category_tr dataset](https://huggingface.co/datasets/interpress_news_category_tr) but contai...

**[#1943: Implement Dataset from JSON and JSON Lines](https://github.com/huggingface/datasets/pull/1943)**
pull request · closed · 11 comments · albertvillanova · created 2021-02-25T07:17:33 · updated 2021-03-18T09:42:08 · closed 2021-03-18T09:42:08 · id 816,160,453 · [API](https://api.github.com/repos/huggingface/datasets/issues/1943)
Body: Implement `Dataset.from_jsonl`.

**[#1942: [experiment] missing default_experiment-1-0.arrow](https://github.com/huggingface/datasets/issues/1942)**
issue · closed · 18 comments · stas00 · created 2021-02-25T03:02:15 · updated 2022-10-05T13:08:45 · closed 2022-10-05T13:08:45 · id 816,037,520 · [API](https://api.github.com/repos/huggingface/datasets/issues/1942)
Body: the original report was pretty bad and incomplete - my apologies! Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481 ------------ As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/...

**[#1941: Loading of FAISS index fails for index_name = 'exact'](https://github.com/huggingface/datasets/issues/1941)**
issue · closed · 3 comments · mkserge · created 2021-02-25T01:30:54 · updated 2021-02-25T14:28:46 · closed 2021-02-25T14:28:46 · id 815,985,167 · [API](https://api.github.com/repos/huggingface/datasets/issues/1941)
Body: Hi, It looks like loading of FAISS index now fails when using index_name = 'exact'. For example, from the RAG [model card](https://huggingface.co/facebook/rag-token-nq?fbclid=IwAR3bTfhls5U_t9DqsX2Vzb7NhtRHxJxfQ-uwFT7VuCPMZUM2AdAlKF_qkI8#usage). Running `transformers==4.3.2` and datasets installed from source o...

**[#1940: Side effect when filtering data due to `does_function_return_dict` call in `Dataset.map()`](https://github.com/huggingface/datasets/issues/1940)**
issue · closed · 2 comments · francisco-perez-sorrosal · labels: enhancement · created 2021-02-24T19:18:56 · updated 2021-03-23T15:26:49 · closed 2021-03-23T15:26:49 · id 815,770,012 · [API](https://api.github.com/repos/huggingface/datasets/issues/1940)
Body: Hi there! In my codebase I have a function to filter rows in a dataset, selecting only a certain number of examples per class. The function passes a extra argument to maintain a counter of the number of dataset rows/examples already selected per each class, which are the ones I want to keep in the end: ```python ...

**[#1939: [firewalled env] OFFLINE mode](https://github.com/huggingface/datasets/issues/1939)**
issue · closed · 7 comments · stas00 · created 2021-02-24T17:13:42 · updated 2021-03-05T05:09:54 · closed 2021-03-05T05:09:54 · id 815,680,510 · [API](https://api.github.com/repos/huggingface/datasets/issues/1939)
Body: This issue comes from a need to be able to run `datasets` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls. I propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a sample program. There are 2 pos...
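
The offline mode that grew out of this discussion is controlled by the `HF_DATASETS_OFFLINE` environment variable; a minimal sketch (the variable must be set before `datasets` is imported):

```python
import os

# Must be set before the first import of datasets.
os.environ["HF_DATASETS_OFFLINE"] = "1"

import datasets  # noqa: E402

# Network calls now fail fast instead of hanging; previously cached
# datasets and metrics still load from disk.
```
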
**[#1938: Disallow ClassLabel with no names](https://github.com/huggingface/datasets/pull/1938)**
pull request · closed · 0 comments · lhoestq · created 2021-02-24T16:37:57 · updated 2021-02-25T11:27:29 · closed 2021-02-25T11:27:29 · id 815,647,774 · [API](https://api.github.com/repos/huggingface/datasets/issues/1938)
Body: It was possible to create a ClassLabel without specifying the names or the number of classes. This was causing silent issues as in #1936 and breaking the conversion methods str2int and int2str. cc @justin-yan

**[#1937: CommonGen dataset page shows an error OSError: [Errno 28] No space left on device](https://github.com/huggingface/datasets/issues/1937)**
issue · closed · 2 comments · yuchenlin · labels: nlp-viewer · created 2021-02-24T06:47:33 · updated 2021-02-26T11:10:06 · closed 2021-02-26T11:10:06 · id 815,163,943 · [API](https://api.github.com/repos/huggingface/datasets/issues/1937)
Body: The page of the CommonGen data https://huggingface.co/datasets/viewer/?dataset=common_gen shows ![image](https://user-images.githubusercontent.com/10104354/108959311-1865e600-7629-11eb-868c-cf4cb27034ea.png)

**[#1936: [WIP] Adding Support for Reading Pandas Category](https://github.com/huggingface/datasets/pull/1936)**
pull request · closed · 6 comments · justin-yan · created 2021-02-23T18:32:54 · updated 2022-03-09T18:46:22 · closed 2022-03-09T18:46:22 · id 814,726,512 · [API](https://api.github.com/repos/huggingface/datasets/issues/1936)
Body: @lhoestq - continuing our conversation from https://github.com/huggingface/datasets/issues/1906#issuecomment-784247014 The goal of this PR is to support `Dataset.from_pandas(df)` where the dataframe contains a Category. Just the 4 line change below actually does seem to work: ``` >>> from datasets import Data...

**[#1935: add CoVoST2](https://github.com/huggingface/datasets/pull/1935)**
pull request · closed · 1 comment · patil-suraj · created 2021-02-23T16:28:16 · updated 2021-02-24T18:09:32 · closed 2021-02-24T18:05:09 · id 814,623,827 · [API](https://api.github.com/repos/huggingface/datasets/issues/1935)
Body: This PR adds the CoVoST2 dataset for speech translation and ASR. https://github.com/facebookresearch/covost#covost-2 The dataset requires manual download as the download page requests an email address and the URLs are temporary. The dummy data is a bit bigger because of the mp3 files and 36 configs.

**[#1934: Add Stanford Sentiment Treebank (SST)](https://github.com/huggingface/datasets/issues/1934)**
issue · closed · 1 comment · patpizio · labels: dataset request · created 2021-02-23T12:53:16 · updated 2021-03-18T17:51:44 · closed 2021-03-18T17:51:44 · id 814,437,190 · [API](https://api.github.com/repos/huggingface/datasets/issues/1934)
Body: I am going to add SST: - **Name:** The Stanford Sentiment Treebank - **Description:** The first corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language - **Paper:** [Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank...

**[#1933: Use arrow ipc file format](https://github.com/huggingface/datasets/pull/1933)**
pull request · closed · 3 comments · lhoestq · created 2021-02-23T10:38:24 · updated 2023-10-30T16:20:19 · closed 2023-09-25T09:20:38 · id 814,335,846 · [API](https://api.github.com/repos/huggingface/datasets/issues/1933)
Body: According to the [documentation](https://arrow.apache.org/docs/format/Columnar.html?highlight=arrow1#ipc-file-format), it's identical to the streaming format except that it contains the memory offsets of each sample: > We define a “file format” supporting random access that is build with the stream format. The file ...

**[#1932: Fix builder config creation with data_dir](https://github.com/huggingface/datasets/pull/1932)**
pull request · closed · 0 comments · lhoestq · created 2021-02-23T10:26:02 · updated 2021-02-23T10:45:28 · closed 2021-02-23T10:45:27 · id 814,326,116 · [API](https://api.github.com/repos/huggingface/datasets/issues/1932)
Body: The data_dir parameter wasn't taken into account to create the config_id, therefore the resulting builder config was considered not custom. However a builder config that is non-custom must not have a name that collides with the predefined builder config names. Therefore it resulted in a `ValueError("Cannot name a custo...

**[#1931: add m_lama (multilingual lama) dataset](https://github.com/huggingface/datasets/pull/1931)**
pull request · closed · 3 comments · pdufter · created 2021-02-23T08:11:57 · updated 2021-03-01T10:01:03 · closed 2021-03-01T10:01:03 · id 814,225,074 · [API](https://api.github.com/repos/huggingface/datasets/issues/1931)
Body: Add a multilingual (machine translated and automatically generated) version of the LAMA benchmark. For details see the paper https://arxiv.org/pdf/2102.00894.pdf

**[#1930: updated the wino_bias dataset](https://github.com/huggingface/datasets/pull/1930)**
pull request · closed · 3 comments · JieyuZhao · created 2021-02-23T03:07:40 · updated 2021-04-07T15:24:56 · closed 2021-04-07T15:24:56 · id 814,055,198 · [API](https://api.github.com/repos/huggingface/datasets/issues/1930)
Body: Updated the wino_bias.py script. - updated the data_url - added different configurations for different data splits - added the coreference_cluster to the data features

**[#1929: Improve typing and style and fix some inconsistencies](https://github.com/huggingface/datasets/pull/1929)**
pull request · closed · 2 comments · mariosasko · created 2021-02-22T22:47:41 · updated 2021-02-24T16:16:14 · closed 2021-02-24T14:03:54 · id 813,929,669 · [API](https://api.github.com/repos/huggingface/datasets/issues/1929)
Body: This PR: * improves typing (mostly more consistent use of `typing.Optional`) * `DatasetDict.cleanup_cache_files` now correctly returns a dict * replaces `dict()` with the corresponding literal * uses `dict_to_copy.copy()` instead of `dict(dict_to_copy)` for shallow copying

**[#1928: Updating old cards](https://github.com/huggingface/datasets/pull/1928)**
pull request · closed · 0 comments · mcmillanmajora · created 2021-02-22T19:26:04 · updated 2021-02-23T18:19:25 · closed 2021-02-23T18:19:25 · id 813,793,434 · [API](https://api.github.com/repos/huggingface/datasets/issues/1928)
Body: Updated the cards for [Allocine](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/allocine), [CNN/DailyMail](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/cnn_dailymail), and [SNLI](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/snli)...

**[#1927: Update dataset card of wino_bias](https://github.com/huggingface/datasets/pull/1927)**
pull request · closed · 1 comment · JieyuZhao · labels: dataset contribution · created 2021-02-22T18:51:34 · updated 2022-09-23T13:35:09 · closed 2022-09-23T13:35:08 · id 813,768,935 · [API](https://api.github.com/repos/huggingface/datasets/issues/1927)
Body: Updated the info for the wino_bias dataset.

**[#1926: Fix: Wiki_dpr - add missing scalar quantizer](https://github.com/huggingface/datasets/pull/1926)**
pull request · closed · 0 comments · lhoestq · created 2021-02-22T15:32:05 · updated 2021-02-22T15:49:54 · closed 2021-02-22T15:49:53 · id 813,607,994 · [API](https://api.github.com/repos/huggingface/datasets/issues/1926)
Body: All the prebuilt wiki_dpr indexes already use SQ8, I forgot to update the wiki_dpr script after building them. Now it's finally done. The scalar quantizer SQ8 doesn't reduce the performance of the index as shown in retrieval experiments on RAG. The quantizer reduces the size of the index a lot but increases index b...

**[#1925: Fix: Wiki_dpr - fix when with_embeddings is False or index_name is "no_index"](https://github.com/huggingface/datasets/pull/1925)**
pull request · closed · 1 comment · lhoestq · created 2021-02-22T15:23:46 · updated 2021-02-25T01:33:48 · closed 2021-02-22T15:36:08 · id 813,600,902 · [API](https://api.github.com/repos/huggingface/datasets/issues/1925)
Body: Fix the bugs noticed in #1915 There was a bug when `with_embeddings=False` where the configuration name was the same as if `with_embeddings=True`, which led the dataset builder to do bad verifications (for example it used to expect to download the embeddings for `with_embeddings=False`). Another issue was that s...

**[#1924: Anonymous Dataset Addition (i.e Anonymous PR?)](https://github.com/huggingface/datasets/issues/1924)**
issue · closed · 4 comments · PierreColombo · created 2021-02-22T15:22:30 · updated 2022-10-05T13:07:11 · closed 2022-10-05T13:07:11 · id 813,599,733 · [API](https://api.github.com/repos/huggingface/datasets/issues/1924)
Body: Hello, Thanks a lot for your librairy. We plan to submit a paper on OpenReview using the Anonymous setting. Is it possible to add a new dataset without breaking the anonimity, with a link to the paper ? Cheers @eusip

**[#1923: Fix save_to_disk with relative path](https://github.com/huggingface/datasets/pull/1923)**
pull request · closed · 0 comments · lhoestq · created 2021-02-22T10:27:19 · updated 2021-02-22T11:22:44 · closed 2021-02-22T11:22:43 · id 813,363,472 · [API](https://api.github.com/repos/huggingface/datasets/issues/1923)
Body: As noticed in #1919 and #1920 the target directory was not created using `makedirs` so saving to it raises `FileNotFoundError`. For absolute paths it works but not for the good reason. This is because the target path was the same as the temporary path where in-memory data are written as an intermediary step. I added...

**[#1922: How to update the "wino_bias" dataset](https://github.com/huggingface/datasets/issues/1922)**
issue · open · 1 comment · JieyuZhao · created 2021-02-22T05:39:39 · updated 2021-02-22T10:35:59 · id 813,140,806 · [API](https://api.github.com/repos/huggingface/datasets/issues/1922)
Body: Hi all, Thanks for the efforts to collect all the datasets! But I think there is a problem with the wino_bias dataset. The current link is not correct. How can I update that? Thanks!

**[#1921: Standardizing datasets dtypes](https://github.com/huggingface/datasets/pull/1921)**
pull request · closed · 1 comment · justin-yan · created 2021-02-20T22:04:01 · updated 2021-02-22T09:44:10 · closed 2021-02-22T09:44:10 · id 812,716,042 · [API](https://api.github.com/repos/huggingface/datasets/issues/1921)
Body: This PR follows up on discussion in #1900 to have an explicit set of basic dtypes for datasets. This moves away from str(pyarrow.DataType) as the method of choice for creating dtypes, favoring an explicit mapping to a list of supported Value dtypes. I believe in practice this should be backward compatible, since ...

**[#1920: Fix save_to_disk issue](https://github.com/huggingface/datasets/pull/1920)**
pull request · closed · 2 comments · M-Salti · created 2021-02-20T14:22:39 · updated 2021-02-22T10:30:11 · closed 2021-02-22T10:30:11 · id 812,628,220 · [API](https://api.github.com/repos/huggingface/datasets/issues/1920)
Body: Fixes #1919

**[#1919: Failure to save with save_to_disk](https://github.com/huggingface/datasets/issues/1919)**
issue · closed · 2 comments · M-Salti · created 2021-02-20T14:18:10 · updated 2021-03-03T17:40:27 · closed 2021-03-03T17:40:27 · id 812,626,872 · [API](https://api.github.com/repos/huggingface/datasets/issues/1919)
Body: When I try to save a dataset locally using the `save_to_disk` method I get the error: ```bash FileNotFoundError: [Errno 2] No such file or directory: '/content/squad/train/squad-train.arrow' ``` To replicate: 1. Install `datasets` from master 2. Run this code: ```python from datasets import load...

**[#1918: Fix QA4MRE download URLs](https://github.com/huggingface/datasets/pull/1918)**
pull request · closed · 0 comments · M-Salti · created 2021-02-20T07:32:17 · updated 2021-02-22T13:35:06 · closed 2021-02-22T13:35:06 · id 812,541,510 · [API](https://api.github.com/repos/huggingface/datasets/issues/1918)
Body: The URLs in the `dataset_infos` and `README` are correct, only the ones in the download script needed updating.

**[#1917: UnicodeDecodeError: windows 10 machine](https://github.com/huggingface/datasets/issues/1917)**
issue · closed · 1 comment · yosiasz · created 2021-02-19T22:13:05 · updated 2021-02-19T22:41:11 · closed 2021-02-19T22:40:28 · id 812,390,178 · [API](https://api.github.com/repos/huggingface/datasets/issues/1917)
Body: Windows 10 Php 3.6.8 when running ``` import datasets oscar_am = datasets.load_dataset("oscar", "unshuffled_deduplicated_am") print(oscar_am["train"][0]) ``` I get the following error ``` file "C:\PYTHON\3.6.8\lib\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.er...

**[#1916: Remove unused py_utils objects](https://github.com/huggingface/datasets/pull/1916)**
pull request · closed · 3 comments · albertvillanova · created 2021-02-19T19:51:25 · updated 2021-02-22T14:56:56 · closed 2021-02-22T13:32:49 · id 812,291,984 · [API](https://api.github.com/repos/huggingface/datasets/issues/1916)
Body: Remove unused/unnecessary py_utils functions/classes.

**[#1915: Unable to download `wiki_dpr`](https://github.com/huggingface/datasets/issues/1915)**
issue · closed · 3 comments · nitarakad · created 2021-02-19T18:11:32 · updated 2021-03-03T17:40:48 · closed 2021-03-03T17:40:48 · id 812,229,654 · [API](https://api.github.com/repos/huggingface/datasets/issues/1915)
Body: I am trying to download the `wiki_dpr` dataset. Specifically, I want to download `psgs_w100.multiset.no_index` with no embeddings/no index. In order to do so, I ran: `curr_dataset = load_dataset("wiki_dpr", embeddings_name="multiset", index_name="no_index")` However, I got the following error: `datasets.utils.i...

**[#1914: Fix logging imports and make all datasets use library logger](https://github.com/huggingface/datasets/pull/1914)**
pull request · closed · 0 comments · albertvillanova · created 2021-02-19T16:12:34 · updated 2021-02-21T19:48:03 · closed 2021-02-21T19:48:03 · id 812,149,201 · [API](https://api.github.com/repos/huggingface/datasets/issues/1914)
Body: Fix library relative logging imports and make all datasets use library logger.

**[#1913: Add keep_linebreaks parameter to text loader](https://github.com/huggingface/datasets/pull/1913)**
pull request · closed · 3 comments · lhoestq · created 2021-02-19T15:43:45 · updated 2021-02-19T18:36:12 · closed 2021-02-19T18:36:11 · id 812,127,307 · [API](https://api.github.com/repos/huggingface/datasets/issues/1913)
Body: As asked in #870 and https://github.com/huggingface/transformers/issues/10269 there should be a parameter to keep the linebreaks when loading a text dataset. cc @sgugger @jncasey

**[#1912: Update: WMT - use mirror links](https://github.com/huggingface/datasets/pull/1912)**
pull request · closed · 3 comments · lhoestq · created 2021-02-19T13:42:34 · updated 2021-02-24T13:44:53 · closed 2021-02-24T13:44:53 · id 812,034,140 · [API](https://api.github.com/repos/huggingface/datasets/issues/1912)
Body: As asked in #1892 I created mirrors of the data hosted on statmt.org and updated the wmt scripts. Now downloading the wmt datasets is blazing fast :) cc @stas00 @patrickvonplaten

**[#1911: Saving processed dataset running infinitely](https://github.com/huggingface/datasets/issues/1911)**
issue · open · 6 comments · ayubSubhaniya · created 2021-02-19T13:09:19 · updated 2021-02-23T07:34:44 · id 812,009,956 · [API](https://api.github.com/repos/huggingface/datasets/issues/1911)
Body: I have a text dataset of size 220M. For pre-processing, I need to tokenize this and filter rows with the large sequence. My tokenization took roughly 3hrs. I used map() with batch size 1024 and multi-process with 96 processes. filter() function was way to slow, so I used a hack to use pyarrow filter table func...

**[#1910: Adding CoNLLpp dataset.](https://github.com/huggingface/datasets/pull/1910)**
pull request · closed · 1 comment · ZihanWangKi · created 2021-02-19T05:12:30 · updated 2021-03-04T22:02:47 · closed 2021-03-04T22:02:47 · id 811,697,108 · [API](https://api.github.com/repos/huggingface/datasets/issues/1910)

**[#1907: DBPedia14 Dataset Checksum bug?](https://github.com/huggingface/datasets/issues/1907)**
issue · closed · 2 comments · francisco-perez-sorrosal · created 2021-02-18T22:25:48 · updated 2021-02-22T23:22:05 · closed 2021-02-22T23:22:04 · id 811,520,569 · [API](https://api.github.com/repos/huggingface/datasets/issues/1907)
Body: Hi there!!! I've been using successfully the DBPedia dataset (https://huggingface.co/datasets/dbpedia_14) with my codebase in the last couple of weeks, but in the last couple of days now I get this error: ``` Traceback (most recent call last): File "./conditional_classification/basic_pipeline.py", line 178, i...

**[#1906: Feature Request: Support for Pandas `Categorical`](https://github.com/huggingface/datasets/issues/1906)**
issue · open · 3 comments · justin-yan · labels: enhancement, generic discussion · created 2021-02-18T19:46:05 · updated 2021-02-23T14:38:50 · id 811,405,274 · [API](https://api.github.com/repos/huggingface/datasets/issues/1906)
Body: ``` from datasets import Dataset import pandas as pd import pyarrow df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category")) pyarrow.Table.from_pandas(df) Dataset.from_pandas(df) # Throws NotImplementedError # TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_...
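
A common workaround until `Categorical` is supported directly is to cast the column to plain strings before conversion; a minimal sketch, not taken from the thread:

```python
import pandas as pd
from datasets import Dataset

df = pd.DataFrame({"col": pd.Series(["a", "b", "c", "a"], dtype="category")})

# Workaround: convert the Categorical to plain strings first.
df["col"] = df["col"].astype(str)
ds = Dataset.from_pandas(df)
print(ds.features)  # col is now a plain string column
```
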
**[#1905: Standardizing datasets.dtypes](https://github.com/huggingface/datasets/pull/1905)**
pull request · closed · 1 comment · justin-yan · created 2021-02-18T19:15:31 · updated 2021-02-20T22:01:30 · closed 2021-02-20T22:01:30 · id 811,384,174 · [API](https://api.github.com/repos/huggingface/datasets/issues/1905)
Body: This PR was further branched off of jdy-str-to-pyarrow-parsing, so it depends on https://github.com/huggingface/datasets/pull/1900 going first for the diff to be up-to-date (I'm not sure if there's a way for me to use jdy-str-to-pyarrow-parsing as a base branch while having it appear in the pull requests here). This...

**[#1904: Fix to_pandas for boolean ArrayXD](https://github.com/huggingface/datasets/pull/1904)**
pull request · closed · 1 comment · lhoestq · created 2021-02-18T16:30:46 · updated 2021-02-18T17:10:03 · closed 2021-02-18T17:10:01 · id 811,260,904 · [API](https://api.github.com/repos/huggingface/datasets/issues/1904)
Body: As noticed in #1887 the conversion of a dataset with a boolean ArrayXD feature types fails because of the underlying ListArray conversion to numpy requires `zero_copy_only=False`. zero copy is available for all primitive types except booleans see https://arrow.apache.org/docs/python/generated/pyarrow.Array.html#pya...

**[#1903: Initial commit for the addition of TIMIT dataset](https://github.com/huggingface/datasets/pull/1903)**
pull request · closed · 2 comments · vrindaprabhu · created 2021-02-18T14:23:12 · updated 2021-03-01T09:39:12 · closed 2021-03-01T09:39:12 · id 811,145,531 · [API](https://api.github.com/repos/huggingface/datasets/issues/1903)
Body: Below points needs to be addressed: - Creation of dummy dataset is failing - Need to check on the data representation - License is not creative commons. Copyright: Portions © 1993 Trustees of the University of Pennsylvania Also the links (_except the download_) point to the ami corpus! ;-) @patrickvonplaten ...

**[#1902: Fix setimes_2 wmt urls](https://github.com/huggingface/datasets/pull/1902)**
pull request · closed · 0 comments · lhoestq · created 2021-02-18T09:42:26 · updated 2021-02-18T09:55:41 · closed 2021-02-18T09:55:41 · id 810,931,171 · [API](https://api.github.com/repos/huggingface/datasets/issues/1902)
Body: Continuation of #1901 Some other urls were missing https

**[#1901: Fix OPUS dataset download errors](https://github.com/huggingface/datasets/pull/1901)**
pull request · closed · 0 comments · YangWang92 · created 2021-02-18T07:39:41 · updated 2021-02-18T15:07:20 · closed 2021-02-18T09:39:21 · id 810,845,605 · [API](https://api.github.com/repos/huggingface/datasets/issues/1901)
Body: Replace http to https. https://github.com/huggingface/datasets/issues/854 https://discuss.huggingface.co/t/cannot-download-wmt16/2081

**[#1900: Issue #1895: Bugfix for string_to_arrow timestamp[ns] support](https://github.com/huggingface/datasets/pull/1900)**
pull request · closed · 1 comment · justin-yan · created 2021-02-17T20:26:04 · updated 2021-02-19T18:27:11 · closed 2021-02-19T18:27:11 · id 810,512,488 · [API](https://api.github.com/repos/huggingface/datasets/issues/1900)
Body: Should resolve https://github.com/huggingface/datasets/issues/1895 The main part of this PR adds additional parsing in `string_to_arrow` to convert the timestamp dtypes that result from `str(pa_type)` back into the pa.DataType TimestampType. While adding unit-testing, I noticed that support for the double/float t...

**[#1899: Fix: ALT - fix duplicated examples in alt-parallel](https://github.com/huggingface/datasets/pull/1899)**
pull request · closed · 0 comments · lhoestq · created 2021-02-17T15:53:56 · updated 2021-02-17T17:20:49 · closed 2021-02-17T17:20:49 · id 810,308,332 · [API](https://api.github.com/repos/huggingface/datasets/issues/1899)
Body: As noticed in #1898 by @10-zin the examples of the `alt-paralel` configurations have all the same values for the `translation` field. This was due to a bad copy of a python dict. This PR fixes that.

**[#1898: ALT dataset has repeating instances in all splits](https://github.com/huggingface/datasets/issues/1898)**
issue · closed · 4 comments · 10-zin · labels: dataset bug · created 2021-02-17T12:51:42 · updated 2021-02-19T06:18:46 · closed 2021-02-19T06:18:46 · id 810,157,251 · [API](https://api.github.com/repos/huggingface/datasets/issues/1898)
Body: The [ALT](https://huggingface.co/datasets/alt) dataset has all the same instances within each split :/ Seemed like a great dataset for some experiments I wanted to carry out, especially since its medium-sized, and has all splits. Would be great if this could be fixed :) Added a snapshot of the contents from `exp...

**[#1897: Fix PandasArrayExtensionArray conversion to native type](https://github.com/huggingface/datasets/pull/1897)**
pull request · closed · 0 comments · lhoestq · created 2021-02-17T11:48:24 · updated 2021-02-17T13:15:16 · closed 2021-02-17T13:15:15 · id 810,113,263 · [API](https://api.github.com/repos/huggingface/datasets/issues/1897)
Body: To make the conversion to csv work in #1887 , we need PandasArrayExtensionArray used for multidimensional numpy arrays to be converted to pandas native types. However previously pandas.core.internals.ExtensionBlock.to_native_types would fail with an PandasExtensionArray because 1. the PandasExtensionArray.isna metho...

**[#1895: Bug Report: timestamp[ns] not recognized](https://github.com/huggingface/datasets/issues/1895)**
issue · closed · 5 comments · justin-yan · created 2021-02-16T20:38:04 · updated 2021-02-19T18:27:11 · closed 2021-02-19T18:27:11 · id 809,630,271 · [API](https://api.github.com/repos/huggingface/datasets/issues/1895)
Body: Repro: ``` from datasets import Dataset import pandas as pd import pyarrow df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H")) pyarrow.Table.from_pandas(df) Dataset.from_pandas(df) # Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems to be a pyarrow data type. ``` The fact...

**[#1894: benchmarking against MMapIndexedDataset](https://github.com/huggingface/datasets/issues/1894)**
issue · open · 3 comments · sshleifer · created 2021-02-16T20:04:58 · updated 2021-02-17T18:52:28 · id 809,609,654 · [API](https://api.github.com/repos/huggingface/datasets/issues/1894)
Body: I am trying to benchmark my datasets based implementation against fairseq's [`MMapIndexedDataset`](https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L365) and finding that, according to psrecord, my `datasets` implem uses about 3% more CPU memory and runs 1% slower for `wikitext103` (~1GB o...

**[#1893: wmt19 is broken](https://github.com/huggingface/datasets/issues/1893)**
issue · closed · 2 comments · stas00 · labels: dataset bug · created 2021-02-16T18:39:58 · updated 2021-03-03T17:42:02 · closed 2021-03-03T17:42:02 · id 809,556,503 · [API](https://api.github.com/repos/huggingface/datasets/issues/1893)
Body: 1. Check which lang pairs we have: `--dataset_name wmt19`: Please pick one among the available configs: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de'] 2. OK, let's pick `ru-en`: `--dataset_name wmt19 --dataset_config "ru-en"` no cookies: ``` Traceback (most recent c...

**[#1892: request to mirror wmt datasets, as they are really slow to download](https://github.com/huggingface/datasets/issues/1892)**
issue · closed · 6 comments · stas00 · created 2021-02-16T18:36:11 · updated 2021-10-26T06:55:42 · closed 2021-03-25T11:53:23 · id 809,554,174 · [API](https://api.github.com/repos/huggingface/datasets/issues/1892)
Body: Would it be possible to mirror the wmt data files under hf? Some of them take hours to download and not because of the local speed. They are all quite small datasets, just extremely slow to download. Thank you!

**[#1891: suggestion to improve a missing dataset error](https://github.com/huggingface/datasets/issues/1891)**
issue · closed · 1 comment · stas00 · created 2021-02-16T18:29:13 · updated 2022-10-05T12:48:38 · closed 2022-10-05T12:48:38 · id 809,550,001 · [API](https://api.github.com/repos/huggingface/datasets/issues/1891)
Body: I was using `--dataset_name wmt19` all was good. Then thought perhaps wmt20 is out, so I tried to use `--dataset_name wmt20`, got 3 different errors (1 repeated twice), none telling me the real issue - that `wmt20` isn't in the `datasets`: ``` True, predict_with_generate=True) Traceback (most recent call last): ...

**[#1890: Reformat dataset cards section titles](https://github.com/huggingface/datasets/pull/1890)**
pull request · closed · 0 comments · lhoestq · created 2021-02-16T15:11:47 · updated 2021-02-16T15:12:34 · closed 2021-02-16T15:12:33 · id 809,395,586 · [API](https://api.github.com/repos/huggingface/datasets/issues/1890)
Body: Titles are formatted like [Foo](#foo) instead of just Foo

**[#1889: Implement to_dict and to_pandas for Dataset](https://github.com/huggingface/datasets/pull/1889)**
pull request · closed · 1 comment · SBrandeis · created 2021-02-16T12:38:19 · updated 2021-02-18T18:42:37 · closed 2021-02-18T18:42:34 · id 809,276,015 · [API](https://api.github.com/repos/huggingface/datasets/issues/1889)
Body: With options to return a generator or the full dataset

**[#1888: Docs for adding new column on formatted dataset](https://github.com/huggingface/datasets/pull/1888)**
pull request · closed · 1 comment · lhoestq · created 2021-02-16T11:45:00 · updated 2021-03-30T14:01:03 · closed 2021-02-16T11:58:57 · id 809,241,123 · [API](https://api.github.com/repos/huggingface/datasets/issues/1888)
Body: As mentioned in #1872 we should add in the documentation how the format gets updated when new columns are added Close #1872

**[#1887: Implement to_csv for Dataset](https://github.com/huggingface/datasets/pull/1887)**
pull request · closed · 5 comments · SBrandeis · created 2021-02-16T11:27:29 · updated 2021-02-19T09:41:59 · closed 2021-02-19T09:41:59 · id 809,229,809 · [API](https://api.github.com/repos/huggingface/datasets/issues/1887)
Body: cc @thomwolf `to_csv` supports passing either a file path or a *binary* file object The writing is batched to avoid loading the whole table in memory

**[#1886: Common voice](https://github.com/huggingface/datasets/pull/1886)**
pull request · closed · 4 comments · BirgerMoell · created 2021-02-16T11:16:10 · updated 2021-03-09T18:51:31 · closed 2021-03-09T18:51:31 · id 809,221,885 · [API](https://api.github.com/repos/huggingface/datasets/issues/1886)
Body: Started filling out information about the dataset and a dataset card. To do Create tagging file Update the common_voice.py file with more information

**[#1885: add missing info on how to add large files](https://github.com/huggingface/datasets/pull/1885)**
pull request · closed · 0 comments · stas00 · created 2021-02-15T23:46:39 · updated 2021-02-16T16:22:19 · closed 2021-02-16T11:44:12 · id 808,881,501 · [API](https://api.github.com/repos/huggingface/datasets/issues/1885)
Body: Thanks to @lhoestq's instructions I was able to add data files to a custom dataset repo. This PR is attempting to tell others how to do the same if they need to. @lhoestq

**[#1884: dtype fix when using numpy arrays](https://github.com/huggingface/datasets/pull/1884)**
pull request · closed · 0 comments · bhavitvyamalik · created 2021-02-15T18:55:25 · updated 2021-07-30T11:01:18 · closed 2021-07-30T11:01:18 · id 808,755,894 · [API](https://api.github.com/repos/huggingface/datasets/issues/1884)
Body: As discussed in #625 this fix lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array

**[#1883: Add not-in-place implementations for several dataset transforms](https://github.com/huggingface/datasets/pull/1883)**
pull request · closed · 3 comments · SBrandeis · created 2021-02-15T18:44:26 · updated 2021-02-24T14:54:49 · closed 2021-02-24T14:53:26 · id 808,750,623 · [API](https://api.github.com/repos/huggingface/datasets/issues/1883)
Body: Should we deprecate in-place versions of such methods?

**[#1882: Create Remote Manager](https://github.com/huggingface/datasets/pull/1882)**
pull request · open · 2 comments · albertvillanova · created 2021-02-15T17:36:24 · updated 2022-07-06T15:19:47 · id 808,716,576 · [API](https://api.github.com/repos/huggingface/datasets/issues/1882)
Body: Refactoring to separate the concern of remote (HTTP/FTP requests) management.

**[#1881: `list_datasets()` returns a list of strings, not objects](https://github.com/huggingface/datasets/pull/1881)**
pull request · closed · 0 comments · pminervini · created 2021-02-15T14:20:15 · updated 2021-02-15T15:09:49 · closed 2021-02-15T15:09:48 · id 808,578,200 · [API](https://api.github.com/repos/huggingface/datasets/issues/1881)
Body: Here and there in the docs there is still stuff like this: ```python >>> datasets_list = list_datasets() >>> print(', '.join(dataset.id for dataset in datasets_list)) ``` However, my understanding is that `list_datasets()` returns a list of strings rather than a list of objects.
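
A minimal sketch of the behavior this PR documents: the returned items are plain string ids, so they can be joined directly:

```python
from datasets import list_datasets

datasets_list = list_datasets()
print(len(datasets_list))
print(", ".join(datasets_list[:5]))  # each element is a plain string id
```
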
**[#1880: Update multi_woz_v22 checksums](https://github.com/huggingface/datasets/pull/1880)**
pull request · closed · 0 comments · lhoestq · created 2021-02-15T14:00:18 · updated 2021-02-15T14:18:19 · closed 2021-02-15T14:18:18 · id 808,563,439 · [API](https://api.github.com/repos/huggingface/datasets/issues/1880)
Body: As noticed in #1876 the checksums of this dataset are outdated. I updated them in this PR

**[#1879: Replace flatten_nested](https://github.com/huggingface/datasets/pull/1879)**
pull request · closed · 1 comment · albertvillanova · created 2021-02-15T13:29:40 · updated 2021-02-19T18:35:14 · closed 2021-02-19T18:35:14 · id 808,541,442 · [API](https://api.github.com/repos/huggingface/datasets/issues/1879)
Body: Replace `flatten_nested` with `NestedDataStructure.flatten`. This is a first step towards having all NestedDataStructure logic as a separated concern, independent of the caller/user of the data structure. Eventually, all checks (whether the underlying data is list, dict, etc.) will be only inside this class. I...

**[#1878: Add LJ Speech dataset](https://github.com/huggingface/datasets/pull/1878)**
pull request · closed · 3 comments · anton-l · created 2021-02-15T13:10:42 · updated 2021-02-15T19:39:41 · closed 2021-02-15T14:18:09 · id 808,526,883 · [API](https://api.github.com/repos/huggingface/datasets/issues/1878)
Body: This PR adds the LJ Speech dataset (https://keithito.com/LJ-Speech-Dataset/) As requested by #1841 The ASR format is based on #1767 There are a couple of quirks that should be addressed: - I tagged this dataset as `other-other-automatic-speech-recognition` and `other-other-text-to-speech` (as classified by pape...

**[#1877: Allow concatenation of both in-memory and on-disk datasets](https://github.com/huggingface/datasets/issues/1877)**
issue · closed · 6 comments · lhoestq · created 2021-02-15T11:39:46 · updated 2021-03-26T16:51:58 · closed 2021-03-26T16:51:58 · id 808,462,272 · [API](https://api.github.com/repos/huggingface/datasets/issues/1877)
Body: This is a prerequisite for the addition of the `add_item` feature (see #1870). Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using the dataset._data_files). This assumption is used for pickl...

**[#1876: load_dataset("multi_woz_v22") NonMatchingChecksumError](https://github.com/huggingface/datasets/issues/1876)**
issue · closed · 4 comments · Vincent950129 · created 2021-02-14T19:14:48 · updated 2021-08-04T18:08:00 · closed 2021-08-04T18:08:00 · id 808,025,859 · [API](https://api.github.com/repos/huggingface/datasets/issues/1876)
Body: Hi, it seems that loading the multi_woz_v22 dataset gives a NonMatchingChecksumError. To reproduce: `dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')` This will give the following error: ``` raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.N...

**[#1875: Adding sari metric](https://github.com/huggingface/datasets/pull/1875)**
pull request · closed · 0 comments · ddhruvkr · created 2021-02-14T04:38:35 · updated 2021-02-17T15:56:27 · closed 2021-02-17T15:56:27 · id 807,887,267 · [API](https://api.github.com/repos/huggingface/datasets/issues/1875)
Body: Adding SARI metric that is used in evaluation of text simplification. This is required as part of the GEM benchmark.

**[#1874: Adding Europarl Bilingual dataset](https://github.com/huggingface/datasets/pull/1874)**
pull request · closed · 7 comments · lucadiliello · created 2021-02-13T17:02:04 · updated 2021-03-04T10:38:22 · closed 2021-03-04T10:38:22 · id 807,786,094 · [API](https://api.github.com/repos/huggingface/datasets/issues/1874)
Body: Implementation of Europarl bilingual dataset from described [here](https://opus.nlpl.eu/Europarl.php). This dataset allows to use every language pair detailed in the original dataset. The loading script manages also the small errors contained in the original dataset (in very rare cases (1 over 10M) there are some ke...

**[#1873: add iapp_wiki_qa_squad](https://github.com/huggingface/datasets/pull/1873)**
pull request · closed · 0 comments · cstorm125 · created 2021-02-13T13:34:27 · updated 2021-02-16T14:21:58 · closed 2021-02-16T14:21:58 · id 807,750,745 · [API](https://api.github.com/repos/huggingface/datasets/issues/1873)
Body: `iapp_wiki_qa_squad` is an extractive question answering dataset from Thai Wikipedia articles. It is adapted from [the original iapp-wiki-qa-dataset](https://github.com/iapp-technology/iapp-wiki-qa-dataset) to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, resulting in 5761/742/739 questions from 1529/...

**[#1872: Adding a new column to the dataset after set_format was called](https://github.com/huggingface/datasets/issues/1872)**
issue · closed · 4 comments · villmow · created 2021-02-13T09:14:35 · updated 2021-03-30T14:01:45 · closed 2021-03-30T14:01:45 · id 807,711,935 · [API](https://api.github.com/repos/huggingface/datasets/issues/1872)
Body: Hi, thanks for the nice library. I'm in the process of creating a custom dataset, which has a mix of tensors and lists of strings. I stumbled upon an error and want to know if its a problem on my side. I load some lists of strings and integers, then call `data.set_format("torch", columns=["some_integer_column1"...

**[#1871: Add newspop dataset](https://github.com/huggingface/datasets/pull/1871)**
pull request · closed · 1 comment · frankier · created 2021-02-13T07:31:23 · updated 2021-03-08T10:12:45 · closed 2021-03-08T10:12:45 · id 807,697,671 · [API](https://api.github.com/repos/huggingface/datasets/issues/1871)

**[#1870: Implement Dataset add_item](https://github.com/huggingface/datasets/pull/1870)**
pull request · closed · 5 comments · albertvillanova · labels: enhancement · created 2021-02-12T15:03:46 · updated 2021-04-23T10:01:31 · closed 2021-04-23T10:01:31 · id 807,306,564 · [API](https://api.github.com/repos/huggingface/datasets/issues/1870)
Body: Implement `Dataset.add_item`. Close #1854.

**[#1869: Remove outdated commands in favor of huggingface-cli](https://github.com/huggingface/datasets/pull/1869)**
pull request · closed · 0 comments · lhoestq · created 2021-02-12T11:28:10 · updated 2021-02-12T16:13:09 · closed 2021-02-12T16:13:08 · id 807,159,835 · [API](https://api.github.com/repos/huggingface/datasets/issues/1869)
Body: Removing the old user commands since `huggingface_hub` is going to be used instead. cc @julien-c

**[#1868: Update oscar sizes](https://github.com/huggingface/datasets/pull/1868)**
pull request · closed · 0 comments · lhoestq · created 2021-02-12T10:55:35 · updated 2021-02-12T11:03:07 · closed 2021-02-12T11:03:06 · id 807,138,159 · [API](https://api.github.com/repos/huggingface/datasets/issues/1868)
Body: This commit https://github.com/huggingface/datasets/commit/837a152e4724adc5308e2c4481908c00a8d93383 removed empty lines from the oscar deduplicated datasets. This PR updates the size of each deduplicated dataset to fix possible `NonMatchingSplitsSizesError` errors. cc @cahya-wirawan

**[#1867: ERROR WHEN USING SET_TRANSFORM()](https://github.com/huggingface/datasets/issues/1867)**
issue · closed · 8 comments · avacaondata · created 2021-02-12T10:38:31 · updated 2021-03-01T14:04:24 · closed 2021-02-24T12:00:43 · id 807,127,181 · [API](https://api.github.com/repos/huggingface/datasets/issues/1867)
Body: Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797 However, when I try to use Trainer from transformers with such dataset, it throws an error: ``` TypeError: __init__() missing 1 required positional arg...

**[#1866: Add dataset for Financial PhraseBank](https://github.com/huggingface/datasets/pull/1866)**
pull request · closed · 1 comment · frankier · created 2021-02-12T07:30:56 · updated 2021-02-17T14:22:36 · closed 2021-02-17T14:22:36 · id 807,017,816 · [API](https://api.github.com/repos/huggingface/datasets/issues/1866)

**[#1865: Updated OPUS Open Subtitles Dataset with metadata information](https://github.com/huggingface/datasets/pull/1865)**
pull request · closed · 2 comments · Valahaar · created 2021-02-11T13:26:26 · updated 2021-02-19T12:38:09 · closed 2021-02-12T16:59:44 · id 806,388,290 · [API](https://api.github.com/repos/huggingface/datasets/issues/1865)
Body: Close #1844 Problems: - I ran `python datasets-cli test datasets/open_subtitles --save_infos --all_configs`, hence the change in `dataset_infos.json`, but it appears that the metadata features have not been added for all pairs. Any idea why that might be? - Possibly related to the above, I tried doing `pip uninst...