Columns (with observed value ranges):

| column | type | range |
|---|---|---|
| id | int64 | 599M – 3.48B |
| number | int64 | 1 – 7.8k |
| title | string | length 1 – 290 |
| state | string | 2 classes |
| comments | list | length 0 – 30 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 – 2025-10-05 06:37:50 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 – 2025-10-05 10:32:43 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 – 2025-10-01 13:56:03 |
| body | string | length 0 – 228k |
| user | string | length 3 – 26 |
| html_url | string | length 46 – 51 |
| pull_request | dict | — |
| is_pull_request | bool | 2 classes |
#1940 — Side effect when filtering data due to `does_function_return_dict` call in `Dataset.map()`
issue · closed · id 815,770,012 · francisco-perez-sorrosal · created 2021-02-24T19:18:56 · updated 2021-03-23T15:26:49 · closed 2021-03-23T15:26:49
https://github.com/huggingface/datasets/issues/1940
Hi there! In my codebase I have a function to filter rows in a dataset, selecting only a certain number of examples per class. The function passes an extra argument to maintain a counter of the number of dataset rows/examples already selected for each class, which are the ones I want to keep in the end: ```python ...
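The failure mode in #1940 can be reproduced without `datasets` at all: any framework that probes a user function on the first example to inspect its return type will invoke a stateful function one extra time. A plain-Python sketch of that interaction (names like `map_with_probe` are illustrative, not the actual library internals):

```python
# Sketch: a probe call to inspect the return type runs the user function
# once more than expected, which corrupts functions that mutate state.
def map_with_probe(examples, fn):
    returns_dict = isinstance(fn(examples[0]), dict)  # probe call on the first example
    results = [fn(ex) for ex in examples]             # real pass: fn runs again on examples[0]
    return results if returns_dict else list(examples)

counter = {"calls": 0}

def stateful_fn(example):
    counter["calls"] += 1
    return {"n": counter["calls"]}

out = map_with_probe([10, 20, 30], stateful_fn)
# The probe bumped the counter before the real pass,
# so the first mapped value is {"n": 2}, not {"n": 1}.
```

This is why a per-class selection counter, as described in the issue, ends up off by one.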
#1939 — [firewalled env] OFFLINE mode
issue · closed · id 815,680,510 · stas00 · created 2021-02-24T17:13:42 · updated 2021-03-05T05:09:54 · closed 2021-03-05T05:09:54
https://github.com/huggingface/datasets/issues/1939
This issue comes from a need to be able to run `datasets` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls. I propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a sample program. There are 2 pos...

#1938 — Disallow ClassLabel with no names
PR · closed · merged 2021-02-25T11:27... · id 815,647,774 · lhoestq · created 2021-02-24T16:37:57 · updated 2021-02-25T11:27:29 · closed 2021-02-25T11:27:29
https://github.com/huggingface/datasets/pull/1938
It was possible to create a ClassLabel without specifying the names or the number of classes. This was causing silent issues as in #1936 and breaking the conversion methods str2int and int2str. cc @justin-yan

#1937 — CommonGen dataset page shows an error OSError: [Errno 28] No space left on device
issue · closed · id 815,163,943 · yuchenlin · created 2021-02-24T06:47:33 · updated 2021-02-26T11:10:06 · closed 2021-02-26T11:10:06
https://github.com/huggingface/datasets/issues/1937
The page of the CommonGen data https://huggingface.co/datasets/viewer/?dataset=common_gen shows ![image](https://user-images.githubusercontent.com/10104354/108959311-1865e600-7629-11eb-868c-cf4cb27034ea.png)
#1936 — [WIP] Adding Support for Reading Pandas Category
PR · closed · not merged · id 814,726,512 · justin-yan · created 2021-02-23T18:32:54 · updated 2022-03-09T18:46:22 · closed 2022-03-09T18:46:22
https://github.com/huggingface/datasets/pull/1936
@lhoestq - continuing our conversation from https://github.com/huggingface/datasets/issues/1906#issuecomment-784247014 The goal of this PR is to support `Dataset.from_pandas(df)` where the dataframe contains a Category. Just the 4-line change below actually does seem to work: ``` >>> from datasets import Data...

#1935 — add CoVoST2
PR · closed · merged 2021-02-24T18:05... · id 814,623,827 · patil-suraj · created 2021-02-23T16:28:16 · updated 2021-02-24T18:09:32 · closed 2021-02-24T18:05:09
https://github.com/huggingface/datasets/pull/1935
This PR adds the CoVoST2 dataset for speech translation and ASR. https://github.com/facebookresearch/covost#covost-2 The dataset requires manual download as the download page requests an email address and the URLs are temporary. The dummy data is a bit bigger because of the mp3 files and 36 configs.

#1934 — Add Stanford Sentiment Treebank (SST)
issue · closed · id 814,437,190 · patpizio · created 2021-02-23T12:53:16 · updated 2021-03-18T17:51:44 · closed 2021-03-18T17:51:44
https://github.com/huggingface/datasets/issues/1934
I am going to add SST: - **Name:** The Stanford Sentiment Treebank - **Description:** The first corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language - **Paper:** [Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank...
#1933 — Use arrow ipc file format
PR · closed · not merged · id 814,335,846 · lhoestq · created 2021-02-23T10:38:24 · updated 2023-10-30T16:20:19 · closed 2023-09-25T09:20:38
https://github.com/huggingface/datasets/pull/1933
According to the [documentation](https://arrow.apache.org/docs/format/Columnar.html?highlight=arrow1#ipc-file-format), it's identical to the streaming format except that it contains the memory offsets of each sample: > We define a “file format” supporting random access that is built with the stream format. The file ...

#1932 — Fix builder config creation with data_dir
PR · closed · merged 2021-02-23T10:45... · id 814,326,116 · lhoestq · created 2021-02-23T10:26:02 · updated 2021-02-23T10:45:28 · closed 2021-02-23T10:45:27
https://github.com/huggingface/datasets/pull/1932
The data_dir parameter wasn't taken into account to create the config_id, so the resulting builder config was considered not custom. However, a non-custom builder config must not have a name that collides with the predefined builder config names, so this resulted in a `ValueError("Cannot name a custo...

#1931 — add m_lama (multilingual lama) dataset
PR · closed · merged 2021-03-01T10:01... · id 814,225,074 · pdufter · created 2021-02-23T08:11:57 · updated 2021-03-01T10:01:03 · closed 2021-03-01T10:01:03
https://github.com/huggingface/datasets/pull/1931
Add a multilingual (machine-translated and automatically generated) version of the LAMA benchmark. For details see the paper https://arxiv.org/pdf/2102.00894.pdf
#1930 — updated the wino_bias dataset
PR · closed · merged 2021-04-07T15:24... · id 814,055,198 · JieyuZhao · created 2021-02-23T03:07:40 · updated 2021-04-07T15:24:56 · closed 2021-04-07T15:24:56
https://github.com/huggingface/datasets/pull/1930
Updated the wino_bias.py script: updated the data_url, added different configurations for the different data splits, and added the coreference_cluster to the data features.

#1929 — Improve typing and style and fix some inconsistencies
PR · closed · merged 2021-02-24T14:03... · id 813,929,669 · mariosasko · created 2021-02-22T22:47:41 · updated 2021-02-24T16:16:14 · closed 2021-02-24T14:03:54
https://github.com/huggingface/datasets/pull/1929
This PR: improves typing (mostly more consistent use of `typing.Optional`); makes `DatasetDict.cleanup_cache_files` correctly return a dict; replaces `dict()` with the corresponding literal; and uses `dict_to_copy.copy()` instead of `dict(dict_to_copy)` for shallow copying.

#1928 — Updating old cards
PR · closed · merged 2021-02-23T18:19... · id 813,793,434 · mcmillanmajora · created 2021-02-22T19:26:04 · updated 2021-02-23T18:19:25 · closed 2021-02-23T18:19:25
https://github.com/huggingface/datasets/pull/1928
Updated the cards for [Allocine](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/allocine), [CNN/DailyMail](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/cnn_dailymail), and [SNLI](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/snli)...
#1927 — Update dataset card of wino_bias
PR · closed · not merged · id 813,768,935 · JieyuZhao · created 2021-02-22T18:51:34 · updated 2022-09-23T13:35:09 · closed 2022-09-23T13:35:08
https://github.com/huggingface/datasets/pull/1927
Updated the info for the wino_bias dataset.

#1926 — Fix: Wiki_dpr - add missing scalar quantizer
PR · closed · merged 2021-02-22T15:49... · id 813,607,994 · lhoestq · created 2021-02-22T15:32:05 · updated 2021-02-22T15:49:54 · closed 2021-02-22T15:49:53
https://github.com/huggingface/datasets/pull/1926
All the prebuilt wiki_dpr indexes already use SQ8; I forgot to update the wiki_dpr script after building them. Now it's finally done. The scalar quantizer SQ8 doesn't reduce the performance of the index, as shown in retrieval experiments on RAG. The quantizer reduces the size of the index a lot but increases index b...

#1925 — Fix: Wiki_dpr - fix when with_embeddings is False or index_name is "no_index"
PR · closed · merged 2021-02-22T15:36... · id 813,600,902 · lhoestq · created 2021-02-22T15:23:46 · updated 2021-02-25T01:33:48 · closed 2021-02-22T15:36:08
https://github.com/huggingface/datasets/pull/1925
Fixes the bugs noticed in #1915. There was a bug when `with_embeddings=False` where the configuration name was the same as for `with_embeddings=True`, which led the dataset builder to do bad verifications (for example it used to expect to download the embeddings for `with_embeddings=False`). Another issue was that s...
#1924 — Anonymous Dataset Addition (i.e. Anonymous PR?)
issue · closed · id 813,599,733 · PierreColombo · created 2021-02-22T15:22:30 · updated 2022-10-05T13:07:11 · closed 2022-10-05T13:07:11
https://github.com/huggingface/datasets/issues/1924
Hello, thanks a lot for your library. We plan to submit a paper on OpenReview using the Anonymous setting. Is it possible to add a new dataset, with a link to the paper, without breaking the anonymity? Cheers @eusip

#1923 — Fix save_to_disk with relative path
PR · closed · merged 2021-02-22T11:22... · id 813,363,472 · lhoestq · created 2021-02-22T10:27:19 · updated 2021-02-22T11:22:44 · closed 2021-02-22T11:22:43
https://github.com/huggingface/datasets/pull/1923
As noticed in #1919 and #1920, the target directory was not created using `makedirs`, so saving to it raises `FileNotFoundError`. For absolute paths it works, but not for the right reason: the target path was the same as the temporary path where in-memory data are written as an intermediary step. I added...
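The pattern behind the #1923 fix is generic: create the destination directory (relative or absolute) before writing, rather than assuming it exists. A stdlib-only sketch of that pattern (`save_table` and the file name are illustrative, not the actual `save_to_disk` code):

```python
import os
import tempfile

def save_table(table_bytes: bytes, dataset_path: str) -> str:
    # Creating the directory first avoids FileNotFoundError for fresh paths.
    os.makedirs(dataset_path, exist_ok=True)
    target = os.path.join(dataset_path, "dataset.arrow")
    with open(target, "wb") as f:
        f.write(table_bytes)
    return target

with tempfile.TemporaryDirectory() as tmp:
    # Nested path that does not exist yet, as in the reported repro.
    path = save_table(b"\x00data", os.path.join(tmp, "squad", "train"))
    assert os.path.isfile(path)
```

`exist_ok=True` also makes the call idempotent, so re-saving to the same directory does not raise.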
#1922 — How to update the "wino_bias" dataset
issue · open · id 813,140,806 · JieyuZhao · created 2021-02-22T05:39:39 · updated 2021-02-22T10:35:59
https://github.com/huggingface/datasets/issues/1922
Hi all, thanks for the efforts to collect all the datasets! But I think there is a problem with the wino_bias dataset: the current link is not correct. How can I update that? Thanks!

#1921 — Standardizing datasets dtypes
PR · closed · merged 2021-02-22T09:44... · id 812,716,042 · justin-yan · created 2021-02-20T22:04:01 · updated 2021-02-22T09:44:10 · closed 2021-02-22T09:44:10
https://github.com/huggingface/datasets/pull/1921
This PR follows up on the discussion in #1900 to have an explicit set of basic dtypes for datasets. It moves away from str(pyarrow.DataType) as the method of choice for creating dtypes, favoring an explicit mapping to a list of supported Value dtypes. I believe in practice this should be backward compatible, since ...

#1920 — Fix save_to_disk issue
PR · closed · not merged · id 812,628,220 · M-Salti · created 2021-02-20T14:22:39 · updated 2021-02-22T10:30:11 · closed 2021-02-22T10:30:11
https://github.com/huggingface/datasets/pull/1920
Fixes #1919
#1919 — Failure to save with save_to_disk
issue · closed · id 812,626,872 · M-Salti · created 2021-02-20T14:18:10 · updated 2021-03-03T17:40:27 · closed 2021-03-03T17:40:27
https://github.com/huggingface/datasets/issues/1919
When I try to save a dataset locally using the `save_to_disk` method I get the error: ```bash FileNotFoundError: [Errno 2] No such file or directory: '/content/squad/train/squad-train.arrow' ``` To replicate: 1. Install `datasets` from master 2. Run this code: ```python from datasets import load...

#1918 — Fix QA4MRE download URLs
PR · closed · merged 2021-02-22T13:35... · id 812,541,510 · M-Salti · created 2021-02-20T07:32:17 · updated 2021-02-22T13:35:06 · closed 2021-02-22T13:35:06
https://github.com/huggingface/datasets/pull/1918
The URLs in the `dataset_infos` and `README` are correct; only the ones in the download script needed updating.

#1917 — UnicodeDecodeError: windows 10 machine
issue · closed · id 812,390,178 · yosiasz · created 2021-02-19T22:13:05 · updated 2021-02-19T22:41:11 · closed 2021-02-19T22:40:28
https://github.com/huggingface/datasets/issues/1917
Windows 10, Python 3.6.8. When running ``` import datasets oscar_am = datasets.load_dataset("oscar", "unshuffled_deduplicated_am") print(oscar_am["train"][0]) ``` I get the following error ``` file "C:\PYTHON\3.6.8\lib\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.er...
#1916 — Remove unused py_utils objects
PR · closed · merged 2021-02-22T13:32... · id 812,291,984 · albertvillanova · created 2021-02-19T19:51:25 · updated 2021-02-22T14:56:56 · closed 2021-02-22T13:32:49
https://github.com/huggingface/datasets/pull/1916
Remove unused/unnecessary py_utils functions/classes.

#1915 — Unable to download `wiki_dpr`
issue · closed · id 812,229,654 · nitarakad · created 2021-02-19T18:11:32 · updated 2021-03-03T17:40:48 · closed 2021-03-03T17:40:48
https://github.com/huggingface/datasets/issues/1915
I am trying to download the `wiki_dpr` dataset. Specifically, I want to download `psgs_w100.multiset.no_index` with no embeddings/no index. In order to do so, I ran: `curr_dataset = load_dataset("wiki_dpr", embeddings_name="multiset", index_name="no_index")` However, I got the following error: `datasets.utils.i...

#1914 — Fix logging imports and make all datasets use library logger
PR · closed · merged 2021-02-21T19:48... · id 812,149,201 · albertvillanova · created 2021-02-19T16:12:34 · updated 2021-02-21T19:48:03 · closed 2021-02-21T19:48:03
https://github.com/huggingface/datasets/pull/1914
Fix library-relative logging imports and make all datasets use the library logger.
#1913 — Add keep_linebreaks parameter to text loader
PR · closed · merged 2021-02-19T18:36... · id 812,127,307 · lhoestq · created 2021-02-19T15:43:45 · updated 2021-02-19T18:36:12 · closed 2021-02-19T18:36:11
https://github.com/huggingface/datasets/pull/1913
As asked in #870 and https://github.com/huggingface/transformers/issues/10269, there should be a parameter to keep the linebreaks when loading a text dataset. cc @sgugger @jncasey
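The effect of a `keep_linebreaks`-style option in #1913 is easy to sketch in plain Python: when splitting a text file into rows, either strip the trailing newline from each line or preserve it (the helper name here is illustrative; only the parameter name comes from the PR title):

```python
# Sketch of splitting raw text into dataset rows, with and without
# preserving the line-break characters.
def split_text(raw: str, keep_linebreaks: bool = False):
    return raw.splitlines(keepends=True) if keep_linebreaks else raw.splitlines()

raw = "line one\nline two\n"
print(split_text(raw))        # ['line one', 'line two']
print(split_text(raw, True))  # ['line one\n', 'line two\n']
```

Preserving the newlines matters for language-model training corpora where paragraph boundaries are meaningful.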
#1912 — Update: WMT - use mirror links
PR · closed · merged 2021-02-24T13:44... · id 812,034,140 · lhoestq · created 2021-02-19T13:42:34 · updated 2021-02-24T13:44:53 · closed 2021-02-24T13:44:53
https://github.com/huggingface/datasets/pull/1912
As asked in #1892, I created mirrors of the data hosted on statmt.org and updated the wmt scripts. Now downloading the wmt datasets is blazing fast :) cc @stas00 @patrickvonplaten

#1911 — Saving processed dataset running infinitely
issue · open · id 812,009,956 · ayubSubhaniya · created 2021-02-19T13:09:19 · updated 2021-02-23T07:34:44
https://github.com/huggingface/datasets/issues/1911
I have a text dataset of size 220M. For pre-processing, I need to tokenize it and filter out rows with overly long sequences. My tokenization took roughly 3 hrs. I used map() with batch size 1024 and multi-processing with 96 processes. The filter() function was way too slow, so I used a hack to use the pyarrow filter table func...

#1910 — Adding CoNLLpp dataset.
PR · closed · not merged · id 811,697,108 · ZihanWangKi · created 2021-02-19T05:12:30 · updated 2021-03-04T22:02:47 · closed 2021-03-04T22:02:47
https://github.com/huggingface/datasets/pull/1910
#1907 — DBPedia14 Dataset Checksum bug?
issue · closed · id 811,520,569 · francisco-perez-sorrosal · created 2021-02-18T22:25:48 · updated 2021-02-22T23:22:05 · closed 2021-02-22T23:22:04
https://github.com/huggingface/datasets/issues/1907
Hi there! I've been successfully using the DBPedia dataset (https://huggingface.co/datasets/dbpedia_14) with my codebase over the last couple of weeks, but in the last couple of days I get this error: ``` Traceback (most recent call last): File "./conditional_classification/basic_pipeline.py", line 178, i...

#1906 — Feature Request: Support for Pandas `Categorical`
issue · open · id 811,405,274 · justin-yan · created 2021-02-18T19:46:05 · updated 2021-02-23T14:38:50
https://github.com/huggingface/datasets/issues/1906
``` from datasets import Dataset import pandas as pd import pyarrow df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category")) pyarrow.Table.from_pandas(df) Dataset.from_pandas(df) # Throws NotImplementedError # TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_...
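The feature requested in #1906 is about preserving pandas Categorical columns, which are essentially dictionary-encoded data: integer codes plus a lookup table of category labels. A plain-Python sketch of that encoding (no pandas/pyarrow dependency; the real support goes through Arrow's DictionaryArray, and `dictionary_encode` here is an illustrative helper):

```python
# Sketch: dictionary-encode a sequence of values into (codes, categories),
# the same shape of data a pandas Categorical carries.
def dictionary_encode(values):
    categories = []   # ordered, deduplicated labels
    index = {}        # label -> code
    codes = []
    for v in values:
        if v not in index:
            index[v] = len(categories)
            categories.append(v)
        codes.append(index[v])
    return codes, categories

codes, categories = dictionary_encode(["a", "b", "c", "a"])
print(codes)       # [0, 1, 2, 0]
print(categories)  # ['a', 'b', 'c']
```

Round-tripping a Categorical through Arrow means preserving both pieces; losing the `categories` table is exactly what the `NotImplementedError` above guards against.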
#1905 — Standardizing datasets.dtypes
PR · closed · not merged · id 811,384,174 · justin-yan · created 2021-02-18T19:15:31 · updated 2021-02-20T22:01:30 · closed 2021-02-20T22:01:30
https://github.com/huggingface/datasets/pull/1905
This PR was further branched off of jdy-str-to-pyarrow-parsing, so it depends on https://github.com/huggingface/datasets/pull/1900 going first for the diff to be up to date (I'm not sure if there's a way for me to use jdy-str-to-pyarrow-parsing as a base branch while having it appear in the pull requests here). This...

#1904 — Fix to_pandas for boolean ArrayXD
PR · closed · merged 2021-02-18T17:10... · id 811,260,904 · lhoestq · created 2021-02-18T16:30:46 · updated 2021-02-18T17:10:03 · closed 2021-02-18T17:10:01
https://github.com/huggingface/datasets/pull/1904
As noticed in #1887, the conversion of a dataset with a boolean ArrayXD feature type fails because the underlying ListArray conversion to numpy requires `zero_copy_only=False`. Zero copy is available for all primitive types except booleans; see https://arrow.apache.org/docs/python/generated/pyarrow.Array.html#pya...

#1903 — Initial commit for the addition of TIMIT dataset
PR · closed · merged 2021-03-01T09:39... · id 811,145,531 · vrindaprabhu · created 2021-02-18T14:23:12 · updated 2021-03-01T09:39:12 · closed 2021-03-01T09:39:12
https://github.com/huggingface/datasets/pull/1903
The points below need to be addressed: creation of the dummy dataset is failing; need to check on the data representation; the license is not Creative Commons (Copyright: Portions © 1993 Trustees of the University of Pennsylvania). Also the links (except the download) point to the AMI corpus! ;-) @patrickvonplaten ...
#1902 — Fix setimes_2 wmt urls
PR · closed · merged 2021-02-18T09:55... · id 810,931,171 · lhoestq · created 2021-02-18T09:42:26 · updated 2021-02-18T09:55:41 · closed 2021-02-18T09:55:41
https://github.com/huggingface/datasets/pull/1902
Continuation of #1901: some other URLs were missing https.

#1901 — Fix OPUS dataset download errors
PR · closed · merged 2021-02-18T09:39... · id 810,845,605 · YangWang92 · created 2021-02-18T07:39:41 · updated 2021-02-18T15:07:20 · closed 2021-02-18T09:39:21
https://github.com/huggingface/datasets/pull/1901
Replace http with https. https://github.com/huggingface/datasets/issues/854 https://discuss.huggingface.co/t/cannot-download-wmt16/2081
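The fix in #1901/#1902 amounts to rewriting plain-http download URLs to https. A tiny illustrative helper for that rewrite (not the actual patch, which edits the URL strings in the dataset scripts directly):

```python
# Sketch: upgrade an http:// URL to https://, leaving other URLs untouched.
def to_https(url: str) -> str:
    if url.startswith("http://"):
        return "https://" + url[len("http://"):]
    return url

print(to_https("http://opus.example.org/data.zip"))   # https://opus.example.org/data.zip
print(to_https("https://opus.example.org/data.zip"))  # unchanged
```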
#1900 — Issue #1895: Bugfix for string_to_arrow timestamp[ns] support
PR · closed · merged 2021-02-19T18:27... · id 810,512,488 · justin-yan · created 2021-02-17T20:26:04 · updated 2021-02-19T18:27:11 · closed 2021-02-19T18:27:11
https://github.com/huggingface/datasets/pull/1900
Should resolve https://github.com/huggingface/datasets/issues/1895. The main part of this PR adds additional parsing in `string_to_arrow` to convert the timestamp dtypes that result from `str(pa_type)` back into the pa.DataType TimestampType. While adding unit testing, I noticed that support for the double/float t...

#1899 — Fix: ALT - fix duplicated examples in alt-parallel
PR · closed · merged 2021-02-17T17:20... · id 810,308,332 · lhoestq · created 2021-02-17T15:53:56 · updated 2021-02-17T17:20:49 · closed 2021-02-17T17:20:49
https://github.com/huggingface/datasets/pull/1899
As noticed in #1898 by @10-zin, the examples of the `alt-parallel` configurations all have the same values for the `translation` field. This was due to a bad copy of a Python dict. This PR fixes that.

#1898 — ALT dataset has repeating instances in all splits
issue · closed · id 810,157,251 · 10-zin · created 2021-02-17T12:51:42 · updated 2021-02-19T06:18:46 · closed 2021-02-19T06:18:46
https://github.com/huggingface/datasets/issues/1898
The [ALT](https://huggingface.co/datasets/alt) dataset has all the same instances within each split :/ It seemed like a great dataset for some experiments I wanted to carry out, especially since it's medium-sized and has all splits. Would be great if this could be fixed :) Added a snapshot of the contents from `exp...
#1897 — Fix PandasArrayExtensionArray conversion to native type
PR · closed · merged 2021-02-17T13:15... · id 810,113,263 · lhoestq · created 2021-02-17T11:48:24 · updated 2021-02-17T13:15:16 · closed 2021-02-17T13:15:15
https://github.com/huggingface/datasets/pull/1897
To make the conversion to csv work in #1887, we need the PandasArrayExtensionArray used for multidimensional numpy arrays to be converted to pandas native types. However, previously pandas.core.internals.ExtensionBlock.to_native_types would fail with a PandasExtensionArray because 1. the PandasExtensionArray.isna metho...

#1895 — Bug Report: timestamp[ns] not recognized
issue · closed · id 809,630,271 · justin-yan · created 2021-02-16T20:38:04 · updated 2021-02-19T18:27:11 · closed 2021-02-19T18:27:11
https://github.com/huggingface/datasets/issues/1895
Repro: ``` from datasets import Dataset import pandas as pd import pyarrow df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H")) pyarrow.Table.from_pandas(df) Dataset.from_pandas(df) # Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems to be a pyarrow data type. ``` The fact...

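The round-trip bug in #1895 comes from serializing an Arrow type via `str(pa_type)` (yielding strings like `timestamp[ns]`) and later failing to parse that string back into a type, which the parsing added in #1900 addresses. A stdlib-only sketch of that parsing step (function and regex names are illustrative, not the actual `string_to_arrow` code; in the real fix the parsed pieces feed `pa.timestamp(unit, tz=tz)`):

```python
import re

# Matches Arrow's string form for timestamps, e.g. "timestamp[ns]"
# or "timestamp[s, tz=UTC]".
_TIMESTAMP_RE = re.compile(r"^timestamp\[(s|ms|us|ns)(?:, tz=(.+))?\]$")

def parse_timestamp_dtype(type_str):
    m = _TIMESTAMP_RE.match(type_str)
    if m is None:
        raise ValueError(f"{type_str} does not look like a pyarrow timestamp type")
    unit, tz = m.groups()
    return unit, tz

print(parse_timestamp_dtype("timestamp[ns]"))         # ('ns', None)
print(parse_timestamp_dtype("timestamp[s, tz=UTC]"))  # ('s', 'UTC')
```

This is also why #1921 later moved away from `str(pyarrow.DataType)` altogether: string round-tripping is fragile for parameterized types.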
#1894 — benchmarking against MMapIndexedDataset
issue · open · id 809,609,654 · sshleifer · created 2021-02-16T20:04:58 · updated 2021-02-17T18:52:28
https://github.com/huggingface/datasets/issues/1894
I am trying to benchmark my datasets-based implementation against fairseq's [`MMapIndexedDataset`](https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L365) and finding that, according to psrecord, my `datasets` implementation uses about 3% more CPU memory and runs 1% slower for `wikitext103` (~1GB o...

#1893 — wmt19 is broken
issue · closed · id 809,556,503 · stas00 · created 2021-02-16T18:39:58 · updated 2021-03-03T17:42:02 · closed 2021-03-03T17:42:02
https://github.com/huggingface/datasets/issues/1893
1. Check which lang pairs we have: `--dataset_name wmt19`: Please pick one among the available configs: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de'] 2. OK, let's pick `ru-en`: `--dataset_name wmt19 --dataset_config "ru-en"` no cookies: ``` Traceback (most recent c...

#1892 — request to mirror wmt datasets, as they are really slow to download
issue · closed · id 809,554,174 · stas00 · created 2021-02-16T18:36:11 · updated 2021-10-26T06:55:42 · closed 2021-03-25T11:53:23
https://github.com/huggingface/datasets/issues/1892
Would it be possible to mirror the wmt data files under hf? Some of them take hours to download, and not because of the local speed. They are all quite small datasets, just extremely slow to download. Thank you!
#1891 — suggestion to improve a missing dataset error
issue · closed · id 809,550,001 · stas00 · created 2021-02-16T18:29:13 · updated 2022-10-05T12:48:38 · closed 2022-10-05T12:48:38
https://github.com/huggingface/datasets/issues/1891
I was using `--dataset_name wmt19` and all was good. Then I thought perhaps wmt20 is out, so I tried `--dataset_name wmt20` and got 3 different errors (1 repeated twice), none telling me the real issue - that `wmt20` isn't in `datasets`: ``` True, predict_with_generate=True) Traceback (most recent call last): ...
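The suggestion in #1891 is for a missing dataset name to fail with one clear "dataset not found" message instead of a cascade of unrelated errors. A sketch of what a friendlier error could look like, using `difflib` to propose close names (illustrative only; `KNOWN_DATASETS` and `load` are not the actual `datasets` error path):

```python
import difflib

# Hypothetical registry of known dataset names, for illustration.
KNOWN_DATASETS = ["wmt14", "wmt16", "wmt17", "wmt18", "wmt19", "squad"]

def load(name):
    if name not in KNOWN_DATASETS:
        close = difflib.get_close_matches(name, KNOWN_DATASETS, n=1)
        hint = f" Did you mean '{close[0]}'?" if close else ""
        raise FileNotFoundError(f"Dataset '{name}' doesn't exist.{hint}")
    return name

try:
    load("wmt20")
except FileNotFoundError as e:
    print(e)  # e.g. Dataset 'wmt20' doesn't exist. Did you mean 'wmt19'?
```

One clear exception with a suggestion beats three stack traces that never name the real problem.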
#1890 — Reformat dataset cards section titles
PR · closed · merged 2021-02-16T15:12... · id 809,395,586 · lhoestq · created 2021-02-16T15:11:47 · updated 2021-02-16T15:12:34 · closed 2021-02-16T15:12:33
https://github.com/huggingface/datasets/pull/1890
Titles are formatted like [Foo](#foo) instead of just Foo.

#1889 — Implement to_dict and to_pandas for Dataset
PR · closed · merged 2021-02-18T18:42... · id 809,276,015 · SBrandeis · created 2021-02-16T12:38:19 · updated 2021-02-18T18:42:37 · closed 2021-02-18T18:42:34
https://github.com/huggingface/datasets/pull/1889
With options to return a generator or the full dataset.

#1888 — Docs for adding new column on formatted dataset
PR · closed · merged 2021-02-16T11:58... · id 809,241,123 · lhoestq · created 2021-02-16T11:45:00 · updated 2021-03-30T14:01:03 · closed 2021-02-16T11:58:57
https://github.com/huggingface/datasets/pull/1888
As mentioned in #1872, we should add to the documentation how the format gets updated when new columns are added. Closes #1872.
#1887 — Implement to_csv for Dataset
PR · closed · merged 2021-02-19T09:41... · id 809,229,809 · SBrandeis · created 2021-02-16T11:27:29 · updated 2021-02-19T09:41:59 · closed 2021-02-19T09:41:59
https://github.com/huggingface/datasets/pull/1887
cc @thomwolf. `to_csv` supports passing either a file path or a *binary* file object. The writing is batched to avoid loading the whole table in memory.
809,221,885
1,886
Common voice
closed
[]
2021-02-16T11:16:10
2021-03-09T18:51:31
2021-03-09T18:51:31
Started filling out information about the dataset and a dataset card. To do: create the tagging file; update the common_voice.py file with more information
BirgerMoell
https://github.com/huggingface/datasets/pull/1886
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1886", "html_url": "https://github.com/huggingface/datasets/pull/1886", "diff_url": "https://github.com/huggingface/datasets/pull/1886.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1886.patch", "merged_at": "2021-03-09T18:51...
true
808,881,501
1,885
add missing info on how to add large files
closed
[]
2021-02-15T23:46:39
2021-02-16T16:22:19
2021-02-16T11:44:12
Thanks to @lhoestq's instructions I was able to add data files to a custom dataset repo. This PR is attempting to tell others how to do the same if they need to. @lhoestq
stas00
https://github.com/huggingface/datasets/pull/1885
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1885", "html_url": "https://github.com/huggingface/datasets/pull/1885", "diff_url": "https://github.com/huggingface/datasets/pull/1885.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1885.patch", "merged_at": "2021-02-16T11:44...
true
808,755,894
1,884
dtype fix when using numpy arrays
closed
[]
2021-02-15T18:55:25
2021-07-30T11:01:18
2021-07-30T11:01:18
As discussed in #625, this fix lets the user preserve the dtype of a numpy array in the resulting pyarrow array, which was previously lost in the numpy array -> list -> pyarrow array conversion
bhavitvyamalik
https://github.com/huggingface/datasets/pull/1884
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1884", "html_url": "https://github.com/huggingface/datasets/pull/1884", "diff_url": "https://github.com/huggingface/datasets/pull/1884.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1884.patch", "merged_at": null }
true
808,750,623
1,883
Add not-in-place implementations for several dataset transforms
closed
[]
2021-02-15T18:44:26
2021-02-24T14:54:49
2021-02-24T14:53:26
Should we deprecate in-place versions of such methods?
SBrandeis
https://github.com/huggingface/datasets/pull/1883
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1883", "html_url": "https://github.com/huggingface/datasets/pull/1883", "diff_url": "https://github.com/huggingface/datasets/pull/1883.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1883.patch", "merged_at": "2021-02-24T14:53...
true
808,716,576
1,882
Create Remote Manager
open
[]
2021-02-15T17:36:24
2022-07-06T15:19:47
null
Refactoring to separate the concern of remote (HTTP/FTP requests) management.
albertvillanova
https://github.com/huggingface/datasets/pull/1882
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1882", "html_url": "https://github.com/huggingface/datasets/pull/1882", "diff_url": "https://github.com/huggingface/datasets/pull/1882.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1882.patch", "merged_at": null }
true
808,578,200
1,881
`list_datasets()` returns a list of strings, not objects
closed
[]
2021-02-15T14:20:15
2021-02-15T15:09:49
2021-02-15T15:09:48
Here and there in the docs there is still stuff like this: ```python >>> datasets_list = list_datasets() >>> print(', '.join(dataset.id for dataset in datasets_list)) ``` However, my understanding is that `list_datasets()` returns a list of strings rather than a list of objects.
pminervini
https://github.com/huggingface/datasets/pull/1881
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1881", "html_url": "https://github.com/huggingface/datasets/pull/1881", "diff_url": "https://github.com/huggingface/datasets/pull/1881.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1881.patch", "merged_at": "2021-02-15T15:09...
true
808,563,439
1,880
Update multi_woz_v22 checksums
closed
[]
2021-02-15T14:00:18
2021-02-15T14:18:19
2021-02-15T14:18:18
As noticed in #1876 the checksums of this dataset are outdated. I updated them in this PR
lhoestq
https://github.com/huggingface/datasets/pull/1880
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1880", "html_url": "https://github.com/huggingface/datasets/pull/1880", "diff_url": "https://github.com/huggingface/datasets/pull/1880.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1880.patch", "merged_at": "2021-02-15T14:18...
true
808,541,442
1,879
Replace flatten_nested
closed
[]
2021-02-15T13:29:40
2021-02-19T18:35:14
2021-02-19T18:35:14
Replace `flatten_nested` with `NestedDataStructure.flatten`. This is a first step towards having all NestedDataStructure logic as a separate concern, independent of the caller/user of the data structure. Eventually, all checks (whether the underlying data is a list, dict, etc.) will live only inside this class. I...
albertvillanova
https://github.com/huggingface/datasets/pull/1879
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1879", "html_url": "https://github.com/huggingface/datasets/pull/1879", "diff_url": "https://github.com/huggingface/datasets/pull/1879.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1879.patch", "merged_at": "2021-02-19T18:35...
true
808,526,883
1,878
Add LJ Speech dataset
closed
[]
2021-02-15T13:10:42
2021-02-15T19:39:41
2021-02-15T14:18:09
This PR adds the LJ Speech dataset (https://keithito.com/LJ-Speech-Dataset/) As requested by #1841 The ASR format is based on #1767 There are a couple of quirks that should be addressed: - I tagged this dataset as `other-other-automatic-speech-recognition` and `other-other-text-to-speech` (as classified by pape...
anton-l
https://github.com/huggingface/datasets/pull/1878
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1878", "html_url": "https://github.com/huggingface/datasets/pull/1878", "diff_url": "https://github.com/huggingface/datasets/pull/1878.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1878.patch", "merged_at": "2021-02-15T14:18...
true
808,462,272
1,877
Allow concatenation of both in-memory and on-disk datasets
closed
[]
2021-02-15T11:39:46
2021-03-26T16:51:58
2021-03-26T16:51:58
This is a prerequisite for the addition of the `add_item` feature (see #1870). Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using the dataset._data_files). This assumption is used for pickl...
lhoestq
https://github.com/huggingface/datasets/issues/1877
null
false
808,025,859
1,876
load_dataset("multi_woz_v22") NonMatchingChecksumError
closed
[]
2021-02-14T19:14:48
2021-08-04T18:08:00
2021-08-04T18:08:00
Hi, it seems that loading the multi_woz_v22 dataset gives a NonMatchingChecksumError. To reproduce: `dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')` This will give the following error: ``` raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.N...
Vincent950129
https://github.com/huggingface/datasets/issues/1876
null
false
807,887,267
1,875
Adding sari metric
closed
[]
2021-02-14T04:38:35
2021-02-17T15:56:27
2021-02-17T15:56:27
Adding SARI metric that is used in evaluation of text simplification. This is required as part of the GEM benchmark.
ddhruvkr
https://github.com/huggingface/datasets/pull/1875
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1875", "html_url": "https://github.com/huggingface/datasets/pull/1875", "diff_url": "https://github.com/huggingface/datasets/pull/1875.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1875.patch", "merged_at": "2021-02-17T15:56...
true
807,786,094
1,874
Adding Europarl Bilingual dataset
closed
[]
2021-02-13T17:02:04
2021-03-04T10:38:22
2021-03-04T10:38:22
Implementation of the Europarl bilingual dataset described [here](https://opus.nlpl.eu/Europarl.php). This dataset allows using every language pair detailed in the original dataset. The loading script also handles the small errors contained in the original dataset (in very rare cases (1 over 10M) there are some ke...
lucadiliello
https://github.com/huggingface/datasets/pull/1874
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1874", "html_url": "https://github.com/huggingface/datasets/pull/1874", "diff_url": "https://github.com/huggingface/datasets/pull/1874.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1874.patch", "merged_at": "2021-03-04T10:38...
true
807,750,745
1,873
add iapp_wiki_qa_squad
closed
[]
2021-02-13T13:34:27
2021-02-16T14:21:58
2021-02-16T14:21:58
`iapp_wiki_qa_squad` is an extractive question answering dataset from Thai Wikipedia articles. It is adapted from [the original iapp-wiki-qa-dataset](https://github.com/iapp-technology/iapp-wiki-qa-dataset) to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, resulting in 5761/742/739 questions from 1529/...
cstorm125
https://github.com/huggingface/datasets/pull/1873
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1873", "html_url": "https://github.com/huggingface/datasets/pull/1873", "diff_url": "https://github.com/huggingface/datasets/pull/1873.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1873.patch", "merged_at": "2021-02-16T14:21...
true
807,711,935
1,872
Adding a new column to the dataset after set_format was called
closed
[]
2021-02-13T09:14:35
2021-03-30T14:01:45
2021-03-30T14:01:45
Hi, thanks for the nice library. I'm in the process of creating a custom dataset, which has a mix of tensors and lists of strings. I stumbled upon an error and want to know if it's a problem on my side. I load some lists of strings and integers, then call `data.set_format("torch", columns=["some_integer_column1"...
villmow
https://github.com/huggingface/datasets/issues/1872
null
false
807,697,671
1,871
Add newspop dataset
closed
[]
2021-02-13T07:31:23
2021-03-08T10:12:45
2021-03-08T10:12:45
frankier
https://github.com/huggingface/datasets/pull/1871
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1871", "html_url": "https://github.com/huggingface/datasets/pull/1871", "diff_url": "https://github.com/huggingface/datasets/pull/1871.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1871.patch", "merged_at": "2021-03-08T10:12...
true
807,306,564
1,870
Implement Dataset add_item
closed
[]
2021-02-12T15:03:46
2021-04-23T10:01:31
2021-04-23T10:01:31
Implement `Dataset.add_item`. Close #1854.
albertvillanova
https://github.com/huggingface/datasets/pull/1870
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1870", "html_url": "https://github.com/huggingface/datasets/pull/1870", "diff_url": "https://github.com/huggingface/datasets/pull/1870.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1870.patch", "merged_at": "2021-04-23T10:01...
true
807,159,835
1,869
Remove outdated commands in favor of huggingface-cli
closed
[]
2021-02-12T11:28:10
2021-02-12T16:13:09
2021-02-12T16:13:08
Removing the old user commands since `huggingface_hub` is going to be used instead. cc @julien-c
lhoestq
https://github.com/huggingface/datasets/pull/1869
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1869", "html_url": "https://github.com/huggingface/datasets/pull/1869", "diff_url": "https://github.com/huggingface/datasets/pull/1869.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1869.patch", "merged_at": "2021-02-12T16:13...
true
807,138,159
1,868
Update oscar sizes
closed
[]
2021-02-12T10:55:35
2021-02-12T11:03:07
2021-02-12T11:03:06
This commit https://github.com/huggingface/datasets/commit/837a152e4724adc5308e2c4481908c00a8d93383 removed empty lines from the oscar deduplicated datasets. This PR updates the size of each deduplicated dataset to fix possible `NonMatchingSplitsSizesError` errors. cc @cahya-wirawan
lhoestq
https://github.com/huggingface/datasets/pull/1868
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1868", "html_url": "https://github.com/huggingface/datasets/pull/1868", "diff_url": "https://github.com/huggingface/datasets/pull/1868.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1868.patch", "merged_at": "2021-02-12T11:03...
true
807,127,181
1,867
ERROR WHEN USING SET_TRANSFORM()
closed
[]
2021-02-12T10:38:31
2021-03-01T14:04:24
2021-02-24T12:00:43
Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797 However, when I try to use Trainer from transformers with such a dataset, it throws an error: ``` TypeError: __init__() missing 1 required positional arg...
avacaondata
https://github.com/huggingface/datasets/issues/1867
null
false
807,017,816
1,866
Add dataset for Financial PhraseBank
closed
[]
2021-02-12T07:30:56
2021-02-17T14:22:36
2021-02-17T14:22:36
frankier
https://github.com/huggingface/datasets/pull/1866
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1866", "html_url": "https://github.com/huggingface/datasets/pull/1866", "diff_url": "https://github.com/huggingface/datasets/pull/1866.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1866.patch", "merged_at": "2021-02-17T14:22...
true
806,388,290
1,865
Updated OPUS Open Subtitles Dataset with metadata information
closed
[]
2021-02-11T13:26:26
2021-02-19T12:38:09
2021-02-12T16:59:44
Close #1844 Problems: - I ran `python datasets-cli test datasets/open_subtitles --save_infos --all_configs`, hence the change in `dataset_infos.json`, but it appears that the metadata features have not been added for all pairs. Any idea why that might be? - Possibly related to the above, I tried doing `pip uninst...
Valahaar
https://github.com/huggingface/datasets/pull/1865
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1865", "html_url": "https://github.com/huggingface/datasets/pull/1865", "diff_url": "https://github.com/huggingface/datasets/pull/1865.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1865.patch", "merged_at": "2021-02-12T16:59...
true
806,172,843
1,864
Add Winogender Schemas
closed
[]
2021-02-11T08:18:38
2021-02-11T08:19:51
2021-02-11T08:19:51
## Adding a Dataset - **Name:** Winogender Schemas - **Description:** Winogender Schemas (inspired by Winograd Schemas) are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias in automated coreference resolution systems. - **Paper...
NielsRogge
https://github.com/huggingface/datasets/issues/1864
null
false
806,171,311
1,863
Add WikiCREM
open
[]
2021-02-11T08:16:00
2021-03-07T07:27:13
null
## Adding a Dataset - **Name:** WikiCREM - **Description:** A large unsupervised corpus for coreference resolution. - **Paper:** https://arxiv.org/abs/1905.06290 - **Github repo:**: https://github.com/vid-koci/bert-commonsense - **Data:** https://ora.ox.ac.uk/objects/uuid:c83e94bb-7584-41a1-aef9-85b0e764d9e3 - **...
NielsRogge
https://github.com/huggingface/datasets/issues/1863
null
false
805,722,293
1,862
Fix writing GPU Faiss index
closed
[]
2021-02-10T17:32:03
2021-02-10T18:17:48
2021-02-10T18:17:47
As reported by @corticalstack, there is currently an error when we try to save a faiss index on GPU. I fixed that by checking the index `getDevice()` method before calling `index_gpu_to_cpu` Close #1859
lhoestq
https://github.com/huggingface/datasets/pull/1862
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1862", "html_url": "https://github.com/huggingface/datasets/pull/1862", "diff_url": "https://github.com/huggingface/datasets/pull/1862.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1862.patch", "merged_at": "2021-02-10T18:17...
true
805,631,215
1,861
Fix Limit url
closed
[]
2021-02-10T15:44:56
2021-02-10T16:15:00
2021-02-10T16:14:59
The test.json file of the Literal-Motion-in-Text (LiMiT) dataset was removed recently on the master branch of the repo at https://github.com/ilmgut/limit_dataset This PR uses the previous commit sha to download the file instead, as suggested by @Paethon Close #1836
lhoestq
https://github.com/huggingface/datasets/pull/1861
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1861", "html_url": "https://github.com/huggingface/datasets/pull/1861", "diff_url": "https://github.com/huggingface/datasets/pull/1861.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1861.patch", "merged_at": "2021-02-10T16:14...
true
805,510,037
1,860
Add loading from the Datasets Hub + add relative paths in download manager
closed
[]
2021-02-10T13:24:11
2021-02-12T19:13:30
2021-02-12T19:13:29
With the new Datasets Hub on huggingface.co it's now possible to have a dataset repo with your own script and data. For example: https://huggingface.co/datasets/lhoestq/custom_squad/tree/main contains one script and two json files. You can load it using ```python from datasets import load_dataset d = load_data...
lhoestq
https://github.com/huggingface/datasets/pull/1860
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1860", "html_url": "https://github.com/huggingface/datasets/pull/1860", "diff_url": "https://github.com/huggingface/datasets/pull/1860.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1860.patch", "merged_at": "2021-02-12T19:13...
true
805,479,025
1,859
Error "in void don't know how to serialize this type of index" when saving index to disk when device=0 (GPU)
closed
[]
2021-02-10T12:41:00
2021-02-10T18:32:12
2021-02-10T18:17:47
Error serializing faiss index. Error as follows: `Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /home/conda/feedstock_root/build_artifacts/faiss-split_1612472484670/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index` Note: `torch.cuda.is_availabl...
corticalstack
https://github.com/huggingface/datasets/issues/1859
null
false
805,477,774
1,858
Clean config getenvs
closed
[]
2021-02-10T12:39:14
2021-02-10T15:52:30
2021-02-10T15:52:29
Following #1848 Remove double getenv calls and fix one issue with rarfile cc @albertvillanova
lhoestq
https://github.com/huggingface/datasets/pull/1858
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1858", "html_url": "https://github.com/huggingface/datasets/pull/1858", "diff_url": "https://github.com/huggingface/datasets/pull/1858.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1858.patch", "merged_at": "2021-02-10T15:52...
true
805,391,107
1,857
Unable to upload "community provided" dataset - 400 Client Error
closed
[]
2021-02-10T10:39:01
2021-08-03T05:06:13
2021-08-03T05:06:13
Hi, I'm trying to upload a dataset as described [here](https://huggingface.co/docs/datasets/v1.2.0/share_dataset.html#sharing-a-community-provided-dataset). This is what happens: ``` $ datasets-cli login $ datasets-cli upload_dataset my_dataset About to upload file /path/to/my_dataset/dataset_infos.json to S3...
mwrzalik
https://github.com/huggingface/datasets/issues/1857
null
false
805,360,200
1,856
load_dataset("amazon_polarity") NonMatchingChecksumError
closed
[]
2021-02-10T10:00:56
2022-03-15T13:55:24
2022-03-15T13:55:23
Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError. To reproduce: ``` load_dataset("amazon_polarity") ``` This will give the following error: ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback ...
yanxi0830
https://github.com/huggingface/datasets/issues/1856
null
false
805,256,579
1,855
Minor fix in the docs
closed
[]
2021-02-10T07:27:43
2021-02-10T12:33:09
2021-02-10T12:33:09
albertvillanova
https://github.com/huggingface/datasets/pull/1855
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1855", "html_url": "https://github.com/huggingface/datasets/pull/1855", "diff_url": "https://github.com/huggingface/datasets/pull/1855.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1855.patch", "merged_at": "2021-02-10T12:33...
true
805,204,397
1,854
Feature Request: Dataset.add_item
closed
[]
2021-02-10T06:06:00
2021-04-23T10:01:30
2021-04-23T10:01:30
I'm trying to integrate `huggingface/datasets` functionality into `fairseq`, which requires (afaict) being able to build a dataset through an `add_item` method, such as https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L318, as opposed to loading all the text into arrow, and then `dataset.m...
sshleifer
https://github.com/huggingface/datasets/issues/1854
null
false
804,791,166
1,853
Configure library root logger at the module level
closed
[]
2021-02-09T18:11:12
2021-02-10T12:32:34
2021-02-10T12:32:34
Configure the library root logger at the datasets.logging module level (singleton-like). By doing it this way: - we are sure configuration is done only once: module-level code is only run once - no need of a global variable - no need of a threading lock
albertvillanova
https://github.com/huggingface/datasets/pull/1853
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1853", "html_url": "https://github.com/huggingface/datasets/pull/1853", "diff_url": "https://github.com/huggingface/datasets/pull/1853.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1853.patch", "merged_at": "2021-02-10T12:32...
true
804,633,033
1,852
Add Arabic Speech Corpus
closed
[]
2021-02-09T15:02:26
2021-02-11T10:18:55
2021-02-11T10:18:55
zaidalyafeai
https://github.com/huggingface/datasets/pull/1852
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1852", "html_url": "https://github.com/huggingface/datasets/pull/1852", "diff_url": "https://github.com/huggingface/datasets/pull/1852.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1852.patch", "merged_at": "2021-02-11T10:18...
true
804,523,174
1,851
set bert_score version dependency
closed
[]
2021-02-09T12:51:07
2021-02-09T14:21:48
2021-02-09T14:21:48
Set the bert_score version in requirements since previous versions of bert_score will fail with datasets (closes #843)
pvl
https://github.com/huggingface/datasets/pull/1851
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1851", "html_url": "https://github.com/huggingface/datasets/pull/1851", "diff_url": "https://github.com/huggingface/datasets/pull/1851.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1851.patch", "merged_at": "2021-02-09T14:21...
true
804,412,249
1,850
Add cord 19 dataset
closed
[]
2021-02-09T10:22:08
2021-02-09T15:16:26
2021-02-09T15:16:26
Initial version only reading the metadata in CSV. ### Checklist: - [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template - [x] Fill the _DESCRIPTION and _CITATION variables - [x] Implement _infos(), _split_generators() and _generate_examples() - [x] Make sure that the BUILDER_CONFIG...
ggdupont
https://github.com/huggingface/datasets/pull/1850
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1850", "html_url": "https://github.com/huggingface/datasets/pull/1850", "diff_url": "https://github.com/huggingface/datasets/pull/1850.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1850.patch", "merged_at": "2021-02-09T15:16...
true
804,292,971
1,849
Add TIMIT
closed
[]
2021-02-09T07:29:41
2021-03-15T05:59:37
2021-03-15T05:59:37
## Adding a Dataset - **Name:** *TIMIT* - **Description:** *The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems* - **Paper:** *Homepage*: http://groups.inf.ed.ac.uk...
patrickvonplaten
https://github.com/huggingface/datasets/issues/1849
null
false
803,826,506
1,848
Refactoring: Create config module
closed
[]
2021-02-08T18:43:51
2021-02-10T12:29:35
2021-02-10T12:29:35
Refactor configuration settings into their own module. This could be seen as a Pythonic singleton-like approach. Eventually a config instance class might be created.
albertvillanova
https://github.com/huggingface/datasets/pull/1848
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1848", "html_url": "https://github.com/huggingface/datasets/pull/1848", "diff_url": "https://github.com/huggingface/datasets/pull/1848.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1848.patch", "merged_at": "2021-02-10T12:29...
true
803,824,694
1,847
[Metrics] Add word error metric metric
closed
[]
2021-02-08T18:41:15
2021-02-09T17:53:21
2021-02-09T17:53:21
This PR adds the word error rate metric to datasets. WER: https://en.wikipedia.org/wiki/Word_error_rate for speech recognition. WER is the main metric used in ASR. `jiwer` seems to be a solid library (see https://github.com/asteroid-team/asteroid/pull/329#discussion_r525158939)
patrickvonplaten
https://github.com/huggingface/datasets/pull/1847
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1847", "html_url": "https://github.com/huggingface/datasets/pull/1847", "diff_url": "https://github.com/huggingface/datasets/pull/1847.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1847.patch", "merged_at": "2021-02-09T17:53...
true
803,806,380
1,846
Make DownloadManager downloaded/extracted paths accessible
closed
[]
2021-02-08T18:14:42
2021-02-25T14:10:18
2021-02-25T14:10:18
Make the file paths downloaded/extracted by DownloadManager accessible. Close #1831. The approach: - I set these paths as DownloadManager attributes: these are DownloadManager's concerns - To access them from DatasetBuilder, I set the DownloadManager instance as a DatasetBuilder attribute: object composition
albertvillanova
https://github.com/huggingface/datasets/pull/1846
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1846", "html_url": "https://github.com/huggingface/datasets/pull/1846", "diff_url": "https://github.com/huggingface/datasets/pull/1846.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1846.patch", "merged_at": "2021-02-25T14:10...
true
803,714,493
1,845
Enable logging propagation and remove logging handler
closed
[]
2021-02-08T16:22:13
2021-02-09T14:22:38
2021-02-09T14:22:37
We used to have logging propagation disabled because of this issue: https://github.com/tensorflow/tensorflow/issues/26691 But since it's now fixed we should re-enable it. This is important to keep the default logging behavior for users, and propagation is also needed for pytest fixtures as asked in #1826 I also re...
lhoestq
https://github.com/huggingface/datasets/pull/1845
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1845", "html_url": "https://github.com/huggingface/datasets/pull/1845", "diff_url": "https://github.com/huggingface/datasets/pull/1845.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1845.patch", "merged_at": "2021-02-09T14:22...
true
803,588,125
1,844
Update Open Subtitles corpus with original sentence IDs
closed
[]
2021-02-08T13:55:13
2021-02-12T17:38:58
2021-02-12T17:38:58
Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles). I can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media id, subtitle file id and sentence id), therefore somewhat a...
Valahaar
https://github.com/huggingface/datasets/issues/1844
null
false
803,565,393
1,843
MustC Speech Translation
open
[]
2021-02-08T13:27:45
2025-08-25T09:01:54
null
## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google.com/view/iwslt-evaluation-2...
patrickvonplaten
https://github.com/huggingface/datasets/issues/1843
null
false
803,563,149
1,842
Add AMI Corpus
closed
[]
2021-02-08T13:25:00
2023-02-28T16:29:22
2023-02-28T16:29:22
## Adding a Dataset - **Name:** *AMI* - **Description:** *The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. For a gentle introduction to the corpus, see the corpus overview. To access the data, follow the directions given there. Around two-thirds of the data has been elic...
patrickvonplaten
https://github.com/huggingface/datasets/issues/1842
null
false
803,561,123
1,841
Add ljspeech
closed
[]
2021-02-08T13:22:26
2021-03-15T05:59:02
2021-03-15T05:59:02
## Adding a Dataset - **Name:** *ljspeech* - **Description:** *This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of ap...
patrickvonplaten
https://github.com/huggingface/datasets/issues/1841
null
false
803,560,039
1,840
Add common voice
closed
[]
2021-02-08T13:21:05
2022-03-20T15:23:40
2021-03-15T05:56:21
## Adding a Dataset - **Name:** *common voice* - **Description:** *Mozilla Common Voice Dataset* - **Paper:** Homepage: https://voice.mozilla.org/en/datasets - **Data:** https://voice.mozilla.org/en/datasets - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/dat...
patrickvonplaten
https://github.com/huggingface/datasets/issues/1840
null
false
803,559,164
1,839
Add Voxforge
open
[]
2021-02-08T13:19:56
2021-02-08T13:28:31
null
## Adding a Dataset - **Name:** *voxforge* - **Description:** *VoxForge is a language classification dataset. It consists of audio clips submitted by users to the website. In this release, data from 6 languages is collected - English, Spanish, French, German, Russian, and Italian. Since the website is constant...
patrickvonplaten
https://github.com/huggingface/datasets/issues/1839
null
false
803,557,521
1,838
Add tedlium
closed
[]
2021-02-08T13:17:52
2022-10-04T14:34:12
2022-10-04T14:34:12
## Adding a Dataset - **Name:** *tedlium* - **Description:** *The TED-LIUM 1-3 corpus is English-language TED talks, with transcriptions, sampled at 16kHz. It contains about 118 hours of speech.* - **Paper:** Homepage: http://www.openslr.org/7/, https://lium.univ-lemans.fr/en/ted-lium2/ &, https://www.openslr.org/51...
patrickvonplaten
https://github.com/huggingface/datasets/issues/1838
null
false