| column | dtype | values |
| --- | --- | --- |
| id | int64 | 599M to 3.29B |
| url | string | lengths 58 to 61 |
| html_url | string | lengths 46 to 51 |
| number | int64 | 1 to 7.72k |
| title | string | lengths 1 to 290 |
| state | string | 2 classes |
| comments | int64 | 0 to 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 to 2025-08-01 05:15:45 |
| user_login | string | lengths 3 to 26 |
| labels | list | lengths 0 to 4 |
| body | string | lengths 0 to 228k |
| is_pull_request | bool | 2 classes |
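The table above summarizes the column schema and value ranges of the preview; the records themselves follow after the sketch below. As a minimal example of working with this schema, assuming the records are published as a Hugging Face dataset (the repo id `user/github-issues` is a placeholder, not a real identifier):

```python
# A minimal sketch, assuming the preview rows are available as a
# Hugging Face dataset; "user/github-issues" is a placeholder repo id.
from datasets import load_dataset

issues = load_dataset("user/github-issues", split="train")
print(issues.features)     # column names and dtypes, matching the schema table
print(issues[0]["title"])  # first record's title, if rows keep the preview order
```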
| id | url | html_url | number | title | state | comments | created_at | updated_at | closed_at | user_login | labels | body | is_pull_request |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 806,172,843 | https://api.github.com/repos/huggingface/datasets/issues/1864 | https://github.com/huggingface/datasets/issues/1864 | 1,864 | Add Winogender Schemas | closed | 1 | 2021-02-11T08:18:38 | 2021-02-11T08:19:51 | 2021-02-11T08:19:51 | NielsRogge | [ "dataset request" ] | ## Adding a Dataset - **Name:** Winogender Schemas - **Description:** Winogender Schemas (inspired by Winograd Schemas) are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias in automated coreference resolution systems. - **Paper... | false |
| 806,171,311 | https://api.github.com/repos/huggingface/datasets/issues/1863 | https://github.com/huggingface/datasets/issues/1863 | 1,863 | Add WikiCREM | open | 2 | 2021-02-11T08:16:00 | 2021-03-07T07:27:13 | null | NielsRogge | [ "dataset request" ] | ## Adding a Dataset - **Name:** WikiCREM - **Description:** A large unsupervised corpus for coreference resolution. - **Paper:** https://arxiv.org/abs/1905.06290 - **Github repo:**: https://github.com/vid-koci/bert-commonsense - **Data:** https://ora.ox.ac.uk/objects/uuid:c83e94bb-7584-41a1-aef9-85b0e764d9e3 - **... | false |
| 805,722,293 | https://api.github.com/repos/huggingface/datasets/issues/1862 | https://github.com/huggingface/datasets/pull/1862 | 1,862 | Fix writing GPU Faiss index | closed | 0 | 2021-02-10T17:32:03 | 2021-02-10T18:17:48 | 2021-02-10T18:17:47 | lhoestq | [] | As reported in by @corticalstack there is currently an error when we try to save a faiss index on GPU. I fixed that by checking the index `getDevice()` method before calling `index_gpu_to_cpu` Close #1859 | true |
| 805,631,215 | https://api.github.com/repos/huggingface/datasets/issues/1861 | https://github.com/huggingface/datasets/pull/1861 | 1,861 | Fix Limit url | closed | 0 | 2021-02-10T15:44:56 | 2021-02-10T16:15:00 | 2021-02-10T16:14:59 | lhoestq | [] | The test.json file of the Literal-Motion-in-Text (LiMiT) dataset was removed recently on the master branch of the repo at https://github.com/ilmgut/limit_dataset This PR uses the previous commit sha to download the file instead, as suggested by @Paethon Close #1836 | true |
| 805,510,037 | https://api.github.com/repos/huggingface/datasets/issues/1860 | https://github.com/huggingface/datasets/pull/1860 | 1,860 | Add loading from the Datasets Hub + add relative paths in download manager | closed | 2 | 2021-02-10T13:24:11 | 2021-02-12T19:13:30 | 2021-02-12T19:13:29 | lhoestq | [] | With the new Datasets Hub on huggingface.co it's now possible to have a dataset repo with your own script and data. For example: https://huggingface.co/datasets/lhoestq/custom_squad/tree/main contains one script and two json files. You can load it using ```python from datasets import load_dataset d = load_data... | true |
| 805,479,025 | https://api.github.com/repos/huggingface/datasets/issues/1859 | https://github.com/huggingface/datasets/issues/1859 | 1,859 | Error "in void don't know how to serialize this type of index" when saving index to disk when device=0 (GPU) | closed | 3 | 2021-02-10T12:41:00 | 2021-02-10T18:32:12 | 2021-02-10T18:17:47 | corticalstack | [] | Error serializing faiss index. Error as follows: `Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /home/conda/feedstock_root/build_artifacts/faiss-split_1612472484670/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index` Note: `torch.cuda.is_availabl... | false |
| 805,477,774 | https://api.github.com/repos/huggingface/datasets/issues/1858 | https://github.com/huggingface/datasets/pull/1858 | 1,858 | Clean config getenvs | closed | 0 | 2021-02-10T12:39:14 | 2021-02-10T15:52:30 | 2021-02-10T15:52:29 | lhoestq | [] | Following #1848 Remove double getenv calls and fix one issue with rarfile cc @albertvillanova | true |
| 805,391,107 | https://api.github.com/repos/huggingface/datasets/issues/1857 | https://github.com/huggingface/datasets/issues/1857 | 1,857 | Unable to upload "community provided" dataset - 400 Client Error | closed | 1 | 2021-02-10T10:39:01 | 2021-08-03T05:06:13 | 2021-08-03T05:06:13 | mwrzalik | [] | Hi, i'm trying to a upload a dataset as described [here](https://huggingface.co/docs/datasets/v1.2.0/share_dataset.html#sharing-a-community-provided-dataset). This is what happens: ``` $ datasets-cli login $ datasets-cli upload_dataset my_dataset About to upload file /path/to/my_dataset/dataset_infos.json to S3... | false |
| 805,360,200 | https://api.github.com/repos/huggingface/datasets/issues/1856 | https://github.com/huggingface/datasets/issues/1856 | 1,856 | load_dataset("amazon_polarity") NonMatchingChecksumError | closed | 12 | 2021-02-10T10:00:56 | 2022-03-15T13:55:24 | 2022-03-15T13:55:23 | yanxi0830 | [] | Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError. To reproduce: ``` load_dataset("amazon_polarity") ``` This will give the following error: ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback ... | false |
| 805,256,579 | https://api.github.com/repos/huggingface/datasets/issues/1855 | https://github.com/huggingface/datasets/pull/1855 | 1,855 | Minor fix in the docs | closed | 0 | 2021-02-10T07:27:43 | 2021-02-10T12:33:09 | 2021-02-10T12:33:09 | albertvillanova | [] | | true |
| 805,204,397 | https://api.github.com/repos/huggingface/datasets/issues/1854 | https://github.com/huggingface/datasets/issues/1854 | 1,854 | Feature Request: Dataset.add_item | closed | 3 | 2021-02-10T06:06:00 | 2021-04-23T10:01:30 | 2021-04-23T10:01:30 | sshleifer | [ "enhancement" ] | I'm trying to integrate `huggingface/datasets` functionality into `fairseq`, which requires (afaict) being able to build a dataset through an `add_item` method, such as https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L318, as opposed to loading all the text into arrow, and then `dataset.m... | false |
| 804,791,166 | https://api.github.com/repos/huggingface/datasets/issues/1853 | https://github.com/huggingface/datasets/pull/1853 | 1,853 | Configure library root logger at the module level | closed | 0 | 2021-02-09T18:11:12 | 2021-02-10T12:32:34 | 2021-02-10T12:32:34 | albertvillanova | [] | Configure library root logger at the datasets.logging module level (singleton-like). By doing it this way: - we are sure configuration is done only once: module level code is only runned once - no need of global variable - no need of threading lock | true |
| 804,633,033 | https://api.github.com/repos/huggingface/datasets/issues/1852 | https://github.com/huggingface/datasets/pull/1852 | 1,852 | Add Arabic Speech Corpus | closed | 0 | 2021-02-09T15:02:26 | 2021-02-11T10:18:55 | 2021-02-11T10:18:55 | zaidalyafeai | [] | | true |
| 804,523,174 | https://api.github.com/repos/huggingface/datasets/issues/1851 | https://github.com/huggingface/datasets/pull/1851 | 1,851 | set bert_score version dependency | closed | 0 | 2021-02-09T12:51:07 | 2021-02-09T14:21:48 | 2021-02-09T14:21:48 | pvl | [] | Set the bert_score version in requirements since previous versions of bert_score will fail with datasets (closes #843) | true |
| 804,412,249 | https://api.github.com/repos/huggingface/datasets/issues/1850 | https://github.com/huggingface/datasets/pull/1850 | 1,850 | Add cord 19 dataset | closed | 4 | 2021-02-09T10:22:08 | 2021-02-09T15:16:26 | 2021-02-09T15:16:26 | ggdupont | [] | Initial version only reading the metadata in CSV. ### Checklist: - [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template - [x] Fill the _DESCRIPTION and _CITATION variables - [x] Implement _infos(), _split_generators() and _generate_examples() - [x] Make sure that the BUILDER_CONFIG... | true |
| 804,292,971 | https://api.github.com/repos/huggingface/datasets/issues/1849 | https://github.com/huggingface/datasets/issues/1849 | 1,849 | Add TIMIT | closed | 3 | 2021-02-09T07:29:41 | 2021-03-15T05:59:37 | 2021-03-15T05:59:37 | patrickvonplaten | [ "dataset request", "speech" ] | ## Adding a Dataset - **Name:** *TIMIT* - **Description:** *The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems* - **Paper:** *Homepage*: http://groups.inf.ed.ac.uk... | false |
| 803,826,506 | https://api.github.com/repos/huggingface/datasets/issues/1848 | https://github.com/huggingface/datasets/pull/1848 | 1,848 | Refactoring: Create config module | closed | 0 | 2021-02-08T18:43:51 | 2021-02-10T12:29:35 | 2021-02-10T12:29:35 | albertvillanova | [] | Refactorize configuration settings into their own module. This could be seen as a Pythonic singleton-like approach. Eventually a config instance class might be created. | true |
| 803,824,694 | https://api.github.com/repos/huggingface/datasets/issues/1847 | https://github.com/huggingface/datasets/pull/1847 | 1,847 | [Metrics] Add word error metric metric | closed | 1 | 2021-02-08T18:41:15 | 2021-02-09T17:53:21 | 2021-02-09T17:53:21 | patrickvonplaten | [] | This PR adds the word error rate metric to datasets. WER: https://en.wikipedia.org/wiki/Word_error_rate for speech recognition. WER is the main metric used in ASR. `jiwer` seems to be a solid library (see https://github.com/asteroid-team/asteroid/pull/329#discussion_r525158939) | true |
| 803,806,380 | https://api.github.com/repos/huggingface/datasets/issues/1846 | https://github.com/huggingface/datasets/pull/1846 | 1,846 | Make DownloadManager downloaded/extracted paths accessible | closed | 3 | 2021-02-08T18:14:42 | 2021-02-25T14:10:18 | 2021-02-25T14:10:18 | albertvillanova | [] | Make accessible the file paths downloaded/extracted by DownloadManager. Close #1831. The approach: - I set these paths as DownloadManager attributes: these are DownloadManager's concerns - To access to these from DatasetBuilder, I set the DownloadManager instance as DatasetBuilder attribute: object composition | true |
| 803,714,493 | https://api.github.com/repos/huggingface/datasets/issues/1845 | https://github.com/huggingface/datasets/pull/1845 | 1,845 | Enable logging propagation and remove logging handler | closed | 1 | 2021-02-08T16:22:13 | 2021-02-09T14:22:38 | 2021-02-09T14:22:37 | lhoestq | [] | We used to have logging propagation disabled because of this issue: https://github.com/tensorflow/tensorflow/issues/26691 But since it's now fixed we should re-enable it. This is important to keep the default logging behavior for users, and propagation is also needed for pytest fixtures as asked in #1826 I also re... | true |
| 803,588,125 | https://api.github.com/repos/huggingface/datasets/issues/1844 | https://github.com/huggingface/datasets/issues/1844 | 1,844 | Update Open Subtitles corpus with original sentence IDs | closed | 6 | 2021-02-08T13:55:13 | 2021-02-12T17:38:58 | 2021-02-12T17:38:58 | Valahaar | [ "good first issue" ] | Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles). I can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media id, subtitle file id and sentence id), therefore somewhat a... | false |
| 803,565,393 | https://api.github.com/repos/huggingface/datasets/issues/1843 | https://github.com/huggingface/datasets/issues/1843 | 1,843 | MustC Speech Translation | open | 18 | 2021-02-08T13:27:45 | 2021-05-14T14:53:34 | null | patrickvonplaten | [ "dataset request", "speech" ] | ## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google.com/view/iwslt-evaluation-2... | false |
| 803,563,149 | https://api.github.com/repos/huggingface/datasets/issues/1842 | https://github.com/huggingface/datasets/issues/1842 | 1,842 | Add AMI Corpus | closed | 3 | 2021-02-08T13:25:00 | 2023-02-28T16:29:22 | 2023-02-28T16:29:22 | patrickvonplaten | [ "dataset request", "speech" ] | ## Adding a Dataset - **Name:** *AMI* - **Description:** *The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. For a gentle introduction to the corpus, see the corpus overview. To access the data, follow the directions given there. Around two-thirds of the data has been elic... | false |
| 803,561,123 | https://api.github.com/repos/huggingface/datasets/issues/1841 | https://github.com/huggingface/datasets/issues/1841 | 1,841 | Add ljspeech | closed | 0 | 2021-02-08T13:22:26 | 2021-03-15T05:59:02 | 2021-03-15T05:59:02 | patrickvonplaten | [ "dataset request", "speech" ] | ## Adding a Dataset - **Name:** *ljspeech* - **Description:** *This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of ap... | false |
| 803,560,039 | https://api.github.com/repos/huggingface/datasets/issues/1840 | https://github.com/huggingface/datasets/issues/1840 | 1,840 | Add common voice | closed | 11 | 2021-02-08T13:21:05 | 2022-03-20T15:23:40 | 2021-03-15T05:56:21 | patrickvonplaten | [ "dataset request", "speech" ] | ## Adding a Dataset - **Name:** *common voice* - **Description:** *Mozilla Common Voice Dataset* - **Paper:** Homepage: https://voice.mozilla.org/en/datasets - **Data:** https://voice.mozilla.org/en/datasets - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/dat... | false |
| 803,559,164 | https://api.github.com/repos/huggingface/datasets/issues/1839 | https://github.com/huggingface/datasets/issues/1839 | 1,839 | Add Voxforge | open | 0 | 2021-02-08T13:19:56 | 2021-02-08T13:28:31 | null | patrickvonplaten | [ "dataset request", "speech" ] | ## Adding a Dataset - **Name:** *voxforge* - **Description:** *VoxForge is a language classification dataset. It consists of user submitted audio clips submitted to the website. In this release, data from 6 languages is collected - English, Spanish, French, German, Russian, and Italian. Since the website is constant... | false |
| 803,557,521 | https://api.github.com/repos/huggingface/datasets/issues/1838 | https://github.com/huggingface/datasets/issues/1838 | 1,838 | Add tedlium | closed | 2 | 2021-02-08T13:17:52 | 2022-10-04T14:34:12 | 2022-10-04T14:34:12 | patrickvonplaten | [ "dataset request", "speech" ] | ## Adding a Dataset - **Name:** *tedlium* - **Description:** *The TED-LIUM 1-3 corpus is English-language TED talks, with transcriptions, sampled at 16kHz. It contains about 118 hours of speech.* - **Paper:** Homepage: http://www.openslr.org/7/, https://lium.univ-lemans.fr/en/ted-lium2/ &, https://www.openslr.org/51... | false |
| 803,555,650 | https://api.github.com/repos/huggingface/datasets/issues/1837 | https://github.com/huggingface/datasets/issues/1837 | 1,837 | Add VCTK | closed | 2 | 2021-02-08T13:15:28 | 2021-12-28T15:05:08 | 2021-12-28T15:05:08 | patrickvonplaten | [ "dataset request", "speech" ] | ## Adding a Dataset - **Name:** *VCTK* - **Description:** *This CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads out about 400 sentences, which were selected from a newspaper, the rainbow passage and an elicitation paragraph used for the speech accent arch... | false |
| 803,531,837 | https://api.github.com/repos/huggingface/datasets/issues/1836 | https://github.com/huggingface/datasets/issues/1836 | 1,836 | test.json has been removed from the limit dataset repo (breaks dataset) | closed | 1 | 2021-02-08T12:45:53 | 2021-02-10T16:14:58 | 2021-02-10T16:14:58 | Paethon | [ "dataset bug" ] | https://github.com/huggingface/datasets/blob/16042b233dbff2a7585110134e969204c69322c3/datasets/limit/limit.py#L51 The URL is not valid anymore since test.json has been removed in master for some reason. Directly referencing the last commit works: `https://raw.githubusercontent.com/ilmgut/limit_dataset/0707d3989cd... | false |
| 803,524,790 | https://api.github.com/repos/huggingface/datasets/issues/1835 | https://github.com/huggingface/datasets/issues/1835 | 1,835 | Add CHiME4 dataset | open | 5 | 2021-02-08T12:36:38 | 2025-01-26T16:18:59 | null | patrickvonplaten | [ "dataset request", "speech" ] | ## Adding a Dataset - **Name:** Chime4 - **Description:** Chime4 is a dataset for automatic speech recognition. It is especially useful for evaluating models in a noisy environment and for multi-channel ASR - **Paper:** Dataset comes from a channel: http://spandh.dcs.shef.ac.uk/chime_challenge/CHiME4/ . Results pape... | false |
| 803,517,094 | https://api.github.com/repos/huggingface/datasets/issues/1834 | https://github.com/huggingface/datasets/pull/1834 | 1,834 | Fixes base_url of limit dataset | closed | 1 | 2021-02-08T12:26:35 | 2021-02-08T12:42:50 | 2021-02-08T12:42:50 | Paethon | [] | `test.json` is not available in the master branch of the repository anymore. Linking to a specific commit. | true |
| 803,120,978 | https://api.github.com/repos/huggingface/datasets/issues/1833 | https://github.com/huggingface/datasets/pull/1833 | 1,833 | Add OSCAR dataset card | closed | 10 | 2021-02-08T01:39:49 | 2021-02-12T14:09:25 | 2021-02-12T14:08:24 | pjox | [] | I added more information and completed the dataset card for OSCAR which was started by @lhoestq in his previous [PR](https://github.com/huggingface/datasets/pull/1824). | true |
| 802,880,897 | https://api.github.com/repos/huggingface/datasets/issues/1832 | https://github.com/huggingface/datasets/issues/1832 | 1,832 | Looks like nokogumbo is up-to-date now, so this is no longer needed. | closed | 0 | 2021-02-07T06:52:07 | 2021-02-08T17:27:29 | 2021-02-08T17:27:29 | JimmyJim1 | [] | Looks like nokogumbo is up-to-date now, so this is no longer needed. __Originally posted by @dependabot in https://github.com/discourse/discourse/pull/11373#issuecomment-738993432__ | false |
| 802,868,854 | https://api.github.com/repos/huggingface/datasets/issues/1831 | https://github.com/huggingface/datasets/issues/1831 | 1,831 | Some question about raw dataset download info in the project . | closed | 4 | 2021-02-07T05:33:36 | 2021-02-25T14:10:18 | 2021-02-25T14:10:18 | svjack | [] | Hi , i review the code in https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py in the _split_generators function is the truly logic of download raw datasets with dl_manager and use Conll2003 cls by use import_main_class in load_dataset function My question is that , with this logic i... | false |
| 802,790,075 | https://api.github.com/repos/huggingface/datasets/issues/1830 | https://github.com/huggingface/datasets/issues/1830 | 1,830 | using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? | open | 9 | 2021-02-06T21:00:26 | 2021-02-24T21:56:14 | null | wumpusman | [] | This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: ```` def save_tokenizer(original_tokenizer,text,path="simpledata/tokenizer"): words_u... | false |
| 802,693,600 | https://api.github.com/repos/huggingface/datasets/issues/1829 | https://github.com/huggingface/datasets/pull/1829 | 1,829 | Add Tweet Eval Dataset | closed | 0 | 2021-02-06T12:36:25 | 2021-02-08T13:17:54 | 2021-02-08T13:17:53 | gchhablani | [] | Closes Draft PR #1407. Notes: 1. I have excluded `mapping.txt` from the dataset at it only contained the name mappings, which are already present in the ClassLabels. 2. I have also exluded the textual names for the emojis mentioned in the [mapping](https://github.com/cardiffnlp/tweeteval/blob/main/datasets/emoji/... | true |
| 802,449,234 | https://api.github.com/repos/huggingface/datasets/issues/1828 | https://github.com/huggingface/datasets/pull/1828 | 1,828 | Add CelebA Dataset | closed | 9 | 2021-02-05T20:20:55 | 2021-02-18T14:17:07 | 2021-02-18T14:17:07 | gchhablani | [] | Trying to add CelebA Dataset. Need help with testing. Loading examples takes a lot of time so I am unable to generate the `dataset_infos.json` and unable to test. Also, need help with creating `dummy_data.zip`. Additionally, trying to load a few examples using `load_dataset('./datasets/celeb_a',split='train[10:20]... | true |
| 802,353,974 | https://api.github.com/repos/huggingface/datasets/issues/1827 | https://github.com/huggingface/datasets/issues/1827 | 1,827 | Regarding On-the-fly Data Loading | closed | 4 | 2021-02-05T17:43:48 | 2021-02-18T13:55:16 | 2021-02-18T13:55:16 | gchhablani | [] | Hi, I was wondering if it is possible to load images/texts as a batch during the training process, without loading the entire dataset on the RAM at any given point. Thanks, Gunjan | false |
| 802,074,744 | https://api.github.com/repos/huggingface/datasets/issues/1826 | https://github.com/huggingface/datasets/pull/1826 | 1,826 | Print error message with filename when malformed CSV | closed | 0 | 2021-02-05T11:07:59 | 2021-02-09T17:39:27 | 2021-02-09T17:39:27 | albertvillanova | [] | Print error message specifying filename when malformed CSV file. Close #1821 | true |
| 802,073,925 | https://api.github.com/repos/huggingface/datasets/issues/1825 | https://github.com/huggingface/datasets/issues/1825 | 1,825 | Datasets library not suitable for huge text datasets. | closed | 5 | 2021-02-05T11:06:50 | 2021-03-30T14:04:01 | 2021-03-16T09:44:00 | avacaondata | [] | Hi, I'm trying to use datasets library to load a 187GB dataset of pure text, with the intention of building a Language Model. The problem is that from the 187GB it goes to some TB when processed by Datasets. First of all, I think the pre-tokenizing step (with tokenizer.map()) is not really thought for datasets this ... | false |
| 802,048,281 | https://api.github.com/repos/huggingface/datasets/issues/1824 | https://github.com/huggingface/datasets/pull/1824 | 1,824 | Add OSCAR dataset card | closed | 3 | 2021-02-05T10:30:26 | 2021-05-05T18:24:14 | 2021-02-08T11:30:33 | lhoestq | [] | I started adding the dataset card for OSCAR ! For now it's just basic info for all the different configurations in `Dataset Structure`. In particular the Data Splits section tells how may samples there are for each config. The Data Instances section show an example for each config, and it also shows the size in MB.... | true |
| 802,042,181 | https://api.github.com/repos/huggingface/datasets/issues/1823 | https://github.com/huggingface/datasets/pull/1823 | 1,823 | Add FewRel Dataset | closed | 11 | 2021-02-05T10:22:03 | 2021-03-01T11:56:20 | 2021-03-01T10:21:39 | gchhablani | [] | Hi, This PR closes this [Card](https://github.com/huggingface/datasets/projects/1#card-53285184) and Issue #1757. I wasn't sure how to add `pid2name` along with the dataset so I added it as a separate configuration. For each (head, tail, tokens) triplet, I have created one example. I have added the dictionary key... | true |
| 802,003,835 | https://api.github.com/repos/huggingface/datasets/issues/1822 | https://github.com/huggingface/datasets/pull/1822 | 1,822 | Add Hindi Discourse Analysis Natural Language Inference Dataset | closed | 2 | 2021-02-05T09:30:54 | 2021-02-15T09:57:39 | 2021-02-15T09:57:39 | avinsit123 | [] | # Dataset Card for Hindi Discourse Analysis Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#dat... | true |
| 801,747,647 | https://api.github.com/repos/huggingface/datasets/issues/1821 | https://github.com/huggingface/datasets/issues/1821 | 1,821 | Provide better exception message when one of many files results in an exception | closed | 1 | 2021-02-05T00:49:03 | 2021-02-09T17:39:27 | 2021-02-09T17:39:27 | david-waterworth | [] | I find when I process many files, i.e. ``` train_files = glob.glob('rain*.csv') validation_files = glob.glob(validation*.csv') datasets = load_dataset("csv", data_files=dict(train=train_files, validation=validation_files)) ``` I sometimes encounter an error due to one of the files being misformed (i.e. no dat... | false |
| 801,529,936 | https://api.github.com/repos/huggingface/datasets/issues/1820 | https://github.com/huggingface/datasets/pull/1820 | 1,820 | Add metrics usage examples and tests | closed | 0 | 2021-02-04T18:23:50 | 2021-02-05T14:00:01 | 2021-02-05T14:00:00 | lhoestq | [] | All metrics finally have usage examples and proper fast + slow tests :) I added examples of usage for every metric, and I use doctest to make sure they all work as expected. For "slow" metrics such as bert_score or bleurt which require to download + run a transformer model, the download + forward pass are only do... | true |
| 801,448,670 | https://api.github.com/repos/huggingface/datasets/issues/1819 | https://github.com/huggingface/datasets/pull/1819 | 1,819 | Fixed spelling `S3Fileystem` to `S3FileSystem` | closed | 0 | 2021-02-04T16:36:46 | 2021-02-04T16:52:27 | 2021-02-04T16:52:26 | philschmid | [] | Fixed documentation spelling errors. Wrong `S3Fileystem` Right `S3FileSystem` | true |
| 800,958,776 | https://api.github.com/repos/huggingface/datasets/issues/1818 | https://github.com/huggingface/datasets/issues/1818 | 1,818 | Loading local dataset raise requests.exceptions.ConnectTimeout | closed | 1 | 2021-02-04T05:55:23 | 2022-06-01T15:38:42 | 2022-06-01T15:38:42 | Alxe1 | [] | Load local dataset: ``` dataset = load_dataset('json', data_files=["../../data/json.json"]) train = dataset["train"] print(train.features) train1 = train.map(lambda x: {"labels": 1}) print(train1[:2]) ``` but it raised requests.exceptions.ConnectTimeout: ``` /Users/littlely/myvirtual/tf2/bin/python3.7 /Us... | false |
| 800,870,652 | https://api.github.com/repos/huggingface/datasets/issues/1817 | https://github.com/huggingface/datasets/issues/1817 | 1,817 | pyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 599 but got length 1500 | closed | 2 | 2021-02-04T02:30:23 | 2022-10-05T12:42:57 | 2022-10-05T12:42:57 | LuCeHe | [] | I am trying to preprocess any dataset in this package with GPT-2 tokenizer, so I need to structure the datasets as long sequences of text without padding. I've been following a couple of your tutorials and here you can find the script that is failing right at the end https://github.com/LuCeHe/GenericTools/blob/maste... | false |
| 800,660,995 | https://api.github.com/repos/huggingface/datasets/issues/1816 | https://github.com/huggingface/datasets/pull/1816 | 1,816 | Doc2dial rc update to latest version | closed | 1 | 2021-02-03T20:08:54 | 2021-02-15T15:15:24 | 2021-02-15T15:04:33 | songfeng | [] | | true |
| 800,610,017 | https://api.github.com/repos/huggingface/datasets/issues/1815 | https://github.com/huggingface/datasets/pull/1815 | 1,815 | Add CCAligned Multilingual Dataset | closed | 7 | 2021-02-03T18:59:52 | 2021-03-01T12:33:03 | 2021-03-01T10:36:21 | gchhablani | [] | Hello, I'm trying to add [CCAligned Multilingual Dataset](http://www.statmt.org/cc-aligned/). This has the potential to close #1756. This dataset has two types - Document-Pairs, and Sentence-Pairs. The datasets are huge, so I won't be able to test all of them. At the same time, a user might only want to downlo... | true |
| 800,516,236 | https://api.github.com/repos/huggingface/datasets/issues/1814 | https://github.com/huggingface/datasets/pull/1814 | 1,814 | Add Freebase QA Dataset | closed | 1 | 2021-02-03T16:57:49 | 2021-02-04T19:47:51 | 2021-02-04T16:21:48 | gchhablani | [] | Closes PR #1435. Fixed issues with PR #1809. Requesting @lhoestq to review. | true |
| 800,435,973 | https://api.github.com/repos/huggingface/datasets/issues/1813 | https://github.com/huggingface/datasets/pull/1813 | 1,813 | Support future datasets | closed | 0 | 2021-02-03T15:26:49 | 2021-02-05T10:33:48 | 2021-02-05T10:33:47 | lhoestq | [] | If a dataset is available at the version of the local installation of `datasets` (e.g. 1.2.0), then loading this dataset means loading the script at this version. However when trying to load a dataset that is only available on master, currently users have to specify `script_version="master"` in `load_dataset` to mak... | true |
| 799,379,178 | https://api.github.com/repos/huggingface/datasets/issues/1812 | https://github.com/huggingface/datasets/pull/1812 | 1,812 | Add CIFAR-100 Dataset | closed | 2 | 2021-02-02T15:22:59 | 2021-02-08T11:10:18 | 2021-02-08T10:39:06 | gchhablani | [] | Adding CIFAR-100 Dataset. | true |
| 799,211,060 | https://api.github.com/repos/huggingface/datasets/issues/1811 | https://github.com/huggingface/datasets/issues/1811 | 1,811 | Unable to add Multi-label Datasets | closed | 4 | 2021-02-02T11:50:56 | 2021-02-18T14:16:31 | 2021-02-18T14:16:31 | gchhablani | [] | I am trying to add [CIFAR-100](https://www.cs.toronto.edu/~kriz/cifar.html) dataset. The dataset contains two labels per image - `fine label` and `coarse label`. Using just one label in supervised keys as `supervised_keys=("img", "fine_label")` raises no issue. But trying `supervised_keys=("img", "fine_label","coarse... | false |
| 799,168,650 | https://api.github.com/repos/huggingface/datasets/issues/1810 | https://github.com/huggingface/datasets/issues/1810 | 1,810 | Add Hateful Memes Dataset | open | 4 | 2021-02-02T10:53:59 | 2021-12-08T12:03:59 | null | gchhablani | [ "dataset request", "vision" ] | ## Add Hateful Memes Dataset - **Name:** Hateful Memes - **Description:** [https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set]( https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set) - **Paper:** [https://arxiv.org/pdf/2005.04790.pdf](https://arxiv.org/pdf/2005.04790.pdf) - **Data:** [Thi... | false |
| 799,059,141 | https://api.github.com/repos/huggingface/datasets/issues/1809 | https://github.com/huggingface/datasets/pull/1809 | 1,809 | Add FreebaseQA dataset | closed | 6 | 2021-02-02T08:35:53 | 2021-02-03T17:15:05 | 2021-02-03T16:43:06 | gchhablani | [] | Adding FreebaseQA dataset suggested in PR #1435 with minor edits. Also closes that PR. Requesting @lhoestq to review. | true |
| 798,879,180 | https://api.github.com/repos/huggingface/datasets/issues/1808 | https://github.com/huggingface/datasets/issues/1808 | 1,808 | writing Datasets in a human readable format | closed | 3 | 2021-02-02T02:55:40 | 2022-06-01T15:38:13 | 2022-06-01T15:38:13 | ghost | [ "enhancement", "question" ] | Hi I see there is a save_to_disk function to save data, but this is not human readable format, is there a way I could save a Dataset object in a human readable format to a file like json? thanks @lhoestq | false |
| 798,823,591 | https://api.github.com/repos/huggingface/datasets/issues/1807 | https://github.com/huggingface/datasets/pull/1807 | 1,807 | Adding an aggregated dataset for the GEM benchmark | closed | 1 | 2021-02-02T00:39:53 | 2021-02-02T22:48:41 | 2021-02-02T18:06:58 | yjernite | [] | This dataset gathers modified versions of several other conditional text generation datasets which together make up the shared task for the Generation Evaluation and Metrics workshop (think GLUE for text generation) The changes from the original datasets are detailed in the Dataset Cards on the GEM website, which ar... | true |
| 798,607,869 | https://api.github.com/repos/huggingface/datasets/issues/1806 | https://github.com/huggingface/datasets/pull/1806 | 1,806 | Update details to MLSUM dataset | closed | 1 | 2021-02-01T18:35:12 | 2021-02-01T18:46:28 | 2021-02-01T18:46:21 | padipadou | [] | Update details to MLSUM dataset | true |
| 798,498,053 | https://api.github.com/repos/huggingface/datasets/issues/1805 | https://github.com/huggingface/datasets/issues/1805 | 1,805 | can't pickle SwigPyObject objects when calling dataset.get_nearest_examples from FAISS index | closed | 2 | 2021-02-01T16:14:17 | 2021-03-06T14:32:46 | 2021-03-06T14:32:46 | abarbosa94 | [] | So, I have the following instances in my dataset ``` {'question': 'An astronomer observes that a planet rotates faster after a meteorite impact. Which is the most likely effect of this increase in rotation?', 'answer': 'C', 'example_id': 'ARCCH_Mercury_7175875', 'options':[{'option_context': 'One effect of ... | false |
| 798,483,881 | https://api.github.com/repos/huggingface/datasets/issues/1804 | https://github.com/huggingface/datasets/pull/1804 | 1,804 | Add SICK dataset | closed | 0 | 2021-02-01T15:57:44 | 2021-02-05T17:46:28 | 2021-02-05T15:49:25 | calpt | [] | Adds the SICK dataset (http://marcobaroni.org/composes/sick.html). Closes #1772. Edit: also closes #1632, which is the original issue requesting the dataset. The newer one is a duplicate. | true |
| 798,243,904 | https://api.github.com/repos/huggingface/datasets/issues/1803 | https://github.com/huggingface/datasets/issues/1803 | 1,803 | Querying examples from big datasets is slower than small datasets | closed | 8 | 2021-02-01T11:08:23 | 2021-08-04T18:11:01 | 2021-08-04T18:10:42 | lhoestq | [] | After some experiments with bookcorpus I noticed that querying examples from big datasets is slower than small datasets. For example ```python from datasets import load_dataset b1 = load_dataset("bookcorpus", split="train[:1%]") b50 = load_dataset("bookcorpus", split="train[:50%]") b100 = load_dataset("bookcorp... | false |
| 797,924,468 | https://api.github.com/repos/huggingface/datasets/issues/1802 | https://github.com/huggingface/datasets/pull/1802 | 1,802 | add github of contributors | closed | 3 | 2021-02-01T03:49:19 | 2021-02-03T10:09:52 | 2021-02-03T10:06:30 | thevasudevgupta | [] | This PR will add contributors GitHub id at the end of every dataset cards. | true |
| 797,814,275 | https://api.github.com/repos/huggingface/datasets/issues/1801 | https://github.com/huggingface/datasets/pull/1801 | 1,801 | [GEM] Updated the source link of the data to update correct tokenized version. | closed | 2 | 2021-01-31T21:17:19 | 2021-02-02T13:17:38 | 2021-02-02T13:17:28 | mounicam | [] | | true |
| 797,798,689 | https://api.github.com/repos/huggingface/datasets/issues/1800 | https://github.com/huggingface/datasets/pull/1800 | 1,800 | Add DuoRC Dataset | closed | 1 | 2021-01-31T20:01:59 | 2021-02-03T05:01:45 | 2021-02-02T22:49:26 | gchhablani | [] | Hi, DuoRC SelfRC is one type of the [DuoRC Dataset](https://duorc.github.io/). DuoRC SelfRC is a crowdsourced Abstractive/Extractive Question-Answering dataset based on Wikipedia movie plots. It contains examples that may have answers in the movie plot, synthesized answers which are not present in the movie plot, or... | true |
| 797,789,439 | https://api.github.com/repos/huggingface/datasets/issues/1799 | https://github.com/huggingface/datasets/pull/1799 | 1,799 | Update: SWDA - Fixed code to use all metadata features. Added comments and cleaned c… | closed | 1 | 2021-01-31T19:18:55 | 2021-02-09T22:06:13 | 2021-02-09T15:49:58 | gmihaila | [] | This is a dataset I currently use my research and I realized some features are not being returned. Previous code was not using all available metadata and was kind of messy I fixed code to use all metadata and made some modification to be more efficient and better formatted. Please let me know if I need to ma... | true |
| 797,766,818 | https://api.github.com/repos/huggingface/datasets/issues/1798 | https://github.com/huggingface/datasets/pull/1798 | 1,798 | Add Arabic sarcasm dataset | closed | 1 | 2021-01-31T17:38:55 | 2021-02-10T20:39:13 | 2021-02-03T10:35:54 | mapmeld | [] | This MIT license dataset: https://github.com/iabufarha/ArSarcasm Via https://sites.google.com/view/ar-sarcasm-sentiment-detection/ | true |
| 797,357,901 | https://api.github.com/repos/huggingface/datasets/issues/1797 | https://github.com/huggingface/datasets/issues/1797 | 1,797 | Connection error | closed | 1 | 2021-01-30T07:32:45 | 2021-08-04T18:09:37 | 2021-08-04T18:09:37 | smile0925 | [] | Hi I am hitting to the error, help me and thanks. `train_data = datasets.load_dataset("xsum", split="train")` `ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/xsum/xsum.py` | false |
| 797,329,905 | https://api.github.com/repos/huggingface/datasets/issues/1796 | https://github.com/huggingface/datasets/issues/1796 | 1,796 | Filter on dataset too much slowww | open | 12 | 2021-01-30T04:09:19 | 2025-05-15T13:19:55 | null | ayubSubhaniya | [] | I have a dataset with 50M rows. For pre-processing, I need to tokenize this and filter rows with the large sequence. My tokenization took roughly 12mins. I used `map()` with batch size 1024 and multi-process with 96 processes. When I applied the `filter()` function it is taking too much time. I need to filter se... | false |
| 797,021,730 | https://api.github.com/repos/huggingface/datasets/issues/1795 | https://github.com/huggingface/datasets/pull/1795 | 1,795 | Custom formatting for lazy map + arrow data extraction refactor | closed | 8 | 2021-01-29T16:35:53 | 2022-07-30T09:50:11 | 2021-02-05T09:54:06 | lhoestq | [] | Hi ! This PR refactors the way data are extracted from pyarrow tables to extend it to the use of custom formatting functions. While the internal storage of the dataset is always the Apache Arrow format, by setting a specific format on a dataset, you can cast the output of `datasets.Dataset.__getitem__` in NumPy/p... | true |
| 796,975,588 | https://api.github.com/repos/huggingface/datasets/issues/1794 | https://github.com/huggingface/datasets/pull/1794 | 1,794 | Move silicone directory | closed | 0 | 2021-01-29T15:33:15 | 2021-01-29T16:31:39 | 2021-01-29T16:31:38 | lhoestq | [] | The dataset was added in #1761 but not in the right directory. I'm moving it to /datasets | true |
| 796,940,299 | https://api.github.com/repos/huggingface/datasets/issues/1793 | https://github.com/huggingface/datasets/pull/1793 | 1,793 | Minor fix the docstring of load_metric | closed | 0 | 2021-01-29T14:47:35 | 2021-01-29T16:53:32 | 2021-01-29T16:53:32 | albertvillanova | [] | Minor fix: - duplicated attributes - format fix | true |
| 796,934,627 | https://api.github.com/repos/huggingface/datasets/issues/1792 | https://github.com/huggingface/datasets/pull/1792 | 1,792 | Allow loading dataset in-memory | closed | 3 | 2021-01-29T14:39:50 | 2021-02-12T14:13:28 | 2021-02-12T14:13:28 | albertvillanova | [] | Allow loading datasets either from: - memory-mapped file (current implementation) - from file descriptor, copying data to physical memory Close #708 | true |
| 796,924,519 | https://api.github.com/repos/huggingface/datasets/issues/1791 | https://github.com/huggingface/datasets/pull/1791 | 1,791 | Small fix with corrected logging of train vectors | closed | 0 | 2021-01-29T14:26:06 | 2021-01-29T18:51:10 | 2021-01-29T17:05:07 | TezRomacH | [] | Now you can set `train_size` to the whole dataset size via `train_size = -1` and login writes not `Training the index with the first -1 vectors` but (for example) `Training the index with the first 16123 vectors`. And maybe more than dataset length. Logging will be correct | true |
| 796,678,157 | https://api.github.com/repos/huggingface/datasets/issues/1790 | https://github.com/huggingface/datasets/issues/1790 | 1,790 | ModuleNotFoundError: No module named 'apache_beam', when specific languages. | open | 8 | 2021-01-29T08:17:24 | 2021-03-25T12:10:51 | null | miyamonz | [] | ```py import datasets wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets') ``` then `ModuleNotFoundError: No module named 'apache_beam'` happend. The error doesn't appear when it's '20200501.en'. I don't know Apache Beam, but according to #498 it isn't necessary when it's saved to lo... | false |
| 796,229,721 | https://api.github.com/repos/huggingface/datasets/issues/1789 | https://github.com/huggingface/datasets/pull/1789 | 1,789 | [BUG FIX] typo in the import path for metrics | closed | 0 | 2021-01-28T18:01:37 | 2021-01-28T18:13:56 | 2021-01-28T18:13:56 | yjernite | [] | This tiny PR fixes a typo introduced in https://github.com/huggingface/datasets/pull/1726 which prevents loading new metrics | true |
| 795,544,422 | https://api.github.com/repos/huggingface/datasets/issues/1788 | https://github.com/huggingface/datasets/pull/1788 | 1,788 | Doc2dial rc | closed | 0 | 2021-01-27T23:51:00 | 2021-01-28T18:46:13 | 2021-01-28T18:46:13 | songfeng | [] | | true |
| 795,485,842 | https://api.github.com/repos/huggingface/datasets/issues/1787 | https://github.com/huggingface/datasets/pull/1787 | 1,787 | Update the CommonGen citation information | closed | 0 | 2021-01-27T22:12:47 | 2021-01-28T13:56:29 | 2021-01-28T13:56:29 | yuchenlin | [] | | true |
| 795,462,816 | https://api.github.com/repos/huggingface/datasets/issues/1786 | https://github.com/huggingface/datasets/issues/1786 | 1,786 | How to use split dataset | closed | 2 | 2021-01-27T21:37:47 | 2021-04-23T15:17:39 | 2021-04-23T15:17:39 | kkhan188 | [ "question" ] | ![Capture1](https://user-images.githubusercontent.com/78090287/106057436-cb6a1f00-6111-11eb-8c9c-3658065b1fdf.PNG) Hey, I want to split the lambada dataset into corpus, test, train and valid txt files (like penn treebank) but I am not able to achieve this. What I am doing is, executing the lambada.py file in my pro... | false |
| 795,458,856 | https://api.github.com/repos/huggingface/datasets/issues/1785 | https://github.com/huggingface/datasets/issues/1785 | 1,785 | Not enough disk space (Needed: Unknown size) when caching on a cluster | closed | 9 | 2021-01-27T21:30:59 | 2024-12-04T02:57:00 | 2021-01-30T01:07:56 | olinguyen | [] | I'm running some experiments where I'm caching datasets on a cluster and accessing it through multiple compute nodes. However, I get an error when loading the cached dataset from the shared disk. The exact error thrown: ```bash >>> load_dataset(dataset, cache_dir="/path/to/cluster/shared/path") OSError: Not eno... | false |
| 794,659,174 | https://api.github.com/repos/huggingface/datasets/issues/1784 | https://github.com/huggingface/datasets/issues/1784 | 1,784 | JSONDecodeError on JSON with multiple lines | closed | 2 | 2021-01-27T00:19:22 | 2021-01-31T08:47:18 | 2021-01-31T08:47:18 | gchhablani | [] | Hello :), I have been trying to load data using a JSON file. Based on the [docs](https://huggingface.co/docs/datasets/loading_datasets.html#json-files), the following format is supported: ```json {"key1":11, "key2":12, "key3":13} {"key1":21, "key2":22, "key3":23} ``` But, when I try loading a dataset with th... | false |
| 794,544,495 | https://api.github.com/repos/huggingface/datasets/issues/1783 | https://github.com/huggingface/datasets/issues/1783 | 1,783 | Dataset Examples Explorer | closed | 2 | 2021-01-26T20:39:02 | 2021-02-01T13:58:44 | 2021-02-01T13:58:44 | ChewKokWah | [] | In the Older version of the Dataset, there are a useful Dataset Explorer that allow user to visualize the examples (training, test and validation) of a particular dataset, it is no longer there in current version. Hope HuggingFace can re-enable the feature that at least allow viewing of the first 20 examples of a ... | false |
| 794,167,920 | https://api.github.com/repos/huggingface/datasets/issues/1782 | https://github.com/huggingface/datasets/pull/1782 | 1,782 | Update pyarrow import warning | closed | 0 | 2021-01-26T11:47:11 | 2021-01-26T13:50:50 | 2021-01-26T13:50:49 | lhoestq | [] | Update the minimum version to >=0.17.1 in the pyarrow version check and update the message. I also moved the check at the top of the __init__.py | true |
| 793,914,556 | https://api.github.com/repos/huggingface/datasets/issues/1781 | https://github.com/huggingface/datasets/issues/1781 | 1,781 | AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' during import | closed | 9 | 2021-01-26T04:18:35 | 2024-07-07T17:55:12 | 2022-10-05T12:37:06 | PalaashAgrawal | [] | I'm using Colab. And suddenly this morning, there is this error. Have a look below! ![screenshot-colab research google com-2021 01 26-08-15-36](https://user-images.githubusercontent.com/45964869/105799890-fdaf3b80-5fae-11eb-8f06-11b65cdccc30.png) | false |
| 793,882,132 | https://api.github.com/repos/huggingface/datasets/issues/1780 | https://github.com/huggingface/datasets/pull/1780 | 1,780 | Update SciFact URL | closed | 7 | 2021-01-26T02:49:06 | 2021-01-28T18:48:00 | 2021-01-28T10:19:45 | dwadden | [] | Hi, I'm following up this [issue](https://github.com/huggingface/datasets/issues/1717). I'm the SciFact dataset creator, and I'm trying to update the SciFact data url in your repo. Thanks again for adding the dataset! Basically, I'd just like to change the `_URL` to `"https://scifact.s3-us-west-2.amazonaws.com/re... | true |
| 793,539,703 | https://api.github.com/repos/huggingface/datasets/issues/1779 | https://github.com/huggingface/datasets/pull/1779 | 1,779 | Ignore definition line number of functions for caching | closed | 0 | 2021-01-25T16:42:29 | 2021-01-26T10:20:20 | 2021-01-26T10:20:19 | lhoestq | [] | As noticed in #1718 , when a function used for processing with `map` is moved inside its python file, then the change of line number causes the caching mechanism to consider it as a different function. Therefore in this case, it recomputes everything. This is because we were not ignoring the line number definition f... | true |
| 793,474,507 | https://api.github.com/repos/huggingface/datasets/issues/1778 | https://github.com/huggingface/datasets/pull/1778 | 1,778 | Narrative QA Manual | closed | 6 | 2021-01-25T15:22:31 | 2021-01-29T09:35:14 | 2021-01-29T09:34:51 | rsanjaykamath | [] | Submitting the manual version of Narrative QA script which requires a manual download from the original repository | true |
| 793,273,770 | https://api.github.com/repos/huggingface/datasets/issues/1777 | https://github.com/huggingface/datasets/issues/1777 | 1,777 | GPT2 MNLI training using run_glue.py | closed | 0 | 2021-01-25T10:53:52 | 2021-01-25T11:12:53 | 2021-01-25T11:12:53 | nlp-student | [] | Edit: I'm closing this because I actually meant to post this in `transformers `not `datasets` Running this on Google Colab, ``` !python run_glue.py \ --model_name_or_path gpt2 \ --task_name mnli \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_gpu_train_batch_size 10 \ --gradient_accu... | false |
| 792,755,249 | https://api.github.com/repos/huggingface/datasets/issues/1776 | https://github.com/huggingface/datasets/issues/1776 | 1,776 | [Question & Bug Report] Can we preprocess a dataset on the fly? | closed | 6 | 2021-01-24T09:28:24 | 2021-05-20T04:15:58 | 2021-05-20T04:15:58 | shuaihuaiyi | [] | I know we can use `Datasets.map` to preprocess a dataset, but I'm using it with very large corpus which generates huge cache file (several TB cache from a 400 GB text file). I have no disk large enough to save it. Can we preprocess a dataset on the fly without generating cache? BTW, I tried raising `writer_batch_si... | false |
| 792,742,120 | https://api.github.com/repos/huggingface/datasets/issues/1775 | https://github.com/huggingface/datasets/issues/1775 | 1,775 | Efficient ways to iterate the dataset | closed | 2 | 2021-01-24T07:54:31 | 2021-01-24T09:50:39 | 2021-01-24T09:50:39 | zhongpeixiang | [] | For a large dataset that does not fits the memory, how can I select only a subset of features from each example? If I iterate over the dataset and then select the subset of features one by one, the resulted memory usage will be huge. Any ways to solve this? Thanks | false |
| 792,730,559 | https://api.github.com/repos/huggingface/datasets/issues/1774 | https://github.com/huggingface/datasets/issues/1774 | 1,774 | is it possible to make slice to be more compatible like python list and numpy? | closed | 2 | 2021-01-24T06:15:52 | 2024-01-31T15:54:18 | 2024-01-31T15:54:18 | world2vec | [] | Hi, see below error: ``` AssertionError: Requested slice [:10000000000000000] incompatible with 20 examples. ``` | false |
| 792,708,160 | https://api.github.com/repos/huggingface/datasets/issues/1773 | https://github.com/huggingface/datasets/issues/1773 | 1,773 | bug in loading datasets | closed | 3 | 2021-01-24T02:53:45 | 2021-09-06T08:54:46 | 2021-08-04T18:13:01 | ghost | [] | Hi, I need to load a dataset, I use these commands: ``` from datasets import load_dataset dataset = load_dataset('csv', data_files={'train': 'sick/train.csv', 'test': 'sick/test.csv', 'validation': 'sick/validation.csv'}) prin... | false |
| 792,703,797 | https://api.github.com/repos/huggingface/datasets/issues/1772 | https://github.com/huggingface/datasets/issues/1772 | 1,772 | Adding SICK dataset | closed | 0 | 2021-01-24T02:15:31 | 2021-02-05T15:49:25 | 2021-02-05T15:49:25 | ghost | [ "dataset request" ] | Hi It would be great to include SICK dataset. ## Adding a Dataset - **Name:** SICK - **Description:** a well known entailment dataset - **Paper:** http://marcobaroni.org/composes/sick.html - **Data:** http://marcobaroni.org/composes/sick.html - **Motivation:** this is an important NLI benchmark Instruction... | false |
| 792,701,276 | https://api.github.com/repos/huggingface/datasets/issues/1771 | https://github.com/huggingface/datasets/issues/1771 | 1,771 | Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.1/datasets/csv/csv.py | closed | 3 | 2021-01-24T01:53:52 | 2021-01-24T23:06:29 | 2021-01-24T23:06:29 | world2vec | [] | Hi, When I load_dataset from local csv files, below error happened, looks raw.githubusercontent.com was blocked by the chinese government. But why it need to download csv.py? should it include when pip install the dataset? ``` Traceback (most recent call last): File "/home/tom/pyenv/pystory/lib/python3.6/site-p... | false |
| 792,698,148 | https://api.github.com/repos/huggingface/datasets/issues/1770 | https://github.com/huggingface/datasets/issues/1770 | 1,770 | how can I combine 2 dataset with different/same features? | closed | 3 | 2021-01-24T01:26:06 | 2022-06-01T15:43:15 | 2022-06-01T15:43:15 | world2vec | [] | to combine 2 dataset by one-one map like ds = zip(ds1, ds2): ds1: {'text'}, ds2: {'text'}, combine ds:{'src', 'tgt'} or different feature: ds1: {'src'}, ds2: {'tgt'}, combine ds:{'src', 'tgt'} | false |
| 792,523,284 | https://api.github.com/repos/huggingface/datasets/issues/1769 | https://github.com/huggingface/datasets/issues/1769 | 1,769 | _pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union when calling datasets.map with num_proc=2 | closed | 4 | 2021-01-23T10:13:00 | 2022-10-05T12:38:51 | 2022-10-05T12:38:51 | shuaihuaiyi | [] | It may be a bug of multiprocessing with Datasets, when I disable the multiprocessing by set num_proc to None, everything works fine. The script I use is https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm.py Script args: ``` --model_name_or_path ../../../model/chine... | false |
| 792,150,745 | https://api.github.com/repos/huggingface/datasets/issues/1768 | https://github.com/huggingface/datasets/pull/1768 | 1,768 | Mention kwargs in the Dataset Formatting docs | closed | 0 | 2021-01-22T16:43:20 | 2021-01-31T12:33:10 | 2021-01-25T09:14:59 | gchhablani | [] | Hi, This was discussed in Issue #1762 where the docs didn't mention that keyword arguments to `datasets.Dataset.set_format()` are allowed. To prevent people from having to check the code/method docs, I just added a couple of lines in the docs. Please let me know your thoughts on this. Thanks, Gunjan @lho... | true |
| 792,068,497 | https://api.github.com/repos/huggingface/datasets/issues/1767 | https://github.com/huggingface/datasets/pull/1767 | 1,767 | Add Librispeech ASR | closed | 1 | 2021-01-22T14:54:37 | 2021-01-25T20:38:07 | 2021-01-25T20:37:42 | patrickvonplaten | [] | This PR adds the librispeech asr dataset: https://www.tensorflow.org/datasets/catalog/librispeech There are 2 configs: "clean" and "other" whereas there are two "train" datasets for "clean", hence the name "train.100" and "train.360". As suggested by @lhoestq, due to the enormous size of the dataset in `.arrow` f... | true |
| 792,044,105 | https://api.github.com/repos/huggingface/datasets/issues/1766 | https://github.com/huggingface/datasets/issues/1766 | 1,766 | Issues when run two programs compute the same metrics | closed | 2 | 2021-01-22T14:22:55 | 2021-02-02T10:38:06 | 2021-02-02T10:38:06 | lamthuy | [] | I got the following error when running two different programs that both compute sacreblue metrics. It seems that both read/and/write to the same location (.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow) where it caches the batches: ``` File "train_matching_min.py", line 160, in <module>ch... | false |
| 791,553,065 | https://api.github.com/repos/huggingface/datasets/issues/1765 | https://github.com/huggingface/datasets/issues/1765 | 1,765 | Error iterating over Dataset with DataLoader | closed | 6 | 2021-01-21T22:56:45 | 2022-10-28T02:16:38 | 2021-01-23T03:44:14 | EvanZ | [] | I have a Dataset that I've mapped a tokenizer over: ``` encoded_dataset.set_format(type='torch',columns=['attention_mask','input_ids','token_type_ids']) encoded_dataset[:1] ``` ``` {'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]), 'input_ids': tensor([[ 101, 178, 1198, 1400, 1714, 22233, 2... | false |
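Since `state` and `is_pull_request` are plain columns, the records above can be sliced with the standard `datasets` API. A minimal sketch, assuming `issues` was loaded as in the example after the schema table:

```python
# Separate issues from pull requests and count the open issues.
# Assumes `issues` was loaded as in the earlier sketch.
pulls = issues.filter(lambda row: row["is_pull_request"])
plain_issues = issues.filter(lambda row: not row["is_pull_request"])
open_issues = plain_issues.filter(lambda row: row["state"] == "open")
print(len(pulls), len(plain_issues), len(open_issues))
```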