| column | dtype | range / values |
| --- | --- | --- |
| id | int64 | 599M – 3.48B |
| number | int64 | 1 – 7.8k |
| title | string | length 1 – 290 |
| state | string | 2 classes |
| comments | list | length 0 – 30 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 – 2025-10-05 06:37:50 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 – 2025-10-05 10:32:43 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 – 2025-10-01 13:56:03 |
| body | string | length 0 – 228k |
| user | string | length 3 – 26 |
| html_url | string | length 46 – 51 |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
803,555,650
1,837
Add VCTK
closed
[]
2021-02-08T13:15:28
2021-12-28T15:05:08
2021-12-28T15:05:08
## Adding a Dataset - **Name:** *VCTK* - **Description:** *This CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads out about 400 sentences, which were selected from a newspaper, the rainbow passage and an elicitation paragraph used for the speech accent arch...
patrickvonplaten
https://github.com/huggingface/datasets/issues/1837
null
false
803,531,837
1,836
test.json has been removed from the limit dataset repo (breaks dataset)
closed
[]
2021-02-08T12:45:53
2021-02-10T16:14:58
2021-02-10T16:14:58
https://github.com/huggingface/datasets/blob/16042b233dbff2a7585110134e969204c69322c3/datasets/limit/limit.py#L51 The URL is not valid anymore since test.json has been removed in master for some reason. Directly referencing the last commit works: `https://raw.githubusercontent.com/ilmgut/limit_dataset/0707d3989cd...
Paethon
https://github.com/huggingface/datasets/issues/1836
null
false
803,524,790
1,835
Add CHiME4 dataset
open
[]
2021-02-08T12:36:38
2025-01-26T16:18:59
null
## Adding a Dataset - **Name:** Chime4 - **Description:** Chime4 is a dataset for automatic speech recognition. It is especially useful for evaluating models in a noisy environment and for multi-channel ASR - **Paper:** Dataset comes from a channel: http://spandh.dcs.shef.ac.uk/chime_challenge/CHiME4/ . Results pape...
patrickvonplaten
https://github.com/huggingface/datasets/issues/1835
null
false
803,517,094
1,834
Fixes base_url of limit dataset
closed
[]
2021-02-08T12:26:35
2021-02-08T12:42:50
2021-02-08T12:42:50
`test.json` is not available in the master branch of the repository anymore. Linking to a specific commit.
Paethon
https://github.com/huggingface/datasets/pull/1834
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1834", "html_url": "https://github.com/huggingface/datasets/pull/1834", "diff_url": "https://github.com/huggingface/datasets/pull/1834.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1834.patch", "merged_at": null }
true
803,120,978
1,833
Add OSCAR dataset card
closed
[]
2021-02-08T01:39:49
2021-02-12T14:09:25
2021-02-12T14:08:24
I added more information and completed the dataset card for OSCAR which was started by @lhoestq in his previous [PR](https://github.com/huggingface/datasets/pull/1824).
pjox
https://github.com/huggingface/datasets/pull/1833
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1833", "html_url": "https://github.com/huggingface/datasets/pull/1833", "diff_url": "https://github.com/huggingface/datasets/pull/1833.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1833.patch", "merged_at": "2021-02-12T14:08...
true
802,880,897
1,832
Looks like nokogumbo is up-to-date now, so this is no longer needed.
closed
[]
2021-02-07T06:52:07
2021-02-08T17:27:29
2021-02-08T17:27:29
Looks like nokogumbo is up-to-date now, so this is no longer needed. __Originally posted by @dependabot in https://github.com/discourse/discourse/pull/11373#issuecomment-738993432__
JimmyJim1
https://github.com/huggingface/datasets/issues/1832
null
false
802,868,854
1,831
Some question about raw dataset download info in the project .
closed
[]
2021-02-07T05:33:36
2021-02-25T14:10:18
2021-02-25T14:10:18
Hi, I reviewed the code in https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py. The `_split_generators` function contains the actual logic for downloading the raw datasets with `dl_manager`, and the Conll2003 class is used via `import_main_class` in the `load_dataset` function. My question is that, with this logic i...
svjack
https://github.com/huggingface/datasets/issues/1831
null
false
802,790,075
1,830
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer?
open
[]
2021-02-06T21:00:26
2021-02-24T21:56:14
null
This could totally relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, saved it to disk (note I'm only showing snippets, but I can share more), and the map function ran much slower: ```` def save_tokenizer(original_tokenizer,text,path="simpledata/tokenizer"): words_u...
wumpusman
https://github.com/huggingface/datasets/issues/1830
null
false
802,693,600
1,829
Add Tweet Eval Dataset
closed
[]
2021-02-06T12:36:25
2021-02-08T13:17:54
2021-02-08T13:17:53
Closes Draft PR #1407. Notes: 1. I have excluded `mapping.txt` from the dataset as it only contained the name mappings, which are already present in the ClassLabels. 2. I have also excluded the textual names for the emojis mentioned in the [mapping](https://github.com/cardiffnlp/tweeteval/blob/main/datasets/emoji/...
gchhablani
https://github.com/huggingface/datasets/pull/1829
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1829", "html_url": "https://github.com/huggingface/datasets/pull/1829", "diff_url": "https://github.com/huggingface/datasets/pull/1829.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1829.patch", "merged_at": "2021-02-08T13:17...
true
802,449,234
1,828
Add CelebA Dataset
closed
[]
2021-02-05T20:20:55
2021-02-18T14:17:07
2021-02-18T14:17:07
Trying to add CelebA Dataset. Need help with testing. Loading examples takes a lot of time so I am unable to generate the `dataset_infos.json` and unable to test. Also, need help with creating `dummy_data.zip`. Additionally, trying to load a few examples using `load_dataset('./datasets/celeb_a',split='train[10:20]...
gchhablani
https://github.com/huggingface/datasets/pull/1828
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1828", "html_url": "https://github.com/huggingface/datasets/pull/1828", "diff_url": "https://github.com/huggingface/datasets/pull/1828.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1828.patch", "merged_at": null }
true
802,353,974
1,827
Regarding On-the-fly Data Loading
closed
[]
2021-02-05T17:43:48
2021-02-18T13:55:16
2021-02-18T13:55:16
Hi, I was wondering if it is possible to load images/texts as a batch during the training process, without loading the entire dataset on the RAM at any given point. Thanks, Gunjan
gchhablani
https://github.com/huggingface/datasets/issues/1827
null
false
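Issue #1827 above asks for on-the-fly batching without holding the full dataset in RAM. A minimal standard-library sketch of that idea (editorial illustration, not from the issue; all names hypothetical) — iterate lazily and materialize only one batch at a time:

```python
from itertools import islice

def batch_iterator(samples, batch_size):
    """Yield fixed-size batches lazily, never materializing the full dataset."""
    it = iter(samples)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# Stand-in for a large corpus streamed from disk: a generator, not a list.
stream = (f"example-{i}" for i in range(10))
batches = list(batch_iterator(stream, 4))
```

The same pattern underlies streaming data loaders: memory use is bounded by `batch_size`, not by the corpus size.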
802,074,744
1,826
Print error message with filename when malformed CSV
closed
[]
2021-02-05T11:07:59
2021-02-09T17:39:27
2021-02-09T17:39:27
Print error message specifying filename when malformed CSV file. Close #1821
albertvillanova
https://github.com/huggingface/datasets/pull/1826
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1826", "html_url": "https://github.com/huggingface/datasets/pull/1826", "diff_url": "https://github.com/huggingface/datasets/pull/1826.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1826.patch", "merged_at": "2021-02-09T17:39...
true
802,073,925
1,825
Datasets library not suitable for huge text datasets.
closed
[]
2021-02-05T11:06:50
2021-03-30T14:04:01
2021-03-16T09:44:00
Hi, I'm trying to use datasets library to load a 187GB dataset of pure text, with the intention of building a Language Model. The problem is that from the 187GB it goes to some TB when processed by Datasets. First of all, I think the pre-tokenizing step (with tokenizer.map()) is not really thought for datasets this ...
avacaondata
https://github.com/huggingface/datasets/issues/1825
null
false
802,048,281
1,824
Add OSCAR dataset card
closed
[]
2021-02-05T10:30:26
2021-05-05T18:24:14
2021-02-08T11:30:33
I started adding the dataset card for OSCAR ! For now it's just basic info for all the different configurations in `Dataset Structure`. In particular the Data Splits section tells how many samples there are for each config. The Data Instances section shows an example for each config, and it also shows the size in MB....
lhoestq
https://github.com/huggingface/datasets/pull/1824
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1824", "html_url": "https://github.com/huggingface/datasets/pull/1824", "diff_url": "https://github.com/huggingface/datasets/pull/1824.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1824.patch", "merged_at": null }
true
802,042,181
1,823
Add FewRel Dataset
closed
[]
2021-02-05T10:22:03
2021-03-01T11:56:20
2021-03-01T10:21:39
Hi, This PR closes this [Card](https://github.com/huggingface/datasets/projects/1#card-53285184) and Issue #1757. I wasn't sure how to add `pid2name` along with the dataset so I added it as a separate configuration. For each (head, tail, tokens) triplet, I have created one example. I have added the dictionary key...
gchhablani
https://github.com/huggingface/datasets/pull/1823
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1823", "html_url": "https://github.com/huggingface/datasets/pull/1823", "diff_url": "https://github.com/huggingface/datasets/pull/1823.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1823.patch", "merged_at": "2021-03-01T10:21...
true
802,003,835
1,822
Add Hindi Discourse Analysis Natural Language Inference Dataset
closed
[]
2021-02-05T09:30:54
2021-02-15T09:57:39
2021-02-15T09:57:39
# Dataset Card for Hindi Discourse Analysis Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#dat...
avinsit123
https://github.com/huggingface/datasets/pull/1822
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1822", "html_url": "https://github.com/huggingface/datasets/pull/1822", "diff_url": "https://github.com/huggingface/datasets/pull/1822.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1822.patch", "merged_at": "2021-02-15T09:57...
true
801,747,647
1,821
Provide better exception message when one of many files results in an exception
closed
[]
2021-02-05T00:49:03
2021-02-09T17:39:27
2021-02-09T17:39:27
I find when I process many files, i.e. ``` train_files = glob.glob('train*.csv') validation_files = glob.glob('validation*.csv') datasets = load_dataset("csv", data_files=dict(train=train_files, validation=validation_files)) ``` I sometimes encounter an error due to one of the files being malformed (i.e. no dat...
david-waterworth
https://github.com/huggingface/datasets/issues/1821
null
false
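The fix #1821 asks for (and PR #1826 implemented) is to attach the offending filename to the parse error. A hedged stdlib sketch of the pattern, independent of the `datasets` internals (filenames and helper are hypothetical):

```python
import csv
import io

def read_csv_files(named_files):
    """Parse each (name, text) pair, re-raising any error with the filename attached."""
    tables = {}
    for name, text in named_files:
        try:
            reader = csv.DictReader(io.StringIO(text))
            rows = list(reader)
            if reader.fieldnames is None:
                raise ValueError("file is empty (no header row)")
            tables[name] = rows
        except ValueError as exc:
            # Wrap so the user sees *which* of the many globbed files is broken.
            raise ValueError(f"malformed CSV in {name!r}: {exc}") from exc
    return tables

good = ("train_part1.csv", "a,b\n1,2\n")
bad = ("train_part2.csv", "")   # no data, like the case reported above
tables = read_csv_files([good])
```

With `read_csv_files([good, bad])` the raised message names `train_part2.csv`, which is exactly the diagnostic the issue was missing.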
801,529,936
1,820
Add metrics usage examples and tests
closed
[]
2021-02-04T18:23:50
2021-02-05T14:00:01
2021-02-05T14:00:00
All metrics finally have usage examples and proper fast + slow tests :) I added examples of usage for every metric, and I use doctest to make sure they all work as expected. For "slow" metrics such as bert_score or bleurt which require to download + run a transformer model, the download + forward pass are only do...
lhoestq
https://github.com/huggingface/datasets/pull/1820
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1820", "html_url": "https://github.com/huggingface/datasets/pull/1820", "diff_url": "https://github.com/huggingface/datasets/pull/1820.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1820.patch", "merged_at": "2021-02-05T14:00...
true
801,448,670
1,819
Fixed spelling `S3Fileystem` to `S3FileSystem`
closed
[]
2021-02-04T16:36:46
2021-02-04T16:52:27
2021-02-04T16:52:26
Fixed documentation spelling errors. Wrong `S3Fileystem` Right `S3FileSystem`
philschmid
https://github.com/huggingface/datasets/pull/1819
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1819", "html_url": "https://github.com/huggingface/datasets/pull/1819", "diff_url": "https://github.com/huggingface/datasets/pull/1819.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1819.patch", "merged_at": "2021-02-04T16:52...
true
800,958,776
1,818
Loading local dataset raise requests.exceptions.ConnectTimeout
closed
[]
2021-02-04T05:55:23
2022-06-01T15:38:42
2022-06-01T15:38:42
Load local dataset: ``` dataset = load_dataset('json', data_files=["../../data/json.json"]) train = dataset["train"] print(train.features) train1 = train.map(lambda x: {"labels": 1}) print(train1[:2]) ``` but it raised requests.exceptions.ConnectTimeout: ``` /Users/littlely/myvirtual/tf2/bin/python3.7 /Us...
Alxe1
https://github.com/huggingface/datasets/issues/1818
null
false
800,870,652
1,817
pyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 599 but got length 1500
closed
[]
2021-02-04T02:30:23
2022-10-05T12:42:57
2022-10-05T12:42:57
I am trying to preprocess any dataset in this package with GPT-2 tokenizer, so I need to structure the datasets as long sequences of text without padding. I've been following a couple of your tutorials and here you can find the script that is failing right at the end https://github.com/LuCeHe/GenericTools/blob/maste...
LuCeHe
https://github.com/huggingface/datasets/issues/1817
null
false
800,660,995
1,816
Doc2dial rc update to latest version
closed
[]
2021-02-03T20:08:54
2021-02-15T15:15:24
2021-02-15T15:04:33
songfeng
https://github.com/huggingface/datasets/pull/1816
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1816", "html_url": "https://github.com/huggingface/datasets/pull/1816", "diff_url": "https://github.com/huggingface/datasets/pull/1816.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1816.patch", "merged_at": "2021-02-15T15:04...
true
800,610,017
1,815
Add CCAligned Multilingual Dataset
closed
[]
2021-02-03T18:59:52
2021-03-01T12:33:03
2021-03-01T10:36:21
Hello, I'm trying to add [CCAligned Multilingual Dataset](http://www.statmt.org/cc-aligned/). This has the potential to close #1756. This dataset has two types - Document-Pairs, and Sentence-Pairs. The datasets are huge, so I won't be able to test all of them. At the same time, a user might only want to downlo...
gchhablani
https://github.com/huggingface/datasets/pull/1815
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1815", "html_url": "https://github.com/huggingface/datasets/pull/1815", "diff_url": "https://github.com/huggingface/datasets/pull/1815.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1815.patch", "merged_at": "2021-03-01T10:36...
true
800,516,236
1,814
Add Freebase QA Dataset
closed
[]
2021-02-03T16:57:49
2021-02-04T19:47:51
2021-02-04T16:21:48
Closes PR #1435. Fixed issues with PR #1809. Requesting @lhoestq to review.
gchhablani
https://github.com/huggingface/datasets/pull/1814
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1814", "html_url": "https://github.com/huggingface/datasets/pull/1814", "diff_url": "https://github.com/huggingface/datasets/pull/1814.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1814.patch", "merged_at": "2021-02-04T16:21...
true
800,435,973
1,813
Support future datasets
closed
[]
2021-02-03T15:26:49
2021-02-05T10:33:48
2021-02-05T10:33:47
If a dataset is available at the version of the local installation of `datasets` (e.g. 1.2.0), then loading this dataset means loading the script at this version. However when trying to load a dataset that is only available on master, currently users have to specify `script_version="master"` in `load_dataset` to mak...
lhoestq
https://github.com/huggingface/datasets/pull/1813
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1813", "html_url": "https://github.com/huggingface/datasets/pull/1813", "diff_url": "https://github.com/huggingface/datasets/pull/1813.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1813.patch", "merged_at": "2021-02-05T10:33...
true
799,379,178
1,812
Add CIFAR-100 Dataset
closed
[]
2021-02-02T15:22:59
2021-02-08T11:10:18
2021-02-08T10:39:06
Adding CIFAR-100 Dataset.
gchhablani
https://github.com/huggingface/datasets/pull/1812
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1812", "html_url": "https://github.com/huggingface/datasets/pull/1812", "diff_url": "https://github.com/huggingface/datasets/pull/1812.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1812.patch", "merged_at": "2021-02-08T10:39...
true
799,211,060
1,811
Unable to add Multi-label Datasets
closed
[]
2021-02-02T11:50:56
2021-02-18T14:16:31
2021-02-18T14:16:31
I am trying to add [CIFAR-100](https://www.cs.toronto.edu/~kriz/cifar.html) dataset. The dataset contains two labels per image - `fine label` and `coarse label`. Using just one label in supervised keys as `supervised_keys=("img", "fine_label")` raises no issue. But trying `supervised_keys=("img", "fine_label","coarse...
gchhablani
https://github.com/huggingface/datasets/issues/1811
null
false
799,168,650
1,810
Add Hateful Memes Dataset
open
[]
2021-02-02T10:53:59
2021-12-08T12:03:59
null
## Add Hateful Memes Dataset - **Name:** Hateful Memes - **Description:** [https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set]( https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set) - **Paper:** [https://arxiv.org/pdf/2005.04790.pdf](https://arxiv.org/pdf/2005.04790.pdf) - **Data:** [Thi...
gchhablani
https://github.com/huggingface/datasets/issues/1810
null
false
799,059,141
1,809
Add FreebaseQA dataset
closed
[]
2021-02-02T08:35:53
2021-02-03T17:15:05
2021-02-03T16:43:06
Adding FreebaseQA dataset suggested in PR #1435 with minor edits. Also closes that PR. Requesting @lhoestq to review.
gchhablani
https://github.com/huggingface/datasets/pull/1809
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1809", "html_url": "https://github.com/huggingface/datasets/pull/1809", "diff_url": "https://github.com/huggingface/datasets/pull/1809.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1809.patch", "merged_at": null }
true
798,879,180
1,808
writing Datasets in a human readable format
closed
[]
2021-02-02T02:55:40
2022-06-01T15:38:13
2022-06-01T15:38:13
Hi, I see there is a `save_to_disk` function to save data, but this is not a human-readable format. Is there a way I could save a Dataset object in a human-readable format, like JSON, to a file? Thanks @lhoestq
ghost
https://github.com/huggingface/datasets/issues/1808
null
false
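The human-readable format asked for in #1808 is essentially JSON Lines (later versions of the library expose export helpers such as `Dataset.to_json` for this). A minimal stdlib sketch of the format itself (editorial, with made-up records):

```python
import io
import json

def dump_jsonl(records, fp):
    """Write each record as one human-readable JSON object per line (JSON Lines)."""
    for rec in records:
        fp.write(json.dumps(rec, ensure_ascii=False) + "\n")

records = [{"text": "hello", "label": 0}, {"text": "world", "label": 1}]
buf = io.StringIO()          # stands in for an open file on disk
dump_jsonl(records, buf)

# Round-trip: each line parses back independently.
loaded = [json.loads(line) for line in buf.getvalue().splitlines()]
```

One object per line keeps the file greppable and streamable, which is why it is the usual interchange format for text datasets.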
798,823,591
1,807
Adding an aggregated dataset for the GEM benchmark
closed
[]
2021-02-02T00:39:53
2021-02-02T22:48:41
2021-02-02T18:06:58
This dataset gathers modified versions of several other conditional text generation datasets which together make up the shared task for the Generation Evaluation and Metrics workshop (think GLUE for text generation) The changes from the original datasets are detailed in the Dataset Cards on the GEM website, which ar...
yjernite
https://github.com/huggingface/datasets/pull/1807
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1807", "html_url": "https://github.com/huggingface/datasets/pull/1807", "diff_url": "https://github.com/huggingface/datasets/pull/1807.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1807.patch", "merged_at": "2021-02-02T18:06...
true
798,607,869
1,806
Update details to MLSUM dataset
closed
[]
2021-02-01T18:35:12
2021-02-01T18:46:28
2021-02-01T18:46:21
Update details to MLSUM dataset
padipadou
https://github.com/huggingface/datasets/pull/1806
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1806", "html_url": "https://github.com/huggingface/datasets/pull/1806", "diff_url": "https://github.com/huggingface/datasets/pull/1806.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1806.patch", "merged_at": "2021-02-01T18:46...
true
798,498,053
1,805
can't pickle SwigPyObject objects when calling dataset.get_nearest_examples from FAISS index
closed
[]
2021-02-01T16:14:17
2021-03-06T14:32:46
2021-03-06T14:32:46
So, I have the following instances in my dataset ``` {'question': 'An astronomer observes that a planet rotates faster after a meteorite impact. Which is the most likely effect of this increase in rotation?', 'answer': 'C', 'example_id': 'ARCCH_Mercury_7175875', 'options':[{'option_context': 'One effect of ...
abarbosa94
https://github.com/huggingface/datasets/issues/1805
null
false
798,483,881
1,804
Add SICK dataset
closed
[]
2021-02-01T15:57:44
2021-02-05T17:46:28
2021-02-05T15:49:25
Adds the SICK dataset (http://marcobaroni.org/composes/sick.html). Closes #1772. Edit: also closes #1632, which is the original issue requesting the dataset. The newer one is a duplicate.
calpt
https://github.com/huggingface/datasets/pull/1804
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1804", "html_url": "https://github.com/huggingface/datasets/pull/1804", "diff_url": "https://github.com/huggingface/datasets/pull/1804.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1804.patch", "merged_at": "2021-02-05T15:49...
true
798,243,904
1,803
Querying examples from big datasets is slower than small datasets
closed
[]
2021-02-01T11:08:23
2021-08-04T18:11:01
2021-08-04T18:10:42
After some experiments with bookcorpus I noticed that querying examples from big datasets is slower than small datasets. For example ```python from datasets import load_dataset b1 = load_dataset("bookcorpus", split="train[:1%]") b50 = load_dataset("bookcorpus", split="train[:50%]") b100 = load_dataset("bookcorp...
lhoestq
https://github.com/huggingface/datasets/issues/1803
null
false
797,924,468
1,802
add github of contributors
closed
[]
2021-02-01T03:49:19
2021-02-03T10:09:52
2021-02-03T10:06:30
This PR will add contributors GitHub id at the end of every dataset cards.
thevasudevgupta
https://github.com/huggingface/datasets/pull/1802
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1802", "html_url": "https://github.com/huggingface/datasets/pull/1802", "diff_url": "https://github.com/huggingface/datasets/pull/1802.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1802.patch", "merged_at": "2021-02-03T10:06...
true
797,814,275
1,801
[GEM] Updated the source link of the data to update correct tokenized version.
closed
[]
2021-01-31T21:17:19
2021-02-02T13:17:38
2021-02-02T13:17:28
mounicam
https://github.com/huggingface/datasets/pull/1801
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1801", "html_url": "https://github.com/huggingface/datasets/pull/1801", "diff_url": "https://github.com/huggingface/datasets/pull/1801.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1801.patch", "merged_at": null }
true
797,798,689
1,800
Add DuoRC Dataset
closed
[]
2021-01-31T20:01:59
2021-02-03T05:01:45
2021-02-02T22:49:26
Hi, DuoRC SelfRC is one type of the [DuoRC Dataset](https://duorc.github.io/). DuoRC SelfRC is a crowdsourced Abstractive/Extractive Question-Answering dataset based on Wikipedia movie plots. It contains examples that may have answers in the movie plot, synthesized answers which are not present in the movie plot, or...
gchhablani
https://github.com/huggingface/datasets/pull/1800
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1800", "html_url": "https://github.com/huggingface/datasets/pull/1800", "diff_url": "https://github.com/huggingface/datasets/pull/1800.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1800.patch", "merged_at": "2021-02-02T22:49...
true
797,789,439
1,799
Update: SWDA - Fixed code to use all metadata features. Added comments and cleaned c…
closed
[]
2021-01-31T19:18:55
2021-02-09T22:06:13
2021-02-09T15:49:58
This is a dataset I currently use in my research, and I realized some features are not being returned. The previous code was not using all available metadata and was kind of messy. I fixed the code to use all metadata and made some modifications to be more efficient and better formatted. Please let me know if I need to ma...
gmihaila
https://github.com/huggingface/datasets/pull/1799
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1799", "html_url": "https://github.com/huggingface/datasets/pull/1799", "diff_url": "https://github.com/huggingface/datasets/pull/1799.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1799.patch", "merged_at": "2021-02-09T15:49...
true
797,766,818
1,798
Add Arabic sarcasm dataset
closed
[]
2021-01-31T17:38:55
2021-02-10T20:39:13
2021-02-03T10:35:54
This MIT license dataset: https://github.com/iabufarha/ArSarcasm Via https://sites.google.com/view/ar-sarcasm-sentiment-detection/
mapmeld
https://github.com/huggingface/datasets/pull/1798
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1798", "html_url": "https://github.com/huggingface/datasets/pull/1798", "diff_url": "https://github.com/huggingface/datasets/pull/1798.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1798.patch", "merged_at": "2021-02-03T10:35...
true
797,357,901
1,797
Connection error
closed
[]
2021-01-30T07:32:45
2021-08-04T18:09:37
2021-08-04T18:09:37
Hi, I am hitting this error, help me and thanks. `train_data = datasets.load_dataset("xsum", split="train")` `ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/xsum/xsum.py`
smile0925
https://github.com/huggingface/datasets/issues/1797
null
false
797,329,905
1,796
Filter on dataset too much slowww
open
[]
2021-01-30T04:09:19
2025-05-15T13:19:55
null
I have a dataset with 50M rows. For pre-processing, I need to tokenize this and filter rows with the large sequence. My tokenization took roughly 12mins. I used `map()` with batch size 1024 and multi-process with 96 processes. When I applied the `filter()` function it is taking too much time. I need to filter se...
ayubSubhaniya
https://github.com/huggingface/datasets/issues/1796
null
false
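The slowness in #1796 comes from evaluating the predicate row by row; the usual remedy (and what `filter` with `batched=True` later enabled) is to evaluate it per batch. A pure-Python sketch of the idea (editorial; names hypothetical):

```python
def batched_filter(rows, keep_fn, batch_size=1024):
    """Evaluate the predicate once per batch instead of once per row."""
    kept = []
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        mask = keep_fn(batch)            # one call per batch -> list of bools
        kept.extend(row for row, ok in zip(batch, mask) if ok)
    return kept

rows = list(range(100))
mask_fn = lambda batch: [r % 2 == 0 for r in batch]   # keep "even" rows
result = batched_filter(rows, mask_fn, batch_size=32)
```

With a vectorized `keep_fn` (e.g. checking tokenized lengths for a whole batch at once), per-row Python overhead drops by roughly the batch size.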
797,021,730
1,795
Custom formatting for lazy map + arrow data extraction refactor
closed
[]
2021-01-29T16:35:53
2022-07-30T09:50:11
2021-02-05T09:54:06
Hi ! This PR refactors the way data are extracted from pyarrow tables to extend it to the use of custom formatting functions. While the internal storage of the dataset is always the Apache Arrow format, by setting a specific format on a dataset, you can cast the output of `datasets.Dataset.__getitem__` in NumPy/p...
lhoestq
https://github.com/huggingface/datasets/pull/1795
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1795", "html_url": "https://github.com/huggingface/datasets/pull/1795", "diff_url": "https://github.com/huggingface/datasets/pull/1795.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1795.patch", "merged_at": "2021-02-05T09:54...
true
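The lazy formatting described in #1795 — storage stays Arrow, only `__getitem__` output changes — can be modeled in a few lines. A toy sketch (editorial; not the library's actual implementation):

```python
class FormattedView:
    """Toy model of lazy formatting: the transform runs on access, data is never copied."""

    def __init__(self, rows):
        self._rows = rows
        self._fmt = lambda row: row      # identity format by default

    def set_format(self, fmt):
        self._fmt = fmt                  # only the transform changes, not the storage

    def __getitem__(self, i):
        return self._fmt(self._rows[i])

view = FormattedView([{"x": 1}, {"x": 2}])
plain = view[0]                          # raw row
view.set_format(lambda row: row["x"] * 10)
formatted = view[0]                      # same row, formatted on access
```

Because the transform is applied at read time, switching formats is O(1) regardless of dataset size — the property the PR's custom formatting functions rely on.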
796,975,588
1,794
Move silicone directory
closed
[]
2021-01-29T15:33:15
2021-01-29T16:31:39
2021-01-29T16:31:38
The dataset was added in #1761 but not in the right directory. I'm moving it to /datasets
lhoestq
https://github.com/huggingface/datasets/pull/1794
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1794", "html_url": "https://github.com/huggingface/datasets/pull/1794", "diff_url": "https://github.com/huggingface/datasets/pull/1794.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1794.patch", "merged_at": "2021-01-29T16:31...
true
796,940,299
1,793
Minor fix the docstring of load_metric
closed
[]
2021-01-29T14:47:35
2021-01-29T16:53:32
2021-01-29T16:53:32
Minor fix: - duplicated attributes - format fix
albertvillanova
https://github.com/huggingface/datasets/pull/1793
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1793", "html_url": "https://github.com/huggingface/datasets/pull/1793", "diff_url": "https://github.com/huggingface/datasets/pull/1793.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1793.patch", "merged_at": "2021-01-29T16:53...
true
796,934,627
1,792
Allow loading dataset in-memory
closed
[]
2021-01-29T14:39:50
2021-02-12T14:13:28
2021-02-12T14:13:28
Allow loading datasets either from: - memory-mapped file (current implementation) - from file descriptor, copying data to physical memory Close #708
albertvillanova
https://github.com/huggingface/datasets/pull/1792
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1792", "html_url": "https://github.com/huggingface/datasets/pull/1792", "diff_url": "https://github.com/huggingface/datasets/pull/1792.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1792.patch", "merged_at": "2021-02-12T14:13...
true
796,924,519
1,791
Small fix with corrected logging of train vectors
closed
[]
2021-01-29T14:26:06
2021-01-29T18:51:10
2021-01-29T17:05:07
Now you can set `train_size` to the whole dataset size via `train_size = -1`, and the log writes not `Training the index with the first -1 vectors` but (for example) `Training the index with the first 16123 vectors`. It also handles a `train_size` larger than the dataset length. Logging will be correct
TezRomacH
https://github.com/huggingface/datasets/pull/1791
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1791", "html_url": "https://github.com/huggingface/datasets/pull/1791", "diff_url": "https://github.com/huggingface/datasets/pull/1791.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1791.patch", "merged_at": "2021-01-29T17:05...
true
796,678,157
1,790
ModuleNotFoundError: No module named 'apache_beam', when specific languages.
open
[]
2021-01-29T08:17:24
2021-03-25T12:10:51
null
```py import datasets wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets') ``` then `ModuleNotFoundError: No module named 'apache_beam'` happened. The error doesn't appear when it's '20200501.en'. I don't know Apache Beam, but according to #498 it isn't necessary when it's saved to lo...
miyamonz
https://github.com/huggingface/datasets/issues/1790
null
false
796,229,721
1,789
[BUG FIX] typo in the import path for metrics
closed
[]
2021-01-28T18:01:37
2021-01-28T18:13:56
2021-01-28T18:13:56
This tiny PR fixes a typo introduced in https://github.com/huggingface/datasets/pull/1726 which prevents loading new metrics
yjernite
https://github.com/huggingface/datasets/pull/1789
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1789", "html_url": "https://github.com/huggingface/datasets/pull/1789", "diff_url": "https://github.com/huggingface/datasets/pull/1789.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1789.patch", "merged_at": "2021-01-28T18:13...
true
795,544,422
1,788
Doc2dial rc
closed
[]
2021-01-27T23:51:00
2021-01-28T18:46:13
2021-01-28T18:46:13
songfeng
https://github.com/huggingface/datasets/pull/1788
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1788", "html_url": "https://github.com/huggingface/datasets/pull/1788", "diff_url": "https://github.com/huggingface/datasets/pull/1788.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1788.patch", "merged_at": null }
true
795,485,842
1,787
Update the CommonGen citation information
closed
[]
2021-01-27T22:12:47
2021-01-28T13:56:29
2021-01-28T13:56:29
yuchenlin
https://github.com/huggingface/datasets/pull/1787
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1787", "html_url": "https://github.com/huggingface/datasets/pull/1787", "diff_url": "https://github.com/huggingface/datasets/pull/1787.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1787.patch", "merged_at": "2021-01-28T13:56...
true
795,462,816
1,786
How to use split dataset
closed
[]
2021-01-27T21:37:47
2021-04-23T15:17:39
2021-04-23T15:17:39
![Capture1](https://user-images.githubusercontent.com/78090287/106057436-cb6a1f00-6111-11eb-8c9c-3658065b1fdf.PNG) Hey, I want to split the lambada dataset into corpus, test, train and valid txt files (like penn treebank) but I am not able to achieve this. What I am doing is, executing the lambada.py file in my pro...
kkhan188
https://github.com/huggingface/datasets/issues/1786
null
false
795,458,856
1,785
Not enough disk space (Needed: Unknown size) when caching on a cluster
closed
[]
2021-01-27T21:30:59
2024-12-04T02:57:00
2021-01-30T01:07:56
I'm running some experiments where I'm caching datasets on a cluster and accessing it through multiple compute nodes. However, I get an error when loading the cached dataset from the shared disk. The exact error thrown: ```bash >>> load_dataset(dataset, cache_dir="/path/to/cluster/shared/path") OSError: Not eno...
olinguyen
https://github.com/huggingface/datasets/issues/1785
null
false
794,659,174
1,784
JSONDecodeError on JSON with multiple lines
closed
[]
2021-01-27T00:19:22
2021-01-31T08:47:18
2021-01-31T08:47:18
Hello :), I have been trying to load data using a JSON file. Based on the [docs](https://huggingface.co/docs/datasets/loading_datasets.html#json-files), the following format is supported: ```json {"key1":11, "key2":12, "key3":13} {"key1":21, "key2":22, "key3":23} ``` But, when I try loading a dataset with th...
gchhablani
https://github.com/huggingface/datasets/issues/1784
null
false
794,544,495
1,783
Dataset Examples Explorer
closed
[]
2021-01-26T20:39:02
2021-02-01T13:58:44
2021-02-01T13:58:44
In the older version of Datasets, there was a useful Dataset Explorer that allowed users to visualize the examples (training, test and validation) of a particular dataset; it is no longer there in the current version. Hope HuggingFace can re-enable the feature to at least allow viewing of the first 20 examples of a ...
ChewKokWah
https://github.com/huggingface/datasets/issues/1783
null
false
794,167,920
1,782
Update pyarrow import warning
closed
[]
2021-01-26T11:47:11
2021-01-26T13:50:50
2021-01-26T13:50:49
Update the minimum version to >=0.17.1 in the pyarrow version check and update the message. I also moved the check at the top of the __init__.py
lhoestq
https://github.com/huggingface/datasets/pull/1782
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1782", "html_url": "https://github.com/huggingface/datasets/pull/1782", "diff_url": "https://github.com/huggingface/datasets/pull/1782.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1782.patch", "merged_at": "2021-01-26T13:50...
true
793,914,556
1,781
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' during import
closed
[]
2021-01-26T04:18:35
2024-07-07T17:55:12
2022-10-05T12:37:06
I'm using Colab. And suddenly this morning, there is this error. Have a look below! ![screenshot-colab research google com-2021 01 26-08-15-36](https://user-images.githubusercontent.com/45964869/105799890-fdaf3b80-5fae-11eb-8f06-11b65cdccc30.png)
PalaashAgrawal
https://github.com/huggingface/datasets/issues/1781
null
false
793,882,132
1,780
Update SciFact URL
closed
[]
2021-01-26T02:49:06
2021-01-28T18:48:00
2021-01-28T10:19:45
Hi, I'm following up this [issue](https://github.com/huggingface/datasets/issues/1717). I'm the SciFact dataset creator, and I'm trying to update the SciFact data url in your repo. Thanks again for adding the dataset! Basically, I'd just like to change the `_URL` to `"https://scifact.s3-us-west-2.amazonaws.com/re...
dwadden
https://github.com/huggingface/datasets/pull/1780
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1780", "html_url": "https://github.com/huggingface/datasets/pull/1780", "diff_url": "https://github.com/huggingface/datasets/pull/1780.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1780.patch", "merged_at": "2021-01-28T10:19...
true
793,539,703
1,779
Ignore definition line number of functions for caching
closed
[]
2021-01-25T16:42:29
2021-01-26T10:20:20
2021-01-26T10:20:19
As noticed in #1718 , when a function used for processing with `map` is moved inside its python file, then the change of line number causes the caching mechanism to consider it as a different function. Therefore in this case, it recomputes everything. This is because we were not ignoring the line number definition f...
lhoestq
https://github.com/huggingface/datasets/pull/1779
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1779", "html_url": "https://github.com/huggingface/datasets/pull/1779", "diff_url": "https://github.com/huggingface/datasets/pull/1779.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1779.patch", "merged_at": "2021-01-26T10:20...
true
793,474,507
1,778
Narrative QA Manual
closed
[]
2021-01-25T15:22:31
2021-01-29T09:35:14
2021-01-29T09:34:51
Submitting the manual version of Narrative QA script which requires a manual download from the original repository
rsanjaykamath
https://github.com/huggingface/datasets/pull/1778
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1778", "html_url": "https://github.com/huggingface/datasets/pull/1778", "diff_url": "https://github.com/huggingface/datasets/pull/1778.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1778.patch", "merged_at": "2021-01-29T09:34...
true
793,273,770
1,777
GPT2 MNLI training using run_glue.py
closed
[]
2021-01-25T10:53:52
2021-01-25T11:12:53
2021-01-25T11:12:53
Edit: I'm closing this because I actually meant to post this in `transformers`, not `datasets`. Running this on Google Colab, ``` !python run_glue.py \ --model_name_or_path gpt2 \ --task_name mnli \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_gpu_train_batch_size 10 \ --gradient_accu...
nlp-student
https://github.com/huggingface/datasets/issues/1777
null
false
792,755,249
1,776
[Question & Bug Report] Can we preprocess a dataset on the fly?
closed
[]
2021-01-24T09:28:24
2021-05-20T04:15:58
2021-05-20T04:15:58
I know we can use `Datasets.map` to preprocess a dataset, but I'm using it with a very large corpus which generates a huge cache file (several TB of cache from a 400 GB text file). I have no disk large enough to save it. Can we preprocess a dataset on the fly without generating a cache? BTW, I tried raising `writer_batch_si...
shuaihuaiyi
https://github.com/huggingface/datasets/issues/1776
null
false
792,742,120
1,775
Efficient ways to iterate the dataset
closed
[]
2021-01-24T07:54:31
2021-01-24T09:50:39
2021-01-24T09:50:39
For a large dataset that does not fit in memory, how can I select only a subset of features from each example? If I iterate over the dataset and then select the subset of features one by one, the resulting memory usage will be huge. Any way to solve this? Thanks
zhongpeixiang
https://github.com/huggingface/datasets/issues/1775
null
false
792,730,559
1,774
is it possible to make slicing more compatible with python lists and numpy?
closed
[]
2021-01-24T06:15:52
2024-01-31T15:54:18
2024-01-31T15:54:18
Hi, see below error: ``` AssertionError: Requested slice [:10000000000000000] incompatible with 20 examples. ```
world2vec
https://github.com/huggingface/datasets/issues/1774
null
false
792,708,160
1,773
bug in loading datasets
closed
[]
2021-01-24T02:53:45
2021-09-06T08:54:46
2021-08-04T18:13:01
Hi, I need to load a dataset, I use these commands: ``` from datasets import load_dataset dataset = load_dataset('csv', data_files={'train': 'sick/train.csv', 'test': 'sick/test.csv', 'validation': 'sick/validation.csv'}) prin...
ghost
https://github.com/huggingface/datasets/issues/1773
null
false
792,703,797
1,772
Adding SICK dataset
closed
[]
2021-01-24T02:15:31
2021-02-05T15:49:25
2021-02-05T15:49:25
Hi It would be great to include SICK dataset. ## Adding a Dataset - **Name:** SICK - **Description:** a well known entailment dataset - **Paper:** http://marcobaroni.org/composes/sick.html - **Data:** http://marcobaroni.org/composes/sick.html - **Motivation:** this is an important NLI benchmark Instruction...
ghost
https://github.com/huggingface/datasets/issues/1772
null
false
792,701,276
1,771
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.1/datasets/csv/csv.py
closed
[]
2021-01-24T01:53:52
2021-01-24T23:06:29
2021-01-24T23:06:29
Hi, When I call load_dataset on local csv files, the error below happens; it looks like raw.githubusercontent.com was blocked by the Chinese government. But why does it need to download csv.py? Shouldn't it be included when pip installing the package? ``` Traceback (most recent call last): File "/home/tom/pyenv/pystory/lib/python3.6/site-p...
world2vec
https://github.com/huggingface/datasets/issues/1771
null
false
792,698,148
1,770
how can I combine 2 datasets with different/same features?
closed
[]
2021-01-24T01:26:06
2022-06-01T15:43:15
2022-06-01T15:43:15
To combine 2 datasets by a one-to-one map, like ds = zip(ds1, ds2): same features: ds1: {'text'}, ds2: {'text'}, combined ds: {'src', 'tgt'}; or different features: ds1: {'src'}, ds2: {'tgt'}, combined ds: {'src', 'tgt'}
world2vec
https://github.com/huggingface/datasets/issues/1770
null
false
792,523,284
1,769
_pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union when calling datasets.map with num_proc=2
closed
[]
2021-01-23T10:13:00
2022-10-05T12:38:51
2022-10-05T12:38:51
It may be a bug in multiprocessing with Datasets; when I disable multiprocessing by setting num_proc to None, everything works fine. The script I use is https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm.py Script args: ``` --model_name_or_path ../../../model/chine...
shuaihuaiyi
https://github.com/huggingface/datasets/issues/1769
null
false
792,150,745
1,768
Mention kwargs in the Dataset Formatting docs
closed
[]
2021-01-22T16:43:20
2021-01-31T12:33:10
2021-01-25T09:14:59
Hi, This was discussed in Issue #1762 where the docs didn't mention that keyword arguments to `datasets.Dataset.set_format()` are allowed. To prevent people from having to check the code/method docs, I just added a couple of lines in the docs. Please let me know your thoughts on this. Thanks, Gunjan @lho...
gchhablani
https://github.com/huggingface/datasets/pull/1768
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1768", "html_url": "https://github.com/huggingface/datasets/pull/1768", "diff_url": "https://github.com/huggingface/datasets/pull/1768.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1768.patch", "merged_at": "2021-01-25T09:14...
true
792,068,497
1,767
Add Librispeech ASR
closed
[]
2021-01-22T14:54:37
2021-01-25T20:38:07
2021-01-25T20:37:42
This PR adds the librispeech asr dataset: https://www.tensorflow.org/datasets/catalog/librispeech There are 2 configs: "clean" and "other" whereas there are two "train" datasets for "clean", hence the name "train.100" and "train.360". As suggested by @lhoestq, due to the enormous size of the dataset in `.arrow` f...
patrickvonplaten
https://github.com/huggingface/datasets/pull/1767
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1767", "html_url": "https://github.com/huggingface/datasets/pull/1767", "diff_url": "https://github.com/huggingface/datasets/pull/1767.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1767.patch", "merged_at": "2021-01-25T20:37...
true
792,044,105
1,766
Issues when run two programs compute the same metrics
closed
[]
2021-01-22T14:22:55
2021-02-02T10:38:06
2021-02-02T10:38:06
I got the following error when running two different programs that both compute sacrebleu metrics. It seems that both read and write to the same location (.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow) where the batches are cached: ``` File "train_matching_min.py", line 160, in <module>ch...
lamthuy
https://github.com/huggingface/datasets/issues/1766
null
false
791,553,065
1,765
Error iterating over Dataset with DataLoader
closed
[]
2021-01-21T22:56:45
2022-10-28T02:16:38
2021-01-23T03:44:14
I have a Dataset that I've mapped a tokenizer over: ``` encoded_dataset.set_format(type='torch',columns=['attention_mask','input_ids','token_type_ids']) encoded_dataset[:1] ``` ``` {'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]), 'input_ids': tensor([[ 101, 178, 1198, 1400, 1714, 22233, 2...
EvanZ
https://github.com/huggingface/datasets/issues/1765
null
false
791,486,860
1,764
Connection Issues
closed
[]
2021-01-21T20:56:09
2021-01-21T21:00:19
2021-01-21T21:00:02
Today, I am getting connection issues while loading a dataset and the metric. ``` Traceback (most recent call last): File "src/train.py", line 180, in <module> train_dataset, dev_dataset, test_dataset = create_race_dataset() File "src/train.py", line 130, in create_race_dataset train_dataset = load_da...
SaeedNajafi
https://github.com/huggingface/datasets/issues/1764
null
false
791,389,763
1,763
PAWS-X: Fix csv Dictreader splitting data on quotes
closed
[]
2021-01-21T18:21:01
2021-01-22T10:14:33
2021-01-22T10:13:45
```python from datasets import load_dataset # load english paws-x dataset datasets = load_dataset('paws-x', 'en') print(len(datasets['train'])) # outputs 49202 but official dataset has 49401 pairs print(datasets['train'].unique('label')) # outputs [1, 0, -1] but labels are binary [0,1] ...
gowtham1997
https://github.com/huggingface/datasets/pull/1763
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1763", "html_url": "https://github.com/huggingface/datasets/pull/1763", "diff_url": "https://github.com/huggingface/datasets/pull/1763.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1763.patch", "merged_at": "2021-01-22T10:13...
true
791,226,007
1,762
Unable to format dataset to CUDA Tensors
closed
[]
2021-01-21T15:31:23
2021-02-02T07:13:22
2021-02-02T07:13:22
Hi, I came across this [link](https://huggingface.co/docs/datasets/torch_tensorflow.html) where the docs show how to convert a dataset to a particular format. I see that there is an option to convert it to tensors, but I don't see any option to convert it to CUDA tensors. I tried this, but Dataset doesn't suppor...
gchhablani
https://github.com/huggingface/datasets/issues/1762
null
false
791,150,858
1,761
Add SILICONE benchmark
closed
[]
2021-01-21T14:29:12
2021-02-04T14:32:48
2021-01-26T13:50:31
My collaborators and I within the Affective Computing team at Telecom Paris would like to re-submit our spoken dialogue dataset for publication. This is a new pull request relative to the [previously closed request](https://github.com/huggingface/datasets/pull/1712) which was reviewed by @lhoestq.
eusip
https://github.com/huggingface/datasets/pull/1761
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1761", "html_url": "https://github.com/huggingface/datasets/pull/1761", "diff_url": "https://github.com/huggingface/datasets/pull/1761.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1761.patch", "merged_at": "2021-01-26T13:50...
true
791,110,857
1,760
More tags
closed
[]
2021-01-21T13:50:10
2021-01-22T09:40:01
2021-01-22T09:40:00
Since the hub v2 is going to be released soon I figured it would be great to add the missing tags at least for some of the datasets of reference listed [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#write-the-loadingprocessing-code)
lhoestq
https://github.com/huggingface/datasets/pull/1760
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1760", "html_url": "https://github.com/huggingface/datasets/pull/1760", "diff_url": "https://github.com/huggingface/datasets/pull/1760.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1760.patch", "merged_at": "2021-01-22T09:40...
true
790,992,226
1,759
wikipedia dataset incomplete
closed
[]
2021-01-21T11:47:15
2021-01-21T17:22:11
2021-01-21T17:21:06
Hey guys, I am using the https://github.com/huggingface/datasets/tree/master/datasets/wikipedia dataset. Unfortunately, I found out that the German dataset is incomplete. For reasons unknown to me, the number of inhabitants has been removed from many pages: Thorey-sur-Ouche has 128 inhabitants a...
ChrisDelClea
https://github.com/huggingface/datasets/issues/1759
null
false
790,626,116
1,758
dataset.search() (elastic) cannot reliably retrieve search results
closed
[]
2021-01-21T02:26:37
2021-01-22T00:25:50
2021-01-22T00:25:50
I am trying to use elastic search to retrieve the indices of items in the dataset in their precise order, given shuffled training indices. The problem I have is that I cannot retrieve reliable results with my data on my first search. I have to run the search **twice** to get the right answer. I am indexing data t...
afogarty85
https://github.com/huggingface/datasets/issues/1758
null
false
790,466,509
1,757
FewRel
closed
[]
2021-01-20T23:56:03
2021-03-09T02:52:05
2021-03-08T14:34:52
## Adding a Dataset - **Name:** FewRel - **Description:** Large-Scale Supervised Few-Shot Relation Classification Dataset - **Paper:** @inproceedings{han2018fewrel, title={FewRel:A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation}, auth...
dspoka
https://github.com/huggingface/datasets/issues/1757
null
false
790,380,028
1,756
Ccaligned multilingual translation dataset
closed
[]
2021-01-20T22:18:44
2021-03-01T10:36:21
2021-03-01T10:36:21
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - CCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language ...
flozi00
https://github.com/huggingface/datasets/issues/1756
null
false
790,324,734
1,755
Using select/reordering datasets slows operations down immensely
closed
[]
2021-01-20T21:12:12
2021-01-20T22:03:39
2021-01-20T22:03:39
I am using portions of HF's helpful work in preparing / scoring the SQuAD 2.0 data. The problem I have is that after using `select` to re-order the dataset, computations slow down immensely: the total scoring process on 131k training examples, which would take maybe 3 minutes, now takes over an hour. The below examp...
afogarty85
https://github.com/huggingface/datasets/issues/1755
null
false
789,881,730
1,754
Use a config id in the cache directory names for custom configs
closed
[]
2021-01-20T11:11:00
2021-01-25T09:12:07
2021-01-25T09:12:06
As noticed by @JetRunner there was some issues when trying to generate a dataset using a custom config that is based on an existing config. For example in the following code the `mnli_custom` would reuse the cache used to create `mnli` instead of generating a new dataset with the new label classes: ```python from ...
lhoestq
https://github.com/huggingface/datasets/pull/1754
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1754", "html_url": "https://github.com/huggingface/datasets/pull/1754", "diff_url": "https://github.com/huggingface/datasets/pull/1754.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1754.patch", "merged_at": "2021-01-25T09:12...
true
789,867,685
1,753
fix comet citations
closed
[]
2021-01-20T10:52:38
2021-01-20T14:39:30
2021-01-20T14:39:30
I realized COMET citations were not showing on the Hugging Face metrics page: <img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105164848-8b9da900-5b0d-11eb-9e20-a38f559d2037.png"> This pull request is intended to fix that. Thanks!
ricardorei
https://github.com/huggingface/datasets/pull/1753
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1753", "html_url": "https://github.com/huggingface/datasets/pull/1753", "diff_url": "https://github.com/huggingface/datasets/pull/1753.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1753.patch", "merged_at": "2021-01-20T14:39...
true
789,822,459
1,752
COMET metric citation
closed
[]
2021-01-20T09:54:43
2021-01-20T10:27:07
2021-01-20T10:25:02
In my last pull request to add the COMET metric, the citations were not following the usual "format". Because of that they were not correctly displayed on the website: <img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105158000-686efb80-5b05-11eb-8bb0-9c8...
ricardorei
https://github.com/huggingface/datasets/pull/1752
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1752", "html_url": "https://github.com/huggingface/datasets/pull/1752", "diff_url": "https://github.com/huggingface/datasets/pull/1752.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1752.patch", "merged_at": null }
true
789,232,980
1,751
Updated README for the Social Bias Frames dataset
closed
[]
2021-01-19T17:53:00
2021-01-20T14:56:52
2021-01-20T14:56:52
See the updated card at https://github.com/mcmillanmajora/datasets/tree/add-SBIC-card/datasets/social_bias_frames. I incorporated information from the [SBIC data statement](https://homes.cs.washington.edu/~msap/social-bias-frames/DATASTATEMENT.html), paper, and the corpus README file included with the dataset download.
mcmillanmajora
https://github.com/huggingface/datasets/pull/1751
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1751", "html_url": "https://github.com/huggingface/datasets/pull/1751", "diff_url": "https://github.com/huggingface/datasets/pull/1751.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1751.patch", "merged_at": "2021-01-20T14:56...
true
788,668,085
1,750
Fix typo in README.md of cnn_dailymail
closed
[]
2021-01-19T03:06:05
2021-01-19T11:07:29
2021-01-19T09:48:43
When I read the README.md of `CNN/DailyMail Dataset`, there seems to be a typo `CCN`. I am afraid this is a trivial matter, but I would like to make a suggestion for revision.
forest1988
https://github.com/huggingface/datasets/pull/1750
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1750", "html_url": "https://github.com/huggingface/datasets/pull/1750", "diff_url": "https://github.com/huggingface/datasets/pull/1750.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1750.patch", "merged_at": "2021-01-19T09:48...
true
788,476,639
1,749
Added metadata and correct splits for swda.
closed
[]
2021-01-18T18:36:32
2021-01-29T19:35:52
2021-01-29T18:38:08
Switchboard Dialog Act Corpus I made some changes following @bhavitvyamalik recommendation in #1678: * Contains all metadata. * Used official implementation from the [/swda](https://github.com/cgpotts/swda) repo. * Add official train and test splits used in [Stolcke et al. (2000)](https://web.stanford.edu/~jur...
gmihaila
https://github.com/huggingface/datasets/pull/1749
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1749", "html_url": "https://github.com/huggingface/datasets/pull/1749", "diff_url": "https://github.com/huggingface/datasets/pull/1749.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1749.patch", "merged_at": "2021-01-29T18:38...
true
788,431,642
1,748
add Structured Argument Extraction for Korean dataset
closed
[]
2021-01-18T17:14:19
2021-09-17T16:53:18
2021-01-19T11:26:58
stevhliu
https://github.com/huggingface/datasets/pull/1748
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1748", "html_url": "https://github.com/huggingface/datasets/pull/1748", "diff_url": "https://github.com/huggingface/datasets/pull/1748.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1748.patch", "merged_at": "2021-01-19T11:26...
true
788,299,775
1,747
datasets slicing with seed
closed
[]
2021-01-18T14:08:55
2022-10-05T12:37:27
2022-10-05T12:37:27
Hi, I need to slice a dataset with a random seed. I looked into the documentation here https://huggingface.co/docs/datasets/splits.html but could not find a seed option. Could you please tell me how I can get a slice for different seeds? Thank you. @lhoestq
ghost
https://github.com/huggingface/datasets/issues/1747
null
false
788,188,184
1,746
Fix release conda workflow
closed
[]
2021-01-18T11:29:10
2021-01-18T11:31:24
2021-01-18T11:31:23
The current workflow yaml file is not valid according to https://github.com/huggingface/datasets/actions/runs/487638110
lhoestq
https://github.com/huggingface/datasets/pull/1746
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1746", "html_url": "https://github.com/huggingface/datasets/pull/1746", "diff_url": "https://github.com/huggingface/datasets/pull/1746.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1746.patch", "merged_at": "2021-01-18T11:31...
true
787,838,256
1,745
difference between wsc and wsc.fixed for superglue
closed
[]
2021-01-18T00:50:19
2021-01-18T11:02:43
2021-01-18T00:59:34
Hi, I see two versions of wsc in superglue, and I am not sure what the differences are and which one is the original. Could you help clarify the differences? Thanks @lhoestq
ghost
https://github.com/huggingface/datasets/issues/1745
null
false
787,649,811
1,744
Add missing "brief" entries to reuters
closed
[]
2021-01-17T07:58:49
2021-01-18T11:26:09
2021-01-18T11:26:09
This brings the number of examples for ModApte to match the stated `Training set (9,603 docs)...Test Set (3,299 docs)`
jbragg
https://github.com/huggingface/datasets/pull/1744
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1744", "html_url": "https://github.com/huggingface/datasets/pull/1744", "diff_url": "https://github.com/huggingface/datasets/pull/1744.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1744.patch", "merged_at": "2021-01-18T11:26...
true
787,631,412
1,743
Issue while Creating Custom Metric
closed
[]
2021-01-17T07:01:14
2022-06-01T15:49:34
2022-06-01T15:49:34
Hi Team, I am trying to create a custom metric for my training as follows, where f1 is my own metric: ```python def _info(self): # TODO: Specifies the datasets.MetricInfo object return datasets.MetricInfo( # This is the description that will appear on the metrics page. ...
gchhablani
https://github.com/huggingface/datasets/issues/1743
null
false
787,623,640
1,742
Add GLUE Compat (compatible with transformers<3.5.0)
closed
[]
2021-01-17T05:54:25
2023-09-24T09:52:12
2021-03-29T12:43:30
Link to our discussion on Slack (HF internal) https://huggingface.slack.com/archives/C014N4749J9/p1609668119337400 The next step is to add a compatible option in the new `run_glue.py` I duplicated `glue` and made the following changes: 1. Change the name to `glue_compat`. 2. Change the label assignments for MN...
JetRunner
https://github.com/huggingface/datasets/pull/1742
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1742", "html_url": "https://github.com/huggingface/datasets/pull/1742", "diff_url": "https://github.com/huggingface/datasets/pull/1742.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1742.patch", "merged_at": null }
true
787,327,060
1,741
error when running fine-tuning on text classification
closed
[]
2021-01-16T02:23:19
2021-01-16T02:39:28
2021-01-16T02:39:18
dataset: sem_eval_2014_task_1 pretrained_model: bert-base-uncased error description: when I use these resources to fine-tune a text classification model on sem_eval_2014_task_1, there is always a problem (the error also occurs when I use other datasets). And I followed the colab code (url:https://colab.researc...
XiaoYang66
https://github.com/huggingface/datasets/issues/1741
null
false
787,264,605
1,740
add id_liputan6 dataset
closed
[]
2021-01-15T22:58:34
2021-01-20T13:41:26
2021-01-20T13:41:26
id_liputan6 is a large-scale Indonesian summarization dataset. The articles were harvested from an online news portal, yielding 215,827 document-summary pairs: https://arxiv.org/abs/2011.00679
cahya-wirawan
https://github.com/huggingface/datasets/pull/1740
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1740", "html_url": "https://github.com/huggingface/datasets/pull/1740", "diff_url": "https://github.com/huggingface/datasets/pull/1740.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1740.patch", "merged_at": "2021-01-20T13:41...
true
787,219,138
1,739
fixes and improvements for the WebNLG loader
closed
[]
2021-01-15T21:45:23
2021-01-29T14:34:06
2021-01-29T10:53:03
- fixes test sets loading in v3.0 - adds additional fields for v3.0_ru - adds info to the WebNLG data card
Shimorina
https://github.com/huggingface/datasets/pull/1739
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1739", "html_url": "https://github.com/huggingface/datasets/pull/1739", "diff_url": "https://github.com/huggingface/datasets/pull/1739.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1739.patch", "merged_at": "2021-01-29T10:53...
true
786,068,440
1,738
Conda support
closed
[]
2021-01-14T15:11:25
2021-01-15T10:08:20
2021-01-15T10:08:19
Will push a new version on anaconda cloud every time a tag starting with `v` is pushed (like `v1.2.2`). Will appear here: https://anaconda.org/huggingface/datasets Depends on `conda-forge` for now, so the following is required for installation: ``` conda install -c huggingface -c conda-forge datasets ```
LysandreJik
https://github.com/huggingface/datasets/pull/1738
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1738", "html_url": "https://github.com/huggingface/datasets/pull/1738", "diff_url": "https://github.com/huggingface/datasets/pull/1738.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1738.patch", "merged_at": "2021-01-15T10:08...
true