Schema of the rows below (column name, type, and observed minimum/maximum or class count):

Column           Type             Min                    Max
id               int64            599M                   3.48B
number           int64            1                      7.8k
title            string (length)  1                      290
state            string           2 classes
comments         list (length)    0                      30
created_at       timestamp[s]     2020-04-14 10:18:02    2025-10-05 06:37:50
updated_at       timestamp[s]     2020-04-27 16:04:17    2025-10-05 10:32:43
closed_at        timestamp[s]     2020-04-14 12:01:40    2025-10-01 13:56:03
body             string (length)  0                      228k
user             string (length)  3                      26
html_url         string (length)  46                     51
pull_request     dict
is_pull_request  bool             2 classes

Rows (newest first):
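A minimal sketch of one row as a Python dataclass, matching the column names above; the Python types are my own guess at reasonable equivalents, and the sample values below are transcribed from the first two rows. Note that `is_pull_request` is redundant with `pull_request` being non-null, so it can be derived:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class IssueRecord:
    """One row of the schema above (field names match the columns)."""
    id: int
    number: int
    title: str
    state: str                           # "open" or "closed" (2 classes)
    comments: list = field(default_factory=list)
    created_at: str = ""
    updated_at: str = ""
    closed_at: Optional[str] = None      # null while the issue is open
    body: str = ""
    user: str = ""
    html_url: str = ""
    pull_request: Optional[dict] = None  # null for plain issues

    @property
    def is_pull_request(self) -> bool:
        # Mirrors the bool column: true exactly when pull_request is non-null.
        return self.pull_request is not None

pr = IssueRecord(id=844603518, number=2145, title="Implement Dataset add_column",
                 state="closed", user="albertvillanova",
                 html_url="https://github.com/huggingface/datasets/pull/2145",
                 pull_request={"merged_at": "2021-04-29T14:50:43"})
issue = IssueRecord(id=844352067, number=2144,
                    title="Loading wikipedia 20200501.en throws pyarrow related error",
                    state="open", user="TomPyonsuke",
                    html_url="https://github.com/huggingface/datasets/issues/2144")
print(pr.is_pull_request, issue.is_pull_request)  # True False
```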
#2145 Implement Dataset add_column [PR, closed, merged_at 2021-04-29T14:50...]
  id 844,603,518 · albertvillanova · comments [] · created 2021-03-30T14:02:14 · updated 2021-04-29T14:50:44 · closed_at 2021-04-29T14:50:43
  https://github.com/huggingface/datasets/pull/2145
  body: Implement `Dataset.add_column`. Close #1954.

#2144 Loading wikipedia 20200501.en throws pyarrow related error [issue, open]
  id 844,352,067 · TomPyonsuke · comments [] · created 2021-03-30T10:38:31 · updated 2021-04-01T09:21:17 · closed_at null
  https://github.com/huggingface/datasets/issues/2144
  body: **Problem description** I am getting the following error when trying to load wikipedia/20200501.en dataset. **Error log** Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikiped...

#2143 task casting via load_dataset [PR, closed, merged_at null]
  id 844,313,228 · theo-m · comments [] · created 2021-03-30T10:00:42 · updated 2021-06-11T13:20:41 · closed_at 2021-06-11T13:20:36
  https://github.com/huggingface/datasets/pull/2143
  body: wip not satisfied with the API, it means as a dataset implementer I need to write a function with boilerplate and write classes for each `<dataset><task>` "facet".

#2142 Gem V1.1 [PR, closed, merged_at 2021-03-30T00:10...]
  id 843,919,420 · yjernite · comments [] · created 2021-03-29T23:47:02 · updated 2021-03-30T00:10:02 · closed_at 2021-03-30T00:10:02
  https://github.com/huggingface/datasets/pull/2142
  body: This branch updates the GEM benchmark to its 1.1 version which includes: - challenge sets for most tasks - detokenized TurkCorpus to match the rest of the text simplification subtasks - fixed inputs for TurkCorpus and ASSET test sets - 18 languages in WikiLingua cc @sebastianGehrmann

#2141 added spans field for the wikiann datasets [PR, closed, merged_at 2021-03-31T13:27...]
  id 843,914,790 · rabeehk · comments [] · created 2021-03-29T23:38:26 · updated 2021-03-31T13:27:50 · closed_at 2021-03-31T13:27:50
  https://github.com/huggingface/datasets/pull/2141
  body: Hi @lhoestq I tried to add spans to the wikiann datasets. Thanks a lot for kindly having a look. This addresses https://github.com/huggingface/datasets/issues/2130. Best regards Rabeeh

#2140 add banking77 dataset [PR, closed, merged_at 2021-04-09T09:32...]
  id 843,830,451 · dkajtoch · comments [] · created 2021-03-29T21:32:23 · updated 2021-04-09T09:32:18 · closed_at 2021-04-09T09:32:18
  https://github.com/huggingface/datasets/pull/2140
  body: Intent classification/detection dataset from banking category with 77 unique intents.

#2139 TypeError when using save_to_disk in a dataset loaded with ReadInstruction split [issue, closed]
  id 843,662,613 · PedroMLF · comments [] · created 2021-03-29T18:23:54 · updated 2021-03-30T09:12:53 · closed_at 2021-03-30T09:12:53
  https://github.com/huggingface/datasets/issues/2139
  body: Hi, Loading a dataset with `load_dataset` using a split defined via `ReadInstruction` and then saving it to disk results in the following error: `TypeError: Object of type ReadInstruction is not JSON serializable`. Here is the minimal reproducible example: ```python from datasets import load_dataset from dat...

#2138 Add CER metric [PR, closed, merged_at 2021-04-06T07:14...]
  id 843,508,402 · chutaklee · comments [] · created 2021-03-29T15:52:27 · updated 2021-04-06T16:16:11 · closed_at 2021-04-06T07:14:38
  https://github.com/huggingface/datasets/pull/2138
  body: Add Character Error Rate (CER) metric that is used in evaluation in ASR. I also have written unittests (hopefully thorough enough) but I'm not sure how to integrate them into the existed codebase. ```python from cer import CER cer = CER() class TestCER(unittest.TestCase): def test_cer_case_senstive(self)...

#2137 Fix missing infos from concurrent dataset loading [PR, closed, merged_at 2021-03-31T10:35...]
  id 843,502,835 · lhoestq · comments [] · created 2021-03-29T15:46:12 · updated 2021-03-31T10:35:56 · closed_at 2021-03-31T10:35:55
  https://github.com/huggingface/datasets/pull/2137
  body: This should fix issue #2131 When calling `load_dataset` at the same time from 2 workers, one of the worker could have missing split infos when reloading the dataset from the cache.

#2136 fix dialogue action slot name and value [PR, closed, merged_at 2021-03-31T12:48...]
  id 843,492,015 · adamlin120 · comments [] · created 2021-03-29T15:34:13 · updated 2021-03-31T12:48:02 · closed_at 2021-03-31T12:48:01
  https://github.com/huggingface/datasets/pull/2136
  body: fix #2128

#2135 en language data from MLQA dataset is missing [issue, closed]
  id 843,246,344 · rabeehk · comments [] · created 2021-03-29T10:47:50 · updated 2021-03-30T10:20:23 · closed_at 2021-03-30T10:20:23
  https://github.com/huggingface/datasets/issues/2135
  body: Hi I need mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. could you have a look please? @lhoestq thank you for your help to fix this issue.

#2134 Saving large in-memory datasets with save_to_disk crashes because of pickling [issue, closed]
  id 843,242,849 · prokopCerny · comments [] · created 2021-03-29T10:43:15 · updated 2021-05-03T17:59:21 · closed_at 2021-05-03T17:59:21
  https://github.com/huggingface/datasets/issues/2134
  body: Using Datasets 1.5.0 on Python 3.7. Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively faster when done in memory, and I have the ability to requisition a lot of RAM, so...

#2133 bug in mlqa dataset [issue, closed]
  id 843,149,680 · dorost1234 · comments [] · created 2021-03-29T09:03:09 · updated 2021-03-30T17:40:57 · closed_at 2021-03-30T17:40:57
  https://github.com/huggingface/datasets/issues/2133
  body: Hi Looking into MLQA dataset for langauge "ar": ``` "question": [ "\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?", "\u0643\u0...

#2132 TydiQA dataset is mixed and is not split per language [issue, open]
  id 843,142,822 · dorost1234 · comments [] · created 2021-03-29T08:56:21 · updated 2021-04-04T09:57:15 · closed_at null
  https://github.com/huggingface/datasets/issues/2132
  body: Hi @lhoestq Currently TydiQA is mixed and user can only access the whole training set of all languages: https://www.tensorflow.org/datasets/catalog/tydi_qa for using this dataset, one need to train/evaluate in each separate language, and having them mixed, makes it hard to use this dataset. This is much convenien...

#2131 When training with Multi-Node Multi-GPU the worker 2 has TypeError: 'NoneType' object [issue, closed]
  id 843,133,112 · andy-yangz · comments [] · created 2021-03-29T08:45:58 · updated 2021-04-10T11:08:55 · closed_at 2021-04-10T11:08:55
  https://github.com/huggingface/datasets/issues/2131
  body: version: 1.5.0 met a very strange error, I am training large scale language model, and need train on 2 machines(workers). And sometimes I will get this error `TypeError: 'NoneType' object is not iterable` This is traceback ``` 71 |   | Traceback (most recent call last): -- | -- | -- 72 |   | File "run_gpt.py"...

#2130 wikiann dataset is missing columns [issue, closed]
  id 843,111,936 · dorost1234 · comments [] · created 2021-03-29T08:23:00 · updated 2021-08-27T14:44:18 · closed_at 2021-08-27T14:44:18
  https://github.com/huggingface/datasets/issues/2130
  body: Hi Wikiann dataset needs to have "spans" columns, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets, could you please have a look? thank you @lhoestq
#2129 How to train BERT model with next sentence prediction? [issue, closed]
  id 843,033,656 · jnishi · comments [] · created 2021-03-29T06:48:03 · updated 2021-04-01T04:58:40 · closed_at 2021-04-01T04:58:40
  https://github.com/huggingface/datasets/issues/2129
  body: Hello. I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction like ` TextDatasetForNextSentencePrediction` of `huggingface/transformers` ?

#2128 Dialogue action slot name and value are reversed in MultiWoZ 2.2 [issue, closed]
  id 843,023,910 · adamlin120 · comments [] · created 2021-03-29T06:34:02 · updated 2021-03-31T12:48:01 · closed_at 2021-03-31T12:48:01
  https://github.com/huggingface/datasets/issues/2128
  body: Hi @yjernite, thank you for adding MultiWoZ 2.2 in the huggingface datasets platform. It is beneficial! I spot an error that the order of Dialogue action slot names and values are reversed. https://github.com/huggingface/datasets/blob/649b2c469779bc4221e1b6969aa2496d63eb5953/datasets/multi_woz_v22/multi_woz_v22.p...

#2127 make documentation more clear to use different cloud storage [PR, closed, merged_at 2021-03-29T12:16...]
  id 843,017,199 · philschmid · comments [] · created 2021-03-29T06:24:06 · updated 2021-03-29T12:16:24 · closed_at 2021-03-29T12:16:24
  https://github.com/huggingface/datasets/pull/2127
  body: This PR extends the cloud storage documentation. To show you can use a different `fsspec` implementation.

#2126 Replace legacy torch.Tensor constructor with torch.tensor [PR, closed, merged_at 2021-03-29T09:27...]
  id 842,779,966 · mariosasko · comments [] · created 2021-03-28T16:57:30 · updated 2021-03-29T09:27:14 · closed_at 2021-03-29T09:27:13
  https://github.com/huggingface/datasets/pull/2126
  body: The title says it all (motivated by [this issue](https://github.com/pytorch/pytorch/issues/53146) in the pytorch repo).

#2125 Is dataset timit_asr broken? [issue, closed]
  id 842,690,570 · kosuke-kitahara · comments [] · created 2021-03-28T08:30:18 · updated 2021-03-28T12:29:25 · closed_at 2021-03-28T12:29:25
  https://github.com/huggingface/datasets/issues/2125
  body: Using `timit_asr` dataset, I saw all records are the same. ``` python from datasets import load_dataset, load_metric timit = load_dataset("timit_asr") from datasets import ClassLabel import random import pandas as pd from IPython.display import display, HTML def show_random_elements(dataset, num_example...

#2124 Adding ScaNN library to do MIPS? [issue, open]
  id 842,627,729 · shamanez · comments [] · created 2021-03-28T00:07:00 · updated 2021-03-29T13:23:43 · closed_at null
  https://github.com/huggingface/datasets/issues/2124
  body: @lhoestq Hi I am thinking of adding this new google library to do the MIPS similar to **add_faiss_idex**. As the paper suggests, it is really fast when it comes to retrieving the nearest neighbors. https://github.com/google-research/google-research/tree/master/scann ![image](https://user-images.githubusercontent...

#2123 Problem downloading GEM wiki_auto_asset_turk dataset [issue, closed]
  id 842,577,285 · mille-s · comments [] · created 2021-03-27T18:41:28 · updated 2021-05-12T16:15:18 · closed_at 2021-05-12T16:15:17
  https://github.com/huggingface/datasets/issues/2123
  body: @yjernite ### Summary I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code. ### Steps to reproduce Code snippet: from datasets import load_dataset #dataset = load_dataset('gem', 'web_nlg_en') d...

#2122 Fast table queries with interpolation search [PR, closed, merged_at 2021-04-06T14:33...]
  id 842,194,588 · lhoestq · comments [] · created 2021-03-26T18:09:20 · updated 2021-08-04T18:11:59 · closed_at 2021-04-06T14:33:01
  https://github.com/huggingface/datasets/pull/2122
  body: ## Intro This should fix issue #1803 Currently querying examples in a dataset is O(n) because of the underlying pyarrow ChunkedArrays implementation. To fix this I implemented interpolation search that is pretty effective since datasets usually verifies the condition of evenly distributed chunks (the default ch...

#2121 Add Validation For README [PR, closed, merged_at 2021-05-10T09:41...]
  id 842,148,633 · gchhablani · comments [] · created 2021-03-26T17:02:17 · updated 2021-05-10T13:17:18 · closed_at 2021-05-10T09:41:41
  https://github.com/huggingface/datasets/pull/2121
  body: Hi @lhoestq, @yjernite This is a simple Readme parser. All classes specific to different sections can inherit `Section` class, and we can define more attributes in each. Let me know if this is going in the right direction :) Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`: ...

#2120 dataset viewer does not work anymore [issue, closed]
  id 841,954,521 · dorost1234 · comments [] · created 2021-03-26T13:22:13 · updated 2021-03-26T15:52:22 · closed_at 2021-03-26T15:52:22
  https://github.com/huggingface/datasets/issues/2120
  body: Hi I normally use this link to see all datasets and how I can load them https://huggingface.co/datasets/viewer/ Now I am getting 502 Bad Gateway nginx/1.18.0 (Ubuntu) could you bring this webpage back ? this was very helpful @lhoestq thanks for your help

#2119 copy.deepcopy os.environ instead of copy [PR, closed, merged_at 2021-03-26T15:13...]
  id 841,567,199 · NihalHarish · comments [] · created 2021-03-26T03:58:38 · updated 2021-03-26T15:13:52 · closed_at 2021-03-26T15:13:52
  https://github.com/huggingface/datasets/pull/2119
  body: Fixes: https://github.com/huggingface/datasets/issues/2115 - bug fix: using envrion.copy() returns a dict. - using deepcopy(environ) returns an `_environ` object - Changing the datatype of the _environ object can break code, if subsequent libraries perform operations using apis exclusive to the environ object, lik...

#2118 Remove os.environ.copy in Dataset.map [PR, closed, merged_at null]
  id 841,563,329 · mariosasko · comments [] · created 2021-03-26T03:48:17 · updated 2021-03-26T12:03:23 · closed_at 2021-03-26T12:00:05
  https://github.com/huggingface/datasets/pull/2118
  body: Replace `os.environ.copy` with in-place modification Fixes #2115

#2117 load_metric from local "glue.py" meet error 'NoneType' object is not callable [issue, closed]
  id 841,535,283 · Frankie123421 · comments [] · created 2021-03-26T02:35:22 · updated 2021-08-25T21:44:05 · closed_at 2021-03-26T02:40:26
  https://github.com/huggingface/datasets/issues/2117
  body: actual_task = "mnli" if task == "mnli-mm" else task dataset = load_dataset(path='/home/glue.py', name=actual_task) metric = load_metric(path='/home/glue.py', name=actual_task) --------------------------------------------------------------------------- TypeError Traceback (most recent...

#2116 Creating custom dataset results in error while calling the map() function [issue, closed]
  id 841,481,292 · GeetDsa · comments [] · created 2021-03-26T00:37:46 · updated 2021-03-31T14:30:32 · closed_at 2021-03-31T14:30:32
  https://github.com/huggingface/datasets/issues/2116
  body: calling `map()` of `datasets` library results into an error while defining a Custom dataset. Reproducible example: ``` import datasets class MyDataset(datasets.Dataset): def __init__(self, sentences): "Initialization" self.samples = sentences def __len__(self): "Denotes the ...

#2115 The datasets.map() implementation modifies the datatype of os.environ object [issue, closed]
  id 841,283,974 · leleamol · comments [] · created 2021-03-25T20:29:19 · updated 2021-03-26T15:13:52 · closed_at 2021-03-26T15:13:52
  https://github.com/huggingface/datasets/issues/2115
  body: In our testing, we noticed that the datasets.map() implementation is modifying the datatype of python os.environ object from '_Environ' to 'dict'. This causes following function calls to fail as follows: ` x = os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default=None) TypeError: get() takes...

#2114 Support for legal NLP datasets (EURLEX, ECtHR cases and EU-REG-IR) [PR, closed, merged_at 2021-03-31T10:38...]
  id 841,207,878 · iliaschalkidis · comments [] · created 2021-03-25T18:40:17 · updated 2021-03-31T10:38:50 · closed_at 2021-03-31T10:38:50
  https://github.com/huggingface/datasets/pull/2114
  body: Add support for two legal NLP datasets: - EURLEX (https://www.aclweb.org/anthology/P19-1636/) - ECtHR cases (https://arxiv.org/abs/2103.13084) - EU-REG-IR (https://arxiv.org/abs/2101.10726)

#2113 Implement Dataset as context manager [PR, closed, merged_at 2021-03-31T08:30...]
  id 841,191,303 · albertvillanova · comments [] · created 2021-03-25T18:18:30 · updated 2021-03-31T11:30:14 · closed_at 2021-03-31T08:30:11
  https://github.com/huggingface/datasets/pull/2113
  body: When used as context manager, it would be safely deleted if some exception is raised. This will avoid > During handling of the above exception, another exception occurred:

#2112 Support for legal NLP datasets (EURLEX and ECtHR cases) [PR, closed, merged_at null]
  id 841,098,008 · iliaschalkidis · comments [] · created 2021-03-25T16:24:17 · updated 2021-03-25T18:39:31 · closed_at 2021-03-25T18:34:31
  https://github.com/huggingface/datasets/pull/2112
  body: Add support for two legal NLP datasets: - EURLEX (https://www.aclweb.org/anthology/P19-1636/) - ECtHR cases (https://arxiv.org/abs/2103.13084)

#2111 Compute WER metric iteratively [PR, closed, merged_at 2021-04-06T07:20...]
  id 841,082,087 · albertvillanova · comments [] · created 2021-03-25T16:06:48 · updated 2021-04-06T07:20:43 · closed_at 2021-04-06T07:20:43
  https://github.com/huggingface/datasets/pull/2111
  body: Compute WER metric iteratively to avoid MemoryError. Fix #2078.

#2110 Fix incorrect assertion in builder.py [PR, closed, merged_at 2021-04-12T13:33...]
  id 840,794,995 · dreamgonfly · comments [] · created 2021-03-25T10:39:20 · updated 2021-04-12T13:33:03 · closed_at 2021-04-12T13:33:03
  https://github.com/huggingface/datasets/pull/2110
  body: Fix incorrect num_examples comparison assertion in builder.py

#2109 Add more issue templates and customize issue template chooser [PR, closed, merged_at 2021-04-19T06:20...]
  id 840,746,598 · albertvillanova · comments [] · created 2021-03-25T09:41:53 · updated 2021-04-19T06:20:11 · closed_at 2021-04-19T06:20:11
  https://github.com/huggingface/datasets/pull/2109
  body: When opening an issue, it is not evident for the users how to choose a blank issue template. There is a link at the bottom of all the other issue templates (`Don’t see your issue here? Open a blank issue.`), but this is not very visible for users. This is the reason why many users finally chose the `add-dataset` templa...

#2108 Is there a way to use a GPU only when training an Index in the process of add_faisis_index? [issue, open]
  id 840,181,055 · shamanez · comments [] · created 2021-03-24T21:32:16 · updated 2021-03-25T06:31:43 · closed_at null
  https://github.com/huggingface/datasets/issues/2108
  body: Motivation - Some FAISS indexes like IVF consist of the training step that clusters the dataset into a given number of indexes. It would be nice if we can use a GPU to do the training step and covert the index back to CPU as mention in [this faiss example](https://gist.github.com/mdouze/46d6bbbaabca0b9778fca37ed2bcccf6...
839,495,825
2,107
Metadata validation
closed
[]
2021-03-24T08:52:41
2021-04-26T08:27:14
2021-04-26T08:27:13
- `pydantic` metadata schema with dedicated validators against our taxonomy - ci script to validate new changes against this schema and start a vertuous loop - soft validation on tasks ids since we expect the taxonomy to undergo some changes in the near future for reference with the current validation we have ~365...
theo-m
https://github.com/huggingface/datasets/pull/2107
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2107", "html_url": "https://github.com/huggingface/datasets/pull/2107", "diff_url": "https://github.com/huggingface/datasets/pull/2107.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2107.patch", "merged_at": "2021-04-26T08:27...
true
839,084,264
2,106
WMT19 Dataset for Kazakh-English is not formatted correctly
open
[]
2021-03-23T20:14:47
2021-03-25T21:36:20
null
In addition to the bug of languages being switched from Issue @415, there are incorrect translations in the dataset because the English-Kazakh translations have a one off formatting error. The News Commentary v14 parallel data set for kk-en from http://www.statmt.org/wmt19/translation-task.html has a bug here: > ...
trina731
https://github.com/huggingface/datasets/issues/2106
null
false
839,059,226
2,105
Request to remove S2ORC dataset
open
[]
2021-03-23T19:43:06
2021-08-04T19:18:02
null
Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work out those hurdles! Thanks!
kyleclo
https://github.com/huggingface/datasets/issues/2105
null
false
839,027,834
2,104
Trouble loading wiki_movies
closed
[]
2021-03-23T18:59:54
2022-03-30T08:22:58
2022-03-30T08:22:58
Hello, I am trying to load_dataset("wiki_movies") and it gives me this error - `FileNotFoundError: Couldn't find file locally at wiki_movies/wiki_movies.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/wiki_movies/wiki_movies.py or https://s3.amazonaws.com/datasets.huggingfa...
adityaarunsinghal
https://github.com/huggingface/datasets/issues/2104
null
false
838,946,916
2,103
citation, homepage, and license fields of `dataset_info.json` are duplicated many times
closed
[]
2021-03-23T17:18:09
2021-04-06T14:39:59
2021-04-06T14:39:59
This happens after a `map` operation when `num_proc` is set to `>1`. I tested this by cleaning up the json before running the `map` op on the dataset so it's unlikely it's coming from an earlier concatenation. Example result: ``` "citation": "@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {...
samsontmr
https://github.com/huggingface/datasets/issues/2103
null
false
838,794,090
2,102
Move Dataset.to_csv to csv module
closed
[]
2021-03-23T14:35:46
2021-03-24T14:07:35
2021-03-24T14:07:34
Move the implementation of `Dataset.to_csv` to module `datasets.io.csv`.
albertvillanova
https://github.com/huggingface/datasets/pull/2102
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2102", "html_url": "https://github.com/huggingface/datasets/pull/2102", "diff_url": "https://github.com/huggingface/datasets/pull/2102.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2102.patch", "merged_at": "2021-03-24T14:07...
true
838,586,184
2,101
MIAM dataset - new citation details
closed
[]
2021-03-23T10:41:23
2021-03-23T18:08:10
2021-03-23T18:08:10
Hi @lhoestq, I have updated the citations to reference an OpenReview preprint.
eusip
https://github.com/huggingface/datasets/pull/2101
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2101", "html_url": "https://github.com/huggingface/datasets/pull/2101", "diff_url": "https://github.com/huggingface/datasets/pull/2101.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2101.patch", "merged_at": "2021-03-23T18:08...
true
838,574,631
2,100
Fix deprecated warning message and docstring
closed
[]
2021-03-23T10:27:52
2021-03-24T08:19:41
2021-03-23T18:03:49
Fix deprecated warnings: - Use deprecated Sphinx directive in docstring - Fix format of deprecated message - Raise FutureWarning
albertvillanova
https://github.com/huggingface/datasets/pull/2100
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2100", "html_url": "https://github.com/huggingface/datasets/pull/2100", "diff_url": "https://github.com/huggingface/datasets/pull/2100.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2100.patch", "merged_at": "2021-03-23T18:03...
true
838,523,819
2,099
load_from_disk takes a long time to load local dataset
closed
[]
2021-03-23T09:28:37
2021-03-23T17:12:16
2021-03-23T17:12:16
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helpin...
samsontmr
https://github.com/huggingface/datasets/issues/2099
null
false
838,447,959
2,098
SQuAD version
closed
[]
2021-03-23T07:47:54
2021-03-26T09:48:54
2021-03-26T09:48:54
Hi~ I want train on squad dataset. What's the version of the squad? Is it 1.1 or 1.0? I'm new in QA, I don't find some descriptions about it.
h-peng17
https://github.com/huggingface/datasets/issues/2098
null
false
838,105,289
2,097
fixes issue #1110 by descending further if `obj["_type"]` is a dict
closed
[]
2021-03-22T21:00:55
2021-03-22T21:01:11
2021-03-22T21:01:11
Check metrics
dcfidalgo
https://github.com/huggingface/datasets/pull/2097
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2097", "html_url": "https://github.com/huggingface/datasets/pull/2097", "diff_url": "https://github.com/huggingface/datasets/pull/2097.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2097.patch", "merged_at": null }
true
838,038,379
2,096
CoNLL 2003 dataset not including German
closed
[]
2021-03-22T19:23:56
2023-07-25T16:49:07
2023-07-25T16:49:07
Hello, thanks for all the work on developing and maintaining this amazing platform, which I am enjoying working with! I was wondering if there is a reason why the German CoNLL 2003 dataset is not included in the [repository](https://github.com/huggingface/datasets/tree/master/datasets/conll2003), since a copy of it ...
rxian
https://github.com/huggingface/datasets/issues/2096
null
false
837,209,211
2,093
Fix: Allows a feature to be named "_type"
closed
[]
2021-03-21T23:21:57
2021-03-25T14:35:54
2021-03-25T14:35:54
This PR tries to fix issue #1110. Sorry for taking so long to come back to this. It's a simple fix, but i am not sure if it works for all possible types of `obj`. Let me know what you think @lhoestq
dcfidalgo
https://github.com/huggingface/datasets/pull/2093
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2093", "html_url": "https://github.com/huggingface/datasets/pull/2093", "diff_url": "https://github.com/huggingface/datasets/pull/2093.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2093.patch", "merged_at": "2021-03-25T14:35...
true
836,984,043
2,092
How to disable making arrow tables in load_dataset ?
closed
[]
2021-03-21T04:50:07
2022-06-01T16:49:52
2022-06-01T16:49:52
Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ?
Jeevesh8
https://github.com/huggingface/datasets/issues/2092
null
false
836,831,403
2,091
Fix copy snippet in docs
closed
[]
2021-03-20T15:08:22
2021-03-24T08:20:50
2021-03-23T17:18:31
With this change the lines starting with `...` in the code blocks can be properly copied to clipboard.
mariosasko
https://github.com/huggingface/datasets/pull/2091
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2091", "html_url": "https://github.com/huggingface/datasets/pull/2091", "diff_url": "https://github.com/huggingface/datasets/pull/2091.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2091.patch", "merged_at": "2021-03-23T17:18...
true
836,807,498
2,090
Add machine translated multilingual STS benchmark dataset
closed
[]
2021-03-20T13:28:07
2021-03-29T13:24:42
2021-03-29T13:00:15
also see here https://github.com/PhilipMay/stsb-multi-mt
PhilipMay
https://github.com/huggingface/datasets/pull/2090
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2090", "html_url": "https://github.com/huggingface/datasets/pull/2090", "diff_url": "https://github.com/huggingface/datasets/pull/2090.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2090.patch", "merged_at": "2021-03-29T13:00...
true
836,788,019
2,089
Add documentaton for dataset README.md files
closed
[]
2021-03-20T11:44:38
2023-07-25T16:45:38
2023-07-25T16:45:37
Hi, the dataset README files have special headers. Somehow a documentation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passed to multilinguality? - what should be passed to language_creators? - which valu...
PhilipMay
https://github.com/huggingface/datasets/issues/2089
null
false
836,763,733
2,088
change bibtex template to author instead of authors
closed
[]
2021-03-20T09:23:44
2021-03-23T15:40:12
2021-03-23T15:40:12
Hi, IMO when using BibTeX, Author should be used instead of Authors. See here: http://www.bibtex.org/Using/de/ Thanks Philip
PhilipMay
https://github.com/huggingface/datasets/pull/2088
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2088", "html_url": "https://github.com/huggingface/datasets/pull/2088", "diff_url": "https://github.com/huggingface/datasets/pull/2088.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2088.patch", "merged_at": "2021-03-23T15:40...
true
836,587,392
2,087
Update metadata if dataset features are modified
closed
[]
2021-03-20T02:05:23
2021-04-09T09:25:33
2021-04-09T09:25:33
This PR adds a decorator that updates the dataset metadata if a previously executed transform modifies its features. Fixes #2083
mariosasko
https://github.com/huggingface/datasets/pull/2087
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2087", "html_url": "https://github.com/huggingface/datasets/pull/2087", "diff_url": "https://github.com/huggingface/datasets/pull/2087.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2087.patch", "merged_at": "2021-04-09T09:25...
true
836,249,587
2,086
change user permissions to -rw-r--r--
closed
[]
2021-03-19T18:14:56
2021-03-24T13:59:04
2021-03-24T13:59:04
Fix for #2065
bhavitvyamalik
https://github.com/huggingface/datasets/pull/2086
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2086", "html_url": "https://github.com/huggingface/datasets/pull/2086", "diff_url": "https://github.com/huggingface/datasets/pull/2086.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2086.patch", "merged_at": "2021-03-24T13:59...
true
835,870,994
2,085
Fix max_wait_time in requests
closed
[]
2021-03-19T11:22:26
2021-03-23T15:36:38
2021-03-23T15:36:37
it was handled as a min time, not max cc @SBrandeis
lhoestq
https://github.com/huggingface/datasets/pull/2085
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2085", "html_url": "https://github.com/huggingface/datasets/pull/2085", "diff_url": "https://github.com/huggingface/datasets/pull/2085.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2085.patch", "merged_at": "2021-03-23T15:36...
true
835,750,671
2,084
CUAD - Contract Understanding Atticus Dataset
closed
[]
2021-03-19T09:27:43
2021-04-16T08:50:44
2021-04-16T08:50:44
## Adding a Dataset - **Name:** CUAD - Contract Understanding Atticus Dataset - **Description:** As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community. - **Paper:** https://arxiv.org/abs/2103.06268 - **Data:** http...
theo-m
https://github.com/huggingface/datasets/issues/2084
null
false
835,695,425
2,083
`concatenate_datasets` throws error when changing the order of datasets to concatenate
closed
[]
2021-03-19T08:29:48
2021-04-09T09:25:33
2021-04-09T09:25:33
Hey, I played around with the `concatenate_datasets(...)` function: https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate_datasets#datasets.concatenate_datasets and noticed that when the order in which the datasets are concatenated changes an error is thrown where it shou...
patrickvonplaten
https://github.com/huggingface/datasets/issues/2083
null
false
835,401,555
2,082
Updated card using information from data statement and datasheet
closed
[]
2021-03-19T00:39:38
2021-03-19T14:29:09
2021-03-19T14:29:09
I updated and clarified the REFreSD [data card](https://github.com/mcmillanmajora/datasets/blob/refresd_card/datasets/refresd/README.md) with information from the Eleftheria's [website](https://elbria.github.io/post/refresd/). I added brief descriptions where the initial card referred to the paper, and I also recreated...
mcmillanmajora
https://github.com/huggingface/datasets/pull/2082
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2082", "html_url": "https://github.com/huggingface/datasets/pull/2082", "diff_url": "https://github.com/huggingface/datasets/pull/2082.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2082.patch", "merged_at": "2021-03-19T14:29...
true
835,112,968
2,081
Fix docstrings issues
closed
[]
2021-03-18T18:11:01
2021-04-07T14:37:43
2021-04-07T14:37:43
Fix docstring issues.
albertvillanova
https://github.com/huggingface/datasets/pull/2081
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2081", "html_url": "https://github.com/huggingface/datasets/pull/2081", "diff_url": "https://github.com/huggingface/datasets/pull/2081.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2081.patch", "merged_at": "2021-04-07T14:37...
true
835,023,000
2,080
Multidimensional arrays in a Dataset
closed
[]
2021-03-18T16:29:14
2021-03-25T12:46:53
2021-03-25T12:46:53
Hi, I'm trying to put together a `datasets.Dataset` to be used with LayoutLM which is available in `transformers`. This model requires as input the bounding boxes of each of the token of a sequence. This is when I realized that `Dataset` does not support multi-dimensional arrays as a value for a column in a row. ...
vermouthmjl
https://github.com/huggingface/datasets/issues/2080
null
false
834,920,493
2,079
Refactorize Metric.compute signature to force keyword arguments only
closed
[]
2021-03-18T15:05:50
2021-03-23T15:31:44
2021-03-23T15:31:44
Minor refactoring of Metric.compute signature to force the use of keyword arguments, by using the single star syntax.
albertvillanova
https://github.com/huggingface/datasets/pull/2079
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2079", "html_url": "https://github.com/huggingface/datasets/pull/2079", "diff_url": "https://github.com/huggingface/datasets/pull/2079.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2079.patch", "merged_at": "2021-03-23T15:31...
true
834,694,819
2,078
MemoryError when computing WER metric
closed
[]
2021-03-18T11:30:05
2021-05-01T08:31:49
2021-04-06T07:20:43
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File ...
diego-fustes
https://github.com/huggingface/datasets/issues/2078
null
false
834,649,536
2,077
Bump huggingface_hub version
closed
[]
2021-03-18T10:54:34
2021-03-18T11:33:26
2021-03-18T11:33:26
`0.0.2 => 0.0.6`
SBrandeis
https://github.com/huggingface/datasets/pull/2077
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2077", "html_url": "https://github.com/huggingface/datasets/pull/2077", "diff_url": "https://github.com/huggingface/datasets/pull/2077.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2077.patch", "merged_at": "2021-03-18T11:33...
true
834,445,296
2,076
Issue: Dataset download error
open
[]
2021-03-18T06:36:06
2021-03-22T11:52:31
null
The download link in the `iwslt2017.py` file does not seem to work anymore. For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` Would be nice if we could modify the script and use the new downloadable link?
XuhuiZhou
https://github.com/huggingface/datasets/issues/2076
null
false
834,301,246
2,075
ConnectionError: Couldn't reach common_voice.py
closed
[]
2021-03-18T01:19:06
2021-03-20T10:29:41
2021-03-20T10:29:41
When I run: from datasets import load_dataset, load_metric common_voice_train = load_dataset("common_voice", "zh-CN", split="train+validation") common_voice_test = load_dataset("common_voice", "zh-CN", split="test") Got: ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/ma...
LifaSun
https://github.com/huggingface/datasets/issues/2075
null
false
834,268,463
2,074
Fix size categories in YAML Tags
closed
[]
2021-03-18T00:02:36
2021-03-23T17:11:10
2021-03-23T17:11:10
This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also. This PR also adds a couple of infos that I found missing. The code for generating this: ```python for datas...
gchhablani
https://github.com/huggingface/datasets/pull/2074
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2074", "html_url": "https://github.com/huggingface/datasets/pull/2074", "diff_url": "https://github.com/huggingface/datasets/pull/2074.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2074.patch", "merged_at": "2021-03-23T17:11...
true
834,192,501
2,073
Fixes check of TF_AVAILABLE and TORCH_AVAILABLE
closed
[]
2021-03-17T21:28:53
2021-03-18T09:09:25
2021-03-18T09:09:24
# What is this PR doing This PR implements the checks if `Tensorflow` and `Pytorch` are available the same way as `transformers` does it. I added the additional checks for the different `Tensorflow` and `torch` versions. #2068
philschmid
https://github.com/huggingface/datasets/pull/2073
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2073", "html_url": "https://github.com/huggingface/datasets/pull/2073", "diff_url": "https://github.com/huggingface/datasets/pull/2073.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2073.patch", "merged_at": "2021-03-18T09:09...
true
834,054,837
2,072
Fix docstring issues
closed
[]
2021-03-17T18:13:44
2021-03-24T08:20:57
2021-03-18T12:41:21
Fix docstring issues.
albertvillanova
https://github.com/huggingface/datasets/pull/2072
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2072", "html_url": "https://github.com/huggingface/datasets/pull/2072", "diff_url": "https://github.com/huggingface/datasets/pull/2072.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2072.patch", "merged_at": "2021-03-18T12:41...
true
833,950,824
2,071
Multiprocessing is slower than single process
closed
[]
2021-03-17T16:08:58
2021-03-18T09:10:23
2021-03-18T09:10:23
```python # benchmark_filter.py import logging import sys import time from datasets import load_dataset, set_caching_enabled if __name__ == "__main__": set_caching_enabled(False) logging.basicConfig(level=logging.DEBUG) bc = load_dataset("bookcorpus") now = time.time() try: ...
theo-m
https://github.com/huggingface/datasets/issues/2071
null
false
833,799,035
2,070
ArrowInvalid issue for squad v2 dataset
closed
[]
2021-03-17T13:51:49
2021-08-04T17:57:16
2021-08-04T17:57:16
Hello, I am using the huggingface official question answering example notebook (https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb). In the prepare_validation_features function, I made some modifications to tokenize a new set of quesions with the original co...
MichaelYxWang
https://github.com/huggingface/datasets/issues/2070
null
false
833,768,926
2,069
Add and fix docstring for NamedSplit
closed
[]
2021-03-17T13:19:28
2021-03-18T10:27:40
2021-03-18T10:27:40
Add and fix docstring for `NamedSplit`, which was missing.
albertvillanova
https://github.com/huggingface/datasets/pull/2069
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2069", "html_url": "https://github.com/huggingface/datasets/pull/2069", "diff_url": "https://github.com/huggingface/datasets/pull/2069.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2069.patch", "merged_at": "2021-03-18T10:27...
true
833,602,832
2,068
PyTorch not available error on SageMaker GPU docker though it is installed
closed
[]
2021-03-17T10:04:27
2021-06-14T04:47:30
2021-06-14T04:47:30
I get en error when running data loading using SageMaker SDK ``` File "main.py", line 34, in <module> run_training() File "main.py", line 25, in run_training dm.setup('fit') File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn return fn(*a...
sivakhno
https://github.com/huggingface/datasets/issues/2068
null
false
833,559,940
2,067
Multiprocessing windows error
closed
[]
2021-03-17T09:12:28
2021-08-04T17:59:08
2021-08-04T17:59:08
As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on windows the whole Python environment crashes and hangs in a loop. For example at the map_to_array part. An error occurs because the cache file already exists and windows throws an error. After this the log c...
flozi00
https://github.com/huggingface/datasets/issues/2067
null
false
833,480,551
2,066
Fix docstring rendering of Dataset/DatasetDict.from_csv args
closed
[]
2021-03-17T07:23:10
2021-03-17T09:21:21
2021-03-17T09:21:21
Fix the docstring rendering of Dataset/DatasetDict.from_csv args.
albertvillanova
https://github.com/huggingface/datasets/pull/2066
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2066", "html_url": "https://github.com/huggingface/datasets/pull/2066", "diff_url": "https://github.com/huggingface/datasets/pull/2066.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2066.patch", "merged_at": "2021-03-17T09:21...
true
833,291,432
2,065
Only user permission of saved cache files, not group
closed
[]
2021-03-17T00:20:22
2023-03-31T12:17:06
2021-05-10T06:45:29
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
lorr1
https://github.com/huggingface/datasets/issues/2065
null
false
833,002,360
2,064
Fix ted_talks_iwslt version error
closed
[]
2021-03-16T16:43:45
2021-03-16T18:00:08
2021-03-16T18:00:08
This PR fixes the bug where the version argument would be passed twice if the dataset configuration was created on the fly. Fixes #2059
mariosasko
https://github.com/huggingface/datasets/pull/2064
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2064", "html_url": "https://github.com/huggingface/datasets/pull/2064", "diff_url": "https://github.com/huggingface/datasets/pull/2064.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2064.patch", "merged_at": "2021-03-16T18:00...
true
832,993,705
2,063
[Common Voice] Adapt dataset script so that no manual data download is actually needed
closed
[]
2021-03-16T16:33:44
2021-03-17T09:42:52
2021-03-17T09:42:37
This PR changes the dataset script so that no manual data dir is needed anymore.
patrickvonplaten
https://github.com/huggingface/datasets/pull/2063
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2063", "html_url": "https://github.com/huggingface/datasets/pull/2063", "diff_url": "https://github.com/huggingface/datasets/pull/2063.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2063.patch", "merged_at": "2021-03-17T09:42...
true
832,625,483
2,062
docs: fix missing quotation
closed
[]
2021-03-16T10:07:54
2021-03-17T09:21:57
2021-03-17T09:21:57
The json code misses a quote
neal2018
https://github.com/huggingface/datasets/pull/2062
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2062", "html_url": "https://github.com/huggingface/datasets/pull/2062", "diff_url": "https://github.com/huggingface/datasets/pull/2062.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2062.patch", "merged_at": "2021-03-17T09:21...
true
832,596,228
2,061
Cannot load udpos subsets from xtreme dataset using load_dataset()
closed
[]
2021-03-16T09:32:13
2021-06-18T11:54:11
2021-06-18T11:54:10
Hello, I am trying to load the udpos English subset from the xtreme dataset, but it fails with an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset altogether (such as XNLI) has no issue. I have also tried on Colab and ...
adzcodez
https://github.com/huggingface/datasets/issues/2061
null
false
832,588,591
2,060
Filtering refactor
closed
[]
2021-03-16T09:23:30
2023-09-24T09:52:57
2021-10-13T09:09:03
fix https://github.com/huggingface/datasets/issues/2032 benchmarking is somewhat inconclusive, currently running on `book_corpus` with: ```python bc = load_dataset("bookcorpus") now = time.time() bc.filter(lambda x: len(x["text"]) < 64) elapsed = time.time() - now print(elapsed) ``` t...
theo-m
https://github.com/huggingface/datasets/pull/2060
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2060", "html_url": "https://github.com/huggingface/datasets/pull/2060", "diff_url": "https://github.com/huggingface/datasets/pull/2060.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2060.patch", "merged_at": null }
true
832,579,156
2,059
Error while following docs to load the `ted_talks_iwslt` dataset
closed
[]
2021-03-16T09:12:19
2021-03-16T18:00:31
2021-03-16T18:00:07
I am currently trying to load the `ted_talks_iwslt` dataset into google colab. The [docs](https://huggingface.co/datasets/ted_talks_iwslt) mention the following way of doing so. ```python dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014") ``` Executing it results in the error ...
ekdnam
https://github.com/huggingface/datasets/issues/2059
null
false
832,159,844
2,058
Is it possible to convert a `tfds` to HuggingFace `dataset`?
closed
[]
2021-03-15T20:18:47
2023-07-25T16:47:40
2023-07-25T16:47:40
I was having some weird bugs with the `C4` dataset version of HuggingFace, so I decided to try to download `C4` from `tfds`. I would like to know if it is possible to convert a tfds dataset to HuggingFace dataset format :) I can also open a new issue reporting the bug I'm receiving with `datasets.load_dataset('c4','en')` ...
abarbosa94
https://github.com/huggingface/datasets/issues/2058
null
false
832,120,522
2,057
update link to ZEST dataset
closed
[]
2021-03-15T19:22:57
2021-03-16T17:06:28
2021-03-16T17:06:28
Updating the link as the original one is no longer working.
matt-peters
https://github.com/huggingface/datasets/pull/2057
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2057", "html_url": "https://github.com/huggingface/datasets/pull/2057", "diff_url": "https://github.com/huggingface/datasets/pull/2057.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2057.patch", "merged_at": "2021-03-16T17:06...
true
831,718,397
2,056
issue with opus100/en-fr dataset
closed
[]
2021-03-15T11:32:42
2021-03-16T15:49:00
2021-03-16T15:48:59
Hi I am running run_mlm.py code of huggingface repo with opus100/fr-en pair, I am getting this error, note that this error occurs for only this pairs and not the other pairs. Any idea why this is occurring? and how I can solve this? Thanks a lot @lhoestq for your help in advance. ` thread '<unnamed>' panicked...
dorost1234
https://github.com/huggingface/datasets/issues/2056
null
false
831,684,312
2,055
is there a way to override a dataset object saved with save_to_disk?
closed
[]
2021-03-15T10:50:53
2021-03-22T04:06:17
2021-03-22T04:06:17
At the moment when I use save_to_disk, it uses an arbitrary name for the arrow file. Is there a way to override such an object?
shamanez
https://github.com/huggingface/datasets/issues/2055
null
false
831,597,665
2,054
Could not find file for ZEST dataset
closed
[]
2021-03-15T09:11:58
2021-05-03T09:30:24
2021-05-03T09:30:24
I am trying to use zest dataset from Allen AI using below code in colab, ``` !pip install -q datasets from datasets import load_dataset dataset = load_dataset("zest") ``` I am getting the following error, ``` Using custom data configuration default Downloading and preparing dataset zest/default (download: ...
bhadreshpsavani
https://github.com/huggingface/datasets/issues/2054
null
false
831,151,728
2,053
Add bAbI QA tasks
closed
[]
2021-03-14T13:04:39
2021-03-29T12:41:48
2021-03-29T12:41:48
- **Name:** *The (20) QA bAbI tasks* - **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many mor...
gchhablani
https://github.com/huggingface/datasets/pull/2053
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2053", "html_url": "https://github.com/huggingface/datasets/pull/2053", "diff_url": "https://github.com/huggingface/datasets/pull/2053.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2053.patch", "merged_at": "2021-03-29T12:41...
true
831,135,704
2,052
Timit_asr dataset repeats examples
closed
[]
2021-03-14T11:43:43
2021-03-15T10:37:16
2021-03-15T10:37:16
Summary When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same Steps to reproduce As an example, on this code there is the text from the training part: Code snippet: ``` from datasets import load_dataset, load_metric timit = load_dataset("timit_asr") timit['train']['text']...
fermaat
https://github.com/huggingface/datasets/issues/2052
null
false
831,027,021
2,051
Add MDD Dataset
closed
[]
2021-03-14T00:01:05
2021-03-19T11:15:44
2021-03-19T10:31:59
- **Name:** *MDD Dataset* - **Description:** The Movie Dialog dataset (MDD) is designed to measure how well models can perform at goal and non-goal orientated dialog centered around the topic of movies (question answering, recommendation and discussion), from various movie reviews sources such as MovieLens and OMDb. ...
gchhablani
https://github.com/huggingface/datasets/pull/2051
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2051", "html_url": "https://github.com/huggingface/datasets/pull/2051", "diff_url": "https://github.com/huggingface/datasets/pull/2051.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2051.patch", "merged_at": "2021-03-19T10:31...
true
831,006,551
2,050
Build custom dataset to fine-tune Wav2Vec2
closed
[]
2021-03-13T22:01:10
2021-03-15T09:27:28
2021-03-15T09:27:28
Thank you for your recent tutorial on how to finetune Wav2Vec2 on a custom dataset. The example you gave here (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) was on the CommonVoice dataset. However, what if I want to load my own dataset? I have a manifest (transcript and their audio files) in a JSON file.
Omarnabk
https://github.com/huggingface/datasets/issues/2050
null
false
830,978,687
2,049
Fix text-classification tags
closed
[]
2021-03-13T19:51:42
2021-03-16T15:47:46
2021-03-16T15:47:46
There are different tags for text classification right now: `text-classification` and `text_classification`: ![image](https://user-images.githubusercontent.com/29076344/111042457-856bdf00-8463-11eb-93c9-50a30106a1a1.png). This PR fixes it.
gchhablani
https://github.com/huggingface/datasets/pull/2049
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2049", "html_url": "https://github.com/huggingface/datasets/pull/2049", "diff_url": "https://github.com/huggingface/datasets/pull/2049.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2049.patch", "merged_at": "2021-03-16T15:47...
true
830,953,431
2,048
github is not always available - probably need a back up
closed
[]
2021-03-13T18:03:32
2022-04-01T15:27:10
2022-04-01T15:27:10
Yesterday morning github wasn't working: ``` :/tmp$ wget https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py --2021-03-12 18:35:59-- https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py Resolving raw.githubusercontent.com (raw.githubuser...
stas00
https://github.com/huggingface/datasets/issues/2048
null
false
830,626,430
2,047
Multilingual dIalogAct benchMark (miam)
closed
[]
2021-03-12T23:02:55
2021-03-23T10:36:34
2021-03-19T10:47:13
My collaborators (@EmileChapuis, @PierreColombo) and I within the Affective Computing team at Telecom Paris would like to anonymously publish the miam dataset. It is associated with a publication currently under review. We will update the dataset with full citations once the review period is over.
eusip
https://github.com/huggingface/datasets/pull/2047
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2047", "html_url": "https://github.com/huggingface/datasets/pull/2047", "diff_url": "https://github.com/huggingface/datasets/pull/2047.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2047.patch", "merged_at": "2021-03-19T10:47...
true
830,423,033
2,046
add_faiss_index gets very slow when doing it iteratively
closed
[]
2021-03-12T20:27:18
2021-03-24T22:29:11
2021-03-24T22:29:11
As the below code suggests, I want to run add_faiss_index every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowledge_dataset.py). Now, this takes usually 5hrs. Is this normal? Any ...
shamanez
https://github.com/huggingface/datasets/issues/2046
null
false
830,351,527
2,045
Preserve column ordering in Dataset.rename_column
closed
[]
2021-03-12T18:26:47
2021-03-16T14:48:05
2021-03-16T14:35:05
Currently `Dataset.rename_column` doesn't necessarily preserve the order of the columns: ```python >>> from datasets import Dataset >>> d = Dataset.from_dict({'sentences': ["s1", "s2"], 'label': [0, 1]}) >>> d Dataset({ features: ['sentences', 'label'], num_rows: 2 }) >>> d.rename_column('sentences', '...
mariosasko
https://github.com/huggingface/datasets/pull/2045
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2045", "html_url": "https://github.com/huggingface/datasets/pull/2045", "diff_url": "https://github.com/huggingface/datasets/pull/2045.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2045.patch", "merged_at": "2021-03-16T14:35...
true
830,339,905
2,044
Add CBT dataset
closed
[]
2021-03-12T18:04:19
2021-03-19T11:10:13
2021-03-19T10:29:15
This PR adds the [CBT Dataset](https://arxiv.org/abs/1511.02301). Note that I have also added the `raw` dataset as a separate configuration. I couldn't find a suitable "task" for it in YAML tags. The dummy files have one example each, as the examples are slightly big. For `raw` dataset, I just used top few lines,...
gchhablani
https://github.com/huggingface/datasets/pull/2044
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2044", "html_url": "https://github.com/huggingface/datasets/pull/2044", "diff_url": "https://github.com/huggingface/datasets/pull/2044.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2044.patch", "merged_at": "2021-03-19T10:29...
true