| id (int64, 599M-3.29B) | url (string, 58-61 chars) | html_url (string, 46-51 chars) | number (int64, 1-7.72k) | title (string, 1-290 chars) | state (string, 2 classes) | comments (int64, 0-70) | created_at (timestamp[s], 2020-04-14 10:18:02 to 2025-08-05 09:28:51) | updated_at (timestamp[s], 2020-04-27 16:04:17 to 2025-08-05 11:39:56) | closed_at (timestamp[s], 2020-04-14 12:01:40 to 2025-08-01 05:15:45) | user_login (string, 3-26 chars) | labels (list, 0-4 items) | body (string, 0-228k chars) | is_pull_request (bool, 2 classes) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
833,799,035 | https://api.github.com/repos/huggingface/datasets/issues/2070 | https://github.com/huggingface/datasets/issues/2070 | 2,070 | ArrowInvalid issue for squad v2 dataset | closed | 1 | 2021-03-17T13:51:49 | 2021-08-04T17:57:16 | 2021-08-04T17:57:16 | MichaelYxWang | [] | Hello, I am using the huggingface official question answering example notebook (https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb).
In the prepare_validation_features function, I made some modifications to tokenize a new set of questions with the original co... | false |
833,768,926 | https://api.github.com/repos/huggingface/datasets/issues/2069 | https://github.com/huggingface/datasets/pull/2069 | 2,069 | Add and fix docstring for NamedSplit | closed | 1 | 2021-03-17T13:19:28 | 2021-03-18T10:27:40 | 2021-03-18T10:27:40 | albertvillanova | [] | Add and fix docstring for `NamedSplit`, which was missing. | true |
833,602,832 | https://api.github.com/repos/huggingface/datasets/issues/2068 | https://github.com/huggingface/datasets/issues/2068 | 2,068 | PyTorch not available error on SageMaker GPU docker though it is installed | closed | 7 | 2021-03-17T10:04:27 | 2021-06-14T04:47:30 | 2021-06-14T04:47:30 | sivakhno | [] | I get an error when running data loading using the SageMaker SDK
```
File "main.py", line 34, in <module>
run_training()
File "main.py", line 25, in run_training
dm.setup('fit')
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn
return fn(*a... | false |
833,559,940 | https://api.github.com/repos/huggingface/datasets/issues/2067 | https://github.com/huggingface/datasets/issues/2067 | 2,067 | Multiprocessing windows error | closed | 10 | 2021-03-17T09:12:28 | 2021-08-04T17:59:08 | 2021-08-04T17:59:08 | flozi00 | [] | As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop.
For example at the map_to_array part.
An error occurs because the cache file already exists and Windows throws an error. After this the log c... | false |
833,480,551 | https://api.github.com/repos/huggingface/datasets/issues/2066 | https://github.com/huggingface/datasets/pull/2066 | 2,066 | Fix docstring rendering of Dataset/DatasetDict.from_csv args | closed | 0 | 2021-03-17T07:23:10 | 2021-03-17T09:21:21 | 2021-03-17T09:21:21 | albertvillanova | [] | Fix the docstring rendering of Dataset/DatasetDict.from_csv args. | true |
833,291,432 | https://api.github.com/repos/huggingface/datasets/issues/2065 | https://github.com/huggingface/datasets/issues/2065 | 2,065 | Only user permission of saved cache files, not group | closed | 26 | 2021-03-17T00:20:22 | 2023-03-31T12:17:06 | 2021-05-10T06:45:29 | lorr1 | [
"enhancement",
"good first issue"
] | Hello,
It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno... | false |
833,002,360 | https://api.github.com/repos/huggingface/datasets/issues/2064 | https://github.com/huggingface/datasets/pull/2064 | 2,064 | Fix ted_talks_iwslt version error | closed | 0 | 2021-03-16T16:43:45 | 2021-03-16T18:00:08 | 2021-03-16T18:00:08 | mariosasko | [] | This PR fixes the bug where the version argument would be passed twice if the dataset configuration was created on the fly.
Fixes #2059 | true |
832,993,705 | https://api.github.com/repos/huggingface/datasets/issues/2063 | https://github.com/huggingface/datasets/pull/2063 | 2,063 | [Common Voice] Adapt dataset script so that no manual data download is actually needed | closed | 0 | 2021-03-16T16:33:44 | 2021-03-17T09:42:52 | 2021-03-17T09:42:37 | patrickvonplaten | [] | This PR changes the dataset script so that no manual data dir is needed anymore. | true |
832,625,483 | https://api.github.com/repos/huggingface/datasets/issues/2062 | https://github.com/huggingface/datasets/pull/2062 | 2,062 | docs: fix missing quotation | closed | 0 | 2021-03-16T10:07:54 | 2021-03-17T09:21:57 | 2021-03-17T09:21:57 | neal2018 | [] | The json code misses a quote | true |
832,596,228 | https://api.github.com/repos/huggingface/datasets/issues/2061 | https://github.com/huggingface/datasets/issues/2061 | 2,061 | Cannot load udpos subsets from xtreme dataset using load_dataset() | closed | 6 | 2021-03-16T09:32:13 | 2021-06-18T11:54:11 | 2021-06-18T11:54:10 | adzcodez | [
"good first issue"
] | Hello,
I am trying to load the udpos English subset from the xtreme dataset, but it raises an error during loading. I am using datasets v1.4.1, installed via pip. I have tried with other udpos languages, which also fail, though loading a different subset altogether (such as XNLI) has no issue. I have also tried on Colab and ... | false |
832,588,591 | https://api.github.com/repos/huggingface/datasets/issues/2060 | https://github.com/huggingface/datasets/pull/2060 | 2,060 | Filtering refactor | closed | 10 | 2021-03-16T09:23:30 | 2023-09-24T09:52:57 | 2021-10-13T09:09:03 | theo-m | [] | fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
t... | true |
832,579,156 | https://api.github.com/repos/huggingface/datasets/issues/2059 | https://github.com/huggingface/datasets/issues/2059 | 2,059 | Error while following docs to load the `ted_talks_iwslt` dataset | closed | 2 | 2021-03-16T09:12:19 | 2021-03-16T18:00:31 | 2021-03-16T18:00:07 | ekdnam | [
"dataset bug"
] | I am currently trying to load the `ted_talks_iwslt` dataset into google colab.
The [docs](https://huggingface.co/datasets/ted_talks_iwslt) mention the following way of doing so.
```python
dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014")
```
Executing it results in the error ... | false |
832,159,844 | https://api.github.com/repos/huggingface/datasets/issues/2058 | https://github.com/huggingface/datasets/issues/2058 | 2,058 | Is it possible to convert a `tfds` to HuggingFace `dataset`? | closed | 1 | 2021-03-15T20:18:47 | 2023-07-25T16:47:40 | 2023-07-25T16:47:40 | abarbosa94 | [] | I was having some weird bugs with the `C4` dataset version of HuggingFace, so I decided to try to download `C4` from `tfds`. I would like to know if it is possible to convert a tfds dataset to HuggingFace dataset format :)
I can also open a new issue reporting the bug I'm receiving with `datasets.load_dataset('c4','en')` ... | false |
832,120,522 | https://api.github.com/repos/huggingface/datasets/issues/2057 | https://github.com/huggingface/datasets/pull/2057 | 2,057 | update link to ZEST dataset | closed | 0 | 2021-03-15T19:22:57 | 2021-03-16T17:06:28 | 2021-03-16T17:06:28 | matt-peters | [] | Updating the link as the original one is no longer working. | true |
831,718,397 | https://api.github.com/repos/huggingface/datasets/issues/2056 | https://github.com/huggingface/datasets/issues/2056 | 2,056 | issue with opus100/en-fr dataset | closed | 3 | 2021-03-15T11:32:42 | 2021-03-16T15:49:00 | 2021-03-16T15:48:59 | dorost1234 | [] | Hi
I am running the run_mlm.py code of the huggingface repo with the opus100/fr-en pair, and I am getting this error; note that this error occurs only for this pair and not the other pairs. Any idea why this is occurring, and how I can solve it?
Thanks a lot @lhoestq for your help in advance.
`
thread '<unnamed>' panicked... | false |
831,684,312 | https://api.github.com/repos/huggingface/datasets/issues/2055 | https://github.com/huggingface/datasets/issues/2055 | 2,055 | is there a way to override a dataset object saved with save_to_disk? | closed | 4 | 2021-03-15T10:50:53 | 2021-03-22T04:06:17 | 2021-03-22T04:06:17 | shamanez | [] | At the moment, when I use save_to_disk, it uses an arbitrary name for the arrow file. Is there a way to override such an object? | false |
831,597,665 | https://api.github.com/repos/huggingface/datasets/issues/2054 | https://github.com/huggingface/datasets/issues/2054 | 2,054 | Could not find file for ZEST dataset | closed | 4 | 2021-03-15T09:11:58 | 2021-05-03T09:30:24 | 2021-05-03T09:30:24 | bhadreshpsavani | [
"dataset bug"
] | I am trying to use the zest dataset from Allen AI using the below code in colab,
```
!pip install -q datasets
from datasets import load_dataset
dataset = load_dataset("zest")
```
I am getting the following error,
```
Using custom data configuration default
Downloading and preparing dataset zest/default (download: ... | false |
831,151,728 | https://api.github.com/repos/huggingface/datasets/issues/2053 | https://github.com/huggingface/datasets/pull/2053 | 2,053 | Add bAbI QA tasks | closed | 7 | 2021-03-14T13:04:39 | 2021-03-29T12:41:48 | 2021-03-29T12:41:48 | gchhablani | [] | - **Name:** *The (20) QA bAbI tasks*
- **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many mor... | true |
831,135,704 | https://api.github.com/repos/huggingface/datasets/issues/2052 | https://github.com/huggingface/datasets/issues/2052 | 2,052 | Timit_asr dataset repeats examples | closed | 2 | 2021-03-14T11:43:43 | 2021-03-15T10:37:16 | 2021-03-15T10:37:16 | fermaat | [] | Summary
When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same
Steps to reproduce
As an example, this code shows the text from the training part:
Code snippet:
```
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
timit['train']['text']... | false |
831,027,021 | https://api.github.com/repos/huggingface/datasets/issues/2051 | https://github.com/huggingface/datasets/pull/2051 | 2,051 | Add MDD Dataset | closed | 2 | 2021-03-14T00:01:05 | 2021-03-19T11:15:44 | 2021-03-19T10:31:59 | gchhablani | [] | - **Name:** *MDD Dataset*
- **Description:** The Movie Dialog dataset (MDD) is designed to measure how well models can perform at goal and non-goal orientated dialog centered around the topic of movies (question answering, recommendation and discussion), from various movie reviews sources such as MovieLens and OMDb.
... | true |
831,006,551 | https://api.github.com/repos/huggingface/datasets/issues/2050 | https://github.com/huggingface/datasets/issues/2050 | 2,050 | Build custom dataset to fine-tune Wav2Vec2 | closed | 3 | 2021-03-13T22:01:10 | 2021-03-15T09:27:28 | 2021-03-15T09:27:28 | Omarnabk | [
"dataset request"
] | Thank you for your recent tutorial on how to finetune Wav2Vec2 on a custom dataset. The example you gave here (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) was on the CommonVoice dataset. However, what if I want to load my own dataset? I have a manifest (transcripts and their audio files) in a JSON file.
| false |
830,978,687 | https://api.github.com/repos/huggingface/datasets/issues/2049 | https://github.com/huggingface/datasets/pull/2049 | 2,049 | Fix text-classification tags | closed | 1 | 2021-03-13T19:51:42 | 2021-03-16T15:47:46 | 2021-03-16T15:47:46 | gchhablani | [] | There are different tags for text classification right now: `text-classification` and `text_classification`:
.
This PR fixes it.
| true |
830,953,431 | https://api.github.com/repos/huggingface/datasets/issues/2048 | https://github.com/huggingface/datasets/issues/2048 | 2,048 | github is not always available - probably need a back up | closed | 0 | 2021-03-13T18:03:32 | 2022-04-01T15:27:10 | 2022-04-01T15:27:10 | stas00 | [] | Yesterday morning github wasn't working:
```
:/tmp$ wget https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
--2021-03-12 18:35:59--  https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
Resolving raw.githubusercontent.com (raw.githubuser... | false |
830,626,430 | https://api.github.com/repos/huggingface/datasets/issues/2047 | https://github.com/huggingface/datasets/pull/2047 | 2,047 | Multilingual dIalogAct benchMark (miam) | closed | 4 | 2021-03-12T23:02:55 | 2021-03-23T10:36:34 | 2021-03-19T10:47:13 | eusip | [] | My collaborators (@EmileChapuis, @PierreColombo) and I within the Affective Computing team at Telecom Paris would like to anonymously publish the miam dataset. It is assocated with a publication currently under review. We will update the dataset with full citations once the review period is over. | true |
830,423,033 | https://api.github.com/repos/huggingface/datasets/issues/2046 | https://github.com/huggingface/datasets/issues/2046 | 2,046 | add_faiss_index gets very slow when doing it iteratively | closed | 11 | 2021-03-12T20:27:18 | 2021-03-24T22:29:11 | 2021-03-24T22:29:11 | shamanez | [] | As the below code suggests, I want to run add_faiss_index at every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this usually takes 5 hrs. Is this normal? Any ... | false |
830,351,527 | https://api.github.com/repos/huggingface/datasets/issues/2045 | https://github.com/huggingface/datasets/pull/2045 | 2,045 | Preserve column ordering in Dataset.rename_column | closed | 2 | 2021-03-12T18:26:47 | 2021-03-16T14:48:05 | 2021-03-16T14:35:05 | mariosasko | [] | Currently `Dataset.rename_column` doesn't necessarily preserve the order of the columns:
```python
>>> from datasets import Dataset
>>> d = Dataset.from_dict({'sentences': ["s1", "s2"], 'label': [0, 1]})
>>> d
Dataset({
features: ['sentences', 'label'],
num_rows: 2
})
>>> d.rename_column('sentences', '... | true |
830,339,905 | https://api.github.com/repos/huggingface/datasets/issues/2044 | https://github.com/huggingface/datasets/pull/2044 | 2,044 | Add CBT dataset | closed | 2 | 2021-03-12T18:04:19 | 2021-03-19T11:10:13 | 2021-03-19T10:29:15 | gchhablani | [] | This PR adds the [CBT Dataset](https://arxiv.org/abs/1511.02301).
Note that I have also added the `raw` dataset as a separate configuration. I couldn't find a suitable "task" for it in YAML tags.
The dummy files have one example each, as the examples are slightly big. For the `raw` dataset, I just used the top few lines,... | true |
830,279,098 | https://api.github.com/repos/huggingface/datasets/issues/2043 | https://github.com/huggingface/datasets/pull/2043 | 2,043 | Support pickle protocol for dataset splits defined as ReadInstruction | closed | 2 | 2021-03-12T16:35:11 | 2021-03-16T14:25:38 | 2021-03-16T14:05:05 | mariosasko | [] | Fixes #2022 (+ some style fixes) | true |
830,190,276 | https://api.github.com/repos/huggingface/datasets/issues/2042 | https://github.com/huggingface/datasets/pull/2042 | 2,042 | Fix arrow memory checks issue in tests | closed | 0 | 2021-03-12T14:49:52 | 2021-03-12T15:04:23 | 2021-03-12T15:04:22 | lhoestq | [] | The tests currently fail on `master` because the arrow memory verification doesn't return the expected memory evolution when loading an arrow table in memory.
From my experiments, the tests fail only when the full test suite is run.
This made me think that maybe some arrow objects from other tests were not freeing th... | true |
830,180,803 | https://api.github.com/repos/huggingface/datasets/issues/2041 | https://github.com/huggingface/datasets/pull/2041 | 2,041 | Doc2dial update data_infos and data_loaders | closed | 0 | 2021-03-12T14:39:29 | 2021-03-16T11:09:20 | 2021-03-16T11:09:20 | songfeng | [] | true | |
830,169,387 | https://api.github.com/repos/huggingface/datasets/issues/2040 | https://github.com/huggingface/datasets/issues/2040 | 2,040 | ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk | closed | 4 | 2021-03-12T14:27:00 | 2021-08-04T18:00:43 | 2021-08-04T18:00:43 | simonschoe | [] | Hi there,
I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects):
```python
concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']])
```
Yie... | false |
830,047,652 | https://api.github.com/repos/huggingface/datasets/issues/2039 | https://github.com/huggingface/datasets/pull/2039 | 2,039 | Doc2dial rc | closed | 0 | 2021-03-12T11:56:28 | 2021-03-12T15:32:36 | 2021-03-12T15:32:36 | songfeng | [] | Added fix to handle the last turn that is a user turn. | true |
830,036,875 | https://api.github.com/repos/huggingface/datasets/issues/2038 | https://github.com/huggingface/datasets/issues/2038 | 2,038 | outdated dataset_infos.json might fail verifications | closed | 2 | 2021-03-12T11:41:54 | 2021-03-16T16:27:40 | 2021-03-16T16:27:40 | songfeng | [] | The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It would fail the data loader when verifying the download checksum, etc.
Could you please update this file or point me to how to update it?
Thank you. | false |
829,919,685 | https://api.github.com/repos/huggingface/datasets/issues/2037 | https://github.com/huggingface/datasets/pull/2037 | 2,037 | Fix: Wikipedia - save memory by replacing root.clear with elem.clear | closed | 1 | 2021-03-12T09:22:00 | 2021-03-23T06:08:16 | 2021-03-16T11:01:22 | miyamonz | [] | see: https://github.com/huggingface/datasets/issues/2031
What I did:
- replace root.clear with elem.clear
- remove lines to get root element
- $ make style
- $ make test
- some tests required some pip packages, I installed them.
test results on origin/master and my branch are the same. I think it's not related... | true |
829,909,258 | https://api.github.com/repos/huggingface/datasets/issues/2036 | https://github.com/huggingface/datasets/issues/2036 | 2,036 | Cannot load wikitext | closed | 1 | 2021-03-12T09:09:39 | 2021-03-15T08:45:02 | 2021-03-15T08:44:44 | Gpwner | [] | when I execute this code
```
>>> from datasets import load_dataset
>>> test_dataset = load_dataset("wikitext")
```
I got an error, any help?
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.p... | false |
829,475,544 | https://api.github.com/repos/huggingface/datasets/issues/2035 | https://github.com/huggingface/datasets/issues/2035 | 2,035 | wiki40b/wikipedia for almost all languages cannot be downloaded | closed | 11 | 2021-03-11T19:54:54 | 2024-03-15T16:09:49 | 2024-03-15T16:09:48 | dorost1234 | [] | Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except English I am getting this error.
I rea... | false |
829,381,388 | https://api.github.com/repos/huggingface/datasets/issues/2034 | https://github.com/huggingface/datasets/pull/2034 | 2,034 | Fix typo | closed | 0 | 2021-03-11T17:46:13 | 2021-03-11T18:06:25 | 2021-03-11T18:06:25 | pcyin | [] | Change `ENV_XDG_CACHE_HOME ` to `XDG_CACHE_HOME ` | true |
829,295,339 | https://api.github.com/repos/huggingface/datasets/issues/2033 | https://github.com/huggingface/datasets/pull/2033 | 2,033 | Raise an error for outdated sacrebleu versions | closed | 0 | 2021-03-11T16:08:00 | 2021-03-11T17:58:12 | 2021-03-11T17:58:12 | lhoestq | [] | The `sacrebleu` metric seems to only work for sacrebleu>=1.4.12
For example using sacrebleu==1.2.10, an error is raised (from metric/sacrebleu/sacrebleu.py):
```python
def _compute(
self,
predictions,
references,
smooth_method="exp",
smooth_value=None,
force... | true |
829,250,912 | https://api.github.com/repos/huggingface/datasets/issues/2032 | https://github.com/huggingface/datasets/issues/2032 | 2,032 | Use Arrow filtering instead of writing a new arrow file for Dataset.filter | closed | 1 | 2021-03-11T15:18:50 | 2024-01-19T13:26:32 | 2024-01-19T13:26:32 | lhoestq | [
"enhancement"
] | Currently the filter method reads the dataset batch by batch to write a new, filtered, arrow file on disk. Therefore all the reading + writing can take some time.
Using a mask directly on the arrow table doesn't do any read or write operation, so it's significantly quicker.
I think there are two cases:
- i... | false |
829,122,778 | https://api.github.com/repos/huggingface/datasets/issues/2031 | https://github.com/huggingface/datasets/issues/2031 | 2,031 | wikipedia.py generator that extracts XML doesn't release memory | closed | 2 | 2021-03-11T12:51:24 | 2021-03-22T08:33:52 | 2021-03-22T08:33:52 | miyamonz | [] | I tried downloading the Japanese wikipedia, but it always failed, maybe because it ran out of memory.
I found that the generator function that extracts XML data in wikipedia.py doesn't release memory in the loop.
https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikip... | false |
829,110,803 | https://api.github.com/repos/huggingface/datasets/issues/2030 | https://github.com/huggingface/datasets/pull/2030 | 2,030 | Implement Dataset from text | closed | 1 | 2021-03-11T12:34:50 | 2021-03-18T13:29:29 | 2021-03-18T13:29:29 | albertvillanova | [] | Implement `Dataset.from_text`.
Analogous to #1943, #1946. | true |
829,097,290 | https://api.github.com/repos/huggingface/datasets/issues/2029 | https://github.com/huggingface/datasets/issues/2029 | 2,029 | Loading a faiss index KeyError | closed | 4 | 2021-03-11T12:16:13 | 2021-03-12T00:21:09 | 2021-03-12T00:21:09 | nbroad1881 | [
"documentation"
] | I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation.
The basic steps are:
1. Create a dataset (dataset1)
2. Create an embeddings column using DPR
3. Add a faiss index to the dataset
4. Save faiss index to a file
5. Create a new dataset (d... | false |
828,721,393 | https://api.github.com/repos/huggingface/datasets/issues/2028 | https://github.com/huggingface/datasets/pull/2028 | 2,028 | Adding PersiNLU reading-comprehension | closed | 3 | 2021-03-11T04:41:13 | 2021-03-15T09:39:57 | 2021-03-15T09:39:57 | danyaljj | [] | true | |
828,490,444 | https://api.github.com/repos/huggingface/datasets/issues/2027 | https://github.com/huggingface/datasets/pull/2027 | 2,027 | Update format columns in Dataset.rename_columns | closed | 0 | 2021-03-10T23:50:59 | 2021-03-11T14:38:40 | 2021-03-11T14:38:40 | mariosasko | [] | Fixes #2026 | true |
828,194,467 | https://api.github.com/repos/huggingface/datasets/issues/2026 | https://github.com/huggingface/datasets/issues/2026 | 2,026 | KeyError on using map after renaming a column | closed | 3 | 2021-03-10T18:54:17 | 2021-03-11T14:39:34 | 2021-03-11T14:38:40 | gchhablani | [] | Hi,
I'm trying to use the `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying the `prepare_train_features` function.
Here is what I try:
```python
transform = Compose([ToPILImage(),... | false |
828,047,476 | https://api.github.com/repos/huggingface/datasets/issues/2025 | https://github.com/huggingface/datasets/pull/2025 | 2,025 | [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset | closed | 16 | 2021-03-10T17:00:47 | 2021-03-30T14:46:53 | 2021-03-26T16:51:59 | lhoestq | [] | ## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pick... | true |
827,842,962 | https://api.github.com/repos/huggingface/datasets/issues/2024 | https://github.com/huggingface/datasets/pull/2024 | 2,024 | Remove print statement from mnist.py | closed | 1 | 2021-03-10T14:39:58 | 2021-03-11T18:03:52 | 2021-03-11T18:03:51 | gchhablani | [] | true | |
827,819,608 | https://api.github.com/repos/huggingface/datasets/issues/2023 | https://github.com/huggingface/datasets/pull/2023 | 2,023 | Add Romanian to XQuAD | closed | 4 | 2021-03-10T14:24:32 | 2021-03-15T10:08:17 | 2021-03-15T10:08:17 | M-Salti | [] | On Jan 18, XQuAD was updated with a new Romanian validation file ([xquad commit link](https://github.com/deepmind/xquad/commit/60cac411649156efb6aab9dd4c9cde787a2c0345))
| true |
827,435,033 | https://api.github.com/repos/huggingface/datasets/issues/2022 | https://github.com/huggingface/datasets/issues/2022 | 2,022 | ValueError when rename_column on splitted dataset | closed | 2 | 2021-03-10T09:40:38 | 2025-02-05T13:36:07 | 2021-03-16T14:05:05 | simonschoe | [] | Hi there,
I am loading a `.tsv` file via `load_dataset` and subsequently splitting the rows into training and test sets via the `ReadInstruction` API like so:
```python
split = {
'train': ReadInstruction('train', to=90, unit='%'),
'test': ReadInstruction('train', from_=-10, unit='%')
}
dataset = load_datase... | false |
826,988,016 | https://api.github.com/repos/huggingface/datasets/issues/2021 | https://github.com/huggingface/datasets/issues/2021 | 2,021 | Interactively doing save_to_disk and load_from_disk corrupts the datasets object? | closed | 1 | 2021-03-10T02:48:34 | 2021-03-13T10:07:41 | 2021-03-13T10:07:41 | shamanez | [] | dataset_info.json file saved after using save_to_disk gets corrupted as follows.

Is there a way to disable the cache that will save to /tmp/huggiface/datastes ?
I have a feeling there is a seri... | false |
826,961,126 | https://api.github.com/repos/huggingface/datasets/issues/2020 | https://github.com/huggingface/datasets/pull/2020 | 2,020 | Remove unnecessary docstart check in conll-like datasets | closed | 0 | 2021-03-10T02:20:16 | 2021-03-11T13:33:37 | 2021-03-11T13:33:37 | mariosasko | [] | Related to this PR: #1998
Additionally, this PR adds the docstart note to the conll2002 dataset card ([link](https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/ned.train) to the raw data with `DOCSTART` lines).
| true |
826,625,706 | https://api.github.com/repos/huggingface/datasets/issues/2019 | https://github.com/huggingface/datasets/pull/2019 | 2,019 | Replace print with logging in dataset scripts | closed | 2 | 2021-03-09T20:59:34 | 2021-03-12T10:09:01 | 2021-03-11T16:14:19 | mariosasko | [] | Replaces `print(...)` in the dataset scripts with the library logger. | true |
826,473,764 | https://api.github.com/repos/huggingface/datasets/issues/2018 | https://github.com/huggingface/datasets/pull/2018 | 2,018 | Md gender card update | closed | 3 | 2021-03-09T18:57:20 | 2021-03-12T17:31:00 | 2021-03-12T17:31:00 | mcmillanmajora | [] | I updated the descriptions of the datasets as they appear in the HF repo and the descriptions of the source datasets according to what I could find from the paper and the references. I'm still a little unclear about some of the fields of the different configs, and there was little info on the word list and name list. I... | true |
826,428,578 | https://api.github.com/repos/huggingface/datasets/issues/2017 | https://github.com/huggingface/datasets/pull/2017 | 2,017 | Add TF-based Features to handle different modes of data | closed | 0 | 2021-03-09T18:29:52 | 2021-03-17T12:32:08 | 2021-03-17T12:32:07 | gchhablani | [] | Hi,
I am creating this draft PR to work on adding features similar to [TF datasets](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/core/features). I'll be starting with the `Tensor` and `FeatureConnector` classes, and building upon them to add other features as well. This is a work in progress. | true |
825,965,493 | https://api.github.com/repos/huggingface/datasets/issues/2016 | https://github.com/huggingface/datasets/pull/2016 | 2,016 | Not all languages have 2 digit codes. | closed | 0 | 2021-03-09T13:53:39 | 2021-03-11T18:01:03 | 2021-03-11T18:01:03 | asiddhant | [] | . | true |
825,942,108 | https://api.github.com/repos/huggingface/datasets/issues/2015 | https://github.com/huggingface/datasets/pull/2015 | 2,015 | Fix ipython function creation in tests | closed | 0 | 2021-03-09T13:36:59 | 2021-03-09T14:06:04 | 2021-03-09T14:06:03 | lhoestq | [] | The test at `tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function` was failing in python 3.8 because the ipython function was not properly created.
Fix #2010 | true |
825,916,531 | https://api.github.com/repos/huggingface/datasets/issues/2014 | https://github.com/huggingface/datasets/pull/2014 | 2,014 | more explicit method parameters | closed | 0 | 2021-03-09T13:18:29 | 2021-03-10T10:08:37 | 2021-03-10T10:08:36 | theo-m | [] | re: #2009
not super convinced this is better, and while I usually fight against kwargs here it seems to me that it better conveys the relationship to the `_split_generator` method. | true |
825,694,305 | https://api.github.com/repos/huggingface/datasets/issues/2013 | https://github.com/huggingface/datasets/pull/2013 | 2,013 | Add Cryptonite dataset | closed | 0 | 2021-03-09T10:32:11 | 2021-03-09T19:27:07 | 2021-03-09T19:27:06 | theo-m | [] | cc @aviaefrat who's the original author of the dataset & paper, see https://github.com/aviaefrat/cryptonite | true |
825,634,064 | https://api.github.com/repos/huggingface/datasets/issues/2012 | https://github.com/huggingface/datasets/issues/2012 | 2,012 | No upstream branch | closed | 2 | 2021-03-09T09:48:55 | 2021-03-09T11:33:31 | 2021-03-09T11:33:31 | theo-m | [
"documentation"
] | Feels like the documentation on adding a new dataset is outdated?
https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L49-L54
There is no upstream branch on remote. | false |
825,621,952 | https://api.github.com/repos/huggingface/datasets/issues/2011 | https://github.com/huggingface/datasets/pull/2011 | 2,011 | Add RoSent Dataset | closed | 0 | 2021-03-09T09:40:08 | 2021-03-11T18:00:52 | 2021-03-11T18:00:52 | gchhablani | [] | This PR adds a Romanian sentiment analysis dataset. This PR also closes pending PR #1529.
I had to add an `original_id` feature because the dataset files have repeated IDs. I can remove them if needed. I have also added `id` which is unique.
Let me know in case of any issues. | true |
825,567,635 | https://api.github.com/repos/huggingface/datasets/issues/2010 | https://github.com/huggingface/datasets/issues/2010 | 2,010 | Local testing fails | closed | 3 | 2021-03-09T09:01:38 | 2021-03-09T14:06:03 | 2021-03-09T14:06:03 | theo-m | [
"bug"
] | I'm following the CI setup as described in
https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19
in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4
and getting
```
FAILED... | false |
825,541,366 | https://api.github.com/repos/huggingface/datasets/issues/2009 | https://github.com/huggingface/datasets/issues/2009 | 2,009 | Ambiguous documentation | closed | 2 | 2021-03-09T08:42:11 | 2021-03-12T15:01:34 | 2021-03-12T15:01:34 | theo-m | [
"documentation"
] | https://github.com/huggingface/datasets/blob/2ac9a0d24a091989f869af55f9f6411b37ff5188/templates/new_dataset_script.py#L156-L158
Looking at the template, I find this documentation line to be confusing, the method parameters don't include the `gen_kwargs` so I'm unclear where they're coming from.
Happy to push a PR... | false |
825,153,804 | https://api.github.com/repos/huggingface/datasets/issues/2008 | https://github.com/huggingface/datasets/pull/2008 | 2,008 | Fix various typos/grammar in the docs | closed | 2 | 2021-03-09T01:39:28 | 2021-03-15T18:42:49 | 2021-03-09T10:21:32 | mariosasko | [] | This PR:
* fixes various typos/grammar I came across while reading the docs
* adds the "Install with conda" installation instructions
Closes #1959 | true |
824,518,158 | https://api.github.com/repos/huggingface/datasets/issues/2007 | https://github.com/huggingface/datasets/issues/2007 | 2,007 | How to not load huggingface datasets into memory | closed | 2 | 2021-03-08T12:35:26 | 2021-08-04T18:02:25 | 2021-08-04T18:02:25 | dorost1234 | [] | Hi
I am running this example from transformers library version 4.3.3:
(Here is the full documentation https://github.com/huggingface/transformers/issues/8771 but the running command should work out of the box)
USE_TF=0 deepspeed run_seq2seq.py --model_name_or_path google/mt5-base --dataset_name wmt16 --dataset_... | false |
824,457,794 | https://api.github.com/repos/huggingface/datasets/issues/2006 | https://github.com/huggingface/datasets/pull/2006 | 2,006 | Don't gitignore dvc.lock | closed | 0 | 2021-03-08T11:13:08 | 2021-03-08T11:28:35 | 2021-03-08T11:28:34 | lhoestq | [] | The benchmark runs are [failing](https://github.com/huggingface/datasets/runs/2055534629?check_suite_focus=true) because of
```
ERROR: 'dvc.lock' is git-ignored.
```
I removed the dvc.lock file from the gitignore to fix that | true |
824,275,035 | https://api.github.com/repos/huggingface/datasets/issues/2005 | https://github.com/huggingface/datasets/issues/2005 | 2,005 | Setting to torch format not working with torchvision and MNIST | closed | 9 | 2021-03-08T07:38:11 | 2021-03-09T17:58:13 | 2021-03-09T17:58:13 | gchhablani | [] | Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```python
def prepare_features(examples):
images = []
labe... | false |
824,080,760 | https://api.github.com/repos/huggingface/datasets/issues/2004 | https://github.com/huggingface/datasets/pull/2004 | 2,004 | LaRoSeDa | closed | 1 | 2021-03-08T01:06:32 | 2021-03-17T10:43:20 | 2021-03-17T10:43:20 | MihaelaGaman | [] | Add LaRoSeDa to huggingface datasets. | true |
824,034,678 | https://api.github.com/repos/huggingface/datasets/issues/2003 | https://github.com/huggingface/datasets/issues/2003 | 2,003 | Messages are being printed to the `stdout` | closed | 3 | 2021-03-07T22:09:34 | 2023-07-25T16:35:21 | 2023-07-25T16:35:21 | mahnerak | [] | In this code segment, we can see some messages are being printed to the `stdout`.
https://github.com/huggingface/datasets/blob/7e60bb509b595e8edc60a87f32b2bacfc065d607/src/datasets/builder.py#L545-L554
According to the comment, it is done intentionally, but I don't really understand why we don't log it with a higher ... | false |
823,955,744 | https://api.github.com/repos/huggingface/datasets/issues/2002 | https://github.com/huggingface/datasets/pull/2002 | 2,002 | MOROCO | closed | 1 | 2021-03-07T16:22:17 | 2021-03-19T09:52:06 | 2021-03-19T09:52:06 | MihaelaGaman | [] | Add MOROCO to huggingface datasets. | true |
823,946,706 | https://api.github.com/repos/huggingface/datasets/issues/2001 | https://github.com/huggingface/datasets/issues/2001 | 2,001 | Empty evidence document ("provenance") in KILT ELI5 dataset | closed | 1 | 2021-03-07T15:41:35 | 2022-12-19T19:25:14 | 2021-03-17T05:51:01 | donggyukimc | [] | In the original KILT benchmark(https://github.com/facebookresearch/KILT),
all samples has its evidence document (i.e. wikipedia page id) for prediction.
For example, a sample in ELI5 dataset has the format including provenance (=evidence document) like this
`{"id": "1kiwfx", "input": "In Trading Places (1983... | false |
823,899,910 | https://api.github.com/repos/huggingface/datasets/issues/2000 | https://github.com/huggingface/datasets/issues/2000 | 2,000 | Windows Permission Error (most recent version of datasets) | closed | 5 | 2021-03-07T11:55:28 | 2021-03-09T12:42:57 | 2021-03-09T12:42:57 | itsLuisa | [] | Hi everyone,
Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py , only I want to load the data from three local three-column tsv-files (id\ttokens\tpos_tags\n). I am... | false |
823,753,591 | https://api.github.com/repos/huggingface/datasets/issues/1999 | https://github.com/huggingface/datasets/pull/1999 | 1,999 | Add FashionMNIST dataset | closed | 1 | 2021-03-06T21:36:57 | 2021-03-09T09:52:11 | 2021-03-09T09:52:11 | gchhablani | [] | This PR adds [FashionMNIST](https://github.com/zalandoresearch/fashion-mnist) dataset. | true |
823,723,960 | https://api.github.com/repos/huggingface/datasets/issues/1998 | https://github.com/huggingface/datasets/pull/1998 | 1,998 | Add -DOCSTART- note to dataset card of conll-like datasets | closed | 1 | 2021-03-06T19:08:29 | 2021-03-11T02:20:07 | 2021-03-11T02:20:07 | mariosasko | [] | Closes #1983 | true |
823,679,465 | https://api.github.com/repos/huggingface/datasets/issues/1997 | https://github.com/huggingface/datasets/issues/1997 | 1,997 | from datasets import MoleculeDataset, GEOMDataset | closed | 0 | 2021-03-06T15:50:19 | 2021-03-06T16:13:26 | 2021-03-06T16:13:26 | futianfan | [
"dataset request"
] | I met the ImportError: cannot import name 'MoleculeDataset' from 'datasets'. Has anyone met similar issues? Thanks! | false |
823,573,410 | https://api.github.com/repos/huggingface/datasets/issues/1996 | https://github.com/huggingface/datasets/issues/1996 | 1,996 | Error when exploring `arabic_speech_corpus` | closed | 3 | 2021-03-06T05:55:20 | 2022-10-05T13:24:26 | 2022-10-05T13:24:26 | elgeish | [
"bug",
"nlp-viewer",
"speech"
] | Navigate to https://huggingface.co/datasets/viewer/?dataset=arabic_speech_corpus
Error:
```
ImportError: To be able to use this dataset, you need to install the following dependencies['soundfile'] using 'pip install soundfile' for instance'
Traceback:
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/p... | false |
822,878,431 | https://api.github.com/repos/huggingface/datasets/issues/1995 | https://github.com/huggingface/datasets/pull/1995 | 1,995 | [Timit_asr] Make sure not only the first sample is used | closed | 4 | 2021-03-05T08:42:51 | 2021-06-30T06:25:53 | 2021-03-05T08:58:59 | patrickvonplaten | [] | When playing around with timit I noticed that only the first sample is used for all indices. I corrected this typo so that the dataset is correctly loaded. | true |
822,871,238 | https://api.github.com/repos/huggingface/datasets/issues/1994 | https://github.com/huggingface/datasets/issues/1994 | 1,994 | not being able to get wikipedia es language | open | 8 | 2021-03-05T08:31:48 | 2021-03-11T20:46:21 | null | dorost1234 | [] | Hi
I am trying to run some code with the wikipedia config 20200501.es, and I am getting:
Traceback (most recent call last):
File "run_mlm_t5.py", line 608, in <module>
main()
File "run_mlm_t5.py", line 359, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/dara/libs... | false |
822,758,387 | https://api.github.com/repos/huggingface/datasets/issues/1993 | https://github.com/huggingface/datasets/issues/1993 | 1,993 | How to load a dataset with load_from_disk and save it again after doing transformations without changing the original? | closed | 7 | 2021-03-05T05:25:50 | 2021-03-22T04:05:50 | 2021-03-22T04:05:50 | shamanez | [] | I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8 GB of information. Then, during my training process, I update that dataset object, add new elements, and save it in a different place.
When I save the dataset with **save_to_disk**, the original da... | false |
822,672,238 | https://api.github.com/repos/huggingface/datasets/issues/1992 | https://github.com/huggingface/datasets/issues/1992 | 1,992 | `datasets.map` multi processing much slower than single processing | open | 14 | 2021-03-05T02:10:02 | 2024-06-08T20:18:03 | null | hwijeen | [
"bug"
] | Hi, thank you for the great library.
I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G.
My data preparation consists of roughly two steps: `load_dataset`, which splits corpora into a table of sentences, and `map`, which converts a sentence into a list of integers using a tok... | false |
822,554,473 | https://api.github.com/repos/huggingface/datasets/issues/1991 | https://github.com/huggingface/datasets/pull/1991 | 1,991 | Adding the conllpp dataset | closed | 1 | 2021-03-04T22:19:43 | 2021-03-17T10:37:39 | 2021-03-17T10:37:39 | ZihanWangKi | [] | Adding the conllpp dataset; it is a revision of https://github.com/huggingface/datasets/pull/1910. | true |
822,384,502 | https://api.github.com/repos/huggingface/datasets/issues/1990 | https://github.com/huggingface/datasets/issues/1990 | 1,990 | OSError: Memory mapping file failed: Cannot allocate memory | closed | 6 | 2021-03-04T18:21:58 | 2021-08-04T18:04:25 | 2021-08-04T18:04:25 | dorost1234 | [] | Hi,
I am trying to run some code with a wikipedia dataset; here is the command to reproduce the error. You can find the code for run_mlm.py in the huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py
```
python run_mlm.py --model_name_or_path bert-base-multi... | false |
822,328,147 | https://api.github.com/repos/huggingface/datasets/issues/1989 | https://github.com/huggingface/datasets/issues/1989 | 1,989 | Question/problem with dataset labels | closed | 10 | 2021-03-04T17:06:53 | 2023-07-24T14:39:33 | 2023-07-24T14:39:33 | ioana-blue | [] | Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
```
File "../../../models/tr-4.3.2/run_... | false |
822,324,605 | https://api.github.com/repos/huggingface/datasets/issues/1988 | https://github.com/huggingface/datasets/issues/1988 | 1,988 | Readme.md is misleading about kinds of datasets? | closed | 1 | 2021-03-04T17:04:20 | 2021-08-04T18:05:23 | 2021-08-04T18:05:23 | surak | [] | Hi!
In the README.md, you say: "efficient data pre-processing: simple, fast and reproducible data pre-processing for the above public datasets as well as your own local datasets in CSV/JSON/text."
But here:
https://github.com/huggingface/datasets/blob/master/templates/new_dataset_script.py#L82-L117
You menti... | false |
822,308,956 | https://api.github.com/repos/huggingface/datasets/issues/1987 | https://github.com/huggingface/datasets/issues/1987 | 1,987 | wmt15 is broken | closed | 1 | 2021-03-04T16:46:25 | 2022-10-05T13:12:26 | 2022-10-05T13:12:26 | stas00 | [] | While testing the hotfix, I tried a random other wmt release and found wmt15 to be broken:
```
python -c 'from datasets import load_dataset; load_dataset("wmt15", "de-en")'
Downloading: 2.91kB [00:00, 818kB/s]
Downloading: 3.02kB [00:00, 897kB/s]
Downloading: 41.1kB [00:00, 19.1MB/s]
Downloading and preparing da... | false |
822,176,290 | https://api.github.com/repos/huggingface/datasets/issues/1986 | https://github.com/huggingface/datasets/issues/1986 | 1,986 | wmt datasets fail to load | closed | 1 | 2021-03-04T14:18:55 | 2021-03-04T14:31:07 | 2021-03-04T14:31:07 | sabania | [] | ~\.cache\huggingface\modules\datasets_modules\datasets\wmt14\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\wmt_utils.py in _split_generators(self, dl_manager)
758 # Extract manually downloaded files.
759 manual_files = dl_manager.extract(manual_paths_dict)
--> 760 e... | false |
822,170,651 | https://api.github.com/repos/huggingface/datasets/issues/1985 | https://github.com/huggingface/datasets/pull/1985 | 1,985 | Optimize int precision | closed | 8 | 2021-03-04T14:12:23 | 2021-03-22T12:04:40 | 2021-03-16T09:44:00 | albertvillanova | [] | Optimize int precision to reduce dataset file size.
Close #1973, close #1825, close #861. | true |
821,816,588 | https://api.github.com/repos/huggingface/datasets/issues/1984 | https://github.com/huggingface/datasets/issues/1984 | 1,984 | Add tests for WMT datasets | closed | 1 | 2021-03-04T06:46:42 | 2022-11-04T14:19:16 | 2022-11-04T14:19:16 | albertvillanova | [] | As requested in #1981, we need tests for WMT datasets, using dummy data. | false |
821,746,008 | https://api.github.com/repos/huggingface/datasets/issues/1983 | https://github.com/huggingface/datasets/issues/1983 | 1,983 | The size of CoNLL-2003 is not consistent with the official release. | closed | 4 | 2021-03-04T04:41:34 | 2022-10-05T13:13:26 | 2022-10-05T13:13:26 | h-peng17 | [] | Thanks for sharing the dataset! But when I use conll-2003, I have some questions.
The statistics of conll-2003 in this repo are:
\#train 14041 \#dev 3250 \#test 3453
While the official statistics are:
\#train 14987 \#dev 3466 \#test 3684
Wish for your reply~ | false |
821,448,791 | https://api.github.com/repos/huggingface/datasets/issues/1982 | https://github.com/huggingface/datasets/pull/1982 | 1,982 | Fix NestedDataStructure.data for empty dict | closed | 5 | 2021-03-03T20:16:51 | 2021-03-04T16:46:04 | 2021-03-03T22:48:36 | albertvillanova | [] | Fix #1981 | true |
821,411,109 | https://api.github.com/repos/huggingface/datasets/issues/1981 | https://github.com/huggingface/datasets/issues/1981 | 1,981 | wmt datasets fail to load | closed | 6 | 2021-03-03T19:21:39 | 2021-03-04T14:16:47 | 2021-03-03T22:48:36 | stas00 | [] | on master:
```
python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")'
Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/de-en/1.0.0/43e717d978d226150... | false |
821,312,810 | https://api.github.com/repos/huggingface/datasets/issues/1980 | https://github.com/huggingface/datasets/pull/1980 | 1,980 | Loading all answers from drop | closed | 2 | 2021-03-03T17:13:07 | 2021-03-15T11:27:26 | 2021-03-15T11:27:26 | KaijuML | [] | Hello all,
I propose this change to the DROP loading script so that all answers are loaded no matter their type. Currently, only "span" answers are loaded, which excludes a significant number of answers from drop (i.e. "number" and "date").
I updated the script with the version I use for my work. However, I could... | true |
820,977,853 | https://api.github.com/repos/huggingface/datasets/issues/1979 | https://github.com/huggingface/datasets/pull/1979 | 1,979 | Add article_id and process test set template for semeval 2020 task 11… | closed | 3 | 2021-03-03T10:34:32 | 2021-03-13T10:59:40 | 2021-03-12T13:10:50 | hemildesai | [] | … dataset
- `article_id` is needed to create the submission file for the task at https://propaganda.qcri.org/semeval2020-task11/
- The `technique classification` task provides the span indices in a template for the test set that is necessary to complete the task. This PR implements processing of that template for t... | true |
820,956,806 | https://api.github.com/repos/huggingface/datasets/issues/1978 | https://github.com/huggingface/datasets/pull/1978 | 1,978 | Adding ro sts dataset | closed | 3 | 2021-03-03T10:08:53 | 2021-03-05T10:00:14 | 2021-03-05T09:33:55 | lorinczb | [] | Adding [RO-STS](https://github.com/dumitrescustefan/RO-STS) dataset | true |
820,312,022 | https://api.github.com/repos/huggingface/datasets/issues/1977 | https://github.com/huggingface/datasets/issues/1977 | 1,977 | ModuleNotFoundError: No module named 'apache_beam' for wikipedia datasets | open | 2 | 2021-03-02T19:21:28 | 2021-03-03T10:17:40 | null | dorost1234 | [] | Hi
I am trying to run the run_mlm.py code [1] of huggingface with the following "wikipedia"/"20200501.aa" dataset:
`python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.aa --do_train --do_eval --output_dir /tmp/test-mlm --max_seq_l... | false |
820,228,538 | https://api.github.com/repos/huggingface/datasets/issues/1976 | https://github.com/huggingface/datasets/pull/1976 | 1,976 | Add datasets full offline mode with HF_DATASETS_OFFLINE | closed | 0 | 2021-03-02T17:26:59 | 2021-03-03T15:45:31 | 2021-03-03T15:45:30 | lhoestq | [] | Add the HF_DATASETS_OFFLINE environment variable for users who want to use `datasets` offline without having to wait for the network timeouts/retries to happen. This was requested in https://github.com/huggingface/datasets/issues/1939
cc @stas00 | true |
820,205,485 | https://api.github.com/repos/huggingface/datasets/issues/1975 | https://github.com/huggingface/datasets/pull/1975 | 1,975 | Fix flake8 | closed | 0 | 2021-03-02T16:59:13 | 2021-03-04T10:43:22 | 2021-03-04T10:43:22 | albertvillanova | [] | Fix flake8 style. | true |
820,122,223 | https://api.github.com/repos/huggingface/datasets/issues/1974 | https://github.com/huggingface/datasets/pull/1974 | 1,974 | feat(docs): navigate with left/right arrow keys | closed | 0 | 2021-03-02T15:24:50 | 2021-03-04T10:44:12 | 2021-03-04T10:42:48 | ydcjeff | [] | Enables docs navigation with left/right arrow keys. It can be useful for those who navigate with the keyboard a lot.
More info : https://github.com/sphinx-doc/sphinx/pull/2064
You can try here : https://29353-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html | true |
820,077,312 | https://api.github.com/repos/huggingface/datasets/issues/1973 | https://github.com/huggingface/datasets/issues/1973 | 1,973 | Question: what gets stored in the datasets cache and why is it so huge? | closed | 8 | 2021-03-02T14:35:53 | 2021-03-30T14:03:59 | 2021-03-16T09:44:00 | ioana-blue | [] | I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is stored in there and why is it so large? I don't think I noticed this problem before, and it seems to be related to the new version of the datasets library. Any in... | false |
819,752,761 | https://api.github.com/repos/huggingface/datasets/issues/1972 | https://github.com/huggingface/datasets/issues/1972 | 1,972 | 'Dataset' object has no attribute 'rename_column' | closed | 1 | 2021-03-02T08:01:49 | 2022-06-01T16:08:47 | 2022-06-01T16:08:47 | farooqzaman1 | [] | 'Dataset' object has no attribute 'rename_column' | false |
819,714,231 | https://api.github.com/repos/huggingface/datasets/issues/1971 | https://github.com/huggingface/datasets/pull/1971 | 1,971 | Fix ArrowWriter closes stream at exit | closed | 7 | 2021-03-02T07:12:34 | 2021-03-10T16:36:57 | 2021-03-10T16:36:57 | albertvillanova | [] | Current implementation of ArrowWriter does not properly release the `stream` resource (by closing it) if its `finalize()` method is not called and/or an Exception is raised before/during the call to its `finalize()` method.
Therefore, ArrowWriter should be used as a context manager that properly closes its `stream` ... | true |