id | url | html_url | number | title | state | comments | created_at | updated_at | closed_at | user_login | labels | body | is_pull_request
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
775,914,320 | https://api.github.com/repos/huggingface/datasets/issues/1663 | https://github.com/huggingface/datasets/pull/1663 | 1,663 | update saving and loading methods for faiss index so to accept path l… | closed | 1 | 2020-12-29T14:15:37 | 2021-01-18T09:27:23 | 2021-01-18T09:27:23 | tslott | [] | - Update the saving and loading methods for the faiss index so they accept path-like objects from pathlib
The current code only supports using a string type to save and load a faiss index. This change makes it possible to use a string type OR a Path from [pathlib](https://docs.python.org/3/library/pathlib.html). The codes bec... | true |
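For illustration, a minimal sketch of the kind of change described, assuming the `faiss` Python bindings; the function name `save_faiss_index` is an illustrative stand-in for the methods the PR touches:

```python
import os
from typing import Union

import faiss  # assumed installed, e.g. via `pip install faiss-cpu`

def save_faiss_index(index: "faiss.Index", file: Union[str, os.PathLike]) -> None:
    # os.fspath() is a no-op for str and converts pathlib.Path to str,
    # so the downstream faiss call always receives a plain string path.
    faiss.write_index(index, os.fspath(file))

# Both call styles then work:
# save_faiss_index(index, "my.index")
# save_faiss_index(index, Path("indexes") / "my.index")
```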
775,890,154 | https://api.github.com/repos/huggingface/datasets/issues/1662 | https://github.com/huggingface/datasets/issues/1662 | 1,662 | Arrow file is too large when saving vector data | closed | 4 | 2020-12-29T13:23:12 | 2021-01-21T14:12:39 | 2021-01-21T14:12:39 | weiwangorg | [] | I computed the sentence embedding of each sentence of the bookcorpus data using BERT base and saved them to disk. I used 20M sentences, and the obtained arrow file is about 59GB while the original text file is only about 1.3GB. Are there any ways to reduce the size of the arrow file? | false |
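A back-of-the-envelope check (assuming 768-dimensional float32 embeddings from BERT base) suggests the observed size is expected; casting to float16, if the precision loss is acceptable, would roughly halve it:

```python
# Rough size estimate for 20M sentence embeddings:
n_sentences = 20_000_000
dim = 768            # bert-base hidden size
bytes_per_value = 4  # float32
total_bytes = n_sentences * dim * bytes_per_value
print(f"{total_bytes / 1024**3:.1f} GiB")  # ~57.2 GiB, in line with the reported ~59GB
```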
775,840,801 | https://api.github.com/repos/huggingface/datasets/issues/1661 | https://github.com/huggingface/datasets/pull/1661 | 1,661 | updated dataset cards | closed | 0 | 2020-12-29T11:20:40 | 2020-12-30T17:15:16 | 2020-12-30T17:15:16 | Nilanshrajput | [] | added dataset instance in the card. | true |
775,831,423 | https://api.github.com/repos/huggingface/datasets/issues/1660 | https://github.com/huggingface/datasets/pull/1660 | 1,660 | add dataset info | closed | 0 | 2020-12-29T10:58:19 | 2020-12-30T17:04:30 | 2020-12-30T17:04:30 | harshalmittal4 | [] | true | |
775,831,288 | https://api.github.com/repos/huggingface/datasets/issues/1659 | https://github.com/huggingface/datasets/pull/1659 | 1,659 | update dataset info | closed | 0 | 2020-12-29T10:58:01 | 2020-12-30T16:55:07 | 2020-12-30T16:55:07 | harshalmittal4 | [] | true | |
775,651,085 | https://api.github.com/repos/huggingface/datasets/issues/1658 | https://github.com/huggingface/datasets/pull/1658 | 1,658 | brwac dataset: add instances and data splits info | closed | 0 | 2020-12-29T01:24:45 | 2020-12-30T16:54:26 | 2020-12-30T16:54:26 | jonatasgrosman | [] | true | |
775,647,000 | https://api.github.com/repos/huggingface/datasets/issues/1657 | https://github.com/huggingface/datasets/pull/1657 | 1,657 | mac_morpho dataset: add data splits info | closed | 0 | 2020-12-29T01:05:21 | 2020-12-30T16:51:24 | 2020-12-30T16:51:24 | jonatasgrosman | [] | true | |
775,645,356 | https://api.github.com/repos/huggingface/datasets/issues/1656 | https://github.com/huggingface/datasets/pull/1656 | 1,656 | assin 2 dataset: add instances and data splits info | closed | 0 | 2020-12-29T00:57:51 | 2020-12-30T16:50:56 | 2020-12-30T16:50:56 | jonatasgrosman | [] | true | |
775,643,418 | https://api.github.com/repos/huggingface/datasets/issues/1655 | https://github.com/huggingface/datasets/pull/1655 | 1,655 | assin dataset: add instances and data splits info | closed | 0 | 2020-12-29T00:47:56 | 2020-12-30T16:50:23 | 2020-12-30T16:50:23 | jonatasgrosman | [] | true | |
775,640,729 | https://api.github.com/repos/huggingface/datasets/issues/1654 | https://github.com/huggingface/datasets/pull/1654 | 1,654 | lener_br dataset: add instances and data splits info | closed | 0 | 2020-12-29T00:35:12 | 2020-12-30T16:49:32 | 2020-12-30T16:49:32 | jonatasgrosman | [] | true | |
775,632,945 | https://api.github.com/repos/huggingface/datasets/issues/1653 | https://github.com/huggingface/datasets/pull/1653 | 1,653 | harem dataset: add data splits info | closed | 0 | 2020-12-28T23:58:20 | 2020-12-30T16:49:03 | 2020-12-30T16:49:03 | jonatasgrosman | [] | true | |
775,571,813 | https://api.github.com/repos/huggingface/datasets/issues/1652 | https://github.com/huggingface/datasets/pull/1652 | 1,652 | Update dataset cards from previous sprint | closed | 0 | 2020-12-28T20:20:47 | 2020-12-30T16:48:04 | 2020-12-30T16:48:04 | j-chim | [] | This PR updates the dataset cards/readmes for the 4 approved PRs I submitted in the previous sprint. | true |
775,554,319 | https://api.github.com/repos/huggingface/datasets/issues/1651 | https://github.com/huggingface/datasets/pull/1651 | 1,651 | Add twi wordsim353 | closed | 3 | 2020-12-28T19:31:55 | 2021-01-04T09:39:39 | 2021-01-04T09:39:38 | dadelani | [] | Added the citation information to the README file | true |
775,545,912 | https://api.github.com/repos/huggingface/datasets/issues/1650 | https://github.com/huggingface/datasets/pull/1650 | 1,650 | Update README.md | closed | 0 | 2020-12-28T19:09:05 | 2020-12-29T10:43:14 | 2020-12-29T10:43:14 | MisbahKhan789 | [] | added dataset summary | true |
775,544,487 | https://api.github.com/repos/huggingface/datasets/issues/1649 | https://github.com/huggingface/datasets/pull/1649 | 1,649 | Update README.md | closed | 0 | 2020-12-28T19:05:00 | 2020-12-29T10:50:58 | 2020-12-29T10:43:03 | MisbahKhan789 | [] | Added information in the dataset card | true |
775,542,360 | https://api.github.com/repos/huggingface/datasets/issues/1648 | https://github.com/huggingface/datasets/pull/1648 | 1,648 | Update README.md | closed | 0 | 2020-12-28T18:59:06 | 2020-12-29T10:39:14 | 2020-12-29T10:39:14 | MisbahKhan789 | [] | added dataset summary | true |
775,525,799 | https://api.github.com/repos/huggingface/datasets/issues/1647 | https://github.com/huggingface/datasets/issues/1647 | 1,647 | NarrativeQA fails to load with `load_dataset` | closed | 3 | 2020-12-28T18:16:09 | 2021-01-05T12:05:08 | 2021-01-03T17:58:05 | eric-mitchell | [] | When loading the NarrativeQA dataset with `load_dataset('narrativeqa')` as given in the documentation [here](https://huggingface.co/datasets/narrativeqa), I receive a cascade of exceptions, ending with
FileNotFoundError: Couldn't find file locally at narrativeqa/narrativeqa.py, or remotely at
https://r... | false |
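A common workaround at the time, assuming the loading script existed on the repository's master branch but not in the installed release, was to pin `script_version` (a `load_dataset` parameter in datasets 1.x):

```python
from datasets import load_dataset

# Pull the loading script from the master branch instead of the
# installed release, which may not ship it yet.
dataset = load_dataset("narrativeqa", script_version="master")
```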
775,499,344 | https://api.github.com/repos/huggingface/datasets/issues/1646 | https://github.com/huggingface/datasets/pull/1646 | 1,646 | Add missing homepage in some dataset cards | closed | 0 | 2020-12-28T17:09:48 | 2021-01-04T14:08:57 | 2021-01-04T14:08:56 | lhoestq | [] | In some dataset cards the homepage field in the `Dataset Description` section was missing/empty | true |
775,473,106 | https://api.github.com/repos/huggingface/datasets/issues/1645 | https://github.com/huggingface/datasets/pull/1645 | 1,645 | Rename "part-of-speech-tagging" tag in some dataset cards | closed | 0 | 2020-12-28T16:09:09 | 2021-01-07T10:08:14 | 2021-01-07T10:08:13 | lhoestq | [] | `part-of-speech-tagging` was not part of the tagging taxonomy under `structure-prediction` | true |
775,375,880 | https://api.github.com/repos/huggingface/datasets/issues/1644 | https://github.com/huggingface/datasets/issues/1644 | 1,644 | HoVeR dataset fails to load | closed | 1 | 2020-12-28T12:27:07 | 2022-10-05T12:40:34 | 2022-10-05T12:40:34 | urikz | [] | Hi! I'm getting an error when trying to load the **HoVeR** dataset. Another one (**SQuAD**) does work for me. I'm using the latest (1.1.3) version of the library.
Steps to reproduce the error:
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("hover")
Traceback (most recent call last):
... | false |
775,280,046 | https://api.github.com/repos/huggingface/datasets/issues/1643 | https://github.com/huggingface/datasets/issues/1643 | 1,643 | Dataset social_bias_frames 404 | closed | 1 | 2020-12-28T08:35:34 | 2020-12-28T08:38:07 | 2020-12-28T08:38:07 | atemate | [] | ```
>>> from datasets import load_dataset
>>> dataset = load_dataset("social_bias_frames")
...
Downloading and preparing dataset social_bias_frames/default
...
~/.pyenv/versions/3.7.6/lib/python3.7/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, ... | false |
775,159,568 | https://api.github.com/repos/huggingface/datasets/issues/1642 | https://github.com/huggingface/datasets/pull/1642 | 1,642 | Ollie dataset | closed | 0 | 2020-12-28T02:43:37 | 2021-01-04T13:35:25 | 2021-01-04T13:35:24 | huu4ontocord | [] | This is the dataset used to train the Ollie open information extraction algorithm. It has over 21M sentences. See http://knowitall.github.io/ollie/ for more details. | true |
775,110,872 | https://api.github.com/repos/huggingface/datasets/issues/1641 | https://github.com/huggingface/datasets/issues/1641 | 1,641 | muchocine dataset cannot be downloaded | closed | 5 | 2020-12-27T21:26:28 | 2021-08-03T05:07:29 | 2021-08-03T05:07:29 | mrm8488 | [
"wontfix",
"dataset bug"
] | ```python
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, ... | false |
774,921,836 | https://api.github.com/repos/huggingface/datasets/issues/1640 | https://github.com/huggingface/datasets/pull/1640 | 1,640 | Fix "'BertTokenizerFast' object has no attribute 'max_len'" | closed | 0 | 2020-12-26T19:25:41 | 2020-12-28T17:26:35 | 2020-12-28T17:26:35 | mflis | [] | Tensorflow 2.3.0 gives:
FutureWarning: The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead.
Tensorflow 2.4.0 gives:
AttributeError 'BertTokenizerFast' object has no attribute 'max_len' | true |
774,903,472 | https://api.github.com/repos/huggingface/datasets/issues/1639 | https://github.com/huggingface/datasets/issues/1639 | 1,639 | bug with sst2 in glue | closed | 3 | 2020-12-26T16:57:23 | 2022-10-05T12:40:16 | 2022-10-05T12:40:16 | ghost | [] | Hi
I am getting very low accuracy on SST2. I investigated this and observed that for this dataset the sentences are tokenized, while the other datasets in GLUE are fine; please see below.
Is there any alternative way to get the untokenized sentences? I am unfortunately under time pressure to report some results on ... | false |
774,869,184 | https://api.github.com/repos/huggingface/datasets/issues/1638 | https://github.com/huggingface/datasets/pull/1638 | 1,638 | Add id_puisi dataset | closed | 0 | 2020-12-26T12:41:55 | 2020-12-30T16:34:17 | 2020-12-30T16:34:17 | ilhamfp | [] | Puisi (poem) is an Indonesian poetic form. The dataset contains 7223 Indonesian puisi with their titles and authors. :) | true |
774,710,014 | https://api.github.com/repos/huggingface/datasets/issues/1637 | https://github.com/huggingface/datasets/pull/1637 | 1,637 | Added `pn_summary` dataset | closed | 2 | 2020-12-25T11:01:24 | 2021-01-04T13:43:19 | 2021-01-04T13:43:19 | m3hrdadfi | [] | #1635
You did a great job making the procedure for adding a dataset so fluent. I took the chance to add the dataset on my own. Thank you for your awesome work, and I hope this dataset makes researchers happy, specifically those interested in the Persian language (Farsi)! | true |
774,574,378 | https://api.github.com/repos/huggingface/datasets/issues/1636 | https://github.com/huggingface/datasets/issues/1636 | 1,636 | winogrande cannot be downloaded | closed | 2 | 2020-12-24T22:28:22 | 2022-10-05T12:35:44 | 2022-10-05T12:35:44 | ghost | [] | Hi,
I am getting this error when trying to run the code on the cloud. Thank you for any suggestion and help on this @lhoestq
```
File "./finetune_trainer.py", line 318, in <module>
main()
File "./finetune_trainer.py", line 148, in main
for task in data_args.tasks]
File "./finetune_trainer.py", ... | false |
774,524,492 | https://api.github.com/repos/huggingface/datasets/issues/1635 | https://github.com/huggingface/datasets/issues/1635 | 1,635 | Persian Abstractive/Extractive Text Summarization | closed | 0 | 2020-12-24T17:47:12 | 2021-01-04T15:11:04 | 2021-01-04T15:11:04 | m3hrdadfi | [
"dataset request"
] | Assembling datasets tailored to different tasks and languages is a valuable goal. It would be great to have this dataset included.
## Adding a Dataset
- **Name:** *pn-summary*
- **Description:** *A well-structured summarization dataset for the Persian language consists of 93,207 records. It is prepared for Abs... | false |
774,487,934 | https://api.github.com/repos/huggingface/datasets/issues/1634 | https://github.com/huggingface/datasets/issues/1634 | 1,634 | Inspecting datasets per category | closed | 4 | 2020-12-24T15:26:34 | 2022-10-04T14:57:33 | 2022-10-04T14:57:33 | ghost | [] | Hi
Is there a way I could get all NLI datasets/all QA datasets to get some understanding of the available datasets per category? It is hard for me to inspect the datasets one by one on the webpage. Thanks for the suggestions @lhoestq | false |
774,422,603 | https://api.github.com/repos/huggingface/datasets/issues/1633 | https://github.com/huggingface/datasets/issues/1633 | 1,633 | social_i_qa wrong format of labels | closed | 2 | 2020-12-24T13:11:54 | 2020-12-30T17:18:49 | 2020-12-30T17:18:49 | ghost | [] | Hi,
there is an extra "\n" in the labels of the social_i_qa dataset; no big deal, but I was wondering if you could remove it to make it consistent.
so the label is 'label': '1\n', not '1'
thanks
```
>>> import datasets
>>> from datasets import load_dataset
>>> dataset = load_dataset(
... 'social_i_qa')
cahce dir /jul... | false |
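Until the loader is fixed, a client-side workaround sketch is to strip the stray newline with `map`:

```python
from datasets import load_dataset

dataset = load_dataset("social_i_qa")
# Strip the trailing "\n" so labels read '1' instead of '1\n'.
dataset = dataset.map(lambda example: {"label": example["label"].strip()})
```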
774,388,625 | https://api.github.com/repos/huggingface/datasets/issues/1632 | https://github.com/huggingface/datasets/issues/1632 | 1,632 | SICK dataset | closed | 0 | 2020-12-24T12:40:14 | 2021-02-05T15:49:25 | 2021-02-05T15:49:25 | rabeehk | [
"dataset request"
] | Hi, it would be great to have this dataset included. I might be missing something, but I could not find it in the list of already included datasets. Thank you.
## Adding a Dataset
- **Name:** SICK
- **Description:** SICK consists of about 10,000 English sentence pairs that include many examples of the lexical,... | false |
774,349,222 | https://api.github.com/repos/huggingface/datasets/issues/1631 | https://github.com/huggingface/datasets/pull/1631 | 1,631 | Update README.md | closed | 0 | 2020-12-24T11:45:52 | 2020-12-28T17:35:41 | 2020-12-28T17:16:04 | savasy | [] | I made small change for citation | true |
774,332,129 | https://api.github.com/repos/huggingface/datasets/issues/1630 | https://github.com/huggingface/datasets/issues/1630 | 1,630 | Adding UKP Argument Aspect Similarity Corpus | closed | 3 | 2020-12-24T11:01:31 | 2022-10-05T12:36:12 | 2022-10-05T12:36:12 | rabeehk | [
"dataset request"
] | Hi, it would be great to have this dataset included.
## Adding a Dataset
- **Name:** UKP Argument Aspect Similarity Corpus
- **Description:** The UKP Argument Aspect Similarity Corpus (UKP ASPECT) includes 3,595 sentence pairs over 28 controversial topics. Each sentence pair was annotated via crowdsourcing as ei... | false |
774,255,716 | https://api.github.com/repos/huggingface/datasets/issues/1629 | https://github.com/huggingface/datasets/pull/1629 | 1,629 | add wongnai_reviews test set labels | closed | 0 | 2020-12-24T08:02:31 | 2020-12-28T17:23:39 | 2020-12-28T17:23:39 | cstorm125 | [] | - add test set labels provided by @ekapolc
- refactor `star_rating` to a `datasets.features.ClassLabel` field | true |
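For context, a sketch of what declaring such a field looks like; the feature names here are illustrative, not the dataset's actual schema:

```python
from datasets import ClassLabel, Features, Value

features = Features({
    "review_body": Value("string"),
    # ClassLabel stores integer ids with attached human-readable names:
    "star_rating": ClassLabel(names=["1", "2", "3", "4", "5"]),
})
```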
774,091,411 | https://api.github.com/repos/huggingface/datasets/issues/1628 | https://github.com/huggingface/datasets/pull/1628 | 1,628 | made suggested changes to hate-speech-and-offensive-language | closed | 0 | 2020-12-23T23:25:32 | 2020-12-28T10:11:20 | 2020-12-28T10:11:20 | MisbahKhan789 | [] | true | |
773,960,255 | https://api.github.com/repos/huggingface/datasets/issues/1627 | https://github.com/huggingface/datasets/issues/1627 | 1,627 | `Dataset.map` disable progress bar | closed | 5 | 2020-12-23T17:53:42 | 2025-05-16T16:36:24 | 2020-12-26T19:57:17 | Nickil21 | [] | I can't find anything to turn off the `tqdm` progress bars while running a preprocessing function using `Dataset.map`. I want something akin to `disable_tqdm=True` in `transformers`. Is there something like that? | false |
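For reference, later releases of the library added a global switch for this (API name as of newer `datasets` versions, not necessarily 1.1.3):

```python
import datasets

# Globally silence tqdm progress bars before calling Dataset.map:
datasets.disable_progress_bar()
```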
773,840,368 | https://api.github.com/repos/huggingface/datasets/issues/1626 | https://github.com/huggingface/datasets/pull/1626 | 1,626 | Fix dataset_dict.shuffle with single seed | closed | 0 | 2020-12-23T14:33:36 | 2021-01-04T10:00:04 | 2021-01-04T10:00:03 | lhoestq | [] | Fix #1610
I added support for a single integer seed in `DatasetDict.shuffle`. Previously only a dictionary of seeds was allowed.
Moreover, I added the missing `seed` parameter. Previously only `seeds` was allowed. | true |
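A sketch of the two call styles after this fix (the dataset name is illustrative):

```python
from datasets import load_dataset

dsd = load_dataset("glue", "sst2")  # any DatasetDict
dsd = dsd.shuffle(seed=42)  # single int applied to every split (added by this PR)
dsd = dsd.shuffle(seeds={"train": 1, "validation": 2, "test": 3})  # per-split dict
```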
773,771,596 | https://api.github.com/repos/huggingface/datasets/issues/1625 | https://github.com/huggingface/datasets/pull/1625 | 1,625 | Fixed bug in the shape property | closed | 0 | 2020-12-23T13:33:21 | 2021-01-02T23:22:52 | 2020-12-23T14:13:13 | noaonoszko | [] | Fix to the bug reported in issue #1622. Just replaced `return tuple(self._indices.num_rows, self._data.num_columns)` by `return (self._indices.num_rows, self._data.num_columns)`. | true |
773,669,700 | https://api.github.com/repos/huggingface/datasets/issues/1624 | https://github.com/huggingface/datasets/issues/1624 | 1,624 | Cannot download ade_corpus_v2 | closed | 2 | 2020-12-23T10:58:14 | 2021-08-03T05:08:54 | 2021-08-03T05:08:54 | him1411 | [] | I tried to load the dataset following this URL: https://huggingface.co/datasets/ade_corpus_v2
but received this error:
`Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_con... | false |
772,950,710 | https://api.github.com/repos/huggingface/datasets/issues/1623 | https://github.com/huggingface/datasets/pull/1623 | 1,623 | Add CLIMATE-FEVER dataset | closed | 1 | 2020-12-22T13:34:05 | 2020-12-22T17:53:53 | 2020-12-22T17:53:53 | tdiggelm | [] | As suggested by @SBrandeis, a fresh PR that adds CLIMATE-FEVER. Replaces PR #1579.
---
A dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate-change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the Eng... | true |
772,940,768 | https://api.github.com/repos/huggingface/datasets/issues/1622 | https://github.com/huggingface/datasets/issues/1622 | 1,622 | Can't call shape on the output of select() | closed | 2 | 2020-12-22T13:18:40 | 2020-12-23T13:37:13 | 2020-12-23T13:37:12 | noaonoszko | [] | I get the error `TypeError: tuple expected at most 1 argument, got 2` when calling `shape` on the output of `select()`.
It's line 531 in shape in arrow_dataset.py that causes the problem:
``return tuple(self._indices.num_rows, self._data.num_columns)``
This makes sense, since `tuple(num1, num2)` is not a valid call.... | false |
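A minimal reproduction of the underlying Python error:

```python
# tuple() accepts a single iterable, not two scalars:
try:
    tuple(531, 2)
except TypeError as err:
    print(err)  # tuple expected at most 1 argument, got 2

# A tuple literal, as used in the fix, works as intended:
print((531, 2))
```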
772,940,417 | https://api.github.com/repos/huggingface/datasets/issues/1621 | https://github.com/huggingface/datasets/pull/1621 | 1,621 | updated dutch_social.py for loading jsonl (lines instead of list) files | closed | 0 | 2020-12-22T13:18:11 | 2020-12-23T11:51:51 | 2020-12-23T11:51:51 | skyprince999 | [] | The data loader is modified to load files on the fly. Earlier it read the entire file and then processed the records.
Please refer to the previous PR #1321 | true |
772,620,056 | https://api.github.com/repos/huggingface/datasets/issues/1620 | https://github.com/huggingface/datasets/pull/1620 | 1,620 | Adding myPOS2017 dataset | closed | 4 | 2020-12-22T04:04:55 | 2022-10-03T09:38:23 | 2022-10-03T09:38:23 | hungluumfc | [
"dataset contribution"
] | myPOS Corpus (Myanmar Part-of-Speech Corpus) for Myanmar language NLP Research and Developments | true |
772,508,558 | https://api.github.com/repos/huggingface/datasets/issues/1619 | https://github.com/huggingface/datasets/pull/1619 | 1,619 | data loader for reading comprehension task | closed | 2 | 2020-12-21T22:40:34 | 2020-12-28T10:32:53 | 2020-12-28T10:32:53 | songfeng | [] | added doc2dial data loader and dummy data for reading comprehension task. | true |
772,248,730 | https://api.github.com/repos/huggingface/datasets/issues/1618 | https://github.com/huggingface/datasets/issues/1618 | 1,618 | Can't filter language:EN on https://huggingface.co/datasets | closed | 3 | 2020-12-21T15:23:23 | 2020-12-22T17:17:00 | 2020-12-22T17:16:09 | davidefiocco | [] | When visiting https://huggingface.co/datasets, I don't see an obvious way to filter only English datasets. This is unexpected to me; am I missing something? I'd expect English to be selectable in the language widget. This problem reproduces on Mozilla Firefox and MS Edge:

- **Point of Contact:** [Mustafa Keskin](htt... | true |
771,641,088 | https://api.github.com/repos/huggingface/datasets/issues/1615 | https://github.com/huggingface/datasets/issues/1615 | 1,615 | Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir` | open | 10 | 2020-12-20T17:27:38 | 2021-06-25T13:11:33 | null | SapirWeissbuch | [] | Hello,
I'm having an issue downloading the TriviaQA dataset with `load_dataset`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```python
import datasets
dataset = datasets.load_dataset("trivia_qa", "rc", c... | false |
771,577,050 | https://api.github.com/repos/huggingface/datasets/issues/1613 | https://github.com/huggingface/datasets/pull/1613 | 1,613 | Add id_clickbait | closed | 0 | 2020-12-20T12:24:49 | 2020-12-22T17:45:27 | 2020-12-22T17:45:27 | cahya-wirawan | [] | This is the CLICK-ID dataset, a collection of annotated clickbait Indonesian news headlines that was collected from 12 local online news | true |
771,558,160 | https://api.github.com/repos/huggingface/datasets/issues/1612 | https://github.com/huggingface/datasets/pull/1612 | 1,612 | Adding wiki asp dataset as new PR | closed | 0 | 2020-12-20T10:25:08 | 2020-12-21T14:13:33 | 2020-12-21T14:13:33 | katnoria | [] | Hi @lhoestq, Adding wiki asp as new branch because #1539 has other commits. This version has dummy data for each domain <20/30KB. | true |
771,486,456 | https://api.github.com/repos/huggingface/datasets/issues/1611 | https://github.com/huggingface/datasets/issues/1611 | 1,611 | shuffle with torch generator | closed | 8 | 2020-12-20T00:57:14 | 2022-06-01T15:30:13 | 2022-06-01T15:30:13 | rabeehkarimimahabadi | [
"enhancement"
] | Hi
I need to shuffle multiple large datasets with `generator = torch.Generator()` for a distributed sampler, which needs to make sure the datasets are consistent across different cores. For this it is really necessary for me to use a torch generator; based on the documentation this generator is not supported with datasets, I... | false |
771,453,599 | https://api.github.com/repos/huggingface/datasets/issues/1610 | https://github.com/huggingface/datasets/issues/1610 | 1,610 | shuffle does not accept seed | closed | 3 | 2020-12-19T20:59:39 | 2021-01-04T10:00:03 | 2021-01-04T10:00:03 | rabeehk | [
"bug"
] | Hi
I need to shuffle the dataset, but this needs to be based on epoch+seed to be consistent across the cores. When I pass a seed to shuffle, it is not accepted. Could you assist me with this? Thanks @lhoestq
| false |
771,421,881 | https://api.github.com/repos/huggingface/datasets/issues/1609 | https://github.com/huggingface/datasets/issues/1609 | 1,609 | Not able to use 'jigsaw_toxicity_pred' dataset | closed | 2 | 2020-12-19T17:35:48 | 2020-12-22T16:42:24 | 2020-12-22T16:42:23 | jassimran | [] | When trying to use the jigsaw_toxicity_pred dataset, like this in a [colab](https://colab.research.google.com/drive/1LwO2A5M2X5dvhkAFYE4D2CUT3WUdWnkn?usp=sharing):
```
from datasets import list_datasets, list_metrics, load_dataset, load_metric
ds = load_dataset("jigsaw_toxicity_pred")
```
I see the error below:
>... | false |
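For what it's worth, this dataset requires a manual download; a hedged sketch of the expected call, with an illustrative path:

```python
from datasets import load_dataset

# The Kaggle files must be downloaded manually first; data_dir points
# at the folder containing them (the path below is illustrative).
ds = load_dataset("jigsaw_toxicity_pred", data_dir="/path/to/jigsaw_files")
```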
771,329,434 | https://api.github.com/repos/huggingface/datasets/issues/1608 | https://github.com/huggingface/datasets/pull/1608 | 1,608 | adding ted_talks_iwslt | closed | 1 | 2020-12-19T07:36:41 | 2021-01-02T15:44:12 | 2021-01-02T15:44:11 | skyprince999 | [] | UPDATE2: (2nd Jan) Wrote a long writeup on the Slack channel. I don't think this approach is correct. Basically this created 109×108 language pairs.
Running `pytest` took more than 40 hours and it was still running!
So I am working on a different approach, such that the number of configs = the number of languages... | true |
771,325,852 | https://api.github.com/repos/huggingface/datasets/issues/1607 | https://github.com/huggingface/datasets/pull/1607 | 1,607 | modified tweets hate speech detection | closed | 0 | 2020-12-19T07:13:40 | 2020-12-21T16:08:48 | 2020-12-21T16:08:48 | darshan-gandhi | [] | true | |
771,116,455 | https://api.github.com/repos/huggingface/datasets/issues/1606 | https://github.com/huggingface/datasets/pull/1606 | 1,606 | added Semantic Scholar Open Research Corpus | closed | 1 | 2020-12-18T19:21:24 | 2021-02-03T09:30:59 | 2021-02-03T09:30:59 | bhavitvyamalik | [] | I picked up this dataset [Semantic Scholar Open Research Corpus](https://allenai.org/data/s2orc) but it contains 6000 files to be downloaded. I tried the current code with 100 files and it worked fine (took ~15GB space). For 6000 files it would occupy ~900GB space which I don’t have. Can someone from the HF team with t... | true |
770,979,620 | https://api.github.com/repos/huggingface/datasets/issues/1605 | https://github.com/huggingface/datasets/issues/1605 | 1,605 | Navigation version breaking | closed | 1 | 2020-12-18T15:36:24 | 2022-10-05T12:35:11 | 2022-10-05T12:35:11 | mttk | [] | Hi,
when navigating the docs (Chrome, Ubuntu) (e.g. on this page: https://huggingface.co/docs/datasets/loading_metrics.html#using-a-custom-metric-script), the version control dropdown displays the wrong string as the current version:
 and wrap it in a [HttpAdapter](https://re... | true |
770,841,810 | https://api.github.com/repos/huggingface/datasets/issues/1602 | https://github.com/huggingface/datasets/pull/1602 | 1,602 | second update of id_newspapers_2018 | closed | 0 | 2020-12-18T12:16:37 | 2020-12-22T10:41:15 | 2020-12-22T10:41:14 | cahya-wirawan | [] | The feature "url" is currently wrongly set to data["date"]; this PR fixes it to data["url"].
I also added an additional POC. | true |
770,758,914 | https://api.github.com/repos/huggingface/datasets/issues/1601 | https://github.com/huggingface/datasets/pull/1601 | 1,601 | second update of the id_newspapers_2018 | closed | 1 | 2020-12-18T10:10:20 | 2020-12-18T12:15:31 | 2020-12-18T12:15:31 | cahya-wirawan | [] | The feature "url" is currently wrongly set to data["date"]; this PR fixes it to data["url"].
I also added an additional POC. | true |
770,582,960 | https://api.github.com/repos/huggingface/datasets/issues/1600 | https://github.com/huggingface/datasets/issues/1600 | 1,600 | AttributeError: 'DatasetDict' object has no attribute 'train_test_split' | closed | 7 | 2020-12-18T05:37:10 | 2023-05-03T04:22:55 | 2020-12-21T07:38:58 | david-waterworth | [
"question"
] | The following code fails with "'DatasetDict' object has no attribute 'train_test_split'" - am I doing something wrong?
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files='data.txt')
dataset = dataset.train_test_split(test_size=0.1)
```
> AttributeError: 'DatasetDict' object has no at... | false |
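The usual resolution, sketched below: `train_test_split` is defined on `Dataset`, not `DatasetDict`, so select the split first:

```python
from datasets import load_dataset

dataset = load_dataset("csv", data_files="data.txt")
# load_dataset returns a DatasetDict with a single "train" split here;
# index into it before splitting:
dataset = dataset["train"].train_test_split(test_size=0.1)
```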
770,431,389 | https://api.github.com/repos/huggingface/datasets/issues/1599 | https://github.com/huggingface/datasets/pull/1599 | 1,599 | add Korean Sarcasm Dataset | closed | 0 | 2020-12-17T22:49:56 | 2021-09-17T16:54:32 | 2020-12-23T17:25:59 | stevhliu | [] | true | |
770,332,440 | https://api.github.com/repos/huggingface/datasets/issues/1598 | https://github.com/huggingface/datasets/pull/1598 | 1,598 | made suggested changes in fake-news-english | closed | 0 | 2020-12-17T20:06:29 | 2020-12-18T09:43:58 | 2020-12-18T09:43:57 | MisbahKhan789 | [] | true | |
770,276,140 | https://api.github.com/repos/huggingface/datasets/issues/1597 | https://github.com/huggingface/datasets/pull/1597 | 1,597 | adding hate-speech-and-offensive-language | closed | 1 | 2020-12-17T18:35:15 | 2020-12-23T23:27:17 | 2020-12-23T23:27:16 | MisbahKhan789 | [] | true | |
770,260,531 | https://api.github.com/repos/huggingface/datasets/issues/1596 | https://github.com/huggingface/datasets/pull/1596 | 1,596 | made suggested changes to hate-speech-and-offensive-language | closed | 0 | 2020-12-17T18:09:26 | 2020-12-17T18:36:02 | 2020-12-17T18:35:53 | MisbahKhan789 | [] | true | |
770,153,693 | https://api.github.com/repos/huggingface/datasets/issues/1595 | https://github.com/huggingface/datasets/pull/1595 | 1,595 | Logiqa en | closed | 8 | 2020-12-17T15:42:00 | 2022-10-03T09:38:30 | 2022-10-03T09:38:30 | aclifton314 | [
"dataset contribution"
] | logiqa in english. | true |
769,747,767 | https://api.github.com/repos/huggingface/datasets/issues/1594 | https://github.com/huggingface/datasets/issues/1594 | 1,594 | connection error | closed | 4 | 2020-12-17T09:18:34 | 2022-06-01T15:33:42 | 2022-06-01T15:33:41 | rabeehkarimimahabadi | [] | Hi
I am hitting this error, thanks
```
> Traceback (most recent call last):
File "finetune_t5_trainer.py", line 379, in <module>
main()
File "finetune_t5_trainer.py", line 208, in main
if training_args.do_eval or training_args.evaluation_strategy != EvaluationStrategy.NO
File "finetune_t5_tr... | false |
769,611,386 | https://api.github.com/repos/huggingface/datasets/issues/1593 | https://github.com/huggingface/datasets/issues/1593 | 1,593 | Access to key in DatasetDict map | closed | 3 | 2020-12-17T07:02:20 | 2022-10-05T13:47:28 | 2022-10-05T12:33:06 | ZhaofengWu | [
"enhancement"
] | It is possible that we want to do different things in the `map` function (and possibly other functions too) of a `DatasetDict`, depending on the key. I understand that `DatasetDict.map` is a really thin wrapper of `Dataset.map`, so it is easy to directly implement this functionality in the client code. Still, it'd be n... | false |
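A client-side sketch of the pattern the issue describes, iterating over the dict so the mapped function sees the split name; the helper name is hypothetical:

```python
from datasets import DatasetDict

def map_with_key(dsd: DatasetDict, fn) -> DatasetDict:
    # fn receives (split_name, example); key is bound as a default
    # argument so each lambda keeps its own split name.
    return DatasetDict({
        key: ds.map(lambda example, key=key: fn(key, example))
        for key, ds in dsd.items()
    })
```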
769,383,714 | https://api.github.com/repos/huggingface/datasets/issues/1591 | https://github.com/huggingface/datasets/issues/1591 | 1,591 | IWSLT-17 Link Broken | closed | 2 | 2020-12-17T00:46:42 | 2020-12-18T08:06:36 | 2020-12-18T08:05:28 | ZhaofengWu | [
"duplicate",
"dataset bug"
] | ```
FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz
``` | false |
769,242,858 | https://api.github.com/repos/huggingface/datasets/issues/1590 | https://github.com/huggingface/datasets/issues/1590 | 1,590 | Add helper to resolve namespace collision | closed | 5 | 2020-12-16T20:17:24 | 2022-06-01T15:32:04 | 2022-06-01T15:32:04 | jramapuram | [] | Many projects use a module called `datasets`; however, this is incompatible with huggingface datasets. It would be great if there was some helper or similar function to resolve such a common conflict. | false |
769,187,141 | https://api.github.com/repos/huggingface/datasets/issues/1589 | https://github.com/huggingface/datasets/pull/1589 | 1,589 | Update doc2dial.py | closed | 1 | 2020-12-16T18:50:56 | 2022-07-06T15:19:57 | 2022-07-06T15:19:57 | songfeng | [] | Added data loader for machine reading comprehension tasks proposed in the Doc2Dial EMNLP 2020 paper. | true |
769,068,227 | https://api.github.com/repos/huggingface/datasets/issues/1588 | https://github.com/huggingface/datasets/pull/1588 | 1,588 | Modified hind encorp | closed | 1 | 2020-12-16T16:28:14 | 2020-12-16T22:41:53 | 2020-12-16T17:20:28 | rahul-art | [] | Description added, unnecessary comments removed from the .py file, and README.md reformatted.
@lhoestq for #1584 | true |
768,929,877 | https://api.github.com/repos/huggingface/datasets/issues/1587 | https://github.com/huggingface/datasets/pull/1587 | 1,587 | Add nq_open question answering dataset | closed | 1 | 2020-12-16T14:22:08 | 2020-12-17T16:07:10 | 2020-12-17T16:07:10 | Nilanshrajput | [] | This PR is a copy of #1506 due to the messed-up git history in that PR. | true |
768,864,502 | https://api.github.com/repos/huggingface/datasets/issues/1586 | https://github.com/huggingface/datasets/pull/1586 | 1,586 | added irc disentangle dataset | closed | 5 | 2020-12-16T13:25:58 | 2021-01-29T10:28:53 | 2021-01-29T10:28:53 | dhruvjoshi1998 | [] | added irc disentanglement dataset | true |
768,831,171 | https://api.github.com/repos/huggingface/datasets/issues/1585 | https://github.com/huggingface/datasets/issues/1585 | 1,585 | FileNotFoundError for `amazon_polarity` | closed | 1 | 2020-12-16T12:51:05 | 2020-12-16T16:02:56 | 2020-12-16T16:02:56 | phtephanx | [] | Version: `datasets==v1.1.3`
### Reproduction
```python
from datasets import load_dataset
data = load_dataset("amazon_polarity")
```
crashes with
```bash
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amazon_polarity/amazon_polarity.py
```
and
... | false |
768,820,406 | https://api.github.com/repos/huggingface/datasets/issues/1584 | https://github.com/huggingface/datasets/pull/1584 | 1,584 | Load hind encorp | closed | 0 | 2020-12-16T12:38:38 | 2020-12-18T02:27:24 | 2020-12-18T02:27:24 | rahul-art | [] | Code reformatted, well documented, YAML tags added. | true |
768,795,986 | https://api.github.com/repos/huggingface/datasets/issues/1583 | https://github.com/huggingface/datasets/pull/1583 | 1,583 | Update metrics docstrings. | closed | 0 | 2020-12-16T12:14:18 | 2020-12-18T18:39:06 | 2020-12-18T18:39:06 | Fraser-Greenlee | [] | #1478 Correcting the argument descriptions for metrics.
Let me know if there are any issues.
| true |
768,776,617 | https://api.github.com/repos/huggingface/datasets/issues/1582 | https://github.com/huggingface/datasets/pull/1582 | 1,582 | Adding wiki lingua dataset as new branch | closed | 0 | 2020-12-16T11:53:07 | 2020-12-17T18:06:46 | 2020-12-17T18:06:45 | katnoria | [] | Adding the dataset as new branch as advised here: #1470
| true |
768,320,594 | https://api.github.com/repos/huggingface/datasets/issues/1581 | https://github.com/huggingface/datasets/issues/1581 | 1,581 | Installing datasets and transformers in a tensorflow docker image throws Permission Error on 'import transformers' | closed | 5 | 2020-12-16T00:02:21 | 2021-06-17T15:40:45 | 2021-06-17T15:40:45 | eduardofv | [] | I am using a docker container, based on the latest tensorflow-gpu image, to run transformers and datasets (4.0.1 and 1.1.3 respectively; Dockerfile attached below). Importing transformers throws a PermissionError when accessing `/.cache`:
```
$ docker run --gpus=all --rm -it -u $(id -u):$(id -g) -v $(pwd)/data:/root/data ... | false |
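One workaround sketch: point the cache at a writable location before importing the libraries (the `HF_HOME` variable is per the libraries' documentation; the path is illustrative):

```python
import os

# Redirect the HuggingFace cache away from the unwritable /.cache:
os.environ["HF_HOME"] = "/tmp/hf_cache"

import transformers  # noqa: E402  (imported after setting the env var)
import datasets      # noqa: E402
```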
768,111,377 | https://api.github.com/repos/huggingface/datasets/issues/1580 | https://github.com/huggingface/datasets/pull/1580 | 1,580 | made suggested changes in diplomacy_detection.py | closed | 0 | 2020-12-15T19:52:00 | 2020-12-16T10:27:52 | 2020-12-16T10:27:52 | MisbahKhan789 | [] | true | |
767,808,465 | https://api.github.com/repos/huggingface/datasets/issues/1579 | https://github.com/huggingface/datasets/pull/1579 | 1,579 | Adding CLIMATE-FEVER dataset | closed | 5 | 2020-12-15T16:49:22 | 2020-12-22T13:43:16 | 2020-12-22T13:43:15 | tdiggelm | [] | This PR requests the addition of the CLIMATE-FEVER dataset:
A dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate-change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, ref... | true |
767,760,513 | https://api.github.com/repos/huggingface/datasets/issues/1578 | https://github.com/huggingface/datasets/pull/1578 | 1,578 | update multiwozv22 checksums | closed | 0 | 2020-12-15T16:13:52 | 2020-12-15T17:06:29 | 2020-12-15T17:06:29 | yjernite | [] | a file was updated on the GitHub repo for the dataset | true |
767,342,432 | https://api.github.com/repos/huggingface/datasets/issues/1577 | https://github.com/huggingface/datasets/pull/1577 | 1,577 | Add comet metric | closed | 1 | 2020-12-15T08:56:00 | 2021-01-14T13:33:10 | 2021-01-14T13:33:10 | ricardorei | [] | Hey! I decided to add our new Crosslingual Optimized Metric for Evaluation of Translation (COMET) to the list of the available metrics.
COMET was [presented at EMNLP20](https://www.aclweb.org/anthology/2020.emnlp-main.213/) and it is the highest performing metric, so far, on the WMT19 benchmark.
We also participa... | true |
767,080,645 | https://api.github.com/repos/huggingface/datasets/issues/1576 | https://github.com/huggingface/datasets/pull/1576 | 1,576 | Remove the contributors section | closed | 0 | 2020-12-15T01:47:15 | 2020-12-15T12:53:47 | 2020-12-15T12:53:46 | clmnt | [] | sourcerer is down | true |
767,076,374 | https://api.github.com/repos/huggingface/datasets/issues/1575 | https://github.com/huggingface/datasets/pull/1575 | 1,575 | Hind_Encorp all done | closed | 11 | 2020-12-15T01:36:02 | 2020-12-16T15:15:17 | 2020-12-16T15:15:17 | rahul-art | [] | true | |
767,015,317 | https://api.github.com/repos/huggingface/datasets/issues/1574 | https://github.com/huggingface/datasets/pull/1574 | 1,574 | Diplomacy detection 3 | closed | 0 | 2020-12-14T23:28:51 | 2020-12-14T23:29:32 | 2020-12-14T23:29:32 | MisbahKhan789 | [] | true | |
767,011,938 | https://api.github.com/repos/huggingface/datasets/issues/1573 | https://github.com/huggingface/datasets/pull/1573 | 1,573 | adding dataset for diplomacy detection-2 | closed | 0 | 2020-12-14T23:21:37 | 2020-12-14T23:36:57 | 2020-12-14T23:36:57 | MisbahKhan789 | [] | true | |
767,008,470 | https://api.github.com/repos/huggingface/datasets/issues/1572 | https://github.com/huggingface/datasets/pull/1572 | 1,572 | add Gnad10 dataset | closed | 0 | 2020-12-14T23:15:02 | 2021-09-17T16:54:37 | 2020-12-16T16:52:30 | stevhliu | [] | reference [PR#1317](https://github.com/huggingface/datasets/pull/1317) | true |
766,981,721 | https://api.github.com/repos/huggingface/datasets/issues/1571 | https://github.com/huggingface/datasets/pull/1571 | 1,571 | Fixing the KILT tasks to match our current standards | closed | 0 | 2020-12-14T22:26:12 | 2020-12-14T23:07:41 | 2020-12-14T23:07:41 | yjernite | [] | This introduces a few changes to the Knowledge Intensive Learning task benchmark to bring it more in line with our current datasets, including adding the (minimal) dataset card and having one config per sub-task | true |
766,830,545 | https://api.github.com/repos/huggingface/datasets/issues/1570 | https://github.com/huggingface/datasets/pull/1570 | 1,570 | Documentation for loading CSV datasets misleads the user | closed | 0 | 2020-12-14T19:04:37 | 2020-12-22T19:30:12 | 2020-12-21T13:47:09 | onurgu | [] | Documentation for loading CSV datasets misleads the user into thinking that setting `quote_char` to False will disable quoting.
There are two problems here:
i) `quote_char` is misspelled; it must be `quotechar`
ii) the documentation should mention `quoting` | true |
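A hedged sketch of what disabling quoting actually looks like, given that the CSV builder forwards `pandas.read_csv` arguments:

```python
import csv
from datasets import load_dataset

# `quotechar` only sets the quote character; disabling quoting
# altogether goes through `quoting`, as in pandas.read_csv:
ds = load_dataset("csv", data_files="data.csv", quoting=csv.QUOTE_NONE)
```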
766,758,895 | https://api.github.com/repos/huggingface/datasets/issues/1569 | https://github.com/huggingface/datasets/pull/1569 | 1,569 | added un_ga dataset | closed | 0 | 2020-12-14T17:42:04 | 2020-12-15T15:28:58 | 2020-12-15T15:28:58 | param087 | [] | Hi :hugs:, This is a PR for [United nations general assembly resolutions: A six-language parallel corpus](http://opus.nlpl.eu/UN.php) dataset.
With suggested changes in #1330 | true |
766,722,994 | https://api.github.com/repos/huggingface/datasets/issues/1568 | https://github.com/huggingface/datasets/pull/1568 | 1,568 | Added the dataset clickbait_news_bg | closed | 2 | 2020-12-14T17:03:00 | 2020-12-15T18:28:56 | 2020-12-15T18:28:56 | tsvm | [] | There was a problem with my [previous PR 1445](https://github.com/huggingface/datasets/pull/1445) after rebasing, so I'm copying the dataset code into a new branch and submitting a new PR. | true |
766,382,609 | https://api.github.com/repos/huggingface/datasets/issues/1567 | https://github.com/huggingface/datasets/pull/1567 | 1,567 | [wording] Update Readme.md | closed | 0 | 2020-12-14T12:34:52 | 2020-12-15T12:54:07 | 2020-12-15T12:54:06 | thomwolf | [] | Make the features of the library clearer. | true |
766,354,236 | https://api.github.com/repos/huggingface/datasets/issues/1566 | https://github.com/huggingface/datasets/pull/1566 | 1,566 | Add Microsoft Research Sequential Question Answering (SQA) Dataset | closed | 1 | 2020-12-14T12:02:30 | 2020-12-15T15:24:22 | 2020-12-15T15:24:22 | mattbui | [] | For more information: https://msropendata.com/datasets/b25190ed-0f59-47b1-9211-5962858142c2 | true |
766,333,940 | https://api.github.com/repos/huggingface/datasets/issues/1565 | https://github.com/huggingface/datasets/pull/1565 | 1,565 | Create README.md | closed | 5 | 2020-12-14T11:40:23 | 2021-03-25T14:01:49 | 2021-03-25T14:01:49 | ManuelFay | [] | true | |
766,266,609 | https://api.github.com/repos/huggingface/datasets/issues/1564 | https://github.com/huggingface/datasets/pull/1564 | 1,564 | added saudinewsnet | closed | 9 | 2020-12-14T10:35:09 | 2020-12-22T09:51:04 | 2020-12-22T09:51:04 | abdulelahsm | [] | I'm having issues creating the dummy data. I'm still investigating how to fix it. I'll close the PR if I can't find a solution | true |
766,211,931 | https://api.github.com/repos/huggingface/datasets/issues/1563 | https://github.com/huggingface/datasets/pull/1563 | 1,563 | adding tmu-gfm-dataset | closed | 2 | 2020-12-14T09:45:30 | 2020-12-21T10:21:04 | 2020-12-21T10:07:13 | forest1988 | [] | Adding TMU-GFM-Dataset for Grammatical Error Correction.
https://github.com/tmu-nlp/TMU-GFM-Dataset
A dataset for GEC metrics with manual evaluations of grammaticality, fluency, and meaning preservation for system outputs.
More detail about the creation of the dataset can be found in [Yoshimura et al. (2020)](ht... | true |
765,981,749 | https://api.github.com/repos/huggingface/datasets/issues/1562 | https://github.com/huggingface/datasets/pull/1562 | 1,562 | Add dataset COrpus of Urdu News TExt Reuse (COUNTER). | closed | 3 | 2020-12-14T06:32:48 | 2020-12-21T13:14:46 | 2020-12-21T13:14:46 | arkhalid | [] | true |