| id (int64) | number (int64) | title (string) | state (string, 2 classes) | comments (list) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s], nullable) | body (string, nullable) | user (string) | html_url (string) | pull_request (dict) | is_pull_request (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
785,606,286 | 1,737 | update link in TLC to be github links | closed | [] | 2021-01-14T02:49:21 | 2021-01-14T10:25:24 | 2021-01-14T10:25:24 | Based on this issue https://github.com/huggingface/datasets/issues/1064, I can now use the official links.
| chameleonTK | https://github.com/huggingface/datasets/pull/1737 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1737",
"html_url": "https://github.com/huggingface/datasets/pull/1737",
"diff_url": "https://github.com/huggingface/datasets/pull/1737.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1737.patch",
"merged_at": "2021-01-14T10:25... | true |
785,433,854 | 1,736 | Adjust BrWaC dataset features name | closed | [] | 2021-01-13T20:39:04 | 2021-01-14T10:29:38 | 2021-01-14T10:29:38 | I added this dataset some days ago, and today I used it to train some models and realized that the names of the features aren't so good.
Looking at the current features hierarchy, we have "paragraphs" with a list of "sentences" with a list of "sentences?!". But the actual hierarchy is a "text" with a list of "paragr... | jonatasgrosman | https://github.com/huggingface/datasets/pull/1736 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1736",
"html_url": "https://github.com/huggingface/datasets/pull/1736",
"diff_url": "https://github.com/huggingface/datasets/pull/1736.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1736.patch",
"merged_at": "2021-01-14T10:29... | true |
785,184,740 | 1,735 | Update add new dataset template | closed | [] | 2021-01-13T15:08:09 | 2021-01-14T15:16:01 | 2021-01-14T15:16:00 | This PR fixes a few typos in the "Add new dataset template" and clarifies a bit what to do for the dummy data creation when the `auto_generate` flag can't work. | sgugger | https://github.com/huggingface/datasets/pull/1735 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1735",
"html_url": "https://github.com/huggingface/datasets/pull/1735",
"diff_url": "https://github.com/huggingface/datasets/pull/1735.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1735.patch",
"merged_at": "2021-01-14T15:16... | true |
784,956,707 | 1,734 | Fix empty token bug for `thainer` and `lst20` | closed | [] | 2021-01-13T09:55:09 | 2021-01-14T10:42:18 | 2021-01-14T10:42:18 | add a condition to check if tokens exist before yielding in `thainer` and `lst20` | cstorm125 | https://github.com/huggingface/datasets/pull/1734 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1734",
"html_url": "https://github.com/huggingface/datasets/pull/1734",
"diff_url": "https://github.com/huggingface/datasets/pull/1734.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1734.patch",
"merged_at": "2021-01-14T10:42... | true |
784,903,002 | 1,733 | connection issue with glue, what is the data url for glue? | closed | [] | 2021-01-13T08:37:40 | 2021-08-04T18:13:55 | 2021-08-04T18:13:55 | Hi
my code sometimes fails due to a connection issue with GLUE. Could you tell me the URL the datasets library is trying to read GLUE from, so I can test whether the machines I am working on have an issue on my side or not?
thanks | ghost | https://github.com/huggingface/datasets/issues/1733 | null | false |
784,874,490 | 1,732 | [GEM Dataset] Added TurkCorpus, an evaluation dataset for sentence simplification. | closed | [] | 2021-01-13T07:50:19 | 2021-01-14T10:19:41 | 2021-01-14T10:19:41 | We want to use TurkCorpus for validation and testing of the sentence simplification task. | mounicam | https://github.com/huggingface/datasets/pull/1732 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1732",
"html_url": "https://github.com/huggingface/datasets/pull/1732",
"diff_url": "https://github.com/huggingface/datasets/pull/1732.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1732.patch",
"merged_at": "2021-01-14T10:19... | true |
784,744,674 | 1,731 | Couldn't reach swda.py | closed | [] | 2021-01-13T02:57:40 | 2021-01-13T11:17:40 | 2021-01-13T11:17:40 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.0/datasets/swda/swda.py
| yangp725 | https://github.com/huggingface/datasets/issues/1731 | null | false |
784,617,525 | 1,730 | Add MNIST dataset | closed | [] | 2021-01-12T21:48:02 | 2021-01-13T10:19:47 | 2021-01-13T10:19:46 | This PR adds the MNIST dataset to the library. | sgugger | https://github.com/huggingface/datasets/pull/1730 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1730",
"html_url": "https://github.com/huggingface/datasets/pull/1730",
"diff_url": "https://github.com/huggingface/datasets/pull/1730.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1730.patch",
"merged_at": "2021-01-13T10:19... | true |
784,565,898 | 1,729 | Is there support for Deep learning datasets? | closed | [] | 2021-01-12T20:22:41 | 2021-03-31T04:24:07 | 2021-03-31T04:24:07 | I looked around this repository and, looking at the datasets, I think there's no support for image datasets. Or am I missing something? For example, to add a repo like this: https://github.com/DZPeru/fish-datasets | pablodz | https://github.com/huggingface/datasets/issues/1729 | null | false |
784,458,342 | 1,728 | Add an entry to an arrow dataset | closed | [] | 2021-01-12T18:01:47 | 2021-01-18T19:15:32 | 2021-01-18T19:15:32 | Is it possible to add an entry to a dataset object?
**Motivation: I want to transform the sentences in the dataset and add them to the original dataset**
For example, say we have the following code:
``` python
from datasets import load_dataset
# Load a dataset and print the first examples in the training s... | ameet-1997 | https://github.com/huggingface/datasets/issues/1728 | null | false |
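A hedged sketch of one workaround for this request, using only public API calls (`map` plus `concatenate_datasets`); the lower-casing transform is just an illustration:
```python
from datasets import load_dataset, concatenate_datasets

dataset = load_dataset("squad", split="train[:100]")

# Build the transformed sentences with map, then append them to the originals.
transformed = dataset.map(lambda ex: {"question": ex["question"].lower()})
combined = concatenate_datasets([dataset, transformed])
print(len(dataset), len(combined))  # 100 200
```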
784,435,131 | 1,727 | BLEURT score calculation raises UnrecognizedFlagError | closed | [] | 2021-01-12T17:27:02 | 2022-06-01T16:06:02 | 2022-06-01T16:06:02 | Calling the `compute` method for the **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`.
My environment:
```
python==3.8.5
datasets==1.2.0
tensorflow==2.3.1
cudatoolkit==11.0.221
```
Test code for reproducing the error:
```
from datasets import load_metric
bleurt = load_me... | nadavo | https://github.com/huggingface/datasets/issues/1727 | null | false |
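A hedged completion of the truncated reproduction (the prediction/reference strings are assumptions; in the reported environment `compute` raises `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`):
```python
from datasets import load_metric

bleurt = load_metric("bleurt")
predictions = ["hello there", "general kenobi"]
references = ["hello there", "general kenobi"]
# Raises UnrecognizedFlagError in the environment above.
scores = bleurt.compute(predictions=predictions, references=references)
```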
784,336,370 | 1,726 | Offline loading | closed | [] | 2021-01-12T15:21:57 | 2022-02-15T10:32:10 | 2021-01-19T16:42:32 | As discussed in #824 it would be cool to make the library work in offline mode.
Currently, if there's no internet connection, then modules (datasets or metrics) that have already been loaded in the past can't be loaded, and a ConnectionError is raised.
This is because `prepare_module` fetches online for the latest vers... | lhoestq | https://github.com/huggingface/datasets/pull/1726 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1726",
"html_url": "https://github.com/huggingface/datasets/pull/1726",
"diff_url": "https://github.com/huggingface/datasets/pull/1726.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1726.patch",
"merged_at": "2021-01-19T16:42... | true |
784,182,273 | 1,725 | load the local dataset | closed | [] | 2021-01-12T12:12:55 | 2022-06-01T16:00:59 | 2022-06-01T16:00:59 | Your guidebook's example is like:
>>> from datasets import load_dataset
>>> dataset = load_dataset('json', data_files='my_file.json')
but the first arg is a path...
So what should I do if I want to load a local dataset for model training?
I will be grateful if you can help me handle this problem!
Thanks a lot! | xinjicong | https://github.com/huggingface/datasets/issues/1725 | null | false |
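For reference, the documented pattern the question is about: the first argument selects a generic loader ("json", "csv", "text", ...) rather than a dataset path, and the local files go in `data_files`:
```python
from datasets import load_dataset

# "json" is the loader type; the local file is passed via data_files.
dataset = load_dataset("json", data_files={"train": "my_file.json"})
print(dataset["train"][0])
```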
783,982,100 | 1,723 | ADD S3 support for downloading and uploading processed datasets | closed | [] | 2021-01-12T07:17:34 | 2021-01-26T17:02:08 | 2021-01-26T17:02:08 | # What does this PR do?
This PR adds the functionality to load and save `datasets` from and to s3.
You can save `datasets` with either `Dataset.save_to_disk()` or `DatasetDict.save_to_disk`.
You can load `datasets` with `load_from_disk`, `Dataset.load_from_disk()`, or `DatasetDict.load_from_disk()`.
Lo... | philschmid | https://github.com/huggingface/datasets/pull/1723 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1723",
"html_url": "https://github.com/huggingface/datasets/pull/1723",
"diff_url": "https://github.com/huggingface/datasets/pull/1723.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1723.patch",
"merged_at": "2021-01-26T17:02... | true |
784,023,338 | 1,724 | could not run models on an offline server successfully | closed | [] | 2021-01-12T06:08:06 | 2022-10-05T12:39:07 | 2022-10-05T12:39:07 | Hi, I really need your help with this.
I am trying to fine-tune a RoBERTa model on a remote server that strictly bans internet access. I tried to install all the packages by hand and to run run_mlm.py on the server. It works well on Colab, but when I try to run it on this offline server, it shows:
, the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, or... | versae | https://github.com/huggingface/datasets/pull/1720 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1720",
"html_url": "https://github.com/huggingface/datasets/pull/1720",
"diff_url": "https://github.com/huggingface/datasets/pull/1720.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1720.patch",
"merged_at": null
} | true |
783,557,542 | 1,719 | Fix column list comparison in transmit format | closed | [] | 2021-01-11T17:23:56 | 2021-01-11T18:45:03 | 2021-01-11T18:45:02 | As noticed in #1718 the cache might not reload the cache files when new columns were added.
This is because of an issue in `transmit_format` where the column list comparison fails because the order was not deterministic. This causes the `transmit_format` to apply an unnecessary `set_format` transform with shuffled col... | lhoestq | https://github.com/huggingface/datasets/pull/1719 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1719",
"html_url": "https://github.com/huggingface/datasets/pull/1719",
"diff_url": "https://github.com/huggingface/datasets/pull/1719.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1719.patch",
"merged_at": "2021-01-11T18:45... | true |
783,474,753 | 1,718 | Possible cache miss in datasets | closed | [] | 2021-01-11T15:37:31 | 2022-06-29T14:54:42 | 2021-01-26T02:47:59 | Hi,
I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function always recomputes instead of loading fr... | ofirzaf | https://github.com/huggingface/datasets/issues/1718 | null | false |
783,074,255 | 1,717 | SciFact dataset - minor changes | closed | [] | 2021-01-11T05:26:40 | 2021-01-26T02:52:17 | 2021-01-26T02:52:17 | Hi,
SciFact dataset creator here. First of all, thanks for adding the dataset to Huggingface, much appreciated!
I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this?
It also looks like the dataset is being downloa... | dwadden | https://github.com/huggingface/datasets/issues/1717 | null | false |
782,819,006 | 1,716 | Add Hatexplain Dataset | closed | [] | 2021-01-10T13:30:01 | 2021-01-18T14:21:42 | 2021-01-18T14:21:42 | Adding Hatexplain - the first benchmark hate speech dataset covering multiple aspects of the issue | kushal2000 | https://github.com/huggingface/datasets/pull/1716 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1716",
"html_url": "https://github.com/huggingface/datasets/pull/1716",
"diff_url": "https://github.com/huggingface/datasets/pull/1716.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1716.patch",
"merged_at": "2021-01-18T14:21... | true |
782,754,441 | 1,715 | add Korean intonation-aided intention identification dataset | closed | [] | 2021-01-10T06:29:04 | 2021-09-17T16:54:13 | 2021-01-12T17:14:33 | stevhliu | https://github.com/huggingface/datasets/pull/1715 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1715",
"html_url": "https://github.com/huggingface/datasets/pull/1715",
"diff_url": "https://github.com/huggingface/datasets/pull/1715.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1715.patch",
"merged_at": "2021-01-12T17:14... | true | |
782,416,276 | 1,714 | Adding adversarialQA dataset | closed | [] | 2021-01-08T21:46:09 | 2021-01-13T16:05:24 | 2021-01-13T16:05:24 | Adding the adversarialQA dataset (https://adversarialqa.github.io/) from Beat the AI (https://arxiv.org/abs/2002.00293) | maxbartolo | https://github.com/huggingface/datasets/pull/1714 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1714",
"html_url": "https://github.com/huggingface/datasets/pull/1714",
"diff_url": "https://github.com/huggingface/datasets/pull/1714.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1714.patch",
"merged_at": "2021-01-13T16:05... | true |
782,337,723 | 1,713 | Installation using conda | closed | [] | 2021-01-08T19:12:15 | 2021-09-17T12:47:40 | 2021-09-17T12:47:40 | Will a conda package for installing datasets be added to the huggingface conda channel? I have installed transformers using conda and would like to use the datasets library to use some of the scripts in the transformers/examples folder but am unable to do so at the moment as datasets can only be installed using pip and... | pranav-s | https://github.com/huggingface/datasets/issues/1713 | null | false |
782,313,097 | 1,712 | Silicone | closed | [] | 2021-01-08T18:24:18 | 2021-01-21T14:12:37 | 2021-01-21T10:31:11 | My collaborators and I within the Affective Computing team at Telecom Paris would like to push our spoken dialogue dataset for publication. | eusip | https://github.com/huggingface/datasets/pull/1712 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1712",
"html_url": "https://github.com/huggingface/datasets/pull/1712",
"diff_url": "https://github.com/huggingface/datasets/pull/1712.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1712.patch",
"merged_at": null
} | true |
782,129,083 | 1,711 | Fix windows path scheme in cached path | closed | [] | 2021-01-08T13:45:56 | 2021-01-11T09:23:20 | 2021-01-11T09:23:19 | As noticed in #807 there's currently an issue with `cached_path` not raising `FileNotFoundError` on windows for absolute paths. This is due to the way we check for a path to be local or not. The check on the scheme using urlparse was incomplete.
I fixed this and added tests | lhoestq | https://github.com/huggingface/datasets/pull/1711 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1711",
"html_url": "https://github.com/huggingface/datasets/pull/1711",
"diff_url": "https://github.com/huggingface/datasets/pull/1711.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1711.patch",
"merged_at": "2021-01-11T09:23... | true |
781,914,951 | 1,710 | IsADirectoryError when trying to download C4 | closed | [] | 2021-01-08T07:31:30 | 2022-08-04T11:56:10 | 2022-08-04T11:55:04 | **TLDR**:
I fail to download C4 and see a stacktrace originating in `IsADirectoryError` as an explanation for failure.
How can the problem be fixed?
**VERBOSE**:
I use Python version 3.7 and have the following dependencies listed in my project:
```
datasets==1.2.0
apache-beam==2.26.0
```
When runn... | fredriko | https://github.com/huggingface/datasets/issues/1710 | null | false |
781,875,640 | 1,709 | Databases | closed | [] | 2021-01-08T06:14:03 | 2021-01-08T09:00:08 | 2021-01-08T09:00:08 | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | JimmyJim1 | https://github.com/huggingface/datasets/issues/1709 | null | false |
781,631,455 | 1,708 | <html dir="ltr" lang="en" class="focus-outline-visible"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> | closed | [] | 2021-01-07T21:45:24 | 2021-01-08T09:00:01 | 2021-01-08T09:00:01 | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | Louiejay54 | https://github.com/huggingface/datasets/issues/1708 | null | false |
781,507,545 | 1,707 | Added generated READMEs for datasets that were missing one. | closed | [] | 2021-01-07T18:10:06 | 2021-01-18T14:32:33 | 2021-01-18T14:32:33 | This is it: we worked on a generator with Yacine @yjernite , and we generated dataset cards for all missing ones (161), with all the information we could gather from datasets repository, and using dummy_data to generate examples when possible.
Code is available here for the moment: https://github.com/madlag/datasets... | madlag | https://github.com/huggingface/datasets/pull/1707 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1707",
"html_url": "https://github.com/huggingface/datasets/pull/1707",
"diff_url": "https://github.com/huggingface/datasets/pull/1707.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1707.patch",
"merged_at": "2021-01-18T14:32... | true |
781,494,476 | 1,706 | Error when downloading a large dataset on slow connection. | open | [] | 2021-01-07T17:48:15 | 2021-01-13T10:35:02 | null | I receive the following error after about an hour trying to download the `openwebtext` dataset.
The code used is:
```python
import datasets
datasets.load_dataset("openwebtext")
```
> Traceback (most recent call last): ... | lucadiliello | https://github.com/huggingface/datasets/issues/1706 | null | false |
781,474,949 | 1,705 | Add information about caching and verifications in "Load a Dataset" docs | closed | [] | 2021-01-07T17:18:44 | 2021-01-12T14:08:01 | 2021-01-12T14:08:01 | Related to #215.
Missing improvements from @lhoestq's #1703. | SBrandeis | https://github.com/huggingface/datasets/pull/1705 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1705",
"html_url": "https://github.com/huggingface/datasets/pull/1705",
"diff_url": "https://github.com/huggingface/datasets/pull/1705.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1705.patch",
"merged_at": "2021-01-12T14:08... | true |
781,402,757 | 1,704 | Update XSUM Factuality DatasetCard | closed | [] | 2021-01-07T15:37:14 | 2021-01-12T13:30:04 | 2021-01-12T13:30:04 | Update XSUM Factuality DatasetCard | vineeths96 | https://github.com/huggingface/datasets/pull/1704 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1704",
"html_url": "https://github.com/huggingface/datasets/pull/1704",
"diff_url": "https://github.com/huggingface/datasets/pull/1704.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1704.patch",
"merged_at": "2021-01-12T13:30... | true |
781,395,146 | 1,703 | Improvements regarding caching and fingerprinting | closed | [] | 2021-01-07T15:26:29 | 2021-01-19T17:32:11 | 2021-01-19T17:32:10 | This PR adds these features:
- Enable/disable caching
If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets.
It is equivalent to setting `load_from_cache` to `False` in dataset transforms.
```python
from datasets import set_caching_enabled
set_cach... | lhoestq | https://github.com/huggingface/datasets/pull/1703 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1703",
"html_url": "https://github.com/huggingface/datasets/pull/1703",
"diff_url": "https://github.com/huggingface/datasets/pull/1703.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1703.patch",
"merged_at": "2021-01-19T17:32... | true |
781,383,277 | 1,702 | Fix importlib metdata import in py38 | closed | [] | 2021-01-07T15:10:30 | 2021-01-08T10:47:15 | 2021-01-08T10:47:15 | In Python 3.8 there's no need to install `importlib_metadata` since it already exists as `importlib.metadata` in the standard lib. | lhoestq | https://github.com/huggingface/datasets/pull/1702 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1702",
"html_url": "https://github.com/huggingface/datasets/pull/1702",
"diff_url": "https://github.com/huggingface/datasets/pull/1702.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1702.patch",
"merged_at": "2021-01-08T10:47... | true |
781,345,717 | 1,701 | Some datasets miss dataset_infos.json or dummy_data.zip | closed | [] | 2021-01-07T14:17:13 | 2022-11-04T15:11:16 | 2022-11-04T15:06:00 | While working on the dataset README generation script at https://github.com/madlag/datasets_readme_generator, I noticed that some datasets are missing a dataset_infos.json:
```
c4
lm1b
reclor
wikihow
```
And some do not have a dummy_data.zip:
```
kor_nli
math_dataset
mlqa
ms_marco
newsgroup
qa4mre
qanga... | madlag | https://github.com/huggingface/datasets/issues/1701 | null | false |
781,333,589 | 1,700 | Update Curiosity dialogs DatasetCard | closed | [] | 2021-01-07T13:59:27 | 2021-01-12T18:51:32 | 2021-01-12T18:51:32 | Update Curiosity dialogs DatasetCard
There are some entries in the data fields section yet to be filled. There is little information regarding those fields. | vineeths96 | https://github.com/huggingface/datasets/pull/1700 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1700",
"html_url": "https://github.com/huggingface/datasets/pull/1700",
"diff_url": "https://github.com/huggingface/datasets/pull/1700.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1700.patch",
"merged_at": "2021-01-12T18:51... | true |
781,271,558 | 1,699 | Update DBRD dataset card and download URL | closed | [] | 2021-01-07T12:16:43 | 2021-01-07T13:41:39 | 2021-01-07T13:40:59 | I've added the Dutch Book Review Dataset (DBRD) during the recent sprint. This pull request makes two minor changes:
1. I'm changing the download URL from Google Drive to the dataset's GitHub release package. This is now possible because of PR #1316.
2. I've updated the dataset card.
Cheers! 😄 | benjaminvdb | https://github.com/huggingface/datasets/pull/1699 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1699",
"html_url": "https://github.com/huggingface/datasets/pull/1699",
"diff_url": "https://github.com/huggingface/datasets/pull/1699.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1699.patch",
"merged_at": "2021-01-07T13:40... | true |
781,152,561 | 1,698 | Update Coached Conv Pref DatasetCard | closed | [] | 2021-01-07T09:07:16 | 2021-01-08T17:04:33 | 2021-01-08T17:04:32 | Update Coached Conversation Preference DatasetCard | vineeths96 | https://github.com/huggingface/datasets/pull/1698 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1698",
"html_url": "https://github.com/huggingface/datasets/pull/1698",
"diff_url": "https://github.com/huggingface/datasets/pull/1698.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1698.patch",
"merged_at": "2021-01-08T17:04... | true |
781,126,579 | 1,697 | Update DialogRE DatasetCard | closed | [] | 2021-01-07T08:22:33 | 2021-01-07T13:34:28 | 2021-01-07T13:34:28 | Update the information in the dataset card for the Dialog RE dataset. | vineeths96 | https://github.com/huggingface/datasets/pull/1697 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1697",
"html_url": "https://github.com/huggingface/datasets/pull/1697",
"diff_url": "https://github.com/huggingface/datasets/pull/1697.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1697.patch",
"merged_at": "2021-01-07T13:34... | true |
781,096,918 | 1,696 | Unable to install datasets | closed | [] | 2021-01-07T07:24:37 | 2021-01-08T00:33:05 | 2021-01-07T22:06:05 | ** Edit **
I believe there's a bug with the package when you're installing it with Python 3.9. I recommend sticking with previous versions. Thanks, @thomwolf for the insight!
**Short description**
I followed the instructions for installing datasets (https://huggingface.co/docs/datasets/installation.html). Howev... | glee2429 | https://github.com/huggingface/datasets/issues/1696 | null | false |
780,971,987 | 1,695 | fix ner_tag bugs in thainer | closed | [] | 2021-01-07T02:12:33 | 2021-01-07T14:43:45 | 2021-01-07T14:43:28 | fix a bug that results in `ner_tag` always being equal to 'O'. | cstorm125 | https://github.com/huggingface/datasets/pull/1695 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1695",
"html_url": "https://github.com/huggingface/datasets/pull/1695",
"diff_url": "https://github.com/huggingface/datasets/pull/1695.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1695.patch",
"merged_at": "2021-01-07T14:43... | true |
780,429,080 | 1,694 | Add OSCAR | closed | [] | 2021-01-06T10:21:08 | 2021-01-25T09:10:33 | 2021-01-25T09:10:32 | Continuation of #348
The files have been moved to S3 and only the unshuffled version is available.
Both original and deduplicated versions of each language are available.
Example of usage:
```python
from datasets import load_dataset
oscar_dedup_en = load_dataset("oscar", "unshuffled_deduplicated_en", split="... | lhoestq | https://github.com/huggingface/datasets/pull/1694 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1694",
"html_url": "https://github.com/huggingface/datasets/pull/1694",
"diff_url": "https://github.com/huggingface/datasets/pull/1694.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1694.patch",
"merged_at": "2021-01-25T09:10... | true |
780,268,595 | 1,693 | Fix reuters metadata parsing errors | closed | [] | 2021-01-06T08:26:03 | 2021-01-07T23:53:47 | 2021-01-07T14:01:22 | Was missing the last entry in each metadata category | jbragg | https://github.com/huggingface/datasets/pull/1693 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1693",
"html_url": "https://github.com/huggingface/datasets/pull/1693",
"diff_url": "https://github.com/huggingface/datasets/pull/1693.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1693.patch",
"merged_at": "2021-01-07T14:01... | true |
779,882,271 | 1,691 | Updated HuggingFace Datasets README (fix typos) | closed | [] | 2021-01-06T02:14:38 | 2021-01-16T23:30:47 | 2021-01-07T10:06:32 | Awesome work on 🤗 Datasets. I found a couple of small typos in the README. Hope this helps.

| 8bitmp3 | https://github.com/huggingface/datasets/pull/1691 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1691",
"html_url": "https://github.com/huggingface/datasets/pull/1691",
"diff_url": "https://github.com/huggingface/datasets/pull/1691.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1691.patch",
"merged_at": "2021-01-07T10:06... | true |
779,441,631 | 1,690 | Fast start up | closed | [] | 2021-01-05T19:07:53 | 2021-01-06T14:20:59 | 2021-01-06T14:20:58 | Currently if optional dependencies such as tensorflow, torch, apache_beam, faiss and elasticsearch are installed, then it takes a long time to do `import datasets` since it imports all of these heavy dependencies.
To make a fast start up for `datasets` I changed that so that they are not imported when `datasets` is ... | lhoestq | https://github.com/huggingface/datasets/pull/1690 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1690",
"html_url": "https://github.com/huggingface/datasets/pull/1690",
"diff_url": "https://github.com/huggingface/datasets/pull/1690.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1690.patch",
"merged_at": "2021-01-06T14:20... | true |
779,107,313 | 1,689 | Fix ade_corpus_v2 config names | closed | [] | 2021-01-05T14:33:28 | 2021-01-05T14:55:09 | 2021-01-05T14:55:08 | There are currently some typos in the config names of the `ade_corpus_v2` dataset; I fixed them:
- Ade_corpos_v2_classificaion -> Ade_corpus_v2_classification
- Ade_corpos_v2_drug_ade_relation -> Ade_corpus_v2_drug_ade_relation
- Ade_corpos_v2_drug_dosage_relation -> Ade_corpus_v2_drug_dosage_relation | lhoestq | https://github.com/huggingface/datasets/pull/1689 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1689",
"html_url": "https://github.com/huggingface/datasets/pull/1689",
"diff_url": "https://github.com/huggingface/datasets/pull/1689.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1689.patch",
"merged_at": "2021-01-05T14:55... | true |
779,029,685 | 1,688 | Fix DaNE last example | closed | [] | 2021-01-05T13:29:37 | 2021-01-05T14:00:15 | 2021-01-05T14:00:13 | The last example from the DaNE dataset is empty.
Fix #1686 | lhoestq | https://github.com/huggingface/datasets/pull/1688 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1688",
"html_url": "https://github.com/huggingface/datasets/pull/1688",
"diff_url": "https://github.com/huggingface/datasets/pull/1688.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1688.patch",
"merged_at": "2021-01-05T14:00... | true |
779,004,894 | 1,687 | Question: Shouldn't .info be a part of DatasetDict? | open | [] | 2021-01-05T13:08:41 | 2021-01-07T10:18:06 | null | Currently, only `Dataset` contains the .info or .features, but since many datasets contain standard splits (train, test), the underlying information is the same (or at least should be) across the datasets.
For instance:
```
>>> ds = datasets.load_dataset("conll2002", "es")
>>> ds.info
Traceback (most rece... | KennethEnevoldsen | https://github.com/huggingface/datasets/issues/1687 | null | false |
778,921,684 | 1,686 | Dataset Error: DaNE contains empty samples at the end | closed | [] | 2021-01-05T11:54:26 | 2021-01-05T14:01:09 | 2021-01-05T14:00:13 | The DaNE dataset contains empty samples at the end. These are naturally easy to remove using a filter, but they should probably not be there to begin with, as they can cause errors.
```python
>>> import datasets
[...]
>>> dataset = datasets.load_dataset("dane")
[...]
>>> dataset["test"][-1]
{'dep_ids': [], 'dep_labels': ... | KennethEnevoldsen | https://github.com/huggingface/datasets/issues/1686 | null | false |
778,914,431 | 1,685 | Update README.md of covid-tweets-japanese | closed | [] | 2021-01-05T11:47:27 | 2021-01-06T10:27:12 | 2021-01-06T09:31:10 | Update README.md of covid-tweets-japanese added by PR https://github.com/huggingface/datasets/pull/1367 and https://github.com/huggingface/datasets/pull/1402.
- Update "Data Splits" to be more precise that no information is provided for now.
- old: [More Information Needed]
- new: No information about data spl... | forest1988 | https://github.com/huggingface/datasets/pull/1685 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1685",
"html_url": "https://github.com/huggingface/datasets/pull/1685",
"diff_url": "https://github.com/huggingface/datasets/pull/1685.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1685.patch",
"merged_at": "2021-01-06T09:31... | true |
778,356,196 | 1,684 | Add CANER Corpus | closed | [] | 2021-01-04T20:49:11 | 2021-01-25T09:09:20 | 2021-01-25T09:09:20 | What does this PR do?
Adds the following dataset:
https://github.com/RamziSalah/Classical-Arabic-Named-Entity-Recognition-Corpus
Who can review?
@lhoestq | KMFODA | https://github.com/huggingface/datasets/pull/1684 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1684",
"html_url": "https://github.com/huggingface/datasets/pull/1684",
"diff_url": "https://github.com/huggingface/datasets/pull/1684.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1684.patch",
"merged_at": "2021-01-25T09:09... | true |
778,287,612 | 1,683 | `ArrowInvalid` occurs while running `Dataset.map()` function for DPRContext | closed | [] | 2021-01-04T18:47:53 | 2021-01-04T19:04:45 | 2021-01-04T19:04:45 | It seems to fail the final batch ):
steps to reproduce:
```
from datasets import load_dataset
from elasticsearch import Elasticsearch
import torch
from transformers import file_utils, set_seed
from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast
MAX_SEQ_LENGTH = 256
ctx_encoder = DPRCon... | abarbosa94 | https://github.com/huggingface/datasets/issues/1683 | null | false |
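A hedged, self-contained reconstruction of the truncated reproduction (the model checkpoint, dataset, and column name are assumptions); an `ArrowInvalid` on the final batch typically points at output arrays whose size no longer matches the smaller last batch:
```python
import torch
from datasets import load_dataset
from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast

MAX_SEQ_LENGTH = 256
name = "facebook/dpr-ctx_encoder-single-nq-base"
ctx_encoder = DPRContextEncoder.from_pretrained(name)
ctx_tokenizer = DPRContextEncoderTokenizerFast.from_pretrained(name)

ds = load_dataset("crime_and_punish", split="train[:100]")

def embed(batch):
    inputs = ctx_tokenizer(
        batch["line"], truncation=True, padding="max_length",
        max_length=MAX_SEQ_LENGTH, return_tensors="pt",
    )
    with torch.no_grad():
        return {"embeddings": ctx_encoder(**inputs).pooler_output.numpy()}

ds = ds.map(embed, batched=True, batch_size=16)
```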
778,268,156 | 1,682 | Don't use xlrd for xlsx files | closed | [] | 2021-01-04T18:11:50 | 2021-01-04T18:13:14 | 2021-01-04T18:13:13 | Since the latest release of `xlrd` (2.0), support for xlsx files has been dropped.
Therefore we needed to use something else.
A good alternative is `openpyxl`, which also integrates with pandas, so we can still call `pd.read_excel`.
I left the unused import of `openpyxl` in the dataset scripts to show users that ... | lhoestq | https://github.com/huggingface/datasets/pull/1682 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1682",
"html_url": "https://github.com/huggingface/datasets/pull/1682",
"diff_url": "https://github.com/huggingface/datasets/pull/1682.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1682.patch",
"merged_at": "2021-01-04T18:13... | true |
777,644,163 | 1,681 | Dataset "dane" missing | closed | [] | 2021-01-03T14:03:03 | 2021-01-05T08:35:35 | 2021-01-05T08:35:13 | The `dane` dataset appears to be missing in the latest version (1.1.3).
```python
>>> import datasets
>>> datasets.__version__
'1.1.3'
>>> "dane" in datasets.list_datasets()
True
```
As we can see it should be present, but doesn't seem to be findable when using `load_dataset`.
```python
>>> datasets.load... | KennethEnevoldsen | https://github.com/huggingface/datasets/issues/1681 | null | false |
777,623,053 | 1,680 | added TurkishProductReviews dataset | closed | [] | 2021-01-03T11:52:59 | 2021-01-04T18:15:35 | 2021-01-04T18:15:35 | This PR added the **Turkish Product Reviews** dataset, which contains 235,165 product reviews collected online: 220,284 positive and 14,881 negative.
- **Repository:** [turkish-text-data](https://github.com/fthbrmnby/turkish-text-data)
- **Point of Contact:** Fatih Barmanbay - @fthbrmnby | basakbuluz | https://github.com/huggingface/datasets/pull/1680 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1680",
"html_url": "https://github.com/huggingface/datasets/pull/1680",
"diff_url": "https://github.com/huggingface/datasets/pull/1680.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1680.patch",
"merged_at": "2021-01-04T18:15... | true |
777,587,792 | 1,679 | Can't import cc100 dataset | closed | [] | 2021-01-03T07:12:56 | 2022-10-05T12:42:25 | 2022-10-05T12:42:25 | There is an issue when importing the cc100 dataset.
```
from datasets import load_dataset
dataset = load_dataset("cc100")
```
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/cc100/cc100.py
During handling of the above exception, another exception occur... | alighofrani95 | https://github.com/huggingface/datasets/issues/1679 | null | false |
777,567,920 | 1,678 | Switchboard Dialog Act Corpus added under `datasets/swda` | closed | [] | 2021-01-03T03:53:41 | 2021-01-08T18:09:21 | 2021-01-05T10:06:35 | Switchboard Dialog Act Corpus
Intro:
The Switchboard Dialog Act Corpus (SwDA) extends the Switchboard-1 Telephone Speech Corpus, Release 2,
with turn/utterance-level dialog-act tags. The tags summarize syntactic, semantic, and pragmatic information
about the associated turn. The SwDA project was undertaken at UC ... | gmihaila | https://github.com/huggingface/datasets/pull/1678 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1678",
"html_url": "https://github.com/huggingface/datasets/pull/1678",
"diff_url": "https://github.com/huggingface/datasets/pull/1678.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1678.patch",
"merged_at": "2021-01-05T10:06... | true |
777,553,383 | 1,677 | Switchboard Dialog Act Corpus added under `datasets/swda` | closed | [] | 2021-01-03T01:16:42 | 2021-01-03T02:55:57 | 2021-01-03T02:55:56 | Pleased to announce that I added my first dataset, the **Switchboard Dialog Act Corpus**.
I think this is an important dataset to add since it is the only one related to dialogue act classification.
Hope the pull request is ok. Wasn't able to see any special formatting for the pull request form.
The Swi... | gmihaila | https://github.com/huggingface/datasets/pull/1677 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1677",
"html_url": "https://github.com/huggingface/datasets/pull/1677",
"diff_url": "https://github.com/huggingface/datasets/pull/1677.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1677.patch",
"merged_at": null
} | true |
777,477,645 | 1,676 | new version of Ted Talks IWSLT (WIT3) | closed | [] | 2021-01-02T15:30:03 | 2021-01-14T10:10:19 | 2021-01-14T10:10:19 | In the previous iteration (#1608) I had used language pairs, which created 21,582 configs (109*108)!
Now, TED talks in _each language_ are a separate config. So it's much cleaner with _just 109 configs_ (one for each language). Dummy files were created manually.
Locally I was able to clear the `python dataset... | skyprince999 | https://github.com/huggingface/datasets/pull/1676 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1676",
"html_url": "https://github.com/huggingface/datasets/pull/1676",
"diff_url": "https://github.com/huggingface/datasets/pull/1676.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1676.patch",
"merged_at": "2021-01-14T10:10... | true |
777,367,320 | 1,675 | Add the 800GB Pile dataset? | closed | [] | 2021-01-01T22:58:12 | 2021-12-01T15:29:07 | 2021-12-01T15:29:07 | ## Adding a Dataset
- **Name:** The Pile
- **Description:** The Pile is a 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twitter announcement
- **Paper:*... | lewtun | https://github.com/huggingface/datasets/issues/1675 | null | false |
777,321,840 | 1,674 | dutch_social can't be loaded | closed | [] | 2021-01-01T17:37:08 | 2022-10-05T13:03:26 | 2022-10-05T13:03:26 | Hi all,
I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social).
However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links.
```
(base) Koens-MacBook-Pro:~ koe... | koenvandenberge | https://github.com/huggingface/datasets/issues/1674 | null | false |
777,263,651 | 1,673 | Unable to Download Hindi Wikipedia Dataset | closed | [] | 2021-01-01T10:52:53 | 2021-01-05T10:22:12 | 2021-01-05T10:22:12 | I used the datasets library in Python to load the wikipedia dataset with the Hindi config 20200501.hi, along with something called beam_runner='DirectRunner', and it keeps giving me the error that the file is not found. I have attached screenshots of both the error and the code. Please help me to understand how to reso... | aditya3498 | https://github.com/huggingface/datasets/issues/1673 | null | false |
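For context, the documented invocation the issue refers to (DirectRunner executes the Apache Beam preprocessing pipeline locally):
```python
from datasets import load_dataset

# The Hindi Wikipedia config is Beam-based and needs a runner for preprocessing.
dataset = load_dataset("wikipedia", "20200501.hi", beam_runner="DirectRunner")
```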
777,258,941 | 1,672 | load_dataset hang on file_lock | closed | [] | 2021-01-01T10:25:07 | 2021-03-31T16:24:13 | 2021-01-01T11:47:36 | I am trying to load the squad dataset. Fails on Windows 10 but succeeds in Colab.
Transformers: 3.3.1
Datasets: 1.0.2
Windows 10 (also tested in WSL)
```
datasets.logging.set_verbosity_debug()
train_dataset = load_dataset('squad', split='train')
valid_dataset = load_dataset('squad', split='validat... | tomacai | https://github.com/huggingface/datasets/issues/1672 | null | false |
776,652,193 | 1,671 | connection issue | closed | [] | 2020-12-30T21:56:20 | 2022-10-05T12:42:12 | 2022-10-05T12:42:12 | Hi
I am getting this connection issue, resulting in large failures on the cloud. @lhoestq, I appreciate your help with this.
If I want to keep the codes the same, so not using save_to_disk, load_from_disk, but save the datastes in the way load_dataset reads from and copy the files in the same folder the datasets library r... | rabeehkarimimahabadi | https://github.com/huggingface/datasets/issues/1671 | null | false |
776,608,579 | 1,670 | wiki_dpr pre-processing performance | open | [] | 2020-12-30T19:41:43 | 2021-01-28T09:41:36 | null | I've been working with wiki_dpr and noticed that the dataset processing is seriously impaired in performance [1]. It takes about 12h to process the entire dataset. Most of this time is simply loading and processing the data, but the actual indexing is also quite slow (3h).
I won't repeat the concerns around multipro... | dbarnhart | https://github.com/huggingface/datasets/issues/1670 | null | false |
776,608,386 | 1,669 | wiki_dpr dataset pre-processesing performance | closed | [] | 2020-12-30T19:41:09 | 2020-12-30T19:42:25 | 2020-12-30T19:42:25 | I've been working with wiki_dpr and noticed that the dataset processing is seriously impaired in performance [1]. It takes about 12h to process the entire dataset. Most of this time is simply loading and processing the data, but the actual indexing is also quite slow (3h).
I won't repeat the concerns around multipro... | dbarnhart | https://github.com/huggingface/datasets/issues/1669 | null | false |
776,552,854 | 1,668 | xed_en_fi dataset Cleanup | closed | [] | 2020-12-30T17:11:18 | 2020-12-30T17:22:44 | 2020-12-30T17:22:43 | Fix ClassLabel feature type and minor mistakes in the dataset card | lhoestq | https://github.com/huggingface/datasets/pull/1668 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1668",
"html_url": "https://github.com/huggingface/datasets/pull/1668",
"diff_url": "https://github.com/huggingface/datasets/pull/1668.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1668.patch",
"merged_at": "2020-12-30T17:22... | true |
776,446,658 | 1,667 | Fix NER metric example in Overview notebook | closed | [] | 2020-12-30T13:05:19 | 2020-12-31T01:12:08 | 2020-12-30T17:21:51 | Fix errors in `NER metric example` section in `Overview.ipynb`.
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-37-ee559b166e25> in <module>()
----> 1 ner_metric = load_metric('seqeval')
... | jungwhank | https://github.com/huggingface/datasets/pull/1667 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1667",
"html_url": "https://github.com/huggingface/datasets/pull/1667",
"diff_url": "https://github.com/huggingface/datasets/pull/1667.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1667.patch",
"merged_at": "2020-12-30T17:21... | true |
776,432,006 | 1,666 | Add language to dataset card for Makhzan dataset. | closed | [] | 2020-12-30T12:25:52 | 2020-12-30T17:20:35 | 2020-12-30T17:20:35 | Add language to dataset card. | arkhalid | https://github.com/huggingface/datasets/pull/1666 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1666",
"html_url": "https://github.com/huggingface/datasets/pull/1666",
"diff_url": "https://github.com/huggingface/datasets/pull/1666.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1666.patch",
"merged_at": "2020-12-30T17:20... | true |
776,431,087 | 1,665 | Add language to dataset card for Counter dataset. | closed | [] | 2020-12-30T12:23:20 | 2020-12-30T17:20:20 | 2020-12-30T17:20:20 | Add language. | arkhalid | https://github.com/huggingface/datasets/pull/1665 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1665",
"html_url": "https://github.com/huggingface/datasets/pull/1665",
"diff_url": "https://github.com/huggingface/datasets/pull/1665.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1665.patch",
"merged_at": "2020-12-30T17:20... | true |
775,956,441 | 1,664 | removed \n in labels | closed | [] | 2020-12-29T15:41:43 | 2020-12-30T17:18:49 | 2020-12-30T17:18:49 | updated social_i_qa labels as per #1633 | bhavitvyamalik | https://github.com/huggingface/datasets/pull/1664 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1664",
"html_url": "https://github.com/huggingface/datasets/pull/1664",
"diff_url": "https://github.com/huggingface/datasets/pull/1664.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1664.patch",
"merged_at": "2020-12-30T17:18... | true |
775,914,320 | 1,663 | update saving and loading methods for faiss index so to accept path l… | closed | [] | 2020-12-29T14:15:37 | 2021-01-18T09:27:23 | 2021-01-18T09:27:23 | - Update saving and loading methods for faiss index so as to accept path-like objects from pathlib
The current code only supports a string type for saving and loading a faiss index. This change makes it possible to use a string type OR a Path from [pathlib](https://docs.python.org/3/library/pathlib.html). The codes bec... | tslott | https://github.com/huggingface/datasets/pull/1663 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1663",
"html_url": "https://github.com/huggingface/datasets/pull/1663",
"diff_url": "https://github.com/huggingface/datasets/pull/1663.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1663.patch",
"merged_at": "2021-01-18T09:27... | true |
775,890,154 | 1,662 | Arrow file is too large when saving vector data | closed | [] | 2020-12-29T13:23:12 | 2021-01-21T14:12:39 | 2021-01-21T14:12:39 | I computed the sentence embedding of each sentence of the BookCorpus data using BERT base and saved them to disk. I used 20M sentences, and the resulting arrow file is about 59GB, while the original text file is only about 1.3GB. Are there any ways to reduce the size of the arrow file? | weiwangorg | https://github.com/huggingface/datasets/issues/1662 | null | false |
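A back-of-the-envelope check, assuming bert-base (768-dim) float32 embeddings, suggesting the reported size is expected rather than a bug:
```python
n_sentences = 20_000_000
bytes_per_vector = 768 * 4  # 768 dims x 4 bytes (float32)
print(n_sentences * bytes_per_vector / 2**30)  # ~57 GiB, close to the reported 59GB
```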
775,840,801 | 1,661 | updated dataset cards | closed | [] | 2020-12-29T11:20:40 | 2020-12-30T17:15:16 | 2020-12-30T17:15:16 | added dataset instance in the card. | Nilanshrajput | https://github.com/huggingface/datasets/pull/1661 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1661",
"html_url": "https://github.com/huggingface/datasets/pull/1661",
"diff_url": "https://github.com/huggingface/datasets/pull/1661.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1661.patch",
"merged_at": "2020-12-30T17:15... | true |
775,831,423 | 1,660 | add dataset info | closed | [] | 2020-12-29T10:58:19 | 2020-12-30T17:04:30 | 2020-12-30T17:04:30 | harshalmittal4 | https://github.com/huggingface/datasets/pull/1660 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1660",
"html_url": "https://github.com/huggingface/datasets/pull/1660",
"diff_url": "https://github.com/huggingface/datasets/pull/1660.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1660.patch",
"merged_at": "2020-12-30T17:04... | true | |
775,831,288 | 1,659 | update dataset info | closed | [] | 2020-12-29T10:58:01 | 2020-12-30T16:55:07 | 2020-12-30T16:55:07 | harshalmittal4 | https://github.com/huggingface/datasets/pull/1659 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1659",
"html_url": "https://github.com/huggingface/datasets/pull/1659",
"diff_url": "https://github.com/huggingface/datasets/pull/1659.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1659.patch",
"merged_at": "2020-12-30T16:55... | true | |
775,651,085 | 1,658 | brwac dataset: add instances and data splits info | closed | [] | 2020-12-29T01:24:45 | 2020-12-30T16:54:26 | 2020-12-30T16:54:26 | jonatasgrosman | https://github.com/huggingface/datasets/pull/1658 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1658",
"html_url": "https://github.com/huggingface/datasets/pull/1658",
"diff_url": "https://github.com/huggingface/datasets/pull/1658.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1658.patch",
"merged_at": "2020-12-30T16:54... | true | |
775,647,000 | 1,657 | mac_morpho dataset: add data splits info | closed | [] | 2020-12-29T01:05:21 | 2020-12-30T16:51:24 | 2020-12-30T16:51:24 | jonatasgrosman | https://github.com/huggingface/datasets/pull/1657 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1657",
"html_url": "https://github.com/huggingface/datasets/pull/1657",
"diff_url": "https://github.com/huggingface/datasets/pull/1657.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1657.patch",
"merged_at": "2020-12-30T16:51... | true | |
775,645,356 | 1,656 | assin 2 dataset: add instances and data splits info | closed | [] | 2020-12-29T00:57:51 | 2020-12-30T16:50:56 | 2020-12-30T16:50:56 | jonatasgrosman | https://github.com/huggingface/datasets/pull/1656 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1656",
"html_url": "https://github.com/huggingface/datasets/pull/1656",
"diff_url": "https://github.com/huggingface/datasets/pull/1656.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1656.patch",
"merged_at": "2020-12-30T16:50... | true | |
775,643,418 | 1,655 | assin dataset: add instances and data splits info | closed | [] | 2020-12-29T00:47:56 | 2020-12-30T16:50:23 | 2020-12-30T16:50:23 | jonatasgrosman | https://github.com/huggingface/datasets/pull/1655 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1655",
"html_url": "https://github.com/huggingface/datasets/pull/1655",
"diff_url": "https://github.com/huggingface/datasets/pull/1655.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1655.patch",
"merged_at": "2020-12-30T16:50... | true | |
775,640,729 | 1,654 | lener_br dataset: add instances and data splits info | closed | [] | 2020-12-29T00:35:12 | 2020-12-30T16:49:32 | 2020-12-30T16:49:32 | jonatasgrosman | https://github.com/huggingface/datasets/pull/1654 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1654",
"html_url": "https://github.com/huggingface/datasets/pull/1654",
"diff_url": "https://github.com/huggingface/datasets/pull/1654.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1654.patch",
"merged_at": "2020-12-30T16:49... | true | |
775,632,945 | 1,653 | harem dataset: add data splits info | closed | [] | 2020-12-28T23:58:20 | 2020-12-30T16:49:03 | 2020-12-30T16:49:03 | jonatasgrosman | https://github.com/huggingface/datasets/pull/1653 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1653",
"html_url": "https://github.com/huggingface/datasets/pull/1653",
"diff_url": "https://github.com/huggingface/datasets/pull/1653.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1653.patch",
"merged_at": "2020-12-30T16:49... | true | |
775,571,813 | 1,652 | Update dataset cards from previous sprint | closed | [] | 2020-12-28T20:20:47 | 2020-12-30T16:48:04 | 2020-12-30T16:48:04 | This PR updates the dataset cards/readmes for the 4 approved PRs I submitted in the previous sprint. | j-chim | https://github.com/huggingface/datasets/pull/1652 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1652",
"html_url": "https://github.com/huggingface/datasets/pull/1652",
"diff_url": "https://github.com/huggingface/datasets/pull/1652.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1652.patch",
"merged_at": "2020-12-30T16:48... | true |
775,554,319 | 1,651 | Add twi wordsim353 | closed | [] | 2020-12-28T19:31:55 | 2021-01-04T09:39:39 | 2021-01-04T09:39:38 | Added the citation information to the README file | dadelani | https://github.com/huggingface/datasets/pull/1651 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1651",
"html_url": "https://github.com/huggingface/datasets/pull/1651",
"diff_url": "https://github.com/huggingface/datasets/pull/1651.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1651.patch",
"merged_at": "2021-01-04T09:39... | true |
775,545,912 | 1,650 | Update README.md | closed | [] | 2020-12-28T19:09:05 | 2020-12-29T10:43:14 | 2020-12-29T10:43:14 | added dataset summary | MisbahKhan789 | https://github.com/huggingface/datasets/pull/1650 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1650",
"html_url": "https://github.com/huggingface/datasets/pull/1650",
"diff_url": "https://github.com/huggingface/datasets/pull/1650.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1650.patch",
"merged_at": "2020-12-29T10:43... | true |
775,544,487 | 1,649 | Update README.md | closed | [] | 2020-12-28T19:05:00 | 2020-12-29T10:50:58 | 2020-12-29T10:43:03 | Added information in the dataset card | MisbahKhan789 | https://github.com/huggingface/datasets/pull/1649 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1649",
"html_url": "https://github.com/huggingface/datasets/pull/1649",
"diff_url": "https://github.com/huggingface/datasets/pull/1649.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1649.patch",
"merged_at": "2020-12-29T10:43... | true |
775,542,360 | 1,648 | Update README.md | closed | [] | 2020-12-28T18:59:06 | 2020-12-29T10:39:14 | 2020-12-29T10:39:14 | added dataset summary | MisbahKhan789 | https://github.com/huggingface/datasets/pull/1648 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1648",
"html_url": "https://github.com/huggingface/datasets/pull/1648",
"diff_url": "https://github.com/huggingface/datasets/pull/1648.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1648.patch",
"merged_at": "2020-12-29T10:39... | true |
775,525,799 | 1,647 | NarrativeQA fails to load with `load_dataset` | closed | [] | 2020-12-28T18:16:09 | 2021-01-05T12:05:08 | 2021-01-03T17:58:05 | When loading the NarrativeQA dataset with `load_dataset('narrativeqa')` as given in the documentation [here](https://huggingface.co/datasets/narrativeqa), I receive a cascade of exceptions, ending with
FileNotFoundError: Couldn't find file locally at narrativeqa/narrativeqa.py, or remotely at
https://r... | eric-mitchell | https://github.com/huggingface/datasets/issues/1647 | null | false |
775,499,344 | 1,646 | Add missing homepage in some dataset cards | closed | [] | 2020-12-28T17:09:48 | 2021-01-04T14:08:57 | 2021-01-04T14:08:56 | In some dataset cards the homepage field in the `Dataset Description` section was missing/empty | lhoestq | https://github.com/huggingface/datasets/pull/1646 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1646",
"html_url": "https://github.com/huggingface/datasets/pull/1646",
"diff_url": "https://github.com/huggingface/datasets/pull/1646.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1646.patch",
"merged_at": "2021-01-04T14:08... | true |
775,473,106 | 1,645 | Rename "part-of-speech-tagging" tag in some dataset cards | closed | [] | 2020-12-28T16:09:09 | 2021-01-07T10:08:14 | 2021-01-07T10:08:13 | `part-of-speech-tagging` was not part of the tagging taxonomy under `structure-prediction` | lhoestq | https://github.com/huggingface/datasets/pull/1645 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1645",
"html_url": "https://github.com/huggingface/datasets/pull/1645",
"diff_url": "https://github.com/huggingface/datasets/pull/1645.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1645.patch",
"merged_at": "2021-01-07T10:08... | true |
775,375,880 | 1,644 | HoVeR dataset fails to load | closed | [] | 2020-12-28T12:27:07 | 2022-10-05T12:40:34 | 2022-10-05T12:40:34 | Hi! I'm getting an error when trying to load **HoVeR** dataset. Another one (**SQuAD**) does work for me. I'm using the latest (1.1.3) version of the library.
Steps to reproduce the error:
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("hover")
Traceback (most recent call last):
... | urikz | https://github.com/huggingface/datasets/issues/1644 | null | false |
775,280,046 | 1,643 | Dataset social_bias_frames 404 | closed | [] | 2020-12-28T08:35:34 | 2020-12-28T08:38:07 | 2020-12-28T08:38:07 | ```
>>> from datasets import load_dataset
>>> dataset = load_dataset("social_bias_frames")
...
Downloading and preparing dataset social_bias_frames/default
...
~/.pyenv/versions/3.7.6/lib/python3.7/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, ... | atemate | https://github.com/huggingface/datasets/issues/1643 | null | false |
775,159,568 | 1,642 | Ollie dataset | closed | [] | 2020-12-28T02:43:37 | 2021-01-04T13:35:25 | 2021-01-04T13:35:24 | This is the dataset used to train the Ollie open information extraction algorithm. It has over 21M sentences. See http://knowitall.github.io/ollie/ for more details. | huu4ontocord | https://github.com/huggingface/datasets/pull/1642 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1642",
"html_url": "https://github.com/huggingface/datasets/pull/1642",
"diff_url": "https://github.com/huggingface/datasets/pull/1642.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1642.patch",
"merged_at": "2021-01-04T13:35... | true |
775,110,872 | 1,641 | muchocine dataset cannot be downloaded | closed | [] | 2020-12-27T21:26:28 | 2021-08-03T05:07:29 | 2021-08-03T05:07:29 | ```python
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, ... | mrm8488 | https://github.com/huggingface/datasets/issues/1641 | null | false |
774,921,836 | 1,640 | Fix "'BertTokenizerFast' object has no attribute 'max_len'" | closed | [] | 2020-12-26T19:25:41 | 2020-12-28T17:26:35 | 2020-12-28T17:26:35 | Tensorflow 2.3.0 gives:
FutureWarning: The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead.
Tensorflow 2.4.0 gives:
AttributeError 'BertTokenizerFast' object has no attribute 'max_len' | mflis | https://github.com/huggingface/datasets/pull/1640 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1640",
"html_url": "https://github.com/huggingface/datasets/pull/1640",
"diff_url": "https://github.com/huggingface/datasets/pull/1640.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1640.patch",
"merged_at": "2020-12-28T17:26... | true |
774,903,472 | 1,639 | bug with sst2 in glue | closed | [] | 2020-12-26T16:57:23 | 2022-10-05T12:40:16 | 2022-10-05T12:40:16 | Hi
I am getting very low accuracy on SST-2. I investigated this and observed that for this dataset the sentences are tokenized, while the other datasets in GLUE are fine; please see below.
Are there any alternatives by which I could get untokenized sentences? I am unfortunately under time pressure to report some results on ... | ghost | https://github.com/huggingface/datasets/issues/1639 | null | false |
774,869,184 | 1,638 | Add id_puisi dataset | closed | [] | 2020-12-26T12:41:55 | 2020-12-30T16:34:17 | 2020-12-30T16:34:17 | Puisi (poem) is an Indonesian poetic form. The dataset contains 7,223 Indonesian puisi, each with its title and author. :)
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1638",
"html_url": "https://github.com/huggingface/datasets/pull/1638",
"diff_url": "https://github.com/huggingface/datasets/pull/1638.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1638.patch",
"merged_at": "2020-12-30T16:34... | true |
774,710,014 | 1,637 | Added `pn_summary` dataset | closed | [] | 2020-12-25T11:01:24 | 2021-01-04T13:43:19 | 2021-01-04T13:43:19 | #1635
You did a great job with the smooth procedure for adding a dataset. I took the chance to add the dataset on my own. Thank you for your awesome work, and I hope this dataset makes researchers happy, specifically those interested in the Persian language (Farsi)!
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1637",
"html_url": "https://github.com/huggingface/datasets/pull/1637",
"diff_url": "https://github.com/huggingface/datasets/pull/1637.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1637.patch",
"merged_at": "2021-01-04T13:43... | true |