id | url | html_url | number | title | state | comments | created_at | updated_at | closed_at | user_login | labels | body | is_pull_request
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,194,579,257 | https://api.github.com/repos/huggingface/datasets/issues/4109 | https://github.com/huggingface/datasets/pull/4109 | 4,109 | Add Spearmanr Metric Card | closed | 3 | 2022-04-06T12:57:53 | 2022-05-03T16:50:26 | 2022-05-03T16:43:37 | emibaylor | [] | null | true |
1,194,578,584 | https://api.github.com/repos/huggingface/datasets/issues/4108 | https://github.com/huggingface/datasets/pull/4108 | 4,108 | Perplexity Speedup | closed | 7 | 2022-04-06T12:57:21 | 2022-04-20T13:00:54 | 2022-04-20T12:54:42 | emibaylor | [] | This PR makes necessary changes to perplexity such that:
- it runs much faster (via batching)
- it throws an error when the input is empty, or when the input is a single word without a <BOS> token
- it adds the option to add a <BOS> token
Issues:
- The values returned are extremely high, and I'm worried they aren't correct. Ev... | true |
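The batching idea behind the speedup can be illustrated with a short sketch (hypothetical code, not the metric's actual implementation; the model name, padding strategy, and function name are assumptions):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def batched_perplexity(texts, model_id="gpt2", batch_size=8, device="cpu"):
    # hypothetical sketch of per-text perplexity computed in batches
    tok = AutoTokenizer.from_pretrained(model_id)
    tok.pad_token = tok.eos_token  # GPT-2 has no pad token of its own
    model = AutoModelForCausalLM.from_pretrained(model_id).to(device).eval()
    ppls = []
    for i in range(0, len(texts), batch_size):
        enc = tok(texts[i : i + batch_size], return_tensors="pt", padding=True).to(device)
        ids, mask = enc.input_ids, enc.attention_mask
        with torch.no_grad():
            logits = model(ids, attention_mask=mask).logits
        # token t is predicted from tokens < t, hence the shift
        loss = torch.nn.functional.cross_entropy(
            logits[:, :-1].transpose(1, 2), ids[:, 1:], reduction="none"
        )
        shift_mask = mask[:, 1:]
        # a one-token input leaves nothing to predict (shift_mask sums to 0),
        # which is why such inputs need a <BOS> token or an error
        nll = (loss * shift_mask).sum(dim=1) / shift_mask.sum(dim=1)
        ppls.extend(torch.exp(nll).tolist())
    return ppls
```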
1,194,484,885 | https://api.github.com/repos/huggingface/datasets/issues/4107 | https://github.com/huggingface/datasets/issues/4107 | 4,107 | Unable to view the dataset and loading the same dataset throws the error - ArrowInvalid: Exceeded maximum rows | closed | 5 | 2022-04-06T11:37:15 | 2022-04-08T07:13:07 | 2022-04-06T14:39:55 | Pavithree | [
"bug"
] | ## Dataset viewer issue - ArrowInvalid: Exceeded maximum rows
**Link:** *https://huggingface.co/datasets/Pavithree/explainLikeImFive*
*This is a subset of the original eli5 dataset https://huggingface.co/datasets/vblagoje/lfqa. I just filtered the data samples which belong to one particular subreddit thread. How... | false |
1,194,393,892 | https://api.github.com/repos/huggingface/datasets/issues/4106 | https://github.com/huggingface/datasets/pull/4106 | 4,106 | Support huggingface_hub 0.5 | closed | 14 | 2022-04-06T10:15:25 | 2022-04-08T10:28:43 | 2022-04-08T10:22:23 | lhoestq | [] | Following https://github.com/huggingface/datasets/issues/4105
`huggingface_hub` deprecated some parameters in `HfApi` in 0.5. This PR updates all the calls to HfApi to remove all the deprecations, <s>and I set the `huggingface_hub` requirement to `>=0.5.0`</s>
cc @adrinjalali @LysandreJik | true |
1,194,297,119 | https://api.github.com/repos/huggingface/datasets/issues/4105 | https://github.com/huggingface/datasets/issues/4105 | 4,105 | push to hub fails with huggingface-hub 0.5.0 | closed | 5 | 2022-04-06T08:59:57 | 2022-04-13T14:30:47 | 2022-04-13T14:30:47 | frascuchon | [
"bug"
] | ## Describe the bug
`ds.push_to_hub` is failing when updating a dataset in the form "org_id/repo_id"
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("rubrix/news_test")
ds.push_to_hub("<your-user>/news_test", token="<your-token>")
```
## Expected results
The data... | false |
1,194,072,966 | https://api.github.com/repos/huggingface/datasets/issues/4104 | https://github.com/huggingface/datasets/issues/4104 | 4,104 | Add time series data - stock market | open | 10 | 2022-04-06T05:46:58 | 2024-07-21T16:54:30 | null | rozeappletree | [
"dataset request"
] | ## Adding a Time Series Dataset
- **Name:** 2min ticker data for stock market
- **Description:** 8 stocks' data collected for 1 month after the start of the Ukraine-Russia war: 4 NSE stocks and 4 NASDAQ stocks, along with technical indicators (additional features) as shown in the image below
- **Data:** Collected by myself from investing... | false |
1,193,987,104 | https://api.github.com/repos/huggingface/datasets/issues/4103 | https://github.com/huggingface/datasets/pull/4103 | 4,103 | Add the `GSM8K` dataset | closed | 2 | 2022-04-06T04:07:52 | 2022-04-12T15:38:28 | 2022-04-12T10:21:16 | jon-tow | [] | null | true |
1,193,616,722 | https://api.github.com/repos/huggingface/datasets/issues/4102 | https://github.com/huggingface/datasets/pull/4102 | 4,102 | [hub] Fix `api.create_repo` call? | closed | 2 | 2022-04-05T19:21:52 | 2023-09-24T10:01:14 | 2022-04-12T08:41:46 | julien-c | [] | null | true |
1,193,399,204 | https://api.github.com/repos/huggingface/datasets/issues/4101 | https://github.com/huggingface/datasets/issues/4101 | 4,101 | How can I download only the train and test split for full numbers using load_dataset()? | open | 1 | 2022-04-05T16:00:15 | 2022-04-06T13:09:01 | null | Nakkhatra | [
"enhancement"
] | How can I download only the train and test split for full numbers using load_dataset()?
I do not need the extra split and it will take 40 mins just to download in Colab. I am very short on time. Please help. | false |
1,193,393,959 | https://api.github.com/repos/huggingface/datasets/issues/4100 | https://github.com/huggingface/datasets/pull/4100 | 4,100 | Improve RedCaps dataset card | closed | 2 | 2022-04-05T15:57:14 | 2022-04-13T14:08:54 | 2022-04-13T14:02:26 | mariosasko | [] | This PR modifies the RedCaps card to:
* fix the formatting of the Point of Contact fields on the Hub
* speed up the image fetching logic (aligns it with the [img2dataset](https://github.com/rom1504/img2dataset) tool) and make it more robust (return None if **any** exception is thrown) | true |
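The "return None if any exception is thrown" behaviour described above can be illustrated with a minimal sketch (function and parameter names are assumptions, not the card's actual code):
```python
import io

import requests
from PIL import Image

def fetch_image(url: str, timeout: float = 1.0, retries: int = 0):
    # return the decoded image, or None if *any* exception is thrown
    for _ in range(retries + 1):
        try:
            response = requests.get(url, timeout=timeout)
            response.raise_for_status()
            return Image.open(io.BytesIO(response.content))
        except Exception:
            continue
    return None
```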
1,193,253,768 | https://api.github.com/repos/huggingface/datasets/issues/4099 | https://github.com/huggingface/datasets/issues/4099 | 4,099 | UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128) | closed | 3 | 2022-04-05T14:42:38 | 2022-04-06T06:37:44 | 2022-04-06T06:35:54 | andreybond | [
"bug"
] | ## Describe the bug
Error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)" is thrown when downloading dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset("nielsr/XFUN", "xfun.ja")
```
## Expected resu... | false |
1,193,245,522 | https://api.github.com/repos/huggingface/datasets/issues/4098 | https://github.com/huggingface/datasets/pull/4098 | 4,098 | Proposing WikiSplit metric card | closed | 3 | 2022-04-05T14:36:34 | 2022-10-11T09:10:21 | 2022-04-05T15:42:28 | sashavor | [] | Pinging @lhoestq to ensure that my distinction between the dataset and the metric is clear :sweat_smile: | true |
1,193,205,751 | https://api.github.com/repos/huggingface/datasets/issues/4097 | https://github.com/huggingface/datasets/pull/4097 | 4,097 | Updating FrugalScore metric card | closed | 1 | 2022-04-05T14:09:24 | 2022-04-05T15:07:35 | 2022-04-05T15:01:46 | sashavor | [] | removing duplicate paragraph | true |
1,193,165,229 | https://api.github.com/repos/huggingface/datasets/issues/4096 | https://github.com/huggingface/datasets/issues/4096 | 4,096 | Add support for streaming Zarr stores for hosted datasets | closed | 11 | 2022-04-05T13:38:32 | 2023-12-07T09:01:49 | 2022-04-21T08:12:58 | jacobbieker | [
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support streaming in data in Zarr format as far as I can tell. Zarr s... | false |
1,192,573,353 | https://api.github.com/repos/huggingface/datasets/issues/4095 | https://github.com/huggingface/datasets/pull/4095 | 4,095 | fix typo in rename_column error message | closed | 1 | 2022-04-05T03:55:56 | 2022-04-05T08:54:46 | 2022-04-05T08:45:53 | hunterlang | [] | I feel bad submitting such a tiny change as a PR but it confused me today 😄 | true |
1,192,534,414 | https://api.github.com/repos/huggingface/datasets/issues/4094 | https://github.com/huggingface/datasets/issues/4094 | 4,094 | Helo Mayfrends | closed | 0 | 2022-04-05T02:42:57 | 2022-04-05T07:16:42 | 2022-04-05T07:16:42 | Budigming | [
"dataset request"
] | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reas... | false |
1,192,523,161 | https://api.github.com/repos/huggingface/datasets/issues/4093 | https://github.com/huggingface/datasets/issues/4093 | 4,093 | elena-soare/crawled-ecommerce: missing dataset | closed | 3 | 2022-04-05T02:25:19 | 2022-04-12T09:34:53 | 2022-04-12T09:34:53 | seevaratnam | [
"dataset-viewer"
] | elena-soare/crawled-ecommerce
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| false |
1,192,499,903 | https://api.github.com/repos/huggingface/datasets/issues/4092 | https://github.com/huggingface/datasets/pull/4092 | 4,092 | Fix dataset `amazon_us_reviews` metadata - 4/4/2022 | closed | 2 | 2022-04-05T01:39:45 | 2022-04-08T12:35:41 | 2022-04-08T12:29:31 | trentonstrong | [] | Fixes #4048 by running `datasets-cli test` to reprocess the data and regenerate the metadata. Additionally, I've updated the README to include up-to-date counts for the subsets. | true |
1,192,023,855 | https://api.github.com/repos/huggingface/datasets/issues/4091 | https://github.com/huggingface/datasets/issues/4091 | 4,091 | Build a Dataset One Example at a Time Without Loading All Data Into Memory | closed | 2 | 2022-04-04T16:19:24 | 2022-04-20T14:31:00 | 2022-04-20T14:31:00 | aravind-tonita | [
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
I have a very large dataset stored on disk in a custom format. I have some custom code that reads one data example at a time and yields it in the form of a dictionary. I want to construct a `Dataset` with all examples, and then save it to disk. I la... | false |
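In more recent versions of `datasets` (newer than this issue), this use case is covered by `Dataset.from_generator`, which writes examples to Arrow files on disk as they are yielded. A sketch, with the generator standing in for the custom reader described above:
```python
from datasets import Dataset

def example_generator():
    # stand-in for the custom code that reads one example at a time
    for i in range(1_000_000):
        yield {"id": i, "text": f"example {i}"}

# examples are flushed to disk as they are produced,
# so the full dataset never has to fit in memory
ds = Dataset.from_generator(example_generator)
ds.save_to_disk("my_dataset")
```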
1,191,956,734 | https://api.github.com/repos/huggingface/datasets/issues/4090 | https://github.com/huggingface/datasets/pull/4090 | 4,090 | Avoid writing empty license files | closed | 1 | 2022-04-04T15:23:37 | 2022-04-07T12:46:45 | 2022-04-07T12:40:43 | albertvillanova | [] | This PR avoids the creation of empty `LICENSE` files. | true |
1,191,915,196 | https://api.github.com/repos/huggingface/datasets/issues/4089 | https://github.com/huggingface/datasets/pull/4089 | 4,089 | Create metric card for Frugal Score | closed | 1 | 2022-04-04T14:53:49 | 2022-04-05T14:14:46 | 2022-04-05T14:06:50 | sashavor | [] | Proposing metric card for Frugal Score.
@albertvillanova or @lhoestq -- there are certain aspects that I'm not 100% sure on (such as how exactly the distillation between BertScore and FrugalScore is done) -- so if you find that something isn't clear, please let me know! | true |
1,191,901,172 | https://api.github.com/repos/huggingface/datasets/issues/4088 | https://github.com/huggingface/datasets/pull/4088 | 4,088 | Remove unused legacy Beam utils | closed | 1 | 2022-04-04T14:43:51 | 2022-04-05T15:23:27 | 2022-04-05T15:17:41 | albertvillanova | [] | This PR removes the unused legacy custom `WriteToParquet`, since official Apache Beam includes the patch as of version 2.22.0:
- Patch PR: https://github.com/apache/beam/pull/11699
- Issue: https://issues.apache.org/jira/browse/BEAM-10022
In relation with:
- #204 | true |
1,191,819,805 | https://api.github.com/repos/huggingface/datasets/issues/4087 | https://github.com/huggingface/datasets/pull/4087 | 4,087 | Fix BeamWriter output Parquet file | closed | 1 | 2022-04-04T13:46:50 | 2022-04-05T15:00:40 | 2022-04-05T14:54:48 | albertvillanova | [] | Until now, the `BeamWriter` saved a Parquet file with a simplified schema, where each field value was serialized to JSON. That resulted in Parquet files larger than Arrow files.
This PR:
- writes Parquet file preserving original schema and without serialization, thus avoiding serialization overhead and resulting in... | true |
1,191,373,374 | https://api.github.com/repos/huggingface/datasets/issues/4086 | https://github.com/huggingface/datasets/issues/4086 | 4,086 | Dataset viewer issue for McGill-NLP/feedbackQA | closed | 2 | 2022-04-04T07:27:20 | 2022-04-04T22:29:53 | 2022-04-04T08:01:45 | cslizc | [
"dataset-viewer"
] | ## Dataset viewer issue for '*McGill-NLP/feedbackQA*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/McGill-NLP/feedbackQA)*
*short description of the issue*
The dataset can be loaded correctly with `load_dataset` but the preview doesn't work. Error message:
```
Status code: 4... | false |
1,190,621,345 | https://api.github.com/repos/huggingface/datasets/issues/4085 | https://github.com/huggingface/datasets/issues/4085 | 4,085 | datasets.set_progress_bar_enabled(False) not working in datasets v2 | closed | 3 | 2022-04-02T12:40:10 | 2022-09-17T02:18:03 | 2022-04-04T06:44:34 | virilo | [
"bug"
] | ## Describe the bug
datasets.set_progress_bar_enabled(False) not working in datasets v2
## Steps to reproduce the bug
```python
datasets.set_progress_bar_enabled(False)
```
## Expected results
datasets not using any progress bar
## Actual results
AttributeError: module 'datasets' has no attribute 'se... | false |
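For reference, `datasets` v2 replaces the old toggle with dedicated functions (to the best of my knowledge; a minimal sketch):
```python
import datasets

datasets.disable_progress_bar()  # v2 equivalent of set_progress_bar_enabled(False)
datasets.enable_progress_bar()   # and the inverse
```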
1,190,060,415 | https://api.github.com/repos/huggingface/datasets/issues/4084 | https://github.com/huggingface/datasets/issues/4084 | 4,084 | Errors in `Train with Datasets` Tensorflow code section on Huggingface.co | closed | 1 | 2022-04-01T17:02:47 | 2022-04-04T07:24:37 | 2022-04-04T07:21:31 | blackhat-coder | [
"bug"
] | ## Describe the bug
Hi
### Error 1
Running the TensorFlow code on [Huggingface](https://huggingface.co/docs/datasets/use_dataset) gives a TypeError: __init__() got an unexpected keyword argument 'return_tensors'
### Error 2
`DataCollatorWithPadding` isn't imported
## Steps to reproduce the bug
```python
impo... | false |
1,190,025,878 | https://api.github.com/repos/huggingface/datasets/issues/4083 | https://github.com/huggingface/datasets/pull/4083 | 4,083 | Add SacreBLEU Metric Card | closed | 1 | 2022-04-01T16:24:56 | 2022-04-12T20:45:00 | 2022-04-12T20:38:40 | emibaylor | [] | null | true |
1,189,965,845 | https://api.github.com/repos/huggingface/datasets/issues/4082 | https://github.com/huggingface/datasets/pull/4082 | 4,082 | Add chrF(++) Metric Card | closed | 1 | 2022-04-01T15:32:12 | 2022-04-12T20:43:55 | 2022-04-12T20:38:06 | emibaylor | [] | null | true |
1,189,916,472 | https://api.github.com/repos/huggingface/datasets/issues/4081 | https://github.com/huggingface/datasets/pull/4081 | 4,081 | Close parquet writer properly in `push_to_hub` | closed | 2 | 2022-04-01T14:58:50 | 2022-07-14T19:22:06 | 2022-04-01T16:16:19 | lhoestq | [] | We don’t call writer.close(), which causes https://github.com/huggingface/datasets/issues/4077. It can happen that we upload the file before the writer is garbage collected and has written the footer.
I fixed this by explicitly closing the parquet writer.
Close https://github.com/huggingface/datasets/issues/4077. | true |
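The failure mode is easy to reproduce with pyarrow alone (a minimal sketch, independent of the `datasets` internals):
```python
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"a": [1, 2, 3]})
sink = pa.BufferOutputStream()

writer = pq.ParquetWriter(sink, table.schema)
try:
    writer.write_table(table)
finally:
    # the Parquet footer (with the magic bytes) is only written on close;
    # uploading the buffer before this point yields an unreadable file
    writer.close()
```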
1,189,667,296 | https://api.github.com/repos/huggingface/datasets/issues/4080 | https://github.com/huggingface/datasets/issues/4080 | 4,080 | NonMatchingChecksumError for downloading conll2012_ontonotesv5 dataset | closed | 1 | 2022-04-01T11:34:28 | 2022-04-01T13:59:10 | 2022-04-01T13:59:10 | richarddwang | [
"duplicate",
"dataset bug"
] | ## Steps to reproduce the bug
```python
datasets.load_dataset("conll2012_ontonotesv5", "english_v12")
```
## Actual results
```
Downloading builder script: 32.2kB [00:00, 9.72MB/s]
Downloading metadata: 20.0kB [00:00, 10... | false |
1,189,521,576 | https://api.github.com/repos/huggingface/datasets/issues/4079 | https://github.com/huggingface/datasets/pull/4079 | 4,079 | Increase max retries for GitHub datasets | closed | 1 | 2022-04-01T09:34:03 | 2022-04-01T15:32:40 | 2022-04-01T15:27:11 | albertvillanova | [] | As GitHub recurrently raises connectivity issues, this PR increases the number of max retries to request GitHub datasets, as previously done for GitHub metrics:
- #4063
Note that this is a temporary solution, while we decide when and how to load GitHub datasets from the Hub:
- #4059
Fix #2048
Related to:
- ... | true |
1,189,513,572 | https://api.github.com/repos/huggingface/datasets/issues/4078 | https://github.com/huggingface/datasets/pull/4078 | 4,078 | Fix GithubMetricModuleFactory instantiation with None download_config | closed | 1 | 2022-04-01T09:26:58 | 2022-04-01T14:44:51 | 2022-04-01T14:39:27 | albertvillanova | [] | Recent PR:
- #4063
introduced a potential bug if `GithubMetricModuleFactory` is instantiated with None `download_config`.
This PR adds instantiation tests and fixes that potential issue.
CC: @lhoestq | true |
1,189,467,585 | https://api.github.com/repos/huggingface/datasets/issues/4077 | https://github.com/huggingface/datasets/issues/4077 | 4,077 | ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file. | closed | 0 | 2022-04-01T08:49:13 | 2022-04-01T16:16:19 | 2022-04-01T16:16:19 | NielsRogge | [
"bug"
] | ## Describe the bug
When uploading a relatively large image dataset of > 1GB, reloading doesn't work for me, even though pushing to the hub went just fine.
Basically, I do:
```
from datasets import load_dataset
dataset = load_dataset("imagefolder", data_files="path_to_my_files")
dataset.push_to_hub("dat... | false |
1,188,478,867 | https://api.github.com/repos/huggingface/datasets/issues/4076 | https://github.com/huggingface/datasets/pull/4076 | 4,076 | Add ROUGE Metric Card | closed | 1 | 2022-03-31T18:34:34 | 2022-04-12T20:43:45 | 2022-04-12T20:37:38 | emibaylor | [] | Add ROUGE metric card.
I've left the 'Values from popular papers' section empty for the time being because I don't know the summarization literature very well and am therefore not sure which paper(s) to pull from (note that the original rouge paper does not seem to present specific values, just correlations with hum... | true |
1,188,462,162 | https://api.github.com/repos/huggingface/datasets/issues/4075 | https://github.com/huggingface/datasets/issues/4075 | 4,075 | Add CCAgT dataset | closed | 4 | 2022-03-31T18:20:28 | 2022-07-06T19:03:42 | 2022-07-06T19:03:42 | johnnv1 | [
"dataset request",
"vision"
] | ## Adding a Dataset
- **Name:** CCAgT dataset: Images of Cervical Cells with AgNOR Stain Technique
- **Description:** The dataset contains 2540 images (1600x1200 where each pixel is 0.111μm×0.111μm) from three different slides, having at least one nucleus per image. These images are from fields belonging to a sample ... | false |
1,188,449,142 | https://api.github.com/repos/huggingface/datasets/issues/4074 | https://github.com/huggingface/datasets/issues/4074 | 4,074 | Error in google/xtreme_s dataset card | closed | 1 | 2022-03-31T18:07:45 | 2022-04-01T08:12:56 | 2022-04-01T08:12:56 | wranai | [
"documentation",
"dataset bug"
] | **Link:** https://huggingface.co/datasets/google/xtreme_s
Not a big deal but Hungarian is considered an Eastern European language, together with Serbian, Slovak, Slovenian (all correctly categorized; Slovenia is mostly to the West of Hungary, by the way).
| false |
1,188,364,711 | https://api.github.com/repos/huggingface/datasets/issues/4073 | https://github.com/huggingface/datasets/pull/4073 | 4,073 | Create a metric card for Competition MATH | closed | 1 | 2022-03-31T16:48:59 | 2022-04-01T19:02:39 | 2022-04-01T18:57:13 | sashavor | [] | Proposing metric card for Competition MATH | true |
1,188,266,410 | https://api.github.com/repos/huggingface/datasets/issues/4072 | https://github.com/huggingface/datasets/pull/4072 | 4,072 | Add installation instructions to image_process doc | closed | 1 | 2022-03-31T15:29:37 | 2022-03-31T17:05:46 | 2022-03-31T17:00:19 | mariosasko | [] | This PR adds the installation instructions for the Image feature to the image process doc. | true |
1,187,587,683 | https://api.github.com/repos/huggingface/datasets/issues/4071 | https://github.com/huggingface/datasets/issues/4071 | 4,071 | Loading issue for xuyeliu/notebookCDG dataset | closed | 1 | 2022-03-31T06:36:29 | 2022-03-31T08:17:01 | 2022-03-31T08:16:16 | Jun-jie-Huang | [
"dataset bug"
] | ## Dataset viewer issue for '*xuyeliu/notebookCDG*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/xuyeliu/notebookCDG)*
*Couldn't load xuyeliu/notebookCDG with the provided script:*
```
from datasets import load_dataset
dataset = load_dataset("xuyeliu/notebookCDG/dataset_note... | false |
1,186,810,205 | https://api.github.com/repos/huggingface/datasets/issues/4070 | https://github.com/huggingface/datasets/pull/4070 | 4,070 | Create metric card for seqeval | closed | 1 | 2022-03-30T18:08:01 | 2022-04-01T19:02:58 | 2022-04-01T18:57:25 | sashavor | [] | Proposing metric card for seqeval. Not sure which values to report for Popular papers though. | true |
1,186,790,578 | https://api.github.com/repos/huggingface/datasets/issues/4069 | https://github.com/huggingface/datasets/pull/4069 | 4,069 | Add support for metadata files to `imagefolder` | closed | 7 | 2022-03-30T17:47:51 | 2022-05-03T12:49:00 | 2022-05-03T12:42:16 | mariosasko | [] | This PR adds support for metadata files to `imagefolder` to add an ability to specify image fields other than `image` and `label`, which are inferred from the directory structure in the loaded dataset.
To be parsed as an image metadata file, a file should be named `"info.csv"` and should have the following structure... | true |
1,186,765,422 | https://api.github.com/repos/huggingface/datasets/issues/4068 | https://github.com/huggingface/datasets/pull/4068 | 4,068 | Improve out of bounds error message | closed | 1 | 2022-03-30T17:22:10 | 2022-03-31T08:39:08 | 2022-03-31T08:33:57 | lhoestq | [] | In 1.18.4 with https://github.com/huggingface/datasets/pull/3719 we introduced an error message for users using `select` with out of bounds indices. The message ended up being confusing for some users because it mentioned negative indices, which is not the main use case.
I replaced it with a message that is very sim... | true |
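For context, a minimal sketch of the case the message covers (exact wording of the error aside):
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})
ds.select([0, 2])  # fine
ds.select([0, 5])  # raises an IndexError pointing at the out-of-bounds index
```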
1,186,731,905 | https://api.github.com/repos/huggingface/datasets/issues/4067 | https://github.com/huggingface/datasets/pull/4067 | 4,067 | Update datasets task tags to align tags with models | closed | 3 | 2022-03-30T16:49:32 | 2022-04-13T17:37:27 | 2022-04-13T17:31:11 | lhoestq | [] | **Requires https://github.com/huggingface/datasets/pull/4066 to be merged first**
Following https://github.com/huggingface/datasets/pull/4066 we need to update many dataset tags to use the new ones. This PR takes care of this and is quite big - feel free to review only certain tags if you don't want to spend too muc... | true |
1,186,728,104 | https://api.github.com/repos/huggingface/datasets/issues/4066 | https://github.com/huggingface/datasets/pull/4066 | 4,066 | Tasks alignment with models | closed | 8 | 2022-03-30T16:45:56 | 2022-04-13T13:12:52 | 2022-04-08T12:20:00 | lhoestq | [] | I updated our `tasks.json` file with the new task taxonomy that is aligned with models.
The rule that defines a task is the following:
**Two tasks are different if and only if the steps of their pipelines** are different, i.e. if they can’t reasonably be implemented using the same coherent code (level of granular... | true |
1,186,722,478 | https://api.github.com/repos/huggingface/datasets/issues/4065 | https://github.com/huggingface/datasets/pull/4065 | 4,065 | Create metric card for METEOR | closed | 1 | 2022-03-30T16:40:30 | 2022-03-31T17:12:10 | 2022-03-31T17:07:50 | sashavor | [] | Proposing a metric card for METEOR | true |
1,186,650,321 | https://api.github.com/repos/huggingface/datasets/issues/4064 | https://github.com/huggingface/datasets/pull/4064 | 4,064 | Contributing MedMCQA dataset | closed | 15 | 2022-03-30T15:42:47 | 2022-05-06T09:40:40 | 2022-05-06T08:42:56 | monk1337 | [] | Adding MedMCQA dataset ( https://paperswithcode.com/dataset/medmcqa )
**Name**: MedMCQA
**Description**: MedMCQA is a large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions.
MedMCQA has more than 194k high-quality AIIMS & NEET PG entranc... | true |
1,186,611,368 | https://api.github.com/repos/huggingface/datasets/issues/4063 | https://github.com/huggingface/datasets/pull/4063 | 4,063 | Increase max retries for GitHub metrics | closed | 1 | 2022-03-30T15:12:48 | 2022-03-31T14:42:52 | 2022-03-31T14:37:47 | albertvillanova | [] | As GitHub recurrently raises connectivity issues, this PR increases the number of max retries to request GitHub metrics.
Related to:
- #3134
Also related to:
- #4059 | true |
1,186,330,732 | https://api.github.com/repos/huggingface/datasets/issues/4062 | https://github.com/huggingface/datasets/issues/4062 | 4,062 | Loading mozilla-foundation/common_voice_7_0 dataset failed | closed | 10 | 2022-03-30T11:39:41 | 2024-06-09T12:12:46 | 2022-03-31T08:18:04 | aapot | [
"dataset bug"
] | ## Describe the bug
I wanted to load `mozilla-foundation/common_voice_7_0` dataset with `fi` language and `test` split from datasets on Colab/Kaggle notebook, but I am getting an error `JSONDecodeError: [Errno Expecting value] Not Found: 0` while loading it. The bug seems to affect other languages and splits too than ... | false |
1,186,317,071 | https://api.github.com/repos/huggingface/datasets/issues/4061 | https://github.com/huggingface/datasets/issues/4061 | 4,061 | Loading cnn_dailymail dataset failed | closed | 1 | 2022-03-30T11:29:02 | 2022-03-30T13:36:14 | 2022-03-30T13:36:14 | Arij-Aladel | [
"bug",
"duplicate"
] | ## Describe the bug
I wanted to load the cnn_dailymail dataset from huggingface datasets in JupyterLab, but I am getting a `NotADirectoryError: [Errno 20] Not a directory` error while loading it.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.... | false |
1,186,281,033 | https://api.github.com/repos/huggingface/datasets/issues/4060 | https://github.com/huggingface/datasets/pull/4060 | 4,060 | Deprecate canonical Multilingual Librispeech | closed | 7 | 2022-03-30T10:56:56 | 2022-04-01T12:54:05 | 2022-04-01T12:48:51 | polinaeterna | [] | Deprecate canonical Multilingual Librispeech in favor of [the community one](https://huggingface.co/datasets/facebook/multilingual_librispeech) which supports streaming.
However, there is a problem regarding new ASR template schema: since it's changed, I guess all community datasets that use this template do not wor... | true |
1,186,149,949 | https://api.github.com/repos/huggingface/datasets/issues/4059 | https://github.com/huggingface/datasets/pull/4059 | 4,059 | Load GitHub datasets from Hub | closed | 10 | 2022-03-30T09:21:56 | 2022-09-16T12:43:26 | 2022-09-16T12:40:43 | albertvillanova | [] | We have recurrently had connection errors when requesting GitHub because sometimes the site is not available.
This PR requests the Hub instead, once all GitHub datasets are mirrored on the Hub.
Fix #2048
Related to:
- #4051
- #3210
- #2787
- #2075
- #2036 | true |
1,185,611,600 | https://api.github.com/repos/huggingface/datasets/issues/4058 | https://github.com/huggingface/datasets/pull/4058 | 4,058 | Updated annotations for nli_tr dataset | closed | 2 | 2022-03-29T23:46:59 | 2022-04-12T20:55:12 | 2022-04-12T10:37:22 | e-budur | [] | This PR adds annotation tags for the `nli_tr` dataset so that it can be searched with respect to the relevant query parameters.
The annotations in this PR are based on the existing annotations of `snli` and `multi_nli` datasets as `nli_tr` is a machine-generated extension of those datasets.
This PR is intended only for u... | true |
1,185,442,001 | https://api.github.com/repos/huggingface/datasets/issues/4057 | https://github.com/huggingface/datasets/issues/4057 | 4,057 | `load_dataset` consumes too much memory for audio + tar archives | closed | 18 | 2022-03-29T21:38:55 | 2022-08-16T10:22:55 | 2022-08-16T10:22:55 | JFCeron | [
"bug"
] |
## Description
`load_dataset` consumes more and more memory until it's killed, even though it's made with a generator. I'm adding a loading script for a new dataset, made up of ~15s audio coming from a tar file. Tried setting `DEFAULT_WRITER_BATCH_SIZE = 1` as per the discussion in #741 but the problem persists.
... | false |
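For reference, the writer batch size mentioned above is a class attribute of the loading script's builder; a skeletal sketch (all names are placeholders):
```python
import datasets

class MyAudioDataset(datasets.GeneratorBasedBuilder):
    # flush every example to the Arrow file immediately
    # instead of buffering many examples in RAM
    DEFAULT_WRITER_BATCH_SIZE = 1

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"audio": datasets.Audio(sampling_rate=16_000)})
        )

    def _split_generators(self, dl_manager):
        ...

    def _generate_examples(self, archive):
        ...
```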
1,185,155,775 | https://api.github.com/repos/huggingface/datasets/issues/4056 | https://github.com/huggingface/datasets/issues/4056 | 4,056 | Unexpected behavior of _TempDirWithCustomCleanup | open | 2 | 2022-03-29T16:58:22 | 2022-03-30T15:08:04 | null | JonasGeiping | [
"bug"
] | ## Describe the bug
This is not 100% a bug in `datasets`, but behavior that surprised me, and I think this could be made more robust on the `datasets` side.
When using `datasets.disable_caching()`, cache files are written to a temporary directory. This directory should be based on the environment variable TMPDIR. I ... | false |
1,184,976,292 | https://api.github.com/repos/huggingface/datasets/issues/4055 | https://github.com/huggingface/datasets/pull/4055 | 4,055 | [DO NOT MERGE] Test doc-builder | closed | 2 | 2022-03-29T14:39:02 | 2022-03-30T12:31:14 | 2022-03-30T12:25:52 | lewtun | [] | This is a test PR to ensure the changes in https://github.com/huggingface/doc-builder/pull/164 don't break anything in `datasets` | true |
1,184,575,368 | https://api.github.com/repos/huggingface/datasets/issues/4054 | https://github.com/huggingface/datasets/pull/4054 | 4,054 | Support float data types in pearsonr/spearmanr metrics | closed | 1 | 2022-03-29T09:29:10 | 2022-03-29T14:07:59 | 2022-03-29T14:02:20 | albertvillanova | [] | Fix #4053. | true |
1,184,500,378 | https://api.github.com/repos/huggingface/datasets/issues/4053 | https://github.com/huggingface/datasets/issues/4053 | 4,053 | Modify datatype from `int32` to `float` for pearsonr, spearmanr. | closed | 1 | 2022-03-29T08:27:41 | 2022-03-29T14:02:20 | 2022-03-29T14:02:20 | woodywarhol9 | [
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
- Now [Pearsonr](https://github.com/huggingface/datasets/blob/master/metrics/pearsonr/pearsonr.py) and [Spearmanr](https://github.com/huggingface/datasets/blob/master/metrics/spearmanr/spearmanr.py) both get input data as 'int32'.
**Describe the ... | false |
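After the change, calls with float inputs should work; a small usage sketch:
```python
from datasets import load_metric

pearsonr = load_metric("pearsonr")
# float predictions/references, which previously clashed with the int32 schema
print(pearsonr.compute(predictions=[0.1, 0.9, 0.4], references=[0.2, 0.8, 0.5]))
```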
1,184,447,977 | https://api.github.com/repos/huggingface/datasets/issues/4052 | https://github.com/huggingface/datasets/issues/4052 | 4,052 | metric = metric_cls( TypeError: 'NoneType' object is not callable | closed | 1 | 2022-03-29T07:43:08 | 2022-03-29T14:06:01 | 2022-03-29T14:06:01 | klyuhang9 | [] | Hi, friend. I've run into a problem.
When I run the code:
`metric = load_metric('glue', 'rte')`
The following error is raised:
`metric = metric_cls(`
`TypeError: 'NoneType' object is not callable`
I don't know why. Thanks for your help!
| false |
1,184,400,179 | https://api.github.com/repos/huggingface/datasets/issues/4051 | https://github.com/huggingface/datasets/issues/4051 | 4,051 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py | closed | 5 | 2022-03-29T07:00:31 | 2022-05-08T07:27:32 | 2022-03-29T08:29:25 | klyuhang9 | [] | Hi, I've run into a problem.
When I run the code:
`dataset = load_dataset('glue','sst2')`
The following issue is raised:
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py
I don't know why; the URL opens fine when I view it in Google Chrome.
Thanks for your... | false |
1,184,346,501 | https://api.github.com/repos/huggingface/datasets/issues/4050 | https://github.com/huggingface/datasets/pull/4050 | 4,050 | Add RVL-CDIP dataset | closed | 14 | 2022-03-29T06:00:02 | 2022-04-22T09:55:07 | 2022-04-21T17:15:41 | dnaveenr | [] | Resolves #2762
Dataset Request : Add RVL-CDIP dataset [#2762](https://github.com/huggingface/datasets/issues/2762)
This PR adds the RVL-CDIP dataset.
The dataset is distributed via a Google Drive link and wasn't getting downloaded automatically, so I have provided manual_download_instructions.
- I have added ... | true |
1,183,832,893 | https://api.github.com/repos/huggingface/datasets/issues/4049 | https://github.com/huggingface/datasets/pull/4049 | 4,049 | Create metric card for the Code Eval metric | closed | 3 | 2022-03-28T18:34:23 | 2022-03-29T13:38:12 | 2022-03-29T13:32:50 | sashavor | [] | Creating initial Code Eval metric card | true |
1,183,804,576 | https://api.github.com/repos/huggingface/datasets/issues/4048 | https://github.com/huggingface/datasets/issues/4048 | 4,048 | Split size error on `amazon_us_reviews` / `PC_v1_00` dataset | closed | 3 | 2022-03-28T18:12:04 | 2022-04-08T12:29:30 | 2022-04-08T12:29:30 | trentonstrong | [
"bug",
"good first issue"
] | ## Describe the bug
When downloading this subset as of 3-28-2022 you will encounter a split size error after the dataset is extracted. The extracted dataset has roughly ~6m rows while the split expects <1m.
Upon digging a little deeper, I downloaded the raw files from `https://s3.amazonaws.com/amazon-reviews-pds/t... | false |
1,183,789,237 | https://api.github.com/repos/huggingface/datasets/issues/4047 | https://github.com/huggingface/datasets/issues/4047 | 4,047 | Dataset.unique(column: str) -> ArrowNotImplementedError | closed | 3 | 2022-03-28T17:59:32 | 2022-04-01T18:24:57 | 2022-04-01T18:24:57 | orkenstein | [
"bug"
] | ## Describe the bug
I'm trying to use the `unique()` function, but it fails
## Steps to reproduce the bug
1. Get dataset
2. Call `unique`
3. Error
# Sample code to reproduce the bug
```python
!pip show datasets
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
dataset['train'].col... | false |
1,183,723,360 | https://api.github.com/repos/huggingface/datasets/issues/4046 | https://github.com/huggingface/datasets/pull/4046 | 4,046 | Create metric card for XNLI | closed | 1 | 2022-03-28T16:57:58 | 2022-03-29T13:32:59 | 2022-03-29T13:27:30 | sashavor | [] | Proposing a metric card for XNLI | true |
1,183,661,091 | https://api.github.com/repos/huggingface/datasets/issues/4045 | https://github.com/huggingface/datasets/pull/4045 | 4,045 | Fix CLI dummy data generation | closed | 1 | 2022-03-28T16:09:15 | 2022-03-31T15:04:12 | 2022-03-31T14:59:06 | albertvillanova | [] | PR:
- #3868
broke the CLI dummy data generation.
Fix #4044. | true |
1,183,658,942 | https://api.github.com/repos/huggingface/datasets/issues/4044 | https://github.com/huggingface/datasets/issues/4044 | 4,044 | CLI dummy data generation is broken | closed | 0 | 2022-03-28T16:07:37 | 2022-03-31T14:59:06 | 2022-03-31T14:59:06 | albertvillanova | [
"bug"
] | ## Describe the bug
We get a TypeError when running CLI dummy data generation:
```shell
datasets-cli dummy_data datasets/<your-dataset-folder> --auto_generate
```
gives:
```
File ".../huggingface/datasets/src/datasets/commands/dummy_data.py", line 361, in _autogenerate_dummy_data
dataset_builder._prepare_... | false |
1,183,624,475 | https://api.github.com/repos/huggingface/datasets/issues/4043 | https://github.com/huggingface/datasets/pull/4043 | 4,043 | Create metric card for CUAD | closed | 1 | 2022-03-28T15:38:58 | 2022-03-29T15:20:56 | 2022-03-29T15:15:19 | sashavor | [] | Proposing a CUAD metric card | true |
1,183,599,461 | https://api.github.com/repos/huggingface/datasets/issues/4041 | https://github.com/huggingface/datasets/issues/4041 | 4,041 | Add support for IIIF in datasets | open | 1 | 2022-03-28T15:19:25 | 2022-04-05T18:20:53 | null | davanstrien | [
"enhancement"
] | This is a feature request for support for IIIF in `datasets`. Apologies for the long issue. I have also used a different format to the usual feature request since I think that makes more sense but happy to use the standard template if preferred.
## What is [IIIF](https://iiif.io/)?
IIIF (International Image Inte... | false |
1,183,468,927 | https://api.github.com/repos/huggingface/datasets/issues/4039 | https://github.com/huggingface/datasets/pull/4039 | 4,039 | Support streaming xcopa dataset | closed | 1 | 2022-03-28T13:45:55 | 2022-03-28T16:26:48 | 2022-03-28T16:21:46 | albertvillanova | [] | null | true |
1,183,189,827 | https://api.github.com/repos/huggingface/datasets/issues/4038 | https://github.com/huggingface/datasets/pull/4038 | 4,038 | [DO NOT MERGE] Test doc-builder with skipped installation feature | closed | 2 | 2022-03-28T09:58:31 | 2023-09-24T10:01:05 | 2022-03-28T12:29:09 | lewtun | [] | This PR is just for testing that we can build PR docs with changes made on the [`skip-install-for-real`](https://github.com/huggingface/doc-builder/tree/skip-install-for-real) branch of `doc-builder`. | true |
1,183,144,486 | https://api.github.com/repos/huggingface/datasets/issues/4037 | https://github.com/huggingface/datasets/issues/4037 | 4,037 | Error while building documentation | closed | 2 | 2022-03-28T09:22:44 | 2022-03-28T10:01:52 | 2022-03-28T10:00:48 | albertvillanova | [
"bug"
] | ## Describe the bug
Documentation building is failing:
- https://github.com/huggingface/datasets/runs/5716300989?check_suite_focus=true
```
ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format.
Unable to find datasets.filesystems.S3FileSystem... | false |
1,183,126,893 | https://api.github.com/repos/huggingface/datasets/issues/4036 | https://github.com/huggingface/datasets/pull/4036 | 4,036 | Fix building of documentation | closed | 2 | 2022-03-28T09:09:12 | 2023-09-24T09:55:34 | 2022-03-28T11:13:22 | albertvillanova | [] | Documentation building is failing:
- https://github.com/huggingface/datasets/runs/5716300989?check_suite_focus=true
```
ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format.
Unable to find datasets.filesystems.S3FileSystem in datasets. Make su... | true |
1,183,067,456 | https://api.github.com/repos/huggingface/datasets/issues/4035 | https://github.com/huggingface/datasets/pull/4035 | 4,035 | Add zero_division argument to precision and recall metrics | closed | 0 | 2022-03-28T08:19:14 | 2022-03-28T09:53:07 | 2022-03-28T09:53:06 | albertvillanova | [] | Fix #4025. | true |
1,183,033,285 | https://api.github.com/repos/huggingface/datasets/issues/4034 | https://github.com/huggingface/datasets/pull/4034 | 4,034 | Fix null checksum in xcopa dataset | closed | 0 | 2022-03-28T07:48:14 | 2022-03-28T08:06:14 | 2022-03-28T08:06:14 | albertvillanova | [] | null | true |
1,182,984,445 | https://api.github.com/repos/huggingface/datasets/issues/4033 | https://github.com/huggingface/datasets/pull/4033 | 4,033 | Fix checksum error in cats_vs_dogs dataset | closed | 1 | 2022-03-28T07:01:25 | 2022-03-28T07:49:39 | 2022-03-28T07:44:24 | albertvillanova | [] | A recent PR updated the metadata JSON file of the cats_vs_dogs dataset:
- #3878
However, that new JSON file contains a None checksum.
This PR fixes it.
Fix #4032. | true |
1,182,595,697 | https://api.github.com/repos/huggingface/datasets/issues/4032 | https://github.com/huggingface/datasets/issues/4032 | 4,032 | can't download cats_vs_dogs dataset | closed | 1 | 2022-03-27T17:05:39 | 2022-03-28T07:44:24 | 2022-03-28T07:44:24 | RRaphaell | [
"bug"
] | ## Describe the bug
can't download cats_vs_dogs dataset. error: Checksums didn't match for dataset source files
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("cats_vs_dogs")
```
## Expected results
loaded successfully.
## Actual results
NonMatchingCheck... | false |
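A common workaround at the time, assuming the downloaded files themselves were intact, was to skip checksum verification (this masks the stale-metadata problem rather than fixing it):
```python
from datasets import load_dataset

dataset = load_dataset("cats_vs_dogs", ignore_verifications=True)
```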
1,182,415,124 | https://api.github.com/repos/huggingface/datasets/issues/4031 | https://github.com/huggingface/datasets/issues/4031 | 4,031 | Cannot load the dataset conll2012_ontonotesv5 | closed | 1 | 2022-03-27T07:38:23 | 2022-03-28T06:58:31 | 2022-03-28T06:31:18 | cathyxl | [
"bug"
] | ## Describe the bug
Cannot load the dataset conll2012_ontonotesv5
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
dataset = load_dataset('conll2012_ontonotesv5', 'english_v4', split="test")
print(dataset)
```
## Expected results
The datasets s... | false |
1,182,157,056 | https://api.github.com/repos/huggingface/datasets/issues/4030 | https://github.com/huggingface/datasets/pull/4030 | 4,030 | Use a constant for the articles regex in SQuAD v2 | closed | 1 | 2022-03-26T23:06:30 | 2022-04-12T16:30:45 | 2022-04-12T11:00:24 | bryant1410 | [] | The main reason for doing this is to be able to change the articles list if using another language, for example. It's not the most elegant solution but at least it makes the metric more extensible with no drawbacks.
BTW, what could be the best way to make this more generic (i.e., SQuAD in other languages)? Maybe rec... | true |
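The change amounts to hoisting the regex out of the normalization helper into a module-level constant, roughly as below (the constant name is an assumption; the pattern follows the SQuAD evaluation script's answer normalization):
```python
import re

# override this pattern to adapt the metric to another language's articles
ARTICLES_REGEX = re.compile(r"\b(a|an|the)\b", re.UNICODE)

def remove_articles(text: str) -> str:
    return ARTICLES_REGEX.sub(" ", text)
```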
1,181,057,011 | https://api.github.com/repos/huggingface/datasets/issues/4029 | https://github.com/huggingface/datasets/issues/4029 | 4,029 | Add FAISS .range_search() method for retrieving all texts from dataset above similarity threshold | closed | 4 | 2022-03-25T17:31:33 | 2022-05-06T08:35:52 | 2022-05-06T08:35:52 | MoritzLaurer | [
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
I would like to retrieve all texts from a dataset, which are semantically similar to a specific input text (query), above a certain (cosine) similarity threshold. My dataset is very large (Wikipedia), so I need to use Datasets and FAISS for this. I wou... | false |
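The request maps onto FAISS's own `range_search`, which can already be called on a raw index; a sketch against plain FAISS (wiring this into the `Dataset` index API is what the issue asks for):
```python
import faiss
import numpy as np

d = 128
xb = np.random.rand(10_000, d).astype("float32")
faiss.normalize_L2(xb)  # with normalized vectors, inner product == cosine similarity

index = faiss.IndexFlatIP(d)
index.add(xb)

xq = np.random.rand(1, d).astype("float32")
faiss.normalize_L2(xq)

# every stored vector with cosine similarity above 0.8 to the query
lims, distances, indices = index.range_search(xq, 0.8)
hits_for_query_0 = indices[lims[0] : lims[1]]
```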
1,181,022,675 | https://api.github.com/repos/huggingface/datasets/issues/4028 | https://github.com/huggingface/datasets/pull/4028 | 4,028 | Fix docs on audio feature installation | closed | 1 | 2022-03-25T16:55:11 | 2022-03-31T16:20:47 | 2022-03-31T16:15:20 | albertvillanova | [] | This PR:
- Removes the explicit installation of `librosa` (this is installed with `pip install datasets[audio]`)
- Adds the warning for Linux users to install manually the non-Python package `libsndfile`
- Explains that the installation of `torchaudio` is only necessary to support loading audio datasets containing MP... | true |
1,180,991,344 | https://api.github.com/repos/huggingface/datasets/issues/4027 | https://github.com/huggingface/datasets/issues/4027 | 4,027 | ElasticSearch Indexing example: TypeError: __init__() missing 1 required positional argument: 'scheme' | closed | 2 | 2022-03-25T16:22:28 | 2022-04-07T10:29:52 | 2022-03-28T07:58:56 | MoritzLaurer | [
"bug",
"duplicate"
] | ## Describe the bug
I am following the example in the documentation for elastic search step by step (on google colab): https://huggingface.co/docs/datasets/faiss_es#elasticsearch
```
from datasets import load_dataset
squad = load_dataset('crime_and_punish', split='train[:1000]')
```
When I run the line:
`sq... | false |
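The error comes from newer `elasticsearch` clients requiring the connection scheme explicitly; a hedged workaround sketch (not the documented `datasets` example):
```python
from elasticsearch import Elasticsearch

# include the scheme in the host dict, or pass a full URL instead
es = Elasticsearch([{"host": "localhost", "port": 9200, "scheme": "http"}])
# or: es = Elasticsearch("http://localhost:9200")
```
The client can then, to the best of my knowledge, be handed to `add_elasticsearch_index` via its `es_client` parameter.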
1,180,968,774 | https://api.github.com/repos/huggingface/datasets/issues/4026 | https://github.com/huggingface/datasets/pull/4026 | 4,026 | Support streaming xtreme dataset for bucc18 config | closed | 1 | 2022-03-25T16:00:40 | 2022-03-25T16:26:50 | 2022-03-25T16:21:52 | albertvillanova | [] | Support streaming xtreme dataset for bucc18 config. | true |
1,180,963,105 | https://api.github.com/repos/huggingface/datasets/issues/4025 | https://github.com/huggingface/datasets/issues/4025 | 4,025 | Missing argument in precision/recall | closed | 1 | 2022-03-25T15:55:52 | 2022-03-28T09:53:06 | 2022-03-28T09:53:06 | Dref360 | [
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
[`sklearn.metrics.precision_score`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html) accepts an argument `zero_division`, but it is not available in [precision Metric](https://github.com/huggingface/datasets/blob/... | false |
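A quick illustration of the scikit-learn behaviour the request refers to:
```python
from sklearn.metrics import precision_score

y_true = [0, 0, 1]
y_pred = [0, 0, 0]  # no positive predictions: precision is undefined

print(precision_score(y_true, y_pred))                   # 0.0, with an UndefinedMetricWarning
print(precision_score(y_true, y_pred, zero_division=1))  # 1.0, warning silenced
```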
1,180,951,817 | https://api.github.com/repos/huggingface/datasets/issues/4024 | https://github.com/huggingface/datasets/pull/4024 | 4,024 | Doc: image_process small tip | closed | 2 | 2022-03-25T15:44:32 | 2022-03-31T15:35:35 | 2022-03-31T15:30:20 | FrancescoSaverioZuppichini | [] | I've added a small tip in the `image_process` doc | true |
1,180,840,399 | https://api.github.com/repos/huggingface/datasets/issues/4023 | https://github.com/huggingface/datasets/pull/4023 | 4,023 | Replace yahoo_answers_topics data url | closed | 2 | 2022-03-25T14:08:57 | 2022-03-28T10:12:56 | 2022-03-28T10:07:52 | lhoestq | [] | I replaced the Google Drive URL of the dataset by the FastAI one, since we've had some issues with Google Drive. | true |
1,180,816,682 | https://api.github.com/repos/huggingface/datasets/issues/4022 | https://github.com/huggingface/datasets/pull/4022 | 4,022 | Replace dbpedia_14 data url | closed | 1 | 2022-03-25T13:47:21 | 2022-03-25T15:03:37 | 2022-03-25T14:58:49 | lhoestq | [] | I replaced the Google Drive URL of the dataset by the FastAI one, since we've had some issues with Google Drive. | true |
1,180,805,092 | https://api.github.com/repos/huggingface/datasets/issues/4021 | https://github.com/huggingface/datasets/pull/4021 | 4,021 | Fix `map` remove_columns on empty dataset | closed | 1 | 2022-03-25T13:36:29 | 2022-03-29T13:41:31 | 2022-03-29T13:35:44 | lhoestq | [] | On an empty dataset, the `remove_columns` parameter of `map` currently doesn't actually remove the columns:
```python
>>> ds = datasets.load_dataset("glue", "rte")
>>> ds_filtered = ds.filter(lambda x: x["label"] != -1)
>>> ds_mapped = ds_filtered.map(lambda x: x, remove_columns=["label"])
>>> print(repr(ds_mapped... | true |
1,180,636,754 | https://api.github.com/repos/huggingface/datasets/issues/4020 | https://github.com/huggingface/datasets/pull/4020 | 4,020 | Replace amazon_polarity data URL | closed | 1 | 2022-03-25T10:50:57 | 2022-03-25T15:02:36 | 2022-03-25T14:57:41 | lhoestq | [] | I replaced the Google Drive URL of the dataset by the FastAI one, since we've had some issues with Google Drive. | true |
1,180,628,293 | https://api.github.com/repos/huggingface/datasets/issues/4019 | https://github.com/huggingface/datasets/pull/4019 | 4,019 | Make yelp_polarity streamable | closed | 2 | 2022-03-25T10:42:51 | 2022-03-25T15:02:19 | 2022-03-25T14:57:16 | lhoestq | [] | It was using `dl_manager.download_and_extract` on a TAR archive, which is not supported in streaming mode. I replaced this by `dl_manager.iter_archive` | true |
1,180,622,816 | https://api.github.com/repos/huggingface/datasets/issues/4018 | https://github.com/huggingface/datasets/pull/4018 | 4,018 | Replace yelp_review_full data url | closed | 1 | 2022-03-25T10:37:18 | 2022-03-25T15:01:02 | 2022-03-25T14:56:10 | lhoestq | [] | I replaced the Google Drive URL of the Yelp review dataset by the FastAI one, since we've had some issues with Google Drive.
Close https://github.com/huggingface/datasets/issues/4005 | true |
1,180,595,160 | https://api.github.com/repos/huggingface/datasets/issues/4017 | https://github.com/huggingface/datasets/pull/4017 | 4,017 | Support streaming scan dataset | closed | 1 | 2022-03-25T10:11:28 | 2022-03-25T12:08:55 | 2022-03-25T12:03:52 | albertvillanova | [] | null | true |
1,180,557,828 | https://api.github.com/repos/huggingface/datasets/issues/4016 | https://github.com/huggingface/datasets/pull/4016 | 4,016 | Support streaming blimp dataset | closed | 1 | 2022-03-25T09:39:10 | 2022-03-25T11:19:18 | 2022-03-25T11:14:13 | albertvillanova | [] | null | true |
1,180,510,856 | https://api.github.com/repos/huggingface/datasets/issues/4015 | https://github.com/huggingface/datasets/issues/4015 | 4,015 | Can not correctly parse the classes with imagefolder | closed | 2 | 2022-03-25T08:51:17 | 2022-03-28T01:02:03 | 2022-03-25T09:27:56 | YiSyuanChen | [
"bug"
] | ## Describe the bug
I tried to load my own image dataset with imagefolder, but the classes are parsed incorrectly.
## Steps to reproduce the bug
I organized my dataset (ImageNet) in the following structure:
```
- imagenet/
- train/
- n01440764/
- ILSVRC2012_val_00000293.jpg
... | false |
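For reference, the intended behaviour with a layout like the one above (a usage sketch; the label names come straight from the per-class directory names):
```python
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="imagenet")
print(ds["train"].features["label"].names[:3])  # e.g. ['n01440764', ...]
```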
1,180,481,229 | https://api.github.com/repos/huggingface/datasets/issues/4014 | https://github.com/huggingface/datasets/pull/4014 | 4,014 | Support streaming id_clickbait dataset | closed | 1 | 2022-03-25T08:18:28 | 2022-03-25T08:58:31 | 2022-03-25T08:53:32 | albertvillanova | [] | null | true |
1,180,427,174 | https://api.github.com/repos/huggingface/datasets/issues/4013 | https://github.com/huggingface/datasets/issues/4013 | 4,013 | Cannot preview "hazal/Turkish-Biomedical-corpus-trM" | closed | 2 | 2022-03-25T07:12:02 | 2022-04-04T08:05:01 | 2022-03-25T14:16:11 | hazalturkmen | [] | ## Dataset viewer issue for '*hazal/Turkish-Biomedical-corpus-trM'
**Link:** *https://huggingface.co/datasets/hazal/Turkish-Biomedical-corpus-trM*
*I cannot see the dataset preview.*
```
Server Error
Status code: 400
Exception: HTTPError
Message: 403 Client Error: Forbidden for url: https://h... | false |
1,180,350,083 | https://api.github.com/repos/huggingface/datasets/issues/4012 | https://github.com/huggingface/datasets/pull/4012 | 4,012 | Rename wer to cer | closed | 0 | 2022-03-25T05:06:05 | 2022-03-28T13:57:25 | 2022-03-28T13:57:25 | pmgautam | [] | wer variable changed to cer in README file
| true |
1,179,885,965 | https://api.github.com/repos/huggingface/datasets/issues/4011 | https://github.com/huggingface/datasets/pull/4011 | 4,011 | Fix SQuAD v2 metric docs on `references` format | closed | 2 | 2022-03-24T18:27:10 | 2023-07-11T09:35:46 | 2023-07-11T09:35:15 | bryant1410 | [
"transfer-to-evaluate"
] | `references` is not a list of dictionaries but a dictionary that has lists as its values. | true |
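A hedged illustration of the format the fixed docs describe (the `id` value here is only illustrative):
```python
from datasets import load_metric

squad_v2 = load_metric("squad_v2")
predictions = [
    {"id": "q1", "prediction_text": "Denver Broncos", "no_answer_probability": 0.0}
]
# "answers" is a dict holding parallel lists, not a list of dicts
references = [
    {"id": "q1", "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}
]
print(squad_v2.compute(predictions=predictions, references=references))
```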
1,179,848,036 | https://api.github.com/repos/huggingface/datasets/issues/4010 | https://github.com/huggingface/datasets/pull/4010 | 4,010 | Fix None issue with Sequence of dict | closed | 2 | 2022-03-24T17:58:59 | 2022-03-28T10:13:53 | 2022-03-28T10:08:40 | lhoestq | [] | `Features.encode_example` currently fails if it contains a sequence of dicts like `Sequence({"subcolumn": Value("int32")})` and if `None` is passed instead of the dict.
```python
File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 1310, in encode_example
return encode_neste... | true |
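A minimal sketch of the case being fixed:
```python
from datasets import Features, Sequence, Value

features = Features({"answers": Sequence({"subcolumn": Value("int32")})})

features.encode_example({"answers": {"subcolumn": [1, 2]}})  # fine
features.encode_example({"answers": None})  # raised before this fix
```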
1,179,658,611 | https://api.github.com/repos/huggingface/datasets/issues/4009 | https://github.com/huggingface/datasets/issues/4009 | 4,009 | AMI load_dataset error: sndfile library not found | closed | 1 | 2022-03-24T15:13:38 | 2022-03-24T15:46:38 | 2022-03-24T15:17:29 | i-am-neo | [
"bug"
] | ## Describe the bug
Getting error message when loading AMI dataset.
## Steps to reproduce the bug
`python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])"`
## Expected results
A clear and concise description of the expected results.
## Actual r... | false |
1,179,591,068 | https://api.github.com/repos/huggingface/datasets/issues/4008 | https://github.com/huggingface/datasets/pull/4008 | 4,008 | Support streaming daily_dialog dataset | closed | 1 | 2022-03-24T14:23:23 | 2022-03-24T15:29:01 | 2022-03-24T14:46:58 | albertvillanova | [] | null | true |