id | url | html_url | number | title | state | comments | created_at | updated_at | closed_at | user_login | labels | body | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,274,010,628 | https://api.github.com/repos/huggingface/datasets/issues/4518 | https://github.com/huggingface/datasets/pull/4518 | 4,518 | Patch tests for hfh v0.8.0 | closed | 1 | 2022-06-16T19:45:32 | 2022-06-17T16:15:57 | 2022-06-17T16:06:07 | LysandreJik | [] | This PR patches testing utilities that would otherwise fail with hfh v0.8.0. | true |
1,273,960,476 | https://api.github.com/repos/huggingface/datasets/issues/4517 | https://github.com/huggingface/datasets/pull/4517 | 4,517 | Add tags for task_ids:summarization-* and task_categories:summarization* | closed | 2 | 2022-06-16T18:52:25 | 2022-07-08T15:14:23 | 2022-07-08T15:02:31 | hobson | [] | yaml header at top of README.md file was edited to add task tags because I couldn't find the existing tags in the json<br>separate Pull Request will modify dataset_infos.json to add these tags<br>The Enron dataset (dataset id aeslc) is only tagged with:<br>arxiv:1906.03497'<br>languages:en<br>pretty_name:AESLC<br>... | true |
1,273,825,640 | https://api.github.com/repos/huggingface/datasets/issues/4516 | https://github.com/huggingface/datasets/pull/4516 | 4,516 | Fix hashing for python 3.9 | closed | 2 | 2022-06-16T16:42:31 | 2022-06-28T13:33:46 | 2022-06-28T13:23:06 | lhoestq | [] | In python 3.9, pickle hashes the `glob_ids` dictionary in addition to the `globs` of a function.<br>Therefore the test at `tests/test_fingerprint.py::RecurseDumpTest::test_recurse_dump_for_function_with_shuffled_globals` is currently failing for python 3.9<br>To make hashing deterministic when the globals are not in th... | true |
1,273,626,131 | https://api.github.com/repos/huggingface/datasets/issues/4515 | https://github.com/huggingface/datasets/pull/4515 | 4,515 | Add uppercased versions of image file extensions for automatic module inference | closed | 1 | 2022-06-16T14:14:49 | 2022-06-16T17:21:53 | 2022-06-16T17:11:41 | mariosasko | [] | Adds the uppercased versions of the image file extensions to the supported extensions.<br>Another approach would be to call `.lower()` on extensions while resolving data files, but uppercased extensions are not something we want to encourage out of the box IMO unless they are commonly used (as they are in the vision d... | true |
1,273,505,230 | https://api.github.com/repos/huggingface/datasets/issues/4514 | https://github.com/huggingface/datasets/issues/4514 | 4,514 | Allow .JPEG as a file extension | closed | 2 | 2022-06-16T12:36:20 | 2022-06-20T08:18:46 | 2022-06-16T17:11:40 | DiGyt | ["bug"] | ## Describe the bug<br>When loading image data, HF datasets seems to recognize `.jpg` and `.jpeg` file extensions, but not e.g. .JPEG. As the naming convention .JPEG is used in important datasets such as imagenet, I would welcome if according extensions like .JPEG or .JPG would be allowed.<br>## Steps to reproduce the bu... | false |
1,273,450,338 | https://api.github.com/repos/huggingface/datasets/issues/4513 | https://github.com/huggingface/datasets/pull/4513 | 4,513 | Update Google Cloud Storage documentation and add Azure Blob Storage example | closed | 5 | 2022-06-16T11:46:09 | 2022-06-23T17:05:11 | 2022-06-23T16:54:59 | alvarobartt | ["documentation"] | While I was going through the 🤗 Datasets documentation of the Cloud storage filesystems at https://huggingface.co/docs/datasets/filesystems, I realized that the Google Cloud Storage documentation could be improved e.g. bullet point says "Load your dataset" when the actual call was to "Save your dataset", in-line code ... | true |
1,273,378,129 | https://api.github.com/repos/huggingface/datasets/issues/4512 | https://github.com/huggingface/datasets/pull/4512 | 4,512 | Add links to vision tasks scripts in ADD_NEW_DATASET template | closed | 2 | 2022-06-16T10:35:35 | 2022-07-08T14:07:50 | 2022-07-08T13:56:23 | mariosasko | [] | Add links to vision dataset scripts in the ADD_NEW_DATASET template. | true |
1,273,336,874 | https://api.github.com/repos/huggingface/datasets/issues/4511 | https://github.com/huggingface/datasets/pull/4511 | 4,511 | Support all negative values in ClassLabel | closed | 4 | 2022-06-16T09:59:39 | 2025-07-23T18:38:15 | 2022-06-16T13:54:07 | lhoestq | [] | We usually use -1 to represent a missing label, but we should also support any negative values (some users use -100 for example). This is a regression from `datasets` 2.3<br>Fix https://github.com/huggingface/datasets/issues/4508 | true |
1,273,260,396 | https://api.github.com/repos/huggingface/datasets/issues/4510 | https://github.com/huggingface/datasets/pull/4510 | 4,510 | Add regression test for `ArrowWriter.write_batch` when batch is empty | closed | 2 | 2022-06-16T08:53:51 | 2022-06-16T12:38:02 | 2022-06-16T12:28:19 | alvarobartt | [] | As spotted by @cccntu in #4502, there's a logic bug in `ArrowWriter.write_batch` as the if-statement to handle the empty batches as detailed in the docstrings of the function ("Ignores the batch if it appears to be empty, preventing a potential schema update of unknown types."), the current if-statement is not handling... | true |
1,273,227,760 | https://api.github.com/repos/huggingface/datasets/issues/4509 | https://github.com/huggingface/datasets/pull/4509 | 4,509 | Support skipping Parquet to Arrow conversion when using Beam | closed | 3 | 2022-06-16T08:25:38 | 2022-11-07T16:22:41 | 2022-11-07T16:22:41 | albertvillanova | [] | null | true |
1,272,718,921 | https://api.github.com/repos/huggingface/datasets/issues/4508 | https://github.com/huggingface/datasets/issues/4508 | 4,508 | cast_storage method from datasets.features | closed | 2 | 2022-06-15T20:47:22 | 2022-06-16T13:54:07 | 2022-06-16T13:54:07 | romainremyb | ["bug"] | ## Describe the bug<br>A bug occurs when mapping a function to a dataset object. I ran the same code with the same data yesterday and it worked just fine. It works when i run locally on an old version of datasets.<br>## Steps to reproduce the bug<br>Steps are:<br>- load whatever datset<br>- write a preprocessing function such ... | false |
1,272,615,932 | https://api.github.com/repos/huggingface/datasets/issues/4507 | https://github.com/huggingface/datasets/issues/4507 | 4,507 | How to let `load_dataset` return a `Dataset` instead of `DatasetDict` in customized loading script | closed | 2 | 2022-06-15T18:56:34 | 2022-06-16T10:40:08 | 2022-06-16T10:40:08 | liyucheng09 | ["enhancement"] | If the dataset does not need splits, i.e., no training and validation split, more like a table. How can I let the `load_dataset` function return a `Dataset` object directly rather than return a `DatasetDict` object with only one key-value pair.<br>Or I can paraphrase the question in the following way: how to skip `_spl... | false |
1,272,516,895 | https://api.github.com/repos/huggingface/datasets/issues/4506 | https://github.com/huggingface/datasets/issues/4506 | 4,506 | Failure to hash (and cache) a `.map(...)` (almost always) - using this method can produce incorrect results | closed | 5 | 2022-06-15T17:11:31 | 2023-02-16T03:14:32 | 2022-06-28T13:23:05 | DrMatters | ["bug"] | ## Describe the bug<br>Sometimes I get messages about not being able to hash a method:<br>`Parameter 'function'=<function StupidDataModule._separate_speaker_id_from_dialogue at 0x7f1b27180d30> of the transform datasets.arrow_dataset.Dataset.<br>_map_single couldn't be hashed properly, a random hash was used instead. Make sur... | false |
1,272,477,226 | https://api.github.com/repos/huggingface/datasets/issues/4505 | https://github.com/huggingface/datasets/pull/4505 | 4,505 | Fix double dots in data files | closed | 2 | 2022-06-15T16:31:04 | 2022-06-15T17:15:58 | 2022-06-15T17:05:53 | lhoestq | [] | As mentioned in https://github.com/huggingface/transformers/pull/17715 `data_files` can't find a file if the path contains double dots `/../`. This has been introduced in https://github.com/huggingface/datasets/pull/4412, by trying to ignore hidden files and directories (i.e. if they start with a dot)<br>I fixed this a... | true |
1,272,418,480 | https://api.github.com/repos/huggingface/datasets/issues/4504 | https://github.com/huggingface/datasets/issues/4504 | 4,504 | Can you please add the Stanford dog dataset? | closed | 16 | 2022-06-15T15:39:35 | 2024-12-09T15:44:11 | 2023-10-18T18:55:30 | dgrnd4 | ["good first issue", "dataset request"] | ## Adding a Dataset<br>- **Name:** *Stanford dog dataset*<br>- **Description:** *The dataset is about 120 classes for a total of 20.580 images. You can find the dataset here http://vision.stanford.edu/aditya86/ImageNetDogs/*<br>- **Paper:** *http://vision.stanford.edu/aditya86/ImageNetDogs/*<br>- **Data:** *[link to the Github... | false |
1,272,367,055 | https://api.github.com/repos/huggingface/datasets/issues/4503 | https://github.com/huggingface/datasets/pull/4503 | 4,503 | Refactor and add metadata to fever dataset | closed | 5 | 2022-06-15T14:59:47 | 2022-07-06T11:54:15 | 2022-07-06T11:41:30 | albertvillanova | [] | Related to: #4452 and #3792. | true |
1,272,353,700 | https://api.github.com/repos/huggingface/datasets/issues/4502 | https://github.com/huggingface/datasets/issues/4502 | 4,502 | Logic bug in arrow_writer? | closed | 10 | 2022-06-15T14:50:00 | 2022-06-18T15:15:51 | 2022-06-18T15:15:51 | changjonathanc | [] | https://github.com/huggingface/datasets/blob/88a902d6474fae8d793542d57a4f3b0d187f3c5b/src/datasets/arrow_writer.py#L475-L488
I got some error, and I found it's caused by `batch_examples` being `{}`. I wonder if the code should be as follows:
```
- if batch_examples and len(next(iter(batch_examples.values())... | false |
1,272,300,646 | https://api.github.com/repos/huggingface/datasets/issues/4501 | https://github.com/huggingface/datasets/pull/4501 | 4,501 | Corrected broken links in doc | closed | 1 | 2022-06-15T14:12:17 | 2022-06-15T15:11:05 | 2022-06-15T15:00:56 | clefourrier | [] | null | true |
1,272,281,992 | https://api.github.com/repos/huggingface/datasets/issues/4500 | https://github.com/huggingface/datasets/pull/4500 | 4,500 | Add `concatenate_datasets` for iterable datasets | closed | 3 | 2022-06-15T13:58:50 | 2022-06-28T21:25:39 | 2022-06-28T21:15:04 | lhoestq | [] | `concatenate_datasets` currently only supports lists of `datasets.Dataset`, not lists of `datasets.IterableDataset` like `interleave_datasets`<br>Fix https://github.com/huggingface/datasets/issues/2564<br>I also moved `_interleave_map_style_datasets` from combine.py to arrow_dataset.py, since the logic depends a lot on... | true |
1,272,118,162 | https://api.github.com/repos/huggingface/datasets/issues/4499 | https://github.com/huggingface/datasets/pull/4499 | 4,499 | fix ETT m1/m2 test/val dataset | closed | 3 | 2022-06-15T11:51:02 | 2022-06-15T14:55:56 | 2022-06-15T14:45:13 | kashif | [] | https://huggingface.co/datasets/ett/discussions/1 | true |
1,272,100,549 | https://api.github.com/repos/huggingface/datasets/issues/4498 | https://github.com/huggingface/datasets/issues/4498 | 4,498 | WER and CER > 1 | closed | 1 | 2022-06-15T11:35:12 | 2022-06-15T16:38:05 | 2022-06-15T16:38:05 | sadrasabouri | ["bug"] | ## Describe the bug<br>It seems that in some cases in which the `prediction` is longer than the `reference` we may have word/character error rate higher than 1 which is a bit odd.<br>If it's a real bug I think I can solve it with a PR changing [this](https://github.com/huggingface/datasets/blob/master/metrics/wer/wer.py#... | false |
1,271,964,338 | https://api.github.com/repos/huggingface/datasets/issues/4497 | https://github.com/huggingface/datasets/pull/4497 | 4,497 | Re-add download_manager module in utils | closed | 5 | 2022-06-15T09:44:33 | 2022-06-15T10:33:28 | 2022-06-15T10:23:44 | lhoestq | [] | https://github.com/huggingface/datasets/pull/4384 moved `datasets.utils.download_manager` to `datasets.download.download_manager`<br>This breaks `evaluate` which imports `DownloadMode` from `datasets.utils.download_manager`<br>This PR re-adds `datasets.utils.download_manager` without circular imports.<br>We could also... | true |
1,271,945,704 | https://api.github.com/repos/huggingface/datasets/issues/4496 | https://github.com/huggingface/datasets/pull/4496 | 4,496 | Replace `assertEqual` with `assertTupleEqual` in unit tests for verbosity | closed | 2 | 2022-06-15T09:29:16 | 2022-07-07T17:06:51 | 2022-07-07T16:55:48 | alvarobartt | [] | As detailed in #4419 and as suggested by @mariosasko, we could replace the `assertEqual` assertions with `assertTupleEqual` when the assertion is between Tuples, in order to make the tests more verbose. | true |
1,271,851,025 | https://api.github.com/repos/huggingface/datasets/issues/4495 | https://github.com/huggingface/datasets/pull/4495 | 4,495 | Fix patching module that doesn't exist | closed | 1 | 2022-06-15T08:17:50 | 2022-06-15T16:40:49 | 2022-06-15T08:54:09 | lhoestq | [] | Reported in https://github.com/huggingface/huggingface_hub/runs/6894703718?check_suite_focus=true<br>When trying to patch `scipy.io.loadmat`:<br>```python<br>ModuleNotFoundError: No module named 'scipy'<br>```<br>Instead it shouldn't raise an error and do nothing<br>Bug introduced by #4375<br>Fix https://github.com/hugging... | true |
1,271,850,599 | https://api.github.com/repos/huggingface/datasets/issues/4494 | https://github.com/huggingface/datasets/issues/4494 | 4,494 | Patching fails for modules that are not installed or don't exist | closed | 0 | 2022-06-15T08:17:29 | 2022-06-15T08:54:09 | 2022-06-15T08:54:09 | lhoestq | [] | Reported in https://github.com/huggingface/huggingface_hub/runs/6894703718?check_suite_focus=true<br>When trying to patch `scipy.io.loadmat`:<br>```python<br>ModuleNotFoundError: No module named 'scipy'<br>```<br>Instead it shouldn't raise an error and do nothing<br>We use patching to extend such functions to support remot... | false |
1,271,306,385 | https://api.github.com/repos/huggingface/datasets/issues/4493 | https://github.com/huggingface/datasets/pull/4493 | 4,493 | Add `@transmit_format` in `flatten` | closed | 4 | 2022-06-14T20:09:09 | 2022-09-27T11:37:25 | 2022-09-27T10:48:54 | alvarobartt | [] | As suggested by @mariosasko in https://github.com/huggingface/datasets/pull/4411, we should include the `@transmit_format` decorator to `flatten`, `rename_column`, and `rename_columns` so as to ensure that the value of `_format_columns` in an `ArrowDataset` is properly updated.<br>**Edit**: according to @mariosasko com... | true |
1,271,112,497 | https://api.github.com/repos/huggingface/datasets/issues/4492 | https://github.com/huggingface/datasets/pull/4492 | 4,492 | Pin the revision in imagenet download links | closed | 1 | 2022-06-14T17:15:17 | 2022-06-14T17:35:13 | 2022-06-14T17:25:45 | lhoestq | [] | Use the commit sha in the data files URLs of the imagenet-1k download script, in case we want to restructure the data files in the future. For example we may split it into many more shards for better paralellism.<br>cc @mariosasko | true |
1,270,803,822 | https://api.github.com/repos/huggingface/datasets/issues/4491 | https://github.com/huggingface/datasets/issues/4491 | 4,491 | Dataset Viewer issue for Pavithree/test | closed | 1 | 2022-06-14T13:23:10 | 2022-06-14T14:37:21 | 2022-06-14T14:34:33 | Pavithree | ["dataset-viewer"] | ### Link<br>https://huggingface.co/datasets/Pavithree/test<br>### Description<br>I have extracted the subset of original eli5 dataset found at hugging face. However, while loading the dataset It throws ArrowNotImplementedError: Unsupported cast from string to null using function cast_null error. Is there anything missi... | false |
1,270,719,074 | https://api.github.com/repos/huggingface/datasets/issues/4490 | https://github.com/huggingface/datasets/issues/4490 | 4,490 | Use `torch.nested_tensor` for arrays of varying length in torch formatter | open | 2 | 2022-06-14T12:19:40 | 2023-07-07T13:02:58 | null | mariosasko | ["enhancement"] | Use `torch.nested_tensor` for arrays of varying length in `TorchFormatter`.<br>The PyTorch API of nested tensors is in the prototype stage, so wait for it to become more mature. | false |
1,270,706,195 | https://api.github.com/repos/huggingface/datasets/issues/4489 | https://github.com/huggingface/datasets/pull/4489 | 4,489 | Add SV-Ident dataset | closed | 5 | 2022-06-14T12:09:00 | 2022-06-20T08:48:26 | 2022-06-20T08:37:27 | e-tornike | [] | null | true |
1,270,613,857 | https://api.github.com/repos/huggingface/datasets/issues/4488 | https://github.com/huggingface/datasets/pull/4488 | 4,488 | Update PASS dataset version | closed | 1 | 2022-06-14T10:47:14 | 2022-06-14T16:41:55 | 2022-06-14T16:32:28 | mariosasko | [] | Update the PASS dataset to version v3 (the newest one) from the [version history](https://github.com/yukimasano/PASS/blob/main/version_history.txt).<br>PS: The older versions are not exposed as configs in the script because v1 was removed from Zenodo, and the same thing will probably happen to v2. | true |
1,270,525,163 | https://api.github.com/repos/huggingface/datasets/issues/4487 | https://github.com/huggingface/datasets/pull/4487 | 4,487 | Support streaming UDHR dataset | closed | 1 | 2022-06-14T09:33:33 | 2022-06-15T05:09:22 | 2022-06-15T04:59:49 | albertvillanova | [] | This PR:<br>- Adds support for streaming UDHR dataset<br>- Adds the BCP 47 language code as feature | true |
1,269,518,084 | https://api.github.com/repos/huggingface/datasets/issues/4486 | https://github.com/huggingface/datasets/pull/4486 | 4,486 | Add CCAgT dataset | closed | 8 | 2022-06-13T14:20:19 | 2022-07-04T14:37:03 | 2022-07-04T14:25:45 | johnnv1 | [] | As described in #4075<br>I could not generate the dummy data. Also, on the data repository isn't provided the split IDs, but I copy the functions that provide the correct data split. In summary, to have a better distribution, the data in this dataset should be separated based on the amount of NORs in each image. | true |
1,269,463,054 | https://api.github.com/repos/huggingface/datasets/issues/4485 | https://github.com/huggingface/datasets/pull/4485 | 4,485 | Fix cast to null | closed | 1 | 2022-06-13T13:44:32 | 2022-06-14T13:43:54 | 2022-06-14T13:34:14 | lhoestq | [] | It currently fails with `ArrowNotImplementedError` instead of `TypeError` when one tries to cast integer to null type.<br>Because if this, type inference breaks when one replaces null values with integers in `map` (it first tries to cast to the previous type before inferring the new type).<br>Fix https://github.com/hug... | true |
1,269,383,811 | https://api.github.com/repos/huggingface/datasets/issues/4484 | https://github.com/huggingface/datasets/pull/4484 | 4,484 | Better ImportError message when a dataset script dependency is missing | closed | 4 | 2022-06-13T12:44:37 | 2022-07-08T14:30:44 | 2022-06-13T13:50:47 | lhoestq | [] | When a depenency is missing for a dataset script, an ImportError message is shown, with a tip to install the missing dependencies. This message is not ideal at the moment: it may show duplicate dependencies, and is not very readable.<br>I improved it from<br>```<br>ImportError: To be able to use bigbench, you need to insta... | true |
1,269,253,840 | https://api.github.com/repos/huggingface/datasets/issues/4483 | https://github.com/huggingface/datasets/issues/4483 | 4,483 | Dataset.map throws pyarrow.lib.ArrowNotImplementedError when converting from list of empty lists | closed | 1 | 2022-06-13T10:47:52 | 2022-06-14T13:34:14 | 2022-06-14T13:34:14 | sanderland | ["bug"] | ## Describe the bug<br>Dataset.map throws pyarrow.lib.ArrowNotImplementedError: Unsupported cast from int64 to null using function cast_null when converting from a type of 'empty lists' to 'lists with some type'.<br>This appears to be due to the interaction of arrow internals and some assumptions made by datasets.<br>T... | false |
1,269,237,447 | https://api.github.com/repos/huggingface/datasets/issues/4482 | https://github.com/huggingface/datasets/pull/4482 | 4,482 | Test that TensorFlow is not imported on startup | closed | 3 | 2022-06-13T10:33:49 | 2023-10-12T06:31:39 | 2023-10-11T09:11:56 | lhoestq | [] | TF takes some time to be imported, and also uses some GPU memory.<br>I just added a test to make sure that in the future it's never imported by default when<br>```python<br>import datasets<br>```<br>is called.<br>Right now this fails because `huggingface_hub` does import tensorflow (though this is fixed now on their `main` bra... | true |
1,269,187,792 | https://api.github.com/repos/huggingface/datasets/issues/4481 | https://github.com/huggingface/datasets/pull/4481 | 4,481 | Fix iwslt2017 | closed | 4 | 2022-06-13T09:51:21 | 2022-10-26T09:09:31 | 2022-06-13T10:40:18 | lhoestq | [] | The files were moved to google drive, I hosted them on the Hub instead (ok according to the license)<br>I also updated the `datasets_infos.json` | true |
1,268,921,567 | https://api.github.com/repos/huggingface/datasets/issues/4480 | https://github.com/huggingface/datasets/issues/4480 | 4,480 | Bigbench tensorflow GPU dependency | closed | 3 | 2022-06-13T05:24:06 | 2022-06-14T19:45:24 | 2022-06-14T19:45:23 | cceyda | ["bug"] | ## Describe the bug<br>Loading bigbech<br>```py<br>from datasets import load_dataset<br>dataset = load_dataset("bigbench","swedish_to_german_proverbs")<br>```<br>tries to use gpu and fails with OOM with the following error<br>```<br>Downloading and preparing dataset bigbench/swedish_to_german_proverbs (download: Unknown size, genera... | false |
1,268,558,237 | https://api.github.com/repos/huggingface/datasets/issues/4479 | https://github.com/huggingface/datasets/pull/4479 | 4,479 | Include entity positions as feature in ReCoRD | closed | 3 | 2022-06-12T11:56:28 | 2022-08-19T23:23:02 | 2022-08-19T13:23:48 | richarddwang | [] | https://huggingface.co/datasets/super_glue/viewer/record/validation<br>TLDR: We need to record entity positions, which are included in the source data but excluded by the loading script, to enable efficient and effective training for ReCoRD.<br>Currently, the loading script ignores the entity positions ("entity_start",... | true |
1,268,358,213 | https://api.github.com/repos/huggingface/datasets/issues/4478 | https://github.com/huggingface/datasets/issues/4478 | 4,478 | Dataset slow during model training | open | 5 | 2022-06-11T19:40:19 | 2022-06-14T12:04:31 | null | lehrig | ["bug"] | ## Describe the bug<br>While migrating towards 🤗 Datasets, I encountered an odd performance degradation: training suddenly slows down dramatically. I train with an image dataset using Keras and execute a `to_tf_dataset` just before training.<br>First, I have optimized my dataset following https://discuss.huggingface.co/... | false |
1,268,308,986 | https://api.github.com/repos/huggingface/datasets/issues/4477 | https://github.com/huggingface/datasets/issues/4477 | 4,477 | Dataset Viewer issue for fgrezes/WIESP2022-NER | closed | 2 | 2022-06-11T15:49:17 | 2022-07-18T13:07:33 | 2022-07-18T13:07:33 | AshTayade | [] | ### Link<br>_No response_<br>### Description<br>_No response_<br>### Owner<br>_No response_ | false |
1,267,987,499 | https://api.github.com/repos/huggingface/datasets/issues/4476 | https://github.com/huggingface/datasets/issues/4476 | 4,476 | `to_pandas` doesn't take into account format. | closed | 4 | 2022-06-10T20:25:31 | 2022-06-15T17:41:41 | 2022-06-15T17:41:41 | Dref360 | ["enhancement"] | **Is your feature request related to a problem? Please describe.**<br>I have a large dataset that I need to convert part of to pandas to do some further analysis. Calling `to_pandas` directly on it is expensive. So I thought I could simply select the columns that I want and then call `to_pandas`.<br>**Describe the solu... | false |
1,267,798,451 | https://api.github.com/repos/huggingface/datasets/issues/4475 | https://github.com/huggingface/datasets/pull/4475 | 4,475 | Improve error message for missing pacakges from inside dataset script | closed | 3 | 2022-06-10T16:59:36 | 2022-10-06T13:46:26 | 2022-06-13T13:16:43 | mariosasko | [] | Improve the error message for missing packages from inside a dataset script:<br>With this change, the error message for missing packages for `bigbench` looks as follows:<br>```<br>ImportError: To be able to use bigbench, you need to install the following dependencies:<br>- 'bigbench' using 'pip install "bigbench @ ht... | true |
1,267,767,541 | https://api.github.com/repos/huggingface/datasets/issues/4474 | https://github.com/huggingface/datasets/pull/4474 | 4,474 | [Docs] How to use with PyTorch page | closed | 1 | 2022-06-10T16:25:49 | 2022-06-14T14:40:32 | 2022-06-14T14:04:33 | lhoestq | ["documentation"] | Currently the docs about PyTorch are scattered around different pages, and we were missing a place to explain more in depth how to use and optimize a dataset for PyTorch. This PR is related to #4457 which is the TF counterpart :)<br>cc @Rocketknight1 we can try to align both documentations contents now I think<br>cc @s... | true |
1,267,555,994 | https://api.github.com/repos/huggingface/datasets/issues/4473 | https://github.com/huggingface/datasets/pull/4473 | 4,473 | Add SST-2 dataset | closed | 5 | 2022-06-10T13:37:26 | 2022-06-13T14:11:34 | 2022-06-13T14:01:09 | albertvillanova | [] | Add SST-2 dataset.<br>Currently it is part of GLUE benchmark.<br>This PR adds it as a standalone dataset.<br>CC: @julien-c | true |
1,267,488,523 | https://api.github.com/repos/huggingface/datasets/issues/4472 | https://github.com/huggingface/datasets/pull/4472 | 4,472 | Fix 401 error for unauthticated requests to non-existing repos | closed | 1 | 2022-06-10T12:38:11 | 2022-06-10T13:05:11 | 2022-06-10T12:55:57 | lhoestq | [] | The hub now returns 401 instead of 404 for unauthenticated requests to non-existing repos.<br>This PR add support for the 401 error and fixes the CI fails on `master` | true |
1,267,475,268 | https://api.github.com/repos/huggingface/datasets/issues/4471 | https://github.com/huggingface/datasets/issues/4471 | 4,471 | CI error with repo lhoestq/_dummy | closed | 1 | 2022-06-10T12:26:06 | 2022-06-10T13:24:53 | 2022-06-10T13:24:53 | albertvillanova | ["bug"] | ## Describe the bug<br>CI is failing because of repo "lhoestq/_dummy". See: https://app.circleci.com/pipelines/github/huggingface/datasets/12461/workflows/1b040b45-9578-4ab9-8c44-c643c4eb8691/jobs/74269<br>```<br>requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/datasets/lhoest... | false |
1,267,470,051 | https://api.github.com/repos/huggingface/datasets/issues/4470 | https://github.com/huggingface/datasets/pull/4470 | 4,470 | Reorder returned validation/test splits in script template | closed | 1 | 2022-06-10T12:21:13 | 2022-06-10T18:04:10 | 2022-06-10T17:54:50 | albertvillanova | [] | null | true |
1,267,213,849 | https://api.github.com/repos/huggingface/datasets/issues/4469 | https://github.com/huggingface/datasets/pull/4469 | 4,469 | Replace data URLs in wider_face dataset once hosted on the Hub | closed | 1 | 2022-06-10T08:13:25 | 2022-06-10T16:42:08 | 2022-06-10T16:32:46 | albertvillanova | [] | This PR replaces the URLs of data files in Google Drive with our Hub ones, once the data owners have approved to host their data on the Hub.<br>They also informed us that their dataset is licensed under CC BY-NC-ND. | true |
1,266,715,742 | https://api.github.com/repos/huggingface/datasets/issues/4468 | https://github.com/huggingface/datasets/pull/4468 | 4,468 | Generalize tutorials for audio and vision | closed | 1 | 2022-06-09T22:00:44 | 2022-06-14T16:22:02 | 2022-06-14T16:12:00 | stevhliu | ["documentation"] | This PR updates the tutorials to be more generalizable to all modalities. After reading the tutorials, a user should be able to load any type of dataset, know how to index into and slice a dataset, and do the most basic/common type of preprocessing (tokenization, resampling, applying transforms) depending on their data... | true |
1,266,218,358 | https://api.github.com/repos/huggingface/datasets/issues/4467 | https://github.com/huggingface/datasets/issues/4467 | 4,467 | Transcript string 'null' converted to [None] by load_dataset() | closed | 3 | 2022-06-09T14:26:00 | 2023-07-04T02:18:39 | 2022-06-09T16:29:02 | mbarnig | ["bug"] | ## Issue<br>I am training a luxembourgish speech-recognition model in Colab with a custom dataset, including a dictionary of luxembourgish words, for example the speaken numbers 0 to 9. When preparing the dataset with the script<br>`ds_train1 = mydataset.map(prepare_dataset)`<br>the following error was issued:<br>``` ... | false |
1,266,159,920 | https://api.github.com/repos/huggingface/datasets/issues/4466 | https://github.com/huggingface/datasets/pull/4466 | 4,466 | Optimize contiguous shard and select | closed | 3 | 2022-06-09T13:45:39 | 2022-06-14T16:04:30 | 2022-06-14T15:54:45 | lhoestq | [] | Currently `.shard()` and `.select()` always create an indices mapping. However if the requested data are contiguous, it's much more optimized to simply slice the Arrow table instead of building an indices mapping. In particular:<br>- the shard/select operation will be much faster<br>- reading speed will be much faster in t... | true |
1,265,754,479 | https://api.github.com/repos/huggingface/datasets/issues/4465 | https://github.com/huggingface/datasets/pull/4465 | 4,465 | Fix bigbench config names | closed | 1 | 2022-06-09T08:06:19 | 2022-06-09T14:38:36 | 2022-06-09T14:29:19 | lhoestq | [] | Fix https://github.com/huggingface/datasets/issues/4462 in the case of bigbench | true |
1,265,682,931 | https://api.github.com/repos/huggingface/datasets/issues/4464 | https://github.com/huggingface/datasets/pull/4464 | 4,464 | Extend support for streaming datasets that use xml.dom.minidom.parse | closed | 1 | 2022-06-09T06:58:25 | 2022-06-09T08:43:24 | 2022-06-09T08:34:16 | albertvillanova | [] | This PR extends the support in streaming mode for datasets that use `xml.dom.minidom.parse`, by patching that function.<br>This PR adds support for streaming datasets like "Yaxin/SemEval2015".<br>Fix #4453. | true |
1,265,093,211 | https://api.github.com/repos/huggingface/datasets/issues/4463 | https://github.com/huggingface/datasets/pull/4463 | 4,463 | Use config_id to check split sizes instead of config name | closed | 2 | 2022-06-08T17:45:24 | 2023-09-24T10:03:00 | 2022-06-09T08:06:37 | lhoestq | [] | Fix https://github.com/huggingface/datasets/issues/4462 | true |
1,265,079,347 | https://api.github.com/repos/huggingface/datasets/issues/4462 | https://github.com/huggingface/datasets/issues/4462 | 4,462 | BigBench: NonMatchingSplitsSizesError when passing a dataset configuration parameter | open | 3 | 2022-06-08T17:31:24 | 2022-07-05T07:39:55 | null | lhoestq | ["bug"] | As noticed in https://github.com/huggingface/datasets/pull/4125 when a dataset config class has a parameter that reduces the number of examples (e.g. named `max_examples`), then loading the dataset and passing `max_examples` raises `NonMatchingSplitsSizesError`.<br>This is because it will check for expected the number ... | false |
1,264,800,451 | https://api.github.com/repos/huggingface/datasets/issues/4461 | https://github.com/huggingface/datasets/issues/4461 | 4,461 | AttributeError: module 'datasets' has no attribute 'load_dataset' | closed | 4 | 2022-06-08T13:59:20 | 2024-03-25T12:58:29 | 2022-06-08T14:41:00 | AlexNLP | ["bug"] | ## Describe the bug<br>I have piped install datasets, but this package doesn't have these attributes: load_dataset, load_metric.<br>## Environment info<br>- `datasets` version: 1.9.0<br>- Platform: Linux-5.13.0-44-generic-x86_64-with-debian-bullseye-sid<br>- Python version: 3.6.13<br>- PyArrow version: 6.0.1 | false |
1,264,644,205 | https://api.github.com/repos/huggingface/datasets/issues/4460 | https://github.com/huggingface/datasets/pull/4460 | 4,460 | Drop Python 3.6 support | closed | 5 | 2022-06-08T12:10:18 | 2022-07-26T19:16:39 | 2022-07-26T19:04:21 | mariosasko | [] | Remove the fallback imports/checks in the code needed for Python 3.6 and update the requirements/CI files. Also, use Python types for the NumPy dtype wherever possible to avoid deprecation warnings in newer NumPy versions. | true |
1,264,636,481 | https://api.github.com/repos/huggingface/datasets/issues/4459 | https://github.com/huggingface/datasets/pull/4459 | 4,459 | Add and fix language tags for udhr dataset | closed | 1 | 2022-06-08T12:03:42 | 2022-06-08T12:36:24 | 2022-06-08T12:27:13 | albertvillanova | [] | Related to #4362. | true |
1,263,531,911 | https://api.github.com/repos/huggingface/datasets/issues/4457 | https://github.com/huggingface/datasets/pull/4457 | 4,457 | First draft of the docs for TF + Datasets | closed | 4 | 2022-06-07T16:06:48 | 2022-06-14T16:08:41 | 2022-06-14T15:59:08 | Rocketknight1 | [
"documentation"
] | I might cc a few of the other TF people to take a look when this is closer to being finished, but it's still a draft for now. | true |
1,263,241,449 | https://api.github.com/repos/huggingface/datasets/issues/4456 | https://github.com/huggingface/datasets/issues/4456 | 4,456 | Workflow for Tabular data | open | 8 | 2022-06-07T12:48:22 | 2023-03-06T08:53:55 | null | lhoestq | [
"enhancement",
"generic discussion"
] | Tabular data are treated very differently than data for NLP, audio, vision, etc. and therefore the worflow for tabular data in `datasets` is not ideal.
For example for tabular data, it is common to use pandas/spark/dask to process the data, and then load the data into X and y (X is an array of features and y an arra... | false |
1,263,089,067 | https://api.github.com/repos/huggingface/datasets/issues/4455 | https://github.com/huggingface/datasets/pull/4455 | 4,455 | Update data URLs in fever dataset | closed | 1 | 2022-06-07T10:40:54 | 2022-06-08T07:24:54 | 2022-06-08T07:16:17 | albertvillanova | [] | As stated in their website, data owners updated their URLs on 28/04/2022.
This PR updates the data URLs.
Fix #4452. | true |
1,262,674,973 | https://api.github.com/repos/huggingface/datasets/issues/4454 | https://github.com/huggingface/datasets/issues/4454 | 4,454 | Dataset Viewer issue for Yaxin/SemEval2015 | closed | 1 | 2022-06-07T03:31:46 | 2022-06-07T11:53:11 | 2022-06-07T11:53:11 | WithYouTo | [
"duplicate",
"dataset-viewer"
] | ### Link
_No response_
### Description
the link could not visit
### Owner
_No response_ | false |
1,262,674,105 | https://api.github.com/repos/huggingface/datasets/issues/4453 | https://github.com/huggingface/datasets/issues/4453 | 4,453 | Dataset Viewer issue for Yaxin/SemEval2015 | closed | 3 | 2022-06-07T03:30:08 | 2022-06-09T08:34:16 | 2022-06-09T08:34:16 | WithYouTo | [] | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | false |
1,262,529,654 | https://api.github.com/repos/huggingface/datasets/issues/4452 | https://github.com/huggingface/datasets/issues/4452 | 4,452 | Trying to load FEVER dataset results in NonMatchingChecksumError | closed | 2 | 2022-06-06T23:13:15 | 2022-12-15T13:36:40 | 2022-06-08T07:16:16 | santhnm2 | [
"bug"
] | ## Describe the bug
Trying to load the `fever` dataset fails with `datasets.utils.info_utils.NonMatchingChecksumError`.
I tried with `download_mode="force_redownload"` but that did not fix the error. I also tried with `ignore_verification=True` but then that raised a `json.decoder.JSONDecodeError`.
## Steps to r... | false |
1,262,103,323 | https://api.github.com/repos/huggingface/datasets/issues/4451 | https://github.com/huggingface/datasets/pull/4451 | 4,451 | Use newer version of multi-news with fixes | closed | 2 | 2022-06-06T16:57:08 | 2022-06-07T17:40:01 | 2022-06-07T17:14:44 | JohnGiorgi | [] | Closes #4430. | true |
1,261,878,324 | https://api.github.com/repos/huggingface/datasets/issues/4450 | https://github.com/huggingface/datasets/pull/4450 | 4,450 | Update README.md of fquad | closed | 1 | 2022-06-06T13:52:41 | 2022-06-06T14:51:49 | 2022-06-06T14:43:03 | lhoestq | [] | null | true |
1,261,262,326 | https://api.github.com/repos/huggingface/datasets/issues/4449 | https://github.com/huggingface/datasets/issues/4449 | 4,449 | Rj | closed | 0 | 2022-06-06T02:24:32 | 2022-06-06T15:44:50 | 2022-06-06T15:44:50 | Aeckard45 | [] | import android.content.DialogInterface;
import android.database.Cursor;
import android.os.Bundle;
import android.view.View;
import android.widget.ArrayAdapter;
import android.widget.Button;
import android.widget.EditText;
import android.widget.Toast;
import androidx.appcompat.app.AlertDialog;
import androidx.appcompat... | false |
1,260,966,129 | https://api.github.com/repos/huggingface/datasets/issues/4448 | https://github.com/huggingface/datasets/issues/4448 | 4,448 | New Preprocessing Feature - Deduplication [Request] | open | 2 | 2022-06-05T05:32:56 | 2023-12-12T07:52:40 | null | yuvalkirstain | [
"duplicate",
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
Many large datasets are full of duplications and it has been shown that deduplicating datasets can lead to better performance while training, and more truthful evaluation at test-time.
A feature that allows one to easily deduplicate a dataset can be... | false |
1,260,041,805 | https://api.github.com/repos/huggingface/datasets/issues/4447 | https://github.com/huggingface/datasets/pull/4447 | 4,447 | Minor fixes/improvements in `scene_parse_150` card | closed | 1 | 2022-06-03T15:22:34 | 2022-06-06T15:50:25 | 2022-06-06T15:41:37 | mariosasko | [] | Add `paperswithcode_id` and fix some links in the `scene_parse_150` card. | true |
1,260,028,995 | https://api.github.com/repos/huggingface/datasets/issues/4446 | https://github.com/huggingface/datasets/pull/4446 | 4,446 | Add missing kwargs to docstrings | closed | 1 | 2022-06-03T15:10:27 | 2022-06-03T16:10:09 | 2022-06-03T16:01:29 | albertvillanova | [] | null | true |
1,259,947,568 | https://api.github.com/repos/huggingface/datasets/issues/4445 | https://github.com/huggingface/datasets/pull/4445 | 4,445 | Fix missing args in docstring of load_dataset_builder | closed | 1 | 2022-06-03T13:55:50 | 2022-06-03T14:35:32 | 2022-06-03T14:27:09 | albertvillanova | [] | Currently, the docstring of `load_dataset_builder` only contains the first parameter `path` (no other):
- https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/loading_methods#datasets.load_dataset_builder.path | true |
1,259,738,209 | https://api.github.com/repos/huggingface/datasets/issues/4444 | https://github.com/huggingface/datasets/pull/4444 | 4,444 | Fix kwargs in docstrings | closed | 1 | 2022-06-03T10:29:02 | 2022-06-03T11:01:28 | 2022-06-03T10:52:46 | albertvillanova | [] | To fix the rendering of `**kwargs` in docstrings, a parentheses must be added afterwards.
See:
- huggingface/doc-builder/issues/235 | true |
1,259,606,334 | https://api.github.com/repos/huggingface/datasets/issues/4443 | https://github.com/huggingface/datasets/issues/4443 | 4,443 | Dataset Viewer issue for openclimatefix/nimrod-uk-1km | open | 7 | 2022-06-03T08:17:16 | 2023-09-25T12:15:08 | null | ZYMXIXI | [] | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | false |
1,258,589,276 | https://api.github.com/repos/huggingface/datasets/issues/4442 | https://github.com/huggingface/datasets/issues/4442 | 4,442 | Dataset Viewer issue for amazon_polarity | closed | 2 | 2022-06-02T19:18:38 | 2022-06-07T18:50:37 | 2022-06-07T18:50:37 | lewtun | [
"dataset-viewer"
] | ### Link
https://huggingface.co/datasets/amazon_polarity/viewer/amazon_polarity/test
### Description
For some reason the train split is OK but the test split is not for this dataset:
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/cach... | false |
1,258,568,656 | https://api.github.com/repos/huggingface/datasets/issues/4441 | https://github.com/huggingface/datasets/issues/4441 | 4,441 | Dataset Viewer issue for aeslc | closed | 1 | 2022-06-02T18:57:12 | 2022-06-07T18:50:55 | 2022-06-07T18:50:55 | lewtun | [
"dataset-viewer"
] | ### Link
https://huggingface.co/datasets/aeslc
### Description
The dataset viewer can't find `dataset_infos.json` in its cache:
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/aeslc/eb8e30234cf9... | false |
1,258,494,469 | https://api.github.com/repos/huggingface/datasets/issues/4440 | https://github.com/huggingface/datasets/pull/4440 | 4,440 | Update docs around audio and vision | closed | 2 | 2022-06-02T17:42:03 | 2022-06-23T16:33:19 | 2022-06-23T16:23:02 | stevhliu | [
"documentation"
] | As part of the strategy to center the docs around the different modalities, this PR updates the quickstart to include audio and vision examples. This improves the developer experience by making audio and vision content more discoverable, enabling users working in these modalities to also quickly get started without dig... | true |
1,258,434,111 | https://api.github.com/repos/huggingface/datasets/issues/4439 | https://github.com/huggingface/datasets/issues/4439 | 4,439 | TIMIT won't load after manual download: Errors about files that don't exist | closed | 3 | 2022-06-02T16:35:56 | 2022-06-03T08:44:17 | 2022-06-03T08:44:16 | drscotthawley | [
"bug"
] | ## Describe the bug
I get the message from HuggingFace that it must be downloaded manually. From the URL provided in the message, I got to UPenn page for manual download. (UPenn apparently want $250? for the dataset??) ...So, ok, I obtained a copy from a friend and also a smaller version from Kaggle. But in both c... | false |
1,258,255,394 | https://api.github.com/repos/huggingface/datasets/issues/4438 | https://github.com/huggingface/datasets/pull/4438 | 4,438 | Fix docstring of inspect_dataset | closed | 1 | 2022-06-02T14:21:10 | 2022-06-02T16:40:55 | 2022-06-02T16:32:27 | albertvillanova | [] | As pointed out by @sgugger:
- huggingface/doc-builder/issues/235 | true |
1,258,249,582 | https://api.github.com/repos/huggingface/datasets/issues/4437 | https://github.com/huggingface/datasets/pull/4437 | 4,437 | Add missing columns to `blended_skill_talk` | closed | 1 | 2022-06-02T14:16:26 | 2022-06-06T15:49:56 | 2022-06-06T15:41:25 | mariosasko | [] | Adds the missing columns to `blended_skill_talk` to align the loading logic with [ParlAI](https://github.com/facebookresearch/ParlAI/blob/main/parlai/tasks/blended_skill_talk/build.py).
Fix #4426 | true |
1,257,758,834 | https://api.github.com/repos/huggingface/datasets/issues/4436 | https://github.com/huggingface/datasets/pull/4436 | 4,436 | Fix directory names for LDC data in timit_asr dataset | closed | 1 | 2022-06-02T06:45:04 | 2022-06-02T09:32:56 | 2022-06-02T09:24:27 | albertvillanova | [] | Related to:
- #4422 | true |
1,257,496,552 | https://api.github.com/repos/huggingface/datasets/issues/4435 | https://github.com/huggingface/datasets/issues/4435 | 4,435 | Load a local cached dataset that has been modified | closed | 2 | 2022-06-02T01:51:49 | 2022-06-02T23:59:26 | 2022-06-02T23:59:18 | mihail911 | [
"bug"
] | ## Describe the bug
I have loaded a dataset as follows:
```
d = load_dataset("emotion", split="validation")
```
Afterwards I make some modifications to the dataset via a `map` call:
```
d.map(some_update_func, cache_file_name=modified_dataset)
```
This generates a cached version of the dataset on my local syst... | false |
1,256,207,321 | https://api.github.com/repos/huggingface/datasets/issues/4434 | https://github.com/huggingface/datasets/pull/4434 | 4,434 | Fix dummy dataset generation script for handling nested types of _URLs | closed | 0 | 2022-06-01T14:53:15 | 2022-06-07T12:08:28 | 2022-06-07T09:24:09 | silverriver | [] | It seems that when user specify nested _URLs structures in their dataset script. An error will be raised when generating dummy dataset.
I think the types of all elements in `dummy_data_dict.values()` should be checked because they may have different types.
Linked to issue #4428
PS: I am not sure whether my co... | true |
1,255,830,758 | https://api.github.com/repos/huggingface/datasets/issues/4433 | https://github.com/huggingface/datasets/pull/4433 | 4,433 | Fix script fetching and local path handling in `inspect_dataset` and `inspect_metric` | closed | 2 | 2022-06-01T12:09:56 | 2022-06-09T10:34:54 | 2022-06-09T10:26:07 | mariosasko | [] | Fix #4348 | true |
1,255,523,720 | https://api.github.com/repos/huggingface/datasets/issues/4432 | https://github.com/huggingface/datasets/pull/4432 | 4,432 | Fix builder docstring | closed | 1 | 2022-06-01T09:45:30 | 2022-06-02T17:43:47 | 2022-06-02T17:35:15 | albertvillanova | [] | Currently, the args of `DatasetBuilder` do not appear in the docs: https://huggingface.co/docs/datasets/v2.1.0/en/package_reference/builder_classes#datasets.DatasetBuilder | true |
1,254,618,948 | https://api.github.com/repos/huggingface/datasets/issues/4431 | https://github.com/huggingface/datasets/pull/4431 | 4,431 | Add personaldialog datasets | closed | 5 | 2022-06-01T01:20:40 | 2022-06-11T12:40:23 | 2022-06-11T12:31:16 | silverriver | [] | It seems that all tests are passed | true |
1,254,412,591 | https://api.github.com/repos/huggingface/datasets/issues/4430 | https://github.com/huggingface/datasets/issues/4430 | 4,430 | Add ability to load newer, cleaner version of Multi-News | closed | 6 | 2022-05-31T21:00:44 | 2022-06-07T17:14:44 | 2022-06-07T17:14:44 | JohnGiorgi | [
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
The [Multi-News dataloader points to the original version of the Multi-News dataset](https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/datasets/multi_news/multi_news.py#L47), but this has [known errors in it](https... | false |
1,254,184,358 | https://api.github.com/repos/huggingface/datasets/issues/4429 | https://github.com/huggingface/datasets/pull/4429 | 4,429 | Update builder docstring for deprecated/added arguments | closed | 5 | 2022-05-31T17:37:25 | 2022-06-08T11:40:18 | 2022-06-08T11:31:45 | albertvillanova | [] | This PR updates the builder docstring with deprecated/added directives for arguments name/config_name.
Follow up of:
- #4414
- huggingface/doc-builder#233
First merge:
- #4432 | true |
1,254,092,818 | https://api.github.com/repos/huggingface/datasets/issues/4428 | https://github.com/huggingface/datasets/issues/4428 | 4,428 | Errors when building dummy data if you use nested _URLS | closed | 0 | 2022-05-31T16:10:57 | 2022-06-07T09:24:09 | 2022-06-07T09:24:09 | silverriver | [
"bug"
] | ## Describe the bug
When making dummy data with the `datasets-cli dummy_data` tool,
an error will be raised if you use a nested _URLS in your dataset script.
Traceback (most recent call last):
File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 43, in <module>
main()
File "/hom... | false |
1,253,959,313 | https://api.github.com/repos/huggingface/datasets/issues/4427 | https://github.com/huggingface/datasets/pull/4427 | 4,427 | Add HF.co for PRs/Issues for specific datasets | closed | 1 | 2022-05-31T14:31:21 | 2022-06-01T12:37:42 | 2022-06-01T12:29:02 | lhoestq | [] | As in https://github.com/huggingface/transformers/pull/17485, issues and PR for datasets under a namespace have to be on the HF Hub | true |
1,253,887,311 | https://api.github.com/repos/huggingface/datasets/issues/4426 | https://github.com/huggingface/datasets/issues/4426 | 4,426 | Add loading variable number of columns for different splits | closed | 1 | 2022-05-31T13:40:16 | 2022-06-03T16:25:25 | 2022-06-03T16:25:25 | DrMatters | [
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
The original dataset `blended_skill_talk` consists of different sets of columns for the different splits: (test/valid) splits have additional data column `label_candidates` that the (train) doesn't have.
When loading such data, an exception occurs at ... | false |
1,253,641,604 | https://api.github.com/repos/huggingface/datasets/issues/4425 | https://github.com/huggingface/datasets/pull/4425 | 4,425 | Make extensions case-insensitive in timit_asr dataset | closed | 1 | 2022-05-31T10:10:04 | 2022-06-01T14:15:30 | 2022-06-01T14:06:51 | albertvillanova | [] | Related to #4422. | true |
1,253,542,488 | https://api.github.com/repos/huggingface/datasets/issues/4424 | https://github.com/huggingface/datasets/pull/4424 | 4,424 | Fix DuplicatedKeysError in timit_asr dataset | closed | 1 | 2022-05-31T08:47:45 | 2022-05-31T13:50:50 | 2022-05-31T13:42:31 | albertvillanova | [] | Fix #4422. | true |
1,253,326,023 | https://api.github.com/repos/huggingface/datasets/issues/4423 | https://github.com/huggingface/datasets/pull/4423 | 4,423 | Add new dataset MMChat | closed | 2 | 2022-05-31T04:45:07 | 2022-06-11T12:40:52 | 2022-06-11T12:31:42 | silverriver | [] | Hi, I am adding a new dataset MMChat.
It seems that all tests are passed | true |
1,253,146,511 | https://api.github.com/repos/huggingface/datasets/issues/4422 | https://github.com/huggingface/datasets/issues/4422 | 4,422 | Cannot load timit_asr data set | closed | 6 | 2022-05-30T22:00:22 | 2022-06-02T06:34:05 | 2022-05-31T13:42:31 | bhaddow | [
"bug"
] | ## Describe the bug
I am trying to load the timit_asr data set. I have tried with a copy from the LDC, and a copy from deepai. In both cases they fail with a "duplicate key" error. With the LDC version I have to convert the file extensions all to upper-case before I can load it at all.
## Steps to reproduce the bug... | false |
1,253,059,467 | https://api.github.com/repos/huggingface/datasets/issues/4421 | https://github.com/huggingface/datasets/pull/4421 | 4,421 | Add extractor for bzip2-compressed files | closed | 0 | 2022-05-30T19:19:40 | 2022-06-06T15:22:50 | 2022-06-06T15:22:50 | osyvokon | [] | This change enables loading bzipped datasets, just like any other compressed dataset. | true |
1,252,739,239 | https://api.github.com/repos/huggingface/datasets/issues/4420 | https://github.com/huggingface/datasets/issues/4420 | 4,420 | Metric evaluation problems in multi-node, shared file system | closed | 6 | 2022-05-30T13:24:05 | 2023-07-11T09:33:18 | 2023-07-11T09:33:17 | gullabi | [
"bug"
] | ## Describe the bug
Metric evaluation fails in multi-node within a shared file system, because the master process cannot find the lock files from other nodes. (This issue was originally mentioned in the transformers repo https://github.com/huggingface/transformers/issues/17412)
## Steps to reproduce the bug
1. c... | false |
1,252,652,896 | https://api.github.com/repos/huggingface/datasets/issues/4419 | https://github.com/huggingface/datasets/issues/4419 | 4,419 | Update `unittest` assertions over tuples from `assertEqual` to `assertTupleEqual` | closed | 3 | 2022-05-30T12:13:18 | 2022-09-30T16:01:37 | 2022-09-30T16:01:37 | alvarobartt | [
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
So this is more a readability improvement rather than a proposal, wouldn't it be better to use `assertTupleEqual` over the tuples rather than `assertEqual`? As `unittest` added that function in `v3.1`, as detailed at https://docs.python.org/3/library... | false |
1,252,506,268 | https://api.github.com/repos/huggingface/datasets/issues/4418 | https://github.com/huggingface/datasets/pull/4418 | 4,418 | Add dataset MMChat | closed | 0 | 2022-05-30T10:10:40 | 2022-05-30T14:58:18 | 2022-05-30T14:58:18 | silverriver | [] | null | true |