| Column | Type | Stats |
| --- | --- | --- |
| url | string | lengths 58–61 |
| repository_url | string | 1 class |
| labels_url | string | lengths 72–75 |
| comments_url | string | lengths 67–70 |
| events_url | string | lengths 65–68 |
| html_url | string | lengths 46–51 |
| id | int64 | 599M–1.83B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–6.09k |
| title | string | lengths 1–290 |
| labels | list | — |
| state | string | 2 classes |
| locked | bool | 1 class |
| milestone | dict | — |
| comments | int64 | 0–54 |
| created_at | string | lengths 20–20 |
| updated_at | string | lengths 20–20 |
| closed_at | string | lengths 20–20 |
| active_lock_reason | null | — |
| body | string | lengths 0–228k |
| reactions | dict | — |
| timeline_url | string | lengths 67–70 |
| performed_via_github_app | null | — |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| pull_request | dict | — |
| is_pull_request | bool | 2 classes |
| comments_text | list | — |
https://api.github.com/repos/huggingface/datasets/issues/4085
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4085/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4085/comments
https://api.github.com/repos/huggingface/datasets/issues/4085/events
https://github.com/huggingface/datasets/issues/4085
1,190,621,345
I_kwDODunzps5G93Ch
4,085
datasets.set_progress_bar_enabled(False) not working in datasets v2
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
3
2022-04-02T12:40:10Z
2022-09-17T02:18:03Z
2022-04-04T06:44:34Z
null
## Describe the bug

`datasets.set_progress_bar_enabled(False)` is not working in `datasets` v2.

## Steps to reproduce the bug

```python
datasets.set_progress_bar_enabled(False)
```

## Expected results

`datasets` not using any progress bar.

## Actual results

AttributeError: module 'datasets' has no attribute 'set_progress_bar_enabled'

## Environment info

`datasets` version 2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4085/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4085/timeline
null
completed
null
null
false
[ "Now, I can't find any reference to set_progress_bar_enabled in the code.\r\n\r\nI think it have been deleted", "Hi @virilo,\r\n\r\nPlease note that since `datasets` version 2.0.0, we have aligned with `transformers` the management of the progress bar (among other things):\r\n- #3897\r\n\r\nNow, you should update...
https://api.github.com/repos/huggingface/datasets/issues/3185
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3185/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3185/comments
https://api.github.com/repos/huggingface/datasets/issues/3185/events
https://github.com/huggingface/datasets/issues/3185
1,040,291,961
I_kwDODunzps4-AZh5
3,185
7z dataset preview not implemented?
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
2
2021-10-30T20:18:27Z
2022-04-12T11:48:16Z
2022-04-12T11:48:07Z
null
## Dataset viewer issue for dataset 'samsum'

**Link:** https://huggingface.co/datasets/samsum

Server Error
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol '7z' for file at 'https://arxiv.org/src/1911.12237v2/anc/corpus.7z' is not implemented yet
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3185/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3185/timeline
null
completed
null
null
false
[ "It's a bug in the dataset viewer: the dataset cannot be downloaded in streaming mode, but since the dataset is relatively small, the dataset viewer should have fallback to normal mode. Working on a fix.", "Fixed. https://huggingface.co/datasets/samsum/viewer/samsum/train\r\n\r\n<img width=\"1563\" alt=\"Capture ...
https://api.github.com/repos/huggingface/datasets/issues/5315
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5315/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5315/comments
https://api.github.com/repos/huggingface/datasets/issues/5315/events
https://github.com/huggingface/datasets/issues/5315
1,470,026,797
I_kwDODunzps5XntQt
5,315
Adding new splits to a dataset script with existing old splits info in metadata's `dataset_info` fails
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
3
2022-11-30T18:02:15Z
2022-12-02T07:02:53Z
null
null
### Describe the bug

If you first create a custom dataset with a specific set of splits, generate metadata with `datasets-cli test ... --save_info`, and then change your script to include more splits, it fails. That's what happened in https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/discussions/2#6385fd1269634850f8ddff48.

### Steps to reproduce the bug

1. Create a dataset with a custom split that returns, for example, only the `"train"` split in `_splits_generators`. Specifically, if you really want to reproduce it, copy https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/blob/main/food_vision_199_classes.py
2. Run `datasets-cli test dataset_script.py --save_info --all_configs` - this generates metadata YAML in `README.md` that contains info about splits, for example, like this:

```
splits:
- name: train
  num_bytes: 2973286
  num_examples: 19747
```

3. Make changes to your script so that it returns another set of splits, for example, `"train"` and `"test"` (uncomment [these lines](https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/blob/main/food_vision_199_classes.py#L271))
4. Run `load_dataset` and get the following error:

```python
Traceback (most recent call last):
  File "/home/daniel/code/pytorch/env/bin/datasets-cli", line 8, in <module>
    sys.exit(main())
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/commands/datasets_cli.py", line 39, in main
    service.run()
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/commands/test.py", line 141, in run
    builder.download_and_prepare(
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 822, in download_and_prepare
    self._download_and_prepare(
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 1555, in _download_and_prepare
    super()._download_and_prepare(
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 913, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 1356, in _prepare_split
    split_info = self.info.splits[split_generator.name]
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/splits.py", line 525, in __getitem__
    instructions = make_file_instructions(
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/arrow_reader.py", line 111, in make_file_instructions
    name2filenames = {
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/arrow_reader.py", line 112, in <dictcomp>
    info.name: filenames_for_dataset_split(
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/naming.py", line 78, in filenames_for_dataset_split
    prefix = filename_prefix_for_split(dataset_name, split)
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/naming.py", line 57, in filename_prefix_for_split
    if os.path.basename(name) != name:
  File "/home/daniel/code/pytorch/env/lib/python3.8/posixpath.py", line 143, in basename
    p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType
```

5. Bonus: try to regenerate the metadata in `README.md` with `datasets-cli` as in step 2 and get the same error.

This happens because `dataset.info.splits` contains only the `"train"` split, so when we do `self.info.splits[split_generator.name]` it tries to infer something like `info.splits['train[50%]']`, which is not the case, and it fails.
### Expected behavior

To be discussed? This can be solved by removing the splits information from the metadata file first, but I wonder if there is a better way.

### Environment info

- Datasets version: 2.7.1
- Python version: 3.8.13
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5315/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5315/timeline
null
null
null
null
false
[ "EDIT:\r\nI think in this case, the metadata files (either README or JSON) should not be read (i.e. `self.info.splits` should be None).\r\n\r\nOne idea: \r\n- I think ideally we should set this behavior when we pass `--save_info` to the CLI `test`\r\n- However, currently, the builder is unaware of this: `save_info`...
https://api.github.com/repos/huggingface/datasets/issues/1593
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1593/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1593/comments
https://api.github.com/repos/huggingface/datasets/issues/1593/events
https://github.com/huggingface/datasets/issues/1593
769,611,386
MDU6SXNzdWU3Njk2MTEzODY=
1,593
Access to key in DatasetDict map
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
3
2020-12-17T07:02:20Z
2022-10-05T13:47:28Z
2022-10-05T12:33:06Z
null
It is possible that we want to do different things in the `map` function (and possibly other functions too) of a `DatasetDict`, depending on the key. I understand that `DatasetDict.map` is a really thin wrapper around `Dataset.map`, so it is easy to implement this functionality directly in the client code. Still, it'd be nice if there could be a flag, similar to `with_indices`, that allows the callable to know the key inside the `DatasetDict`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1593/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1593/timeline
null
completed
null
null
false
[ "Indeed that would be cool\r\n\r\nAlso FYI right now the easiest way to do this is\r\n```python\r\ndataset_dict[\"train\"] = dataset_dict[\"train\"].map(my_transform_for_the_train_set)\r\ndataset_dict[\"test\"] = dataset_dict[\"test\"].map(my_transform_for_the_test_set)\r\n```", "I don't feel like adding an extra...
https://api.github.com/repos/huggingface/datasets/issues/1281
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1281/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1281/comments
https://api.github.com/repos/huggingface/datasets/issues/1281/events
https://github.com/huggingface/datasets/pull/1281
759,203,317
MDExOlB1bGxSZXF1ZXN0NTM0MjQ0MTA1
1,281
adding hybrid_qa
[]
closed
false
null
0
2020-12-08T08:10:19Z
2020-12-08T18:09:28Z
2020-12-08T18:07:00Z
null
Adding HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data https://github.com/wenhuchen/HybridQA
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1281/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1281/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1281.diff", "html_url": "https://github.com/huggingface/datasets/pull/1281", "merged_at": "2020-12-08T18:07:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/1281.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1281" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2978
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2978/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2978/comments
https://api.github.com/repos/huggingface/datasets/issues/2978/events
https://github.com/huggingface/datasets/issues/2978
1,009,521,419
I_kwDODunzps48LBML
2,978
Run CI tests against non-production server
[]
open
false
null
2
2021-09-28T09:41:26Z
2021-09-28T15:23:50Z
null
null
Currently, the CI test suite performs requests to the HF production server. As discussed with @elishowk, we should refactor our tests to use the HF staging server instead, like `huggingface_hub` and `transformers`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2978/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2978/timeline
null
null
null
null
false
[ "Hey @albertvillanova could you provide more context, including extracts from the discussion we had ?\r\n\r\nLet's ping @Pierrci @julien-c and @n1t0 for their opinion about that", "@julien-c increased the huggingface.co production workers in order to see if it solve [the 502 you had this morning](https://app.circ...
https://api.github.com/repos/huggingface/datasets/issues/3511
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3511/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3511/comments
https://api.github.com/repos/huggingface/datasets/issues/3511/events
https://github.com/huggingface/datasets/issues/3511
1,092,170,411
I_kwDODunzps5BGTKr
3,511
Dataset
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
2
2022-01-03T02:03:23Z
2022-01-03T08:41:26Z
2022-01-03T08:23:07Z
null
## Dataset viewer issue for '*name of the dataset*'

**Link:** *link to the dataset viewer page*

*short description of the issue*

Am I the one who added this dataset? Yes-No
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3511/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3511/timeline
null
completed
null
null
false
[ "Can you reopen with the correct dataset name (if relevant)?\r\n\r\nThanks", "The dataset viewer was down tonight. It works again." ]
https://api.github.com/repos/huggingface/datasets/issues/4328
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4328/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4328/comments
https://api.github.com/repos/huggingface/datasets/issues/4328/events
https://github.com/huggingface/datasets/pull/4328
1,233,856,690
PR_kwDODunzps43trrd
4,328
Fix and clean Apache Beam functionality
[]
closed
false
null
1
2022-05-12T11:41:07Z
2022-05-24T13:43:11Z
2022-05-24T13:34:32Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4328/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4328/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4328.diff", "html_url": "https://github.com/huggingface/datasets/pull/4328", "merged_at": "2022-05-24T13:34:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/4328.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4328" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/1206
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1206/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1206/comments
https://api.github.com/repos/huggingface/datasets/issues/1206/events
https://github.com/huggingface/datasets/pull/1206
757,952,992
MDExOlB1bGxSZXF1ZXN0NTMzMjE2NDYw
1,206
Adding Enriched WebNLG dataset
[]
closed
false
null
3
2020-12-06T15:36:20Z
2020-12-09T09:40:32Z
2020-12-09T09:40:32Z
null
This pull request adds the `en` and `de` versions of the [Enriched WebNLG](https://github.com/ThiagoCF05/webnlg) dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1206/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1206/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1206.diff", "html_url": "https://github.com/huggingface/datasets/pull/1206", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1206.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1206" }
true
[ "Nice :) \r\n\r\ncould you add the tags and also remove all the dummy data files that are not zipped ? The diff currently shows 800 files changes xD", "Aaaaand it's rebase time - the new one is at #1264 !", "closing this one since a new PR was created" ]
https://api.github.com/repos/huggingface/datasets/issues/1014
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1014/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1014/comments
https://api.github.com/repos/huggingface/datasets/issues/1014/events
https://github.com/huggingface/datasets/pull/1014
755,505,851
MDExOlB1bGxSZXF1ZXN0NTMxMjAzNzAz
1,014
Add SciTLDR Dataset (Take 2)
[]
closed
false
null
6
2020-12-02T18:22:50Z
2020-12-02T18:55:10Z
2020-12-02T18:37:58Z
null
Adds the SciTLDR Dataset by AI2: multi-target summaries, or TLDRs, of scientific documents.

Added the `README.md` card with tags to the best of my knowledge.

Continued from #986.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1014/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1014/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1014.diff", "html_url": "https://github.com/huggingface/datasets/pull/1014", "merged_at": "2020-12-02T18:37:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/1014.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1014" }
true
[ "@lhoestq please review this PR when you get free", "If the CI fails just because of `RemoteDatasetTest` errors it's fine, they're fixed on master", "> If the CI fails just because of `RemoteDatasetTest` errors it's fine, they're fixed on master\r\n\r\nThe same 3 tests are failing again :(\r\n```\r\nFAILED test...
https://api.github.com/repos/huggingface/datasets/issues/3540
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3540/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3540/comments
https://api.github.com/repos/huggingface/datasets/issues/3540/events
https://github.com/huggingface/datasets/issues/3540
1,094,900,336
I_kwDODunzps5BQtpw
3,540
How to convert torch.utils.data.Dataset to datasets.arrow_dataset.Dataset?
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
0
2022-01-06T02:13:42Z
2022-01-06T02:17:39Z
null
null
Hi, I use `torch.utils.data.Dataset` to define my own data, but I need to use the `map` function of `datasets.arrow_dataset.Dataset` later, so I hope to convert `torch.utils.data.Dataset` to `datasets.arrow_dataset.Dataset`. Here is an example.

```python
from torch.utils.data import Dataset
from datasets.arrow_dataset import Dataset as HFDataset
from transformers import AutoTokenizer

class ADataset(Dataset):
    def __init__(self, data):
        super().__init__()
        self.data = data

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return len(self.data)

class MDataset:
    def __init__(self, tokenizer: AutoTokenizer, data_args, training_args):
        self.train_dataset = ADataset(data_args)
        self.tokenizer = tokenizer
        self.data_args = data_args
        self.train_dataset = self.train_dataset.map(
            self.process_function,
            batched=True,
            remove_columns=column_names,
            load_from_cache_file=True,
            desc="Running tokenizer on train dataset",
        )

    def process_function(self, examples):
        sentences = [" ".join(sample[0][3]) for sample in examples]
        tokenized = self.tokenizer(
            sentences,
            max_length=self.max_seq_len,
            padding=self.padding,
            truncation=True,
        )
```

But it raises an error: `AttributeError: 'ADataset' object has no attribute 'map'`. So how can I convert `torch.utils.data.Dataset` to `datasets.arrow_dataset.Dataset`? Thanks in advance!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3540/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3540/timeline
null
null
null
null
false
[]
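As a hedged sketch of one answer to the question above: rather than calling `.map` on the torch `Dataset`, the Arrow-backed class can be built directly from in-memory records, which then exposes `.map` (the toy data here is illustrative):

```python
from datasets import Dataset as HFDataset

# Toy stand-in for the user's data, as a column-oriented dict.
records = {"text": ["hello", "world"]}

hf_ds = HFDataset.from_dict(records)  # Arrow-backed, so .map is available
hf_ds = hf_ds.map(lambda ex: {"n_chars": len(ex["text"])})
print(hf_ds[0])
```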
https://api.github.com/repos/huggingface/datasets/issues/4094
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4094/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4094/comments
https://api.github.com/repos/huggingface/datasets/issues/4094/events
https://github.com/huggingface/datasets/issues/4094
1,192,534,414
I_kwDODunzps5HFKGO
4,094
Helo Mayfrends
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
0
2022-04-05T02:42:57Z
2022-04-05T07:16:42Z
2022-04-05T07:16:42Z
null
## Adding a Dataset

- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4094/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4094/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/5399
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5399/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5399/comments
https://api.github.com/repos/huggingface/datasets/issues/5399/events
https://github.com/huggingface/datasets/issues/5399
1,515,548,427
I_kwDODunzps5aVW8L
5,399
Got disconnected from remote data host. Retrying in 5sec [2/20]
[]
closed
false
null
0
2023-01-01T13:00:11Z
2023-01-02T07:21:52Z
2023-01-02T07:21:52Z
null
### Describe the bug

The bug occurs while trying to upload my image dataset (a CSV file) to the Hugging Face Hub by running the code below. The dataset consists of a little over 100k image-caption pairs.

### Steps to reproduce the bug

```
df = pd.read_csv('x.csv', encoding='utf-8-sig')

features = Features({
    'link': Image(decode=True),
    'caption': Value(dtype='string'),
})

# make sure you are logged in to HF
ds = Dataset.from_pandas(df, features=features)
ds.features
ds.push_to_hub("x/x")
```

I got the below error, and it always stops at the same progress:

```
100%|██████████| 4/4 [23:53<00:00, 358.48s/ba]
100%|██████████| 4/4 [24:37<00:00, 369.47s/ba]%|▍ | 1/22 [00:06<02:09, 6.16s/it]
100%|██████████| 4/4 [25:00<00:00, 375.15s/ba]%|▉ | 2/22 [25:54<2:36:15, 468.80s/it]
100%|██████████| 4/4 [24:53<00:00, 373.29s/ba]%|█▎ | 3/22 [51:01<4:07:07, 780.39s/it]
100%|██████████| 4/4 [24:01<00:00, 360.34s/ba]%|█▊ | 4/22 [1:17:00<5:04:07, 1013.74s/it]
100%|██████████| 4/4 [23:59<00:00, 359.91s/ba]%|██▎ | 5/22 [1:41:07<5:24:06, 1143.90s/it]
100%|██████████| 4/4 [24:16<00:00, 364.06s/ba]%|██▋ | 6/22 [2:05:14<5:29:15, 1234.74s/it]
100%|██████████| 4/4 [25:24<00:00, 381.10s/ba]%|███▏ | 7/22 [2:29:38<5:25:52, 1303.52s/it]
100%|██████████| 4/4 [25:24<00:00, 381.24s/ba]%|███▋ | 8/22 [2:56:02<5:23:46, 1387.58s/it]
100%|██████████| 4/4 [25:08<00:00, 377.23s/ba]%|████ | 9/22 [3:22:24<5:13:17, 1445.97s/it]
100%|██████████| 4/4 [24:11<00:00, 362.87s/ba]%|████▌ | 10/22 [3:48:24<4:56:02, 1480.19s/it]
100%|██████████| 4/4 [24:44<00:00, 371.11s/ba]%|█████ | 11/22 [4:12:42<4:30:10, 1473.66s/it]
100%|██████████| 4/4 [24:35<00:00, 368.81s/ba]%|█████▍ | 12/22 [4:37:34<4:06:29, 1478.98s/it]
100%|██████████| 4/4 [24:02<00:00, 360.67s/ba]%|█████▉ | 13/22 [5:03:24<3:45:04, 1500.45s/it]
100%|██████████| 4/4 [24:07<00:00, 361.78s/ba]%|██████▎ | 14/22 [5:27:33<3:17:59, 1484.97s/it]
100%|██████████| 4/4 [23:39<00:00, 354.85s/ba]%|██████▊ | 15/22 [5:51:48<2:52:10, 1475.82s/it]
Pushing dataset shards to the dataset hub: 73%|███████▎ | 16/22 [6:16:58<2:28:37, 1486.31s/it]
Got disconnected from remote data host. Retrying in 5sec [1/20]
Got disconnected from remote data host. Retrying in 5sec [2/20]
Got disconnected from remote data host. Retrying in 5sec [3/20]
Got disconnected from remote data host. Retrying in 5sec [4/20]
Got disconnected from remote data host. Retrying in 5sec [5/20]
Got disconnected from remote data host. Retrying in 5sec [6/20]
Got disconnected from remote data host. Retrying in 5sec [7/20]
Got disconnected from remote data host. Retrying in 5sec [8/20]
Got disconnected from remote data host. Retrying in 5sec [9/20]
...
Got disconnected from remote data host. Retrying in 5sec [19/20]
Got disconnected from remote data host. Retrying in 5sec [20/20]
75%|███████▌ | 3/4 [24:47<08:15, 495.86s/ba]
Pushing dataset shards to the dataset hub: 73%|███████▎ | 16/22 [6:41:46<2:30:39, 1506.65s/it]

---------------------------------------------------------------------------
ConnectionError                           Traceback (most recent call last)
<ipython-input-1-dbf8530779e9> in <module>
     16 ds.features
```

### Expected behavior

I was trying to upload an image dataset and expected it to be fully uploaded.

### Environment info

- `datasets` version: 2.8.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.9
- PyArrow version: 10.0.1
- Pandas version: 1.3.5
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5399/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5399/timeline
null
completed
null
null
false
[]
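A hedged, unconfirmed mitigation for uploads like this one is to push smaller shards, so each HTTP request is shorter and a dropped connection loses less progress (`max_shard_size` is a `push_to_hub` parameter in `datasets` of this era; whether it resolves the disconnects above is an assumption):

```python
# Sketch only: smaller shards mean shorter individual uploads.
ds.push_to_hub("x/x", max_shard_size="200MB")
```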
https://api.github.com/repos/huggingface/datasets/issues/5373
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5373/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5373/comments
https://api.github.com/repos/huggingface/datasets/issues/5373/events
https://github.com/huggingface/datasets/pull/5373
1,501,484,197
PR_kwDODunzps5FtRU4
5,373
Simplify skipping
[]
closed
false
null
1
2022-12-17T17:23:52Z
2022-12-18T21:43:31Z
2022-12-18T21:40:21Z
null
I was hoping to find a way to speed up the skipping, as I'm running into bottlenecks skipping 100M examples on C4 (it takes 12 hours to skip), but I didn't find anything better than this small change :( Maybe there's a way to directly skip whole shards to speed it up? 🧐
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5373/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5373/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5373.diff", "html_url": "https://github.com/huggingface/datasets/pull/5373", "merged_at": "2022-12-18T21:40:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/5373.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5373" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/68
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/68/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/68/comments
https://api.github.com/repos/huggingface/datasets/issues/68/events
https://github.com/huggingface/datasets/pull/68
614,882,655
MDExOlB1bGxSZXF1ZXN0NDE1MzQ3NTgw
68
[CSV] re-add csv
[]
closed
false
null
0
2020-05-08T17:38:29Z
2020-05-08T17:40:48Z
2020-05-08T17:40:46Z
null
Re-adding csv under the datasets under construction to keep Circle CI happy - will have to see how to include it in the tests. @lhoestq noticed that I accidentally deleted it in https://github.com/huggingface/nlp/pull/63#discussion_r422263729.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/68/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/68/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/68.diff", "html_url": "https://github.com/huggingface/datasets/pull/68", "merged_at": "2020-05-08T17:40:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/68.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/68" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4421
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4421/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4421/comments
https://api.github.com/repos/huggingface/datasets/issues/4421/events
https://github.com/huggingface/datasets/pull/4421
1,253,059,467
PR_kwDODunzps44szxR
4,421
Add extractor for bzip2-compressed files
[]
closed
false
null
0
2022-05-30T19:19:40Z
2022-06-06T15:22:50Z
2022-06-06T15:22:50Z
null
This change enables loading bzipped datasets, just like any other compressed dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4421/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4421/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4421.diff", "html_url": "https://github.com/huggingface/datasets/pull/4421", "merged_at": "2022-06-06T15:22:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/4421.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4421" }
true
[]
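A minimal sketch of what such an extractor typically does, using only the standard library (illustrative; not the PR's actual implementation):

```python
import bz2
import shutil

def extract_bz2(input_path: str, output_path: str) -> None:
    # Stream-decompress a .bz2 archive to disk without loading it into memory.
    with bz2.open(input_path, "rb") as compressed, open(output_path, "wb") as out:
        shutil.copyfileobj(compressed, out)
```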
https://api.github.com/repos/huggingface/datasets/issues/3704
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3704/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3704/comments
https://api.github.com/repos/huggingface/datasets/issues/3704/events
https://github.com/huggingface/datasets/issues/3704
1,132,042,631
I_kwDODunzps5DeZmH
3,704
OSCAR-2109 datasets are misaligned and truncated
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
10
2022-02-11T08:14:59Z
2022-03-17T18:01:04Z
2022-03-16T16:21:28Z
null
## Describe the bug

The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines.

## Steps to reproduce the bug

A few examples, although I'm not sure how deterministic the particular (mis)alignment is in various configurations:

```python
from datasets import load_dataset

dataset = load_dataset("oscar-corpus/OSCAR-2109", "deduplicated_fi", split="train", use_auth_token=True)
entry = dataset[0]
# entry["text"] is from fi_part_3.txt.gz
# entry["meta"] is from fi_meta_part_2.jsonl.gz

dataset = load_dataset("oscar-corpus/OSCAR-2109", "deduplicated_no", split="train", use_auth_token=True)
entry = dataset[900000]
# entry["text"] is from no_part_3.txt.gz and contains a blank line
# entry["meta"] is from no_meta_part_1.jsonl.gz

dataset = load_dataset("oscar-corpus/OSCAR-2109", "deduplicated_mk", split="train", streaming=True, use_auth_token=True)
# 9088 texts in the dataset are empty
```

For `deduplicated_fi`, all exported raw texts from the dataset are 17GB rather than 20GB as reported in the data splits overview table. The token count with `wc -w` for the raw texts is 2,067,556,874 rather than the expected 2,357,264,196 from the data splits table. For `deduplicated_no` all exported raw texts contain 624,040,887 rather than the expected 776,354,517 tokens. For `deduplicated_mk` it is 122,236,936 rather than 134,544,934 tokens.

I'm not expecting the `wc -w` counts to line up exactly with the data splits table, but for comparison the `wc -w` count for `deduplicated_mk` on the raw texts is 134,545,424.

## Issues

* The meta / text files are not paired correctly when loading, so the extracted texts do not have the right offsets, the metadata is not associated with the correct text, and the text files may not be processed to the end or may be processed beyond the end (empty texts).
* The line count offset is not reset per file, so the texts aren't aligned to the right offsets in any parts beyond the first part, leading to truncation when in effect blank lines are not skipped.
* Non-unix newline characters are treated as newlines when reading the text files, while the metadata only counts unix newlines for its line offsets, leading to further misalignments between the metadata and the extracted texts, which also results in truncation.

## Expected results

All texts from the OSCAR release are extracted according to the metadata and aligned with the correct metadata.
## Fixes

Not necessarily the exact fixes/checks you may want to use (I didn't test all languages or do any cross-platform testing, and I'm not sure all the details are compatible with streaming), however to highlight the issues:

```diff
diff --git a/OSCAR-2109.py b/OSCAR-2109.py
index bbac1076..5eee8de7 100644
--- a/OSCAR-2109.py
+++ b/OSCAR-2109.py
@@ -20,6 +20,7 @@
 import collections
 import gzip
 import json
+import os
 
 import datasets
 
@@ -387,9 +388,20 @@ class Oscar2109(datasets.GeneratorBasedBuilder):
         with open(checksum_file, encoding="utf-8") as f:
             data_filenames = [line.split()[1] for line in f if line]
             data_urls = [self.config.base_data_path + data_filename for data_filename in data_filenames]
-        text_files = dl_manager.download([url for url in data_urls if url.endswith(".txt.gz")])
-        metadata_files = dl_manager.download([url for url in data_urls if url.endswith(".jsonl.gz")])
+        # sort filenames so corresponding parts are aligned
+        text_files = sorted(dl_manager.download([url for url in data_urls if url.endswith(".txt.gz")]))
+        metadata_files = sorted(dl_manager.download([url for url in data_urls if url.endswith(".jsonl.gz")]))
+        assert len(text_files) == len(metadata_files)
         metadata_and_text_files = list(zip(metadata_files, text_files))
+        for meta_path, text_path in metadata_and_text_files:
+            # check that meta/text part numbers are the same
+            if "part" in os.path.basename(text_path):
+                assert (
+                    os.path.basename(text_path).replace(".txt.gz", "").split("_")[-1]
+                    == os.path.basename(meta_path).replace(".jsonl.gz", "").split("_")[-1]
+                )
+            else:
+                assert len(metadata_and_text_files) == 1
         return [
             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"metadata_and_text_files": metadata_and_text_files}),
         ]
@@ -397,10 +409,14 @@ class Oscar2109(datasets.GeneratorBasedBuilder):
     def _generate_examples(self, metadata_and_text_files):
         """This function returns the examples in the raw (text) form by iterating on all the files."""
         id_ = 0
-        offset = 0
         for meta_path, text_path in metadata_and_text_files:
+            # line offsets are per text file
+            offset = 0
             logger.info("generating examples from = %s", text_path)
-            with gzip.open(open(text_path, "rb"), "rt", encoding="utf-8") as text_f:
+            # some texts contain non-Unix newlines that should not be
+            # interpreted as line breaks for the line counts in the metadata
+            # with readline()
+            with gzip.open(open(text_path, "rb"), "rt", encoding="utf-8", newline="\n") as text_f:
                 with gzip.open(open(meta_path, "rb"), "rt", encoding="utf-8") as meta_f:
                     for line in meta_f:
                         # read meta
@@ -411,7 +427,12 @@ class Oscar2109(datasets.GeneratorBasedBuilder):
                             offset += 1
                             text_f.readline()
                         # read text
-                        text = "".join([text_f.readline() for _ in range(meta["nb_sentences"])]).rstrip()
+                        text_lines = [text_f.readline() for _ in range(meta["nb_sentences"])]
+                        # all lines contain text (no blank lines or EOF)
+                        assert all(text_lines)
+                        assert "\n" not in text_lines
                         offset += meta["nb_sentences"]
+                        # only strip the trailing newline
+                        text = "".join(text_lines).rstrip("\n")
                         yield id_, {"id": id_, "text": text, "meta": meta}
                         id_ += 1
```

I've tested this with a number of smaller deduplicated languages with 1-20 parts, and the resulting datasets looked correct in terms of word count and size when compared to the data splits table and raw texts, and the text/metadata alignments were correct in all my spot checks. However, there are many many languages I didn't test, and I'm not sure that there aren't any texts containing blank lines in the corpus, for instance.
For the cases I tested, the assertions related to blank lines and EOF made it easier to verify that the text and metadata were aligned as intended, since there would be little chance of spurious alignments of variable-length texts across so much data.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3704/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3704/timeline
null
completed
null
null
false
[ "Hi @adrianeboyd, thanks for reporting.\r\n\r\nThere is indeed a bug in that community dataset:\r\nLine:\r\n```python\r\nmetadata_and_text_files = list(zip(metadata_files, text_files))\r\n``` \r\nshould be replaced with\r\n```python\r\nmetadata_and_text_files = list(zip(sorted(metadata_files), sorted(text_files)))\...
https://api.github.com/repos/huggingface/datasets/issues/2501
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2501/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2501/comments
https://api.github.com/repos/huggingface/datasets/issues/2501/events
https://github.com/huggingface/datasets/pull/2501
920,579,634
MDExOlB1bGxSZXF1ZXN0NjY5NzA3Nzc0
2,501
Add Zenodo metadata file with license
[]
closed
false
{ "closed_at": "2021-07-09T05:50:07Z", "closed_issues": 12, "created_at": "2021-05-31T16:13:06Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-07-08T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/5", "id": 6808903, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels", "node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==", "number": 5, "open_issues": 0, "state": "closed", "title": "1.9", "updated_at": "2021-07-12T14:12:00Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/5" }
0
2021-06-14T16:28:12Z
2021-06-14T16:49:42Z
2021-06-14T16:49:42Z
null
This Zenodo metadata file fixes the name of the `Datasets` license appearing in the DOI as `"Apache-2.0"`, which otherwise by default is `"other-open"`. Close #2472.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2501/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2501/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2501.diff", "html_url": "https://github.com/huggingface/datasets/pull/2501", "merged_at": "2021-06-14T16:49:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/2501.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2501" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/3886
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3886/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3886/comments
https://api.github.com/repos/huggingface/datasets/issues/3886/events
https://github.com/huggingface/datasets/pull/3886
1,165,223,319
PR_kwDODunzps40PO6W
3,886
Retry HfApi call inside push_to_hub when 504 error
[]
closed
false
null
8
2022-03-10T13:24:40Z
2022-03-16T09:00:56Z
2022-03-15T16:19:50Z
null
As suggested by @lhoestq in #3872, this PR:
- Implements a retry function
- Retries the HfApi call inside `push_to_hub` when a 504 error occurs

To be agreed:
- max_retries = 2 (at 0.5 and 1 seconds)

Fix #3872.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3886/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3886/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3886.diff", "html_url": "https://github.com/huggingface/datasets/pull/3886", "merged_at": "2022-03-15T16:19:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/3886.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3886" }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3886). All of your documentation changes will be reflected on that endpoint.", "I made it more robust by increasing the wait time, and I also added some logs when a request is retried. Let me know if it's ok for you", "At the...
https://api.github.com/repos/huggingface/datasets/issues/263
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/263/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/263/comments
https://api.github.com/repos/huggingface/datasets/issues/263/events
https://github.com/huggingface/datasets/issues/263
637,028,015
MDU6SXNzdWU2MzcwMjgwMTU=
263
[Feature request] Support for external modality for language datasets
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "c5def5", "default": fals...
closed
false
null
5
2020-06-11T13:42:18Z
2022-02-10T13:26:35Z
2022-02-10T13:26:35Z
null
# Background

In recent years many researchers have advocated that learning meanings from text-based-only datasets is just like asking a human to "learn to speak by listening to the radio" [[E. Bender and A. Koller, 2020](https://openreview.net/forum?id=GKTvAcb12b), [Y. Bisk et al., 2020](https://arxiv.org/abs/2004.10151)]. Therefore, multi-modal datasets are of paramount importance for the NLP community and for next-generation models.

For this reason, I raised a [concern](https://github.com/huggingface/nlp/pull/236#issuecomment-639832029) related to the best way to integrate external features in NLP datasets (e.g., visual features associated with an image, audio features associated with a recording, etc.). This would be of great importance for a more systematic way of representing data for ML models that learn from multi-modal data.

# Language + Vision

## Use case

Typically, people working on Language+Vision tasks have a reference dataset (either in JSON or JSONL format) and, for each example, an identifier that specifies the reference image. For a practical example, you can refer to the [GQA](https://cs.stanford.edu/people/dorarad/gqa/download.html#seconddown) dataset.

Currently, images are represented by either pooling-based features (average pooling of ResNet or VGGNet features, see [DeVries et al., 2017](https://arxiv.org/abs/1611.08481), [Shekhar et al., 2019](https://www.aclweb.org/anthology/N19-1265.pdf)), where you have a single vector for every image. Another option is to use a set of feature maps for every image extracted from a specific layer of a CNN (see [Xu et al., 2015](https://arxiv.org/abs/1502.03044)). A more recent option, especially with large-scale multi-modal transformers ([Li et al., 2019](https://arxiv.org/abs/1908.03557)), is to use FastRCNN features.

For all these types of features, people use one of the following formats:

1. [HDF5](https://pypi.org/project/h5py/)
2. [NumPy](https://numpy.org/doc/stable/reference/generated/numpy.savez.html)
3. [LMDB](https://lmdb.readthedocs.io/en/release/)

## Implementation considerations

I was thinking about possible ways of implementing this feature. As mentioned above, depending on the model, different visual features can be used. This step usually relies on another model (say ResNet-101) that is used to generate the visual features for each image used in the dataset. Typically, this step is done in a separate script that completes the feature generation procedure. The usual processing steps for these datasets are the following:

1. Download the dataset
2. Download the images associated with the dataset
3. Write a script that generates the visual features for every image and stores them in a specific file
4. Create a DataLoader that maps the visual features to the corresponding language example

In my personal projects, I've decided to ignore HDF5 because it doesn't have out-of-the-box support for multi-processing (see this PyTorch [issue](https://github.com/pytorch/pytorch/issues/11929)). I've been successfully using a compressed NumPy file for each image so that I can store any sort of information in it; see the sketch after this record's comments.

For ease of use of all these Language+Vision datasets, it would be really handy to have a way to associate the visual features with the text and store them in an efficient way. That's why I immediately thought about the HuggingFace NLP backend based on Apache Arrow. The assumption here is that the external modality will be mapped to an N-dimensional tensor, so it is easily represented by a NumPy array.
Looking forward to hearing your thoughts about it!
{ "+1": 18, "-1": 0, "confused": 0, "eyes": 4, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 23, "url": "https://api.github.com/repos/huggingface/datasets/issues/263/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/263/timeline
null
completed
null
null
false
[ "Thanks a lot, @aleSuglia for the very detailed and introductive feature request.\r\nIt seems like we could build something pretty useful here indeed.\r\n\r\nOne of the questions here is that Arrow doesn't have built-in support for generic \"tensors\" in records but there might be ways to do that in a clean way. We...
https://api.github.com/repos/huggingface/datasets/issues/1767
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1767/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1767/comments
https://api.github.com/repos/huggingface/datasets/issues/1767/events
https://github.com/huggingface/datasets/pull/1767
792,068,497
MDExOlB1bGxSZXF1ZXN0NTYwMDE2MzE2
1,767
Add Librispeech ASR
[]
closed
false
null
1
2021-01-22T14:54:37Z
2021-01-25T20:38:07Z
2021-01-25T20:37:42Z
null
This PR adds the Librispeech ASR dataset: https://www.tensorflow.org/datasets/catalog/librispeech

There are 2 configs, "clean" and "other", whereas there are two "train" datasets for "clean", hence the names "train.100" and "train.360". As suggested by @lhoestq, due to the enormous size of the dataset in `.arrow` format, the speech files are not directly prepared as float32 arrays; instead, just the path to the audio file is stored.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1767/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1767/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1767.diff", "html_url": "https://github.com/huggingface/datasets/pull/1767", "merged_at": "2021-01-25T20:37:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/1767.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1767" }
true
[ "> Awesome thank you !\r\n> \r\n> The dummy data are quite big but it was expected given that the raw files are flac files.\r\n> Given that the script doesn't even read the flac files I think we can remove them. Or maybe use empty flac files (see [here](https://hydrogenaud.io/index.php?topic=118685.0) for example)....
https://api.github.com/repos/huggingface/datasets/issues/3525
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3525/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3525/comments
https://api.github.com/repos/huggingface/datasets/issues/3525/events
https://github.com/huggingface/datasets/pull/3525
1,093,831,268
PR_kwDODunzps4wiL8p
3,525
Adding license information for Openbookcorpus
[]
closed
false
null
3
2022-01-04T23:20:36Z
2022-04-20T09:54:30Z
2022-04-20T09:48:10Z
null
Not entirely sure, following the links here, but it seems the relevant license is at https://github.com/soskek/bookcorpus/blob/master/LICENSE
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3525/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3525/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3525.diff", "html_url": "https://github.com/huggingface/datasets/pull/3525", "merged_at": "2022-04-20T09:48:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/3525.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3525" }
true
[ "The MIT license seems to be for the crawling code, no ? Then maybe we can also redirect users to the [terms of smashwords.com](https://www.smashwords.com/about/tos) regarding copyrights, in particular the paragraph 10 for end-users. In particular it seems that end users can download and use the content \"for their...
https://api.github.com/repos/huggingface/datasets/issues/2598
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2598/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2598/comments
https://api.github.com/repos/huggingface/datasets/issues/2598/events
https://github.com/huggingface/datasets/issues/2598
937,930,632
MDU6SXNzdWU5Mzc5MzA2MzI=
2,598
Unable to download omp dataset
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
1
2021-07-06T14:00:52Z
2021-07-07T12:56:35Z
2021-07-07T12:56:35Z
null
## Describe the bug

The omp dataset cannot be downloaded because of a `DuplicatedKeysError`.

## Steps to reproduce the bug

```python
from datasets import load_dataset

omp = load_dataset('omp', 'posts_labeled')
print(omp)
```

## Expected results

This code should download the omp dataset and print the dictionary.

## Actual results

```
Downloading and preparing dataset omp/posts_labeled (download: 1.27 MiB, generated: 13.31 MiB, post-processed: Unknown size, total: 14.58 MiB) to /home/erika_distefano/.cache/huggingface/datasets/omp/posts_labeled/1.1.0/2fe5b067be3bff1d4588d5b0cbb9b5b22ae1b9d5b026a8ff572cd389f862735b...
0 examples [00:00, ? examples/s]2021-07-06 09:43:55.868815: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.11.0
Traceback (most recent call last):
  File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 990, in _prepare_split
    writer.write(example, key)
  File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 338, in write
    self.check_duplicate_keys()
  File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 349, in check_duplicate_keys
    raise DuplicatedKeysError(key)
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 3326
Keys should be unique and deterministic in nature

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "hf_datasets.py", line 32, in <module>
    omp = load_dataset('omp', 'posts_labeled')
  File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/load.py", line 748, in load_dataset
    use_auth_token=use_auth_token,
  File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 575, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 652, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 992, in _prepare_split
    num_examples, num_bytes = writer.finalize()
  File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 409, in finalize
    self.check_duplicate_keys()
  File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 349, in check_duplicate_keys
    raise DuplicatedKeysError(key)
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 3326
Keys should be unique and deterministic in nature
```

## Environment info

- `datasets` version: 1.8.0
- Platform: Ubuntu 18.04.4 LTS
- Python version: 3.6.9
- PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2598/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2598/timeline
null
completed
null
null
false
[ "Hi @erikadistefano , thanks for reporting the issue.\r\n\r\nI have created a Pull Request that should fix it. \r\n\r\nOnce merged into master, feel free to update your installed `datasets` library (either by installing it from our GitHub master branch or waiting until our next release) to be able to load omp datas...
https://api.github.com/repos/huggingface/datasets/issues/800
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/800/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/800/comments
https://api.github.com/repos/huggingface/datasets/issues/800/events
https://github.com/huggingface/datasets/pull/800
735,772,775
MDExOlB1bGxSZXF1ZXN0NTE1MTAyMjc3
800
Update loading_metrics.rst
[]
closed
false
null
0
2020-11-04T02:57:11Z
2020-11-11T15:28:32Z
2020-11-11T15:28:32Z
null
Minor bug
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/800/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/800/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/800.diff", "html_url": "https://github.com/huggingface/datasets/pull/800", "merged_at": "2020-11-11T15:28:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/800.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/800" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1503
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1503/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1503/comments
https://api.github.com/repos/huggingface/datasets/issues/1503/events
https://github.com/huggingface/datasets/pull/1503
763,667,489
MDExOlB1bGxSZXF1ZXN0NTM4MDUxNDM2
1,503
Adding COVID QA dataset in Chinese and English from UC SanDiego
[]
closed
false
null
1
2020-12-12T12:02:48Z
2021-02-16T05:29:18Z
2020-12-17T15:29:26Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1503/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1503/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1503.diff", "html_url": "https://github.com/huggingface/datasets/pull/1503", "merged_at": "2020-12-17T15:29:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/1503.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1503" }
true
[ "Changed the pre-processing based on the comments raised in [PR-1482](https://github.com/huggingface/datasets/pull/1482).The below command is passing in my local environment:\r\n\r\n`python datasets-cli test datasets/covid_qa_ucsd/ --save_infos --all_configs --data_dir ~/Downloads/Medical-Dialogue-Dataset/CovidDail...
https://api.github.com/repos/huggingface/datasets/issues/5892
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5892/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5892/comments
https://api.github.com/repos/huggingface/datasets/issues/5892/events
https://github.com/huggingface/datasets/issues/5892
1,722,503,824
I_kwDODunzps5mq1KQ
5,892
User access requests with manual review do not notify the dataset owner
[]
closed
false
null
2
2023-05-23T17:27:46Z
2023-07-21T13:55:37Z
2023-07-21T13:55:36Z
null
### Describe the bug When user access requests are enabled and new requests are set to Manual Review, the dataset owner should be notified of pending requests. However, currently nothing happens, so a dataset request can go unanswered for quite some time until the owner happens to check that particular dataset's Settings pane. ### Steps to reproduce the bug 1. Enable a dataset's user access requests 2. Set to Manual Review 3. Ask another HF user to request access to the dataset 4. Dataset owner is not notified ### Expected behavior The dataset owner should receive some kind of notification, perhaps in their HF site inbox or by email, when a dataset access request is made and manual review is enabled. ### Environment info n/a
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5892/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5892/timeline
null
completed
null
null
false
[ "cc @SBrandeis", "I think this has been addressed.\r\n\r\nPlease open a new issue if you are still not getting notified." ]
https://api.github.com/repos/huggingface/datasets/issues/2163
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2163/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2163/comments
https://api.github.com/repos/huggingface/datasets/issues/2163/events
https://github.com/huggingface/datasets/pull/2163
849,669,366
MDExOlB1bGxSZXF1ZXN0NjA4Mzk0NDMz
2,163
Concat only unique fields in DatasetInfo.from_merge
[]
closed
false
null
3
2021-04-03T14:31:30Z
2021-04-06T14:40:00Z
2021-04-06T14:39:59Z
null
I thought someone from the community with less experience would be interested in fixing this issue, but that wasn't the case. Fixes #2103
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2163/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2163/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2163.diff", "html_url": "https://github.com/huggingface/datasets/pull/2163", "merged_at": "2021-04-06T14:39:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/2163.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2163" }
true
[ "Hi @mariosasko,\r\nJust came across this PR and I was wondering if we can use\r\n`description = \"\\n\\n\".join(OrderedDict.fromkeys([info.description for info in dataset_infos]))`\r\n\r\nThis will obviate the need for `unique` and is almost as fast as `set`. We could have used `dict` inplace of `OrderedDict` but ...
https://api.github.com/repos/huggingface/datasets/issues/1805
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1805/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1805/comments
https://api.github.com/repos/huggingface/datasets/issues/1805/events
https://github.com/huggingface/datasets/issues/1805
798,498,053
MDU6SXNzdWU3OTg0OTgwNTM=
1,805
can't pickle SwigPyObject objects when calling dataset.get_nearest_examples from FAISS index
[]
closed
false
null
2
2021-02-01T16:14:17Z
2021-03-06T14:32:46Z
2021-03-06T14:32:46Z
null
So, I have the following instances in my dataset ``` {'question': 'An astronomer observes that a planet rotates faster after a meteorite impact. Which is the most likely effect of this increase in rotation?', 'answer': 'C', 'example_id': 'ARCCH_Mercury_7175875', 'options':[{'option_context': 'One effect of increased amperage in the planetary world (..)', 'option_id': 'A', 'option_text': 'Planetary density will decrease.'}, (...)]} ``` The `options` value is always a list with 4 options; each one is a dict with `option_context`, `option_id` and `option_text`. I would like to overwrite the `option_context` of each instance of my dataset with a DPR result that I am developing. I have already trained a model and saved it in a FAISS index ``` dpr_dataset = load_dataset( "text", data_files=ARC_CORPUS_TEXT, cache_dir=CACHE_DIR, split="train[:100%]", ) dpr_dataset.load_faiss_index("embeddings", f"{ARC_CORPUS_FAISS}") torch.set_grad_enabled(False) ``` Then, as a processing step for my dataset, I created a map function that calls `dpr_dataset` for each _option_ ``` def generate_context(example): question_text = example['question'] for option in example['options']: question_with_option = question_text + " " + option['option_text'] tokenize_text = question_tokenizer(question_with_option, return_tensors="pt").to(device) question_embed = ( question_encoder(**tokenize_text) )[0][0].cpu().numpy() _, retrieved_examples = dpr_dataset.get_nearest_examples( "embeddings", question_embed, k=10 ) # option["option_context"] = retrieved_examples["text"] # option["option_context"] = " ".join(option["option_context"]).strip() # result_dict = { # 'example_id': example['example_id'], # 'answer': example['answer'], # 'question': question_text, # 'options': example['options'] # } return example ``` I intentionally commented out this portion of the code. 
But when I call the `map` method, `ds_with_context = dataset.map(generate_context,load_from_cache_file=False)` It calls the following error: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-55-75a458ce205c> in <module> ----> 1 ds_with_context = dataset.map(generate_context,load_from_cache_file=False) ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc) 301 num_proc=num_proc, 302 ) --> 303 for k, dataset in self.items() 304 } 305 ) ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0) 301 num_proc=num_proc, 302 ) --> 303 for k, dataset in self.items() 304 } 305 ) ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint) 1257 fn_kwargs=fn_kwargs, 1258 new_fingerprint=new_fingerprint, -> 1259 update_data=update_data, 1260 ) 1261 else: ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 155 } 156 # apply actual function --> 157 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 158 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 159 # re-apply format to the output ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 156 kwargs_for_fingerprint["fingerprint_name"] = fingerprint_name 157 kwargs[fingerprint_name] = update_fingerprint( --> 158 self._fingerprint, transform, kwargs_for_fingerprint 159 ) 160 ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args) 103 for key in sorted(transform_args): 104 hasher.update(key) --> 105 hasher.update(transform_args[key]) 106 return hasher.hexdigest() 107 ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/fingerprint.py in update(self, value) 55 def update(self, value): 56 self.m.update(f"=={type(value)}==".encode("utf8")) ---> 57 self.m.update(self.hash(value).encode("utf-8")) 58 59 def hexdigest(self): ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/fingerprint.py in hash(cls, value) 51 return cls.dispatch[type(value)](cls, value) 52 else: ---> 53 return cls.hash_default(value) 54 55 def update(self, value): ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/fingerprint.py in hash_default(cls, value) 44 @classmethod 45 def hash_default(cls, value): ---> 46 return cls.hash_bytes(dumps(value)) 47 48 @classmethod ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py in dumps(obj) 387 file = StringIO() 388 with _no_cache_fields(obj): --> 389 dump(obj, file) 390 return file.getvalue() 391 
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py in dump(obj, file) 359 def dump(obj, file): 360 """pickle an object to a file""" --> 361 Pickler(file, recurse=True).dump(obj) 362 return 363 ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/dill/_dill.py in dump(self, obj) 452 raise PicklingError(msg) 453 else: --> 454 StockPickler.dump(self, obj) 455 stack.clear() # clear record of 'recursion-sensitive' pickled objects 456 return /usr/lib/python3.7/pickle.py in dump(self, obj) 435 if self.proto >= 4: 436 self.framer.start_framing() --> 437 self.save(obj) 438 self.write(STOP) 439 self.framer.end_framing() /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py in save_function(pickler, obj) 554 dill._dill._create_function, 555 (obj.__code__, globs, obj.__name__, obj.__defaults__, obj.__closure__, obj.__dict__, fkwdefaults), --> 556 obj=obj, 557 ) 558 else: /usr/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj) 636 else: 637 save(func) --> 638 save(args) 639 write(REDUCE) 640 /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 /usr/lib/python3.7/pickle.py in save_tuple(self, obj) 784 write(MARK) 785 for element in obj: --> 786 save(element) 787 788 if id(obj) in memo: /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj) 939 # we only care about session the first pass thru 940 pickler._session = False --> 941 StockPickler.save_dict(pickler, obj) 942 log.info("# D2") 943 return /usr/lib/python3.7/pickle.py in save_dict(self, obj) 854 855 self.memoize(obj) --> 856 self._batch_setitems(obj.items()) 857 858 dispatch[dict] = save_dict /usr/lib/python3.7/pickle.py in _batch_setitems(self, items) 880 for k, v in tmp: 881 save(k) --> 882 save(v) 883 write(SETITEMS) 884 elif n: /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 547 548 # Save the reduce() output and finally memoize the object --> 549 self.save_reduce(obj=obj, *rv) 550 551 def persistent_id(self, obj): /usr/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj) 660 661 if state is not None: --> 662 save(state) 663 write(BUILD) 664 /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj) 939 # we only care about session the first pass thru 940 pickler._session = False --> 941 StockPickler.save_dict(pickler, obj) 942 log.info("# D2") 943 return /usr/lib/python3.7/pickle.py in save_dict(self, obj) 854 855 self.memoize(obj) --> 856 self._batch_setitems(obj.items()) 857 858 dispatch[dict] = save_dict /usr/lib/python3.7/pickle.py in 
_batch_setitems(self, items) 880 for k, v in tmp: 881 save(k) --> 882 save(v) 883 write(SETITEMS) 884 elif n: /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj) 939 # we only care about session the first pass thru 940 pickler._session = False --> 941 StockPickler.save_dict(pickler, obj) 942 log.info("# D2") 943 return /usr/lib/python3.7/pickle.py in save_dict(self, obj) 854 855 self.memoize(obj) --> 856 self._batch_setitems(obj.items()) 857 858 dispatch[dict] = save_dict /usr/lib/python3.7/pickle.py in _batch_setitems(self, items) 885 k, v = tmp[0] 886 save(k) --> 887 save(v) 888 write(SETITEM) 889 # else tmp is empty, and we're done /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 547 548 # Save the reduce() output and finally memoize the object --> 549 self.save_reduce(obj=obj, *rv) 550 551 def persistent_id(self, obj): /usr/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj) 660 661 if state is not None: --> 662 save(state) 663 write(BUILD) 664 /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj) 939 # we only care about session the first pass thru 940 pickler._session = False --> 941 StockPickler.save_dict(pickler, obj) 942 log.info("# D2") 943 return /usr/lib/python3.7/pickle.py in save_dict(self, obj) 854 855 self.memoize(obj) --> 856 self._batch_setitems(obj.items()) 857 858 dispatch[dict] = save_dict /usr/lib/python3.7/pickle.py in _batch_setitems(self, items) 880 for k, v in tmp: 881 save(k) --> 882 save(v) 883 write(SETITEMS) 884 elif n: /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 547 548 # Save the reduce() output and finally memoize the object --> 549 self.save_reduce(obj=obj, *rv) 550 551 def persistent_id(self, obj): /usr/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj) 660 661 if state is not None: --> 662 save(state) 663 write(BUILD) 664 /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj) 939 # we only care about session the first pass thru 940 pickler._session = False --> 941 StockPickler.save_dict(pickler, obj) 942 log.info("# D2") 943 return /usr/lib/python3.7/pickle.py in save_dict(self, obj) 854 855 self.memoize(obj) --> 856 self._batch_setitems(obj.items()) 857 858 dispatch[dict] = save_dict /usr/lib/python3.7/pickle.py in _batch_setitems(self, items) 885 k, v = tmp[0] 886 save(k) --> 887 save(v) 888 write(SETITEM) 889 # else tmp is empty, and we're done /usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 522 reduce = getattr(obj, "__reduce_ex__", None) 523 if reduce is not None: --> 524 rv = reduce(self.proto) 525 else: 526 reduce = getattr(obj, "__reduce__", None) TypeError: can't pickle SwigPyObject objects ``` Which I have 
no idea how to solve or deal with.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1805/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1805/timeline
null
completed
null
null
false
[ "Hi ! Indeed we used to require mapping functions to be picklable with `pickle` or `dill` in order to cache the resulting datasets. And FAISS indexes are not picklable unfortunately.\r\n\r\nBut since #1703 this is no longer required (the caching will simply be disabled). This change will be available in the next re...
https://api.github.com/repos/huggingface/datasets/issues/5799
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5799/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5799/comments
https://api.github.com/repos/huggingface/datasets/issues/5799/events
https://github.com/huggingface/datasets/issues/5799
1,686,334,572
I_kwDODunzps5kg2xs
5,799
Files downloaded to cache do not respect umask
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
0
2023-04-27T08:06:05Z
2023-04-27T09:30:17Z
2023-04-27T09:30:17Z
null
As reported by @stas00, files downloaded to the cache do not respect umask: ```bash $ ls -l /path/to/cache/datasets/downloads/ -rw------- 1 username username 150M Apr 25 16:41 5e646c1d600f065adaeb134e536f6f2f296a6d804bd1f0e1fdcd20ee28c185c6 ``` Related to: - #2065
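For context, a minimal sketch of the usual fix: read the process umask without changing it, then chmod the cached file accordingly (the path below is illustrative): ```python import os # Sketch of the umask fix: os.umask(mask) returns the previous mask, so set # a throwaway value, restore the original, then apply it on top of 0o666. path = "/path/to/cache/datasets/downloads/5e646c1d600f065adaeb134e536f6f2f" umask = os.umask(0o022) # returns the current umask os.umask(umask) # restore it immediately os.chmod(path, 0o666 & ~umask) ```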
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5799/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5799/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/4817
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4817/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4817/comments
https://api.github.com/repos/huggingface/datasets/issues/4817/events
https://github.com/huggingface/datasets/issues/4817
1,334,572,163
I_kwDODunzps5Pi_SD
4,817
Outdated Link for mkqa Dataset
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
1
2022-08-10T12:45:45Z
2022-08-11T09:37:52Z
2022-08-11T09:37:52Z
null
## Describe the bug The URL used to download the mkqa dataset is outdated. It seems the URL to download the dataset is currently https://github.com/apple/ml-mkqa/blob/main/dataset/mkqa.jsonl.gz instead of https://github.com/apple/ml-mkqa/raw/master/dataset/mkqa.jsonl.gz (master branch has been renamed to main). ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("mkqa") ``` ## Expected results downloads the dataset ## Actual results ```python Downloading builder script: 4.79k/? [00:00<00:00, 201kB/s] Downloading metadata: 13.2k/? [00:00<00:00, 504kB/s] Downloading and preparing dataset mkqa/mkqa (download: 11.35 MiB, generated: 34.29 MiB, post-processed: Unknown size, total: 45.65 MiB) to /home/lhr/.cache/huggingface/datasets/mkqa/mkqa/1.0.0/5401489c674c81257cf563417aaaa5de2c7e26a1090ce9b10eb0404f10003d4d... Downloading data files: 0% 0/1 [00:00<?, ?it/s] --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) Input In [3], in <cell line: 3>() 1 from datasets import load_dataset ----> 3 dataset = load_dataset("mkqa") File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/load.py:1746, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1743 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1745 # Download and prepare data -> 1746 builder_instance.download_and_prepare( 1747 download_config=download_config, 1748 download_mode=download_mode, 1749 ignore_verifications=ignore_verifications, 1750 try_from_hf_gcs=try_from_hf_gcs, 1751 use_auth_token=use_auth_token, 1752 ) 1754 # Build dataset for splits 1755 keep_in_memory = ( 1756 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1757 ) File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 702 logger.warning("HF google storage unreachable. 
Downloading and preparing it from source") 703 if not downloaded_from_gcs: --> 704 self._download_and_prepare( 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 706 ) 707 # Sync info 708 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/builder.py:1227, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos) 1226 def _download_and_prepare(self, dl_manager, verify_infos): -> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/builder.py:771, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 769 split_dict = SplitDict(dataset_name=self.name) 770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 773 # Checksums verification 774 if verify_infos and dl_manager.record_checksums: File ~/.cache/huggingface/modules/datasets_modules/datasets/mkqa/5401489c674c81257cf563417aaaa5de2c7e26a1090ce9b10eb0404f10003d4d/mkqa.py:130, in Mkqa._split_generators(self, dl_manager) 128 # download and extract URLs 129 urls_to_download = _URLS --> 130 downloaded_files = dl_manager.download_and_extract(urls_to_download) 132 return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]})] File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/download/download_manager.py:431, in DownloadManager.download_and_extract(self, url_or_urls) 415 def download_and_extract(self, url_or_urls): 416 """Download and extract given url_or_urls. 417 418 Is roughly equivalent to: (...) 429 extracted_path(s): `str`, extracted paths of given URL(s). 
430 """ --> 431 return self.extract(self.download(url_or_urls)) File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/download/download_manager.py:309, in DownloadManager.download(self, url_or_urls) 306 download_func = partial(self._download, download_config=download_config) 308 start_time = datetime.now() --> 309 downloaded_path_or_paths = map_nested( 310 download_func, 311 url_or_urls, 312 map_tuple=True, 313 num_proc=download_config.num_proc, 314 disable_tqdm=not is_progress_bar_enabled(), 315 desc="Downloading data files", 316 ) 317 duration = datetime.now() - start_time 318 logger.info(f"Downloading took {duration.total_seconds() // 60} min") File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py:393, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc) 391 num_proc = 1 392 if num_proc <= 1 or len(iterable) <= num_proc: --> 393 mapped = [ 394 _single_map_nested((function, obj, types, None, True, None)) 395 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc) 396 ] 397 else: 398 split_kwds = [] # We organize the splits ourselve (contiguous splits) File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py:394, in <listcomp>(.0) 391 num_proc = 1 392 if num_proc <= 1 or len(iterable) <= num_proc: 393 mapped = [ --> 394 _single_map_nested((function, obj, types, None, True, None)) 395 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc) 396 ] 397 else: 398 split_kwds = [] # We organize the splits ourselve (contiguous splits) File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py:330, in _single_map_nested(args) 328 # Singleton first to spare some computation 329 if not isinstance(data_struct, dict) and not isinstance(data_struct, types): --> 330 return function(data_struct) 332 # Reduce logging to keep things readable in multiprocessing with tqdm 333 if rank is not None and logging.get_verbosity() < logging.WARNING: File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/download/download_manager.py:335, in DownloadManager._download(self, url_or_filename, download_config) 332 if is_relative_path(url_or_filename): 333 # append the relative path to the base_path 334 url_or_filename = url_or_path_join(self._base_path, url_or_filename) --> 335 return cached_path(url_or_filename, download_config=download_config) File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py:185, in cached_path(url_or_filename, download_config, **download_kwargs) 181 url_or_filename = str(url_or_filename) 183 if is_remote_url(url_or_filename): 184 # URL, so get it from the cache (downloading if necessary) --> 185 output_path = get_from_cache( 186 url_or_filename, 187 cache_dir=cache_dir, 188 force_download=download_config.force_download, 189 proxies=download_config.proxies, 190 resume_download=download_config.resume_download, 191 user_agent=download_config.user_agent, 192 local_files_only=download_config.local_files_only, 193 use_etag=download_config.use_etag, 194 max_retries=download_config.max_retries, 195 use_auth_token=download_config.use_auth_token, 196 ignore_url_params=download_config.ignore_url_params, 197 download_desc=download_config.download_desc, 198 ) 199 elif os.path.exists(url_or_filename): 200 # File, and it exists. 
201 output_path = url_or_filename File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py:530, in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc) 525 raise FileNotFoundError( 526 f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been" 527 " disabled. To enable file online look-ups, set 'local_files_only' to False." 528 ) 529 elif response is not None and response.status_code == 404: --> 530 raise FileNotFoundError(f"Couldn't find file at {url}") 531 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") 532 if head_error is not None: FileNotFoundError: Couldn't find file at https://github.com/apple/ml-mkqa/raw/master/dataset/mkqa.jsonl.gz ``` ## Environment info - `datasets` version: 2.4.0 - Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 9.0.0 - Pandas version: 1.4.2
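Until the loading script is patched, a hedged workaround is to bypass it and load the file from the renamed `main` branch directly with the generic `json` builder (assuming the file keeps the same JSON Lines layout; the `.gz` is decompressed transparently): ```python from datasets import load_dataset # Workaround sketch: feed the fixed URL straight into the json builder. dataset = load_dataset( "json", data_files="https://github.com/apple/ml-mkqa/raw/main/dataset/mkqa.jsonl.gz", ) ```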
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4817/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4817/timeline
null
completed
null
null
false
[ "Thanks for reporting @liaeh, we are investigating this. " ]
https://api.github.com/repos/huggingface/datasets/issues/900
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/900/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/900/comments
https://api.github.com/repos/huggingface/datasets/issues/900/events
https://github.com/huggingface/datasets/issues/900
752,214,066
MDU6SXNzdWU3NTIyMTQwNjY=
900
datasets.load_dataset() custom caching directory bug
[]
closed
false
null
1
2020-11-27T12:18:53Z
2020-11-29T22:48:53Z
2020-11-29T22:48:53Z
null
Hello, I'm having an issue with loading a dataset with a custom `cache_dir`. Despite specifying the output dir, it is still downloaded to `~/.cache`. ## Environment info - `datasets` version: 1.1.3 - Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1 - Python version: 3.7.3 ## The code I'm running: ```python import datasets from pathlib import Path validation_dataset = datasets.load_dataset("natural_questions", split="validation[:5%]", cache_dir=Path("./data")) ``` ## The output: * The dataset is downloaded to my home directory's `.cache` * A new empty directory named `natural_questions` is created in the specified directory `./data` * `tree data` in the shell outputs: ``` data └── natural_questions └── default └── 0.0.2 3 directories, 0 files ``` The console output: ``` Downloading: 8.61kB [00:00, 5.11MB/s] Downloading: 13.6kB [00:00, 7.89MB/s] Using custom data configuration default Downloading and preparing dataset natural_questions/default (download: 41.97 GiB, generated: 92.95 GiB, post-processed: Unknown size, total: 134.92 GiB) to ./data/natural_questions/default/0.0.2/867dbbaf9137c1b83ecb19f5eb80559e1002ea26e702c6b919cfa81a17a8c531... Downloading: 100%|██████████████████████████████████████████████████| 13.6k/13.6k [00:00<00:00, 1.51MB/s] Downloading: 7%|███▎ | 6.70G/97.4G [03:46<1:37:05, 15.6MB/s] ``` ## Expected behaviour: The dataset "Natural Questions" should be downloaded to the directory "./data"
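Until the bug is fixed, a workaround sketch is to redirect the whole cache through the `HF_DATASETS_CACHE` environment variable, which must be set before `datasets` is imported: ```python import os # Workaround sketch: point the entire datasets cache at ./data; this has to # happen before the datasets import reads its configuration. os.environ["HF_DATASETS_CACHE"] = "./data" import datasets validation_dataset = datasets.load_dataset( "natural_questions", split="validation[:5%]" ) ```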
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/900/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/900/timeline
null
completed
null
null
false
[ "Thanks for reporting ! I'm looking into it." ]
https://api.github.com/repos/huggingface/datasets/issues/5326
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5326/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5326/comments
https://api.github.com/repos/huggingface/datasets/issues/5326/events
https://github.com/huggingface/datasets/issues/5326
1,471,634,168
I_kwDODunzps5Xt1r4
5,326
No documentation for main branch is built
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
0
2022-12-01T16:50:58Z
2022-12-02T16:26:01Z
2022-12-02T16:26:01Z
null
Since: - #5250 - Commit: 703b84311f4ead83c7f79639f2dfa739295f0be6 the docs for main branch are no longer built. The change introduced only triggers the docs building for releases.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5326/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5326/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/37
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/37/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/37/comments
https://api.github.com/repos/huggingface/datasets/issues/37/events
https://github.com/huggingface/datasets/pull/37
611,670,295
MDExOlB1bGxSZXF1ZXN0NDEyNzg5MjQ4
37
[Datasets ToDo-List] add datasets
[]
closed
false
null
8
2020-05-04T07:47:39Z
2022-10-04T09:32:17Z
2020-05-08T13:48:23Z
null
## Description This PR acts as a dashboard to see which datasets are added to the library and work. Circle-CI should always be green so that we can be sure that newly added datasets are functional. This PR should not be merged. ## Progress **For the following datasets the test commands**: ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_<your-dataset-name> ``` and ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_dataset_all_configs_<your-dataset-name> ``` **pass**. - [x] Squad - [x] Sentiment140 - [x] XNLI - [x] Crime_and_Punish - [x] movie_rationales - [x] ai2_arc - [x] anli - [x] event2Mind - [x] Fquad - [x] blimp - [x] empathetic_dialogues - [x] cosmos_qa - [x] xquad - [x] blog_authorship_corpus - [x] SNLI - [x] break_data - [x] SQuAD v2 - [x] cfq - [x] eraser_multi_rc - [x] Glue - [x] Tydiqa - [x] wiki_qa - [x] wikitext - [x] winogrande - [x] wiqa - [x] esnli - [x] civil_comments - [x] commonsense_qa - [x] com_qa - [x] coqa - [x] wiki_split - [x] cos_e - [x] xcopa - [x] quarel - [x] quartz - [x] squad_it - [x] quoref - [x] squad_pt - [x] cornell_movie_dialog - [x] SciQ - [x] Scifact - [x] hellaswag - [x] ted_multi (in translate) - [x] Aeslc (summarization) - [x] drop - [x] gap - [x] hansard - [x] opinosis - [x] MLQA - [x] math_dataset ## How-To-Add a dataset **Before adding a dataset make sure that your branch is up to date**: 1. `git checkout add_datasets` 2. `git pull` **Add a dataset via the `convert_dataset.sh` bash script:** Running `bash convert_dataset.sh <file/to/tfds/datascript.py>` (*e.g.* `bash convert_dataset.sh ../tensorflow-datasets/tensorflow_datasets/text/movie_rationales.py`) will automatically run all the steps mentioned in **Add a dataset manually** below. Make sure that you run `convert_dataset.sh` from the root folder of `nlp`. The conversion script should almost always work for step 1): "convert dataset script from tfds to nlp format" and 2) "create checksum file" and step 3) "make style". It can also sometimes automatically run step 4) "create the correct dummy data from tfds", but this will only work if a) there is either no config name or only one config name and b) the `tfds testing/test_data/fake_example` is in the correct form. Nevertheless, the script should always be run in the beginning until an error occurs to be more efficient. If the conversion script does not work or fails at some step, then you can run the steps manually as follows: **Add a dataset manually** Make sure you run all of the following commands from the root of your `nlp` git clone. Also make sure that you changed to this branch: ``` git checkout add_datasets ``` 1) the tfds dataset script file should be converted to `nlp` style: ``` python nlp-cli convert --tfds_path <path/to/tensorflow_datasets/text/your_dataset_name>.py --nlp_directory datasets/nlp ``` This will convert the tfds script and create a folder with the correct name. 2) the checksum file should be added. Use the command: ``` python nlp-cli test datasets/nlp/<your-dataset-folder> --save_checksums --all_configs ``` A checksums.txt file should be created in your folder and the structure should look as follows: squad/ ├── squad.py └── urls_checksums/ └── checksums.txt Delete the created `*.lock` file afterward - it should not be uploaded to AWS. 3) run black and isort on your newly added dataset script files so that they look nice: ``` make style ``` 4) the dummy data should be added. 
For this it might be useful to take a look at the structure of other examples, as shown in this PR and at `<path/to/tensorflow_datasets/testing/test_data/test_data/fake_examples>`, to see whether the same data can be used. 5) the data can be uploaded to AWS using the command ``` aws s3 cp datasets/nlp/<your-dataset-folder> s3://datasets.huggingface.co/nlp/<your-dataset-folder> --recursive ``` 6) check whether everything works as expected using: ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_<your-dataset-name> ``` and ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_dataset_all_configs_<your-dataset-name> ``` 7) push to this PR and rerun the Circle-CI workflow to check whether Circle-CI stays green. 8) Edit this comment and tick off your newly added dataset :-) ## TODO-list Maybe we can add a TODO-list here for everybody who feels like adding new datasets, so that we do not add the same datasets twice. Here is a link to available datasets: https://docs.google.com/spreadsheets/d/1zOtEqOrnVQwdgkC4nJrTY6d-Av02u0XFzeKAtBM2fUI/edit#gid=0 Patrick: - [ ] boolq - *weird download link* - [ ] c4 - *beam dataset*
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/37/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/37/timeline
null
null
true
{ "diff_url": "https://github.com/huggingface/datasets/pull/37.diff", "html_url": "https://github.com/huggingface/datasets/pull/37", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/37.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/37" }
true
[ "Note:\r\n```\r\nnlp-cli test datasets/nlp/<your-dataset-folder> --save_checksums --all_configs\r\n```\r\ndirectly saves the checksums in the right place, and runs for all the dataset configurations.", "@patrickvonplaten can you provide the add the link to the PR for the dummy data? ", "https://github.com/huggi...
https://api.github.com/repos/huggingface/datasets/issues/821
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/821/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/821/comments
https://api.github.com/repos/huggingface/datasets/issues/821/events
https://github.com/huggingface/datasets/issues/821
739,506,859
MDU6SXNzdWU3Mzk1MDY4NTk=
821
`kor_nli` dataset isn't being loaded properly
[]
closed
false
null
0
2020-11-10T02:04:12Z
2020-11-16T13:59:12Z
2020-11-16T13:59:12Z
null
There are two issues from `kor_nli` dataset 1. csv.DictReader failed to split features by tab - Should not exist `None` value in label feature, but there it is. ```python kor_nli_train['train'].unique('gold_label') # ['neutral', 'entailment', 'contradiction', None] ``` - I found a reason why there is `None` values in label feature as following code ```python from datasets import load_dataset kor_nli_train = load_dataset('kor_nli', 'multi_nli') for idx, example in enumerate(kor_nli_train['train']): if example['gold_label'] is None: print(idx, example) break # 16835 {'gold_label': None, 'sentence1': '그는 전쟁 전에 가벼운 벅스킨 암말을 가지고 달리기 위해 우유처럼 하얀 스터드를 넣었다.\t전쟁 전에 다인종 여성들과 함께 있는 백인 남자가 있었다.\tentailment\n슬림은 재빨리 옷을 입었고, 순간적으로 미지근한 물을 뿌릴 수 있는 아침 세탁물을 기꺼이 가두었다.\t슬림은 직장에 늦었다.\tneutral\n뉴욕에서 그 식사를 해봤는데, 거기서 소고기의 멋진 소고기 부분을 요리하고 바베큐로 만든 널빤지 같은 걸 가져왔는데, 정말 대단해.\t그들이 거기서 요리하는 쇠고기는 역겹다. 거기서 절대 먹지 마라.\tcontradiction\n판매원의 죽음에서 브라이언 데네히... 크리스 켈리\t크리스 켈리는 세일즈맨의 죽음을 언급하지 않는다.\tcontradiction\n그러는 동안 요리사는 그냥 화가 났어.\t스튜가 끓는 동안 요리사는 화가 났다.\tneutral\n마지막 로마의 맹공격 전날 밤, 900명 이상의 유대인 수비수들이 로마인들에게 그들을 사로잡는 승리를 주기 보다는 대량 자살을 저질렀다.\t로마인들이 그들의 포획에 승리하도록 내버려두기 보다는 900명의 유대인 수비수들이 자살했다.\tentailment\n앞으로 발사하라.\t발사.\tneutral\n그리고 당신은 우리 땅이 에이커에 있다는 것을 알고 있다. 우리 사람들은 어떤 것이 얼마나 많은지 이해하지 못할 것이다.\t모든 사람들은 우리의 측정 시스템이 어떻게 작동하는지 알고 이해합니다.\tcontradiction\n주미게스\tJumiyges는 도시의 이름이다.\tneutral\n사람은 자기 민족을 돌봐야 한다...\t사람은 조국에 공감해야 한다.\tentailment\n또한 PDD 63은 정부와 업계가 컴퓨터 기반 공격에 대해 경고하고 방어할 준비를 더 잘할 수 있도록 시스템 취약성, 위협, 침입 및 이상에 대한 정보를 공유하는 메커니즘을 수립하는 것이 중요하다는 것을 인식했습니다.\t정보 전송 프로토콜을 만드는 것은 중요하다.\tentailment\n카페 링 피아자 델라 레퓌블리카 바로 남쪽에는 피렌체가 알려진 짚 제품 때문에 한때 스트로 마켓이라고 불렸던 16세기 로지아인 메르카토 누오보(Mercato Nuovo)가 있다.\t피아자 델라 레퓌블리카에는 카페가 많이 있다.\tentailment\n우리가 여기 있는 한 트린판이 뭘 주웠는지 살펴봐야겠어\t우리는 트린판이 무엇을 주웠는지 보는 데 시간을 낭비하지 않을 것이다.\tcontradiction\n그러나 켈트족의 문화적 기반을 가진 아일랜드 교회는 유럽의 신흥 기독교 세계와는 다르게 발전했고 결국 로마와 중앙집권적 행정으로 대체되었다.\t아일랜드 교회에는 켈트족의 기지가 있었다.\tentailment\n글쎄, 넌 선택의 여지가 없어\t글쎄, 너에겐 많은 선택권이 있어.\tcontradiction\n사실, 공식적인 보장은 없다.\t내가 산 물건에 대한 보증이 없었다.\tneutral\n덜 활기차긴 하지만, 안시와 르 부르젯의 사랑스러운 호수에서도 삶은 똑같이 상쾌하다.\t안시와 르 부르겟에서는 호수에서의 활동이 서두르고 바쁜 분위기를 연출한다.\tcontradiction\n그의 여행 소식이 이미 퍼졌다면 공격 소식도 퍼졌을 테지만 마을에서는 전혀 공황의 기미가 보이지 않았다.\t그는 왜 마을이 당황하지 않았는지 알 수 없었다.\tneutral\n과거에는 죽음의 위협이 토지의 판매를 막는 데 거의 도움이 되지 않았다.\t토지 판매는 어떠한 위협도 교환하지 않고 이루어진다.\tcontradiction\n어느 시점에 이르러 나는 지금 다가오는 새로운 것들과 나오는 많은 새로운 것들이 내가 늙어가고 있다고 말하는 시대로 접어들고 있다.\t나는 여전히 내가 보는 모든 새로운 것을 사랑한다.\tcontradiction\n뉴스위크는 물리학자들이 경기장 행사에서 고속도로의 자동차 교통과 보행자 교통을 개선하기 위해 새떼의 움직임을 연구하고 있다고 말한다.\t고속도로의 자동차 교통 흐름을 개선하는 것은 물리학자들이 새떼를 연구하는 이유 중 하나이다.\tentailment\n얼마나 다른가? 그는 잠시 말을 멈추었다가 말을 이었다.\t그는 그 소녀가 어디에 있는지 알고 싶었다.\tentailment\n글쎄, 그에게 너무 많은 것을 주지마.\t그는 훨씬 더 많은 것을 요구할 것이다.\tneutral\n아무리 그의 창작물이 완벽해 보인다고 해도, 그들을 믿는 것은 아마도 좋은 생각이 아닐 것이다.\'\t도자기를 잘 만든다고 해서 누군가를 믿는 것은 아마 좋지 않을 것이다.\tneutral\n버스틀링 그란 비아(Bustling Gran Via)는 호텔, 상점, 극장, 나이트클럽, 카페 등이 어우러져 산책과 창가를 볼 수 있다.\tGran Via는 호텔, 상점, 극장, 나이트클럽, 카페의 번화한 조합이다.\tentailment\n정부 인쇄소\t그 사무실은 워싱턴에 위치해 있다.\tneutral\n실제 문화 전쟁이 어디 있는지 알고 싶다면 학원을 잊어버리고 실리콘 밸리와 레드몬드를 생각해 보라.\t실제 문화 전쟁은 레드몬드에서 일어난다.\tentailment\n그리고 페니실린을 주지 않기 위해 침대 위에 올려놨어\t그녀의 방에는 페니실린이 없다는 징후가 전혀 없었다.\tcontradiction\nL.A.의 야외 시장을 활보하는 것은 맛있고 저렴한 그루브를 잡고, 끝이 없는 햇빛을 즐기고, 신선한 농산물, 꽃, 향, 그리고 가젯 갈로어를 구입하면서 현지인들과 어울릴 수 있는 훌륭한 방법이다.\tLA의 야외 시장을 돌아다니는 것은 시간 낭비다.\tcontradiction\n안나는 밖으로 나와 안도의 한숨을 내쉬었다. 
단 한 번, 그리고 마리후아쉬 맛의 술로 끝내자는 결심이 뒤섞여 있었다.\t안나는 안심하고 마리후아쉬 맛의 술을 다 마시기로 결심했다.\tentailment\n5 월에 Vajpayee는 핵 실험의 성공적인 완료를 발표했는데, 인도인들은 주권의 표시로 선전했지만 이웃 국가와 서구와의 인도 관계를 복잡하게 만들 수 있습니다.\t인도는 성공적인 핵실험을 한 적이 없다.\tcontradiction\n플라노 원에서 보통 얼마나 많은 것을 가지고 있는가?\t저 사람들 중에 플라노 원에 가본 사람 있어?\tcontradiction\n그것의 전체적인 형태의 우아함은 운하 건너편에서 가장 잘 볼 수 있다. 왜냐하면, 로마에 있는 성 베드로처럼, 돔은 길쭉한 본당 뒤로 더 가까운 곳에 사라지기 때문이다.\t성 베드로의 길쭉한 본당은 돔을 가린다.\tentailment\n당신은 수틴이 살에 강박적인 기쁨을 가지고 누드를 그릴 것이라고 생각하겠지만, 아니오; 그는 그의 모든 경력에서 단 한 점만을 그렸고, 그것은 사소한 그림이다.\t그는 그것이 그를 불편하게 만들었기 때문에 하나만 그렸다.\tneutral\n이 인상적인 풍경은 원래 나포 레온이 루브르 박물관의 침실에서 볼 수 있도록 계획되었는데, 그 당시 궁전이었습니다.\t나폴레옹은 그의 모든 궁전에 있는 그의 침실에서 보는 경치에 많은 관심을 가졌다.\tneutral\n그는 우리에게 문 열쇠를 건네주고는 급히 떠났다.\t그는 긴장해서 우리에게 열쇠를 빨리 주었다.\tneutral\n위원회는 또한 최종 규칙을 OMB에 제출했다.\t위원회는 또한 이 규칙을 다른 그룹에 제출했지만 최종 규칙은 OMB가 평가하기 위한 것이 었습니다.\tneutral\n정원가게에 가보면 올리비아의 복제 화합물 같은 유쾌한 이름을 가진 제품들을 찾을 수 있을 겁니다.이 제품이 뿌리를 내리도록 돕기 위해 촬영의 절단된 끝에 덩크슛을 하는 호르몬의 혼합물이죠.\t정원 가꾸기 가게의 제품들은 종종 그들의 목적을 설명하기 위해 기술적으로나 과학적으로 파생된 이름(올리비아의 복제 화합물처럼)을 부여받는다.\tneutral\n스타는 스틸 자신이나 왜 그녀의 이야기를 바꾸었는지에 훨씬 더 관심이 있을 것이다.\t스틸의 이야기는 조금도 변하지 않았다.\tcontradiction\n남편과의 마지막 대결로 맥티어는 노라의 변신을 너무나 능숙하게 예고해 왔기 때문에, 그녀에게는 당황스러울 정도로 갑작스러운 것처럼 보이지만, 우리에게는 감정적으로 불가피해 보인다.\t노라의 변신은 분명하고 필연적이었다.\tcontradiction\n이집트 최남단 도시인 아스완은 오랜 역사를 통해 중요한 역할을 해왔다.\t아스완은 이집트 국경 바로 위에 위치해 있습니다.\tneutral\n그러나 훨씬 더 우아한 건축적 터치는 신성한 춤인 Bharatanatyam에서 수행된 108 가지 기본 포즈를 시바 패널에서 볼 수 있습니다.\t패널에 대한 시바의 묘사는 일반적인 모티브다.\tneutral\n호화롭게 심어진 계단식 정원은 이탈리아 형식의 가장 훌륭한 앙상블 중 하나입니다.\t아름다운 정원과 희귀한 꽃꽂이 모두 이탈리아의 형식적인 스타일을 보여준다.\tneutral\n음, 그랬으면 좋았을 텐데\t나는 그것을 다르게 할 기회를 몹시 갈망한다.\tentailment\n폐허가 된 성의 기슭에 자리잡고 있는 예쁜 중세 도시 케이서스버그는 노벨 평화상 수상자 알버트 슈바이처(1875년)의 출생지로 널리 알려져 있다.\t알버트 슈바이처는 둘 다 케이서스버그 마을에 있었다.\tentailment\n고감도는 문제가 있는 대부분의 환자들이 발견될 것을 보장한다.\t장비 민감도는 문제 탐지와 관련이 없습니다.\tcontradiction\n오늘은 확실히 반바지 같은 날이었어\t오늘 사무실에 있는 모든 사람들은 반바지를 입었다.\tneutral\n못생긴 턱시도를 입고.\t그것은 분홍색과 주황색입니다.\tneutral\n이주 노동 수용소 오 마이 갓 그들은 판지 상자에 산다.\t노동 수용소에는 판지 상자에 사는 이주 노동자들의 사진이 있다.\tneutral\n그래, 그가 전 세계를 여행한 후에 그런 거야\t그것은 사람들의 세계 여행을 따른다.\tentailment\n건너편에 크고 큰 참나무 몇 그루가 있다.\t우리는 여기 오크나 어떤 종류의 미국 나무도 없다.\tcontradiction\nFort-de-France에서 출발하는 자동차나 여객선으로, 당신은 안세 ? 바다 포도가 그늘을 제공하는 쾌적한 갈색 모래 해변과 피크닉 테이블, 어린이 미끄럼틀, 식당이 있는 안느에 도착할 수 있다.\t프랑스 요새에서 자동차나 페리를 타고 안세로 갈 수 있다.\tentailment\n그리고 그것은 앨라배마주가 예상했던 대로 예산에서 50만 달러를 삭감하지 않을 것이라는 것을 의미한다.\t앨라배마 주는 예산 삭감을 하지 않았다. 왜냐하면 그렇게 하는 것에 대한 초기 정당성이 정밀 조사에 맞서지 않았기 때문이다.\tneutral\n알았어 먼저 어 .. 어 .. 노인이나 가족을 요양원에 보내는 것에 대해 어떻게 생각하니?\t가족을 요양원에 보내서 사는 것에 대해 어떻게 생각하는지 알 필요가 없다.\tcontradiction\n나머지는 너에게 달렸어.\t나머지는 너에게 달렸지만 시간이 많지 않다.\tneutral\n음-흠, 3월에 햇볕에 타는 것에 대해 걱정하면 안 된다는 것을 알고 있는 3월이야.\t3월은 그렇게 덥지 않다.\tneutral\n그리고 어, 그런 작은 것들로 다시 시작해봐. 아직 훨씬 싸. 어, 그 특별한 모델 차는 150달러야.\t그 모형차는 4천 달러가 든다.\tcontradiction\n내일 돌아가야 한다면, 칼이 말했다.\t돌아갈 수 없어. 오늘은 안 돼. 내일은 안 돼. 절대 안 돼." 칼이 말했다.', 'sentence2': 'contradiction'} ``` 2. (Optional) Preferred to change the name of the features for the compatibility with `run_glue.py` in 🤗 Transformers - `kor_nli` dataset has same data structure of multi_nli, xnli - Changing the name of features and the feature type of 'gold_label' to ClassLabel might be helpful ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, features=datasets.Features( { "premise": datasets.Value("string"), "hypothesis": datasets.Value("string"), "label": datasets.features.ClassLabel(names=["entailment", "neutral", "contradiction"]), } ), ``` If you don't mind, I would like to fix this. Thanks!
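On point 1, a hedged sketch of the usual fix for tab-separated NLI files (the file name is illustrative): disable quote handling so a stray quote character inside a sentence can no longer swallow tab delimiters and merge neighbouring rows into one record: ```python import csv # Sketch of the TSV fix: with quoting disabled, csv.reader splits strictly # on tabs, so labels can no longer end up as None or in the wrong column. with open("multinli.train.ko.tsv", encoding="utf-8") as f: reader = csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE) header = next(reader) for row in reader: sentence1, sentence2, gold_label = row[:3] ```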
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/821/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/821/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/3499
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3499/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3499/comments
https://api.github.com/repos/huggingface/datasets/issues/3499/events
https://github.com/huggingface/datasets/issues/3499
1,090,132,618
I_kwDODunzps5A-hqK
3,499
Adjusting chunk size for streaming datasets
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
2
2021-12-28T21:17:53Z
2022-05-06T16:29:05Z
2022-05-06T16:29:05Z
null
**Is your feature request related to a problem? Please describe.** I want to use mc4, which I cannot save locally, so I stream it. However, I want to process the entire dataset and filter some documents from it. With the current chunk size of around 1000 documents (right?) I hit a performance bottleneck because of the frequent decompression. **Describe the solution you'd like** I would appreciate a parameter in the load_dataset function that allows me to set the chunk size myself (to a value like 100,000 in my case). That way, I hope to improve the processing time.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3499/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3499/timeline
null
completed
null
null
false
[ "Hi ! Data streaming uses `fsspec` to read the data files progressively. IIRC the block size for buffering is 5MiB by default. So every time you finish iterating over a block, it downloads the next one. You can still try to increase the `fsspec` block size for buffering if it can help. To do so you just need to inc...
https://api.github.com/repos/huggingface/datasets/issues/4943
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4943/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4943/comments
https://api.github.com/repos/huggingface/datasets/issues/4943/events
https://github.com/huggingface/datasets/pull/4943
1,363,967,650
PR_kwDODunzps4-eZd_
4,943
Add splits to MBPP dataset
[]
closed
false
null
4
2022-09-07T01:18:31Z
2022-09-13T12:29:19Z
2022-09-13T12:27:21Z
null
This PR addresses https://github.com/huggingface/datasets/issues/4795
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4943/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4943/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4943.diff", "html_url": "https://github.com/huggingface/datasets/pull/4943", "merged_at": "2022-09-13T12:27:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/4943.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4943" }
true
[ "```\r\n(env) cwarny@Cedrics-Air datasets % RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_mbpp\r\n================================================================================================ test session starts ==========================================================...
https://api.github.com/repos/huggingface/datasets/issues/4430
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4430/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4430/comments
https://api.github.com/repos/huggingface/datasets/issues/4430/events
https://github.com/huggingface/datasets/issues/4430
1,254,412,591
I_kwDODunzps5KxNEv
4,430
Add ability to load newer, cleaner version of Multi-News
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
6
2022-05-31T21:00:44Z
2022-06-07T17:14:44Z
2022-06-07T17:14:44Z
null
**Is your feature request related to a problem? Please describe.** The [Multi-News dataloader points to the original version of the Multi-News dataset](https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/datasets/multi_news/multi_news.py#L47), but this has [known errors in it](https://github.com/Alex-Fabbri/Multi-News/issues/11). There exists a [newer version which fixes some of these issues](https://drive.google.com/open?id=1jwBzXBVv8sfnFrlzPnSUBHEEAbpIUnFq). Unfortunately I don't think you can just replace this old URL with the new one, as this could lead to issues with reproducibility. **Describe the solution you'd like** Add a new version to the Multi-News dataloader that points to the updated dataset which has fixes for some known issues. **Describe alternatives you've considered** Replace the current URL to the original version of the dataset with the URL to the version with fixes. **Additional context** Would be happy to make a PR for this. Could someone maybe point me to another dataloader that has multiple versions, so I can see how this is handled in `datasets`?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4430/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4430/timeline
null
completed
null
null
false
[ "Hi! Our versioning is based on Git revisions (the `revision` param in `load_dataset`), so you can just replace the old URL with the new one and open a PR :). I can also give you some pointers if needed.", "@mariosasko Awesome thanks! I will do that. Looks like this new version of the data is not available as a z...
https://api.github.com/repos/huggingface/datasets/issues/2890
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2890/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2890/comments
https://api.github.com/repos/huggingface/datasets/issues/2890/events
https://github.com/huggingface/datasets/issues/2890
993,074,102
MDU6SXNzdWU5OTMwNzQxMDI=
2,890
0x290B112ED1280537B24Ee6C268a004994a16e6CE
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
0
2021-09-10T09:51:17Z
2021-09-10T11:45:29Z
2021-09-10T11:45:29Z
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2890/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2890/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/1022
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1022/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1022/comments
https://api.github.com/repos/huggingface/datasets/issues/1022/events
https://github.com/huggingface/datasets/pull/1022
755,651,377
MDExOlB1bGxSZXF1ZXN0NTMxMzIzNTkw
1,022
add MRQA
[]
closed
false
null
1
2020-12-02T22:17:56Z
2020-12-04T00:34:26Z
2020-12-04T00:34:25Z
null
MRQA (shared task 2019): out-of-distribution generalization, framed as extractive question answering. The dataset is the concatenation (of subsets) of existing QA datasets, processed to match the SQuAD format.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1022/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1022/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1022.diff", "html_url": "https://github.com/huggingface/datasets/pull/1022", "merged_at": "2020-12-04T00:34:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/1022.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1022" }
true
[ "THanks!\r\nDone!" ]
https://api.github.com/repos/huggingface/datasets/issues/2788
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2788/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2788/comments
https://api.github.com/repos/huggingface/datasets/issues/2788/events
https://github.com/huggingface/datasets/issues/2788
967,149,389
MDU6SXNzdWU5NjcxNDkzODk=
2,788
How to sample every file in a list of files making up a split in a dataset when loading?
[]
closed
false
null
1
2021-08-11T17:43:21Z
2023-07-25T17:40:50Z
2023-07-25T17:40:50Z
null
I am loading a dataset with multiple train, test, and validation files like this: ``` data_files_dict = { "train": [train_file1, train_file2], "test": [test_file1, test_file2], "val": [val_file1, val_file2] } dataset = datasets.load_dataset( "csv", data_files=data_files_dict, split=['train[:8]', 'test[:8]', 'val[:8]'] ) ``` However, this only selects the first 8 rows from train_file1, test_file1, val_file1, since they are the first files in the lists. I'm trying to formulate a split argument that can sample from each file specified in my list of files that make up each split. Is this type of splitting supported? If so, how can I do it?
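This kind of per-file slicing isn't expressible in a single `split` string. One workaround, sketched below under the assumption that the splits are plain CSV files as above, is to load and slice each file separately and then concatenate the pieces:

```python
import datasets

# Hypothetical paths standing in for train_file1/train_file2 above.
train_files = ["train_file1.csv", "train_file2.csv"]

# Take the first 8 rows of *each* file, then concatenate the pieces.
parts = [
    datasets.load_dataset("csv", data_files=f, split="train[:8]")
    for f in train_files
]
train_sampled = datasets.concatenate_datasets(parts)
print(len(train_sampled))  # 16
```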
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2788/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2788/timeline
null
completed
null
null
false
[ "Hi ! This is not possible just with `load_dataset`.\r\n\r\nYou can do something like this instead:\r\n```python\r\nseed=42\r\ndata_files_dict = {\r\n \"train\": [train_file1, train_file2],\r\n \"test\": [test_file1, test_file2],\r\n \"val\": [val_file1, val_file2]\r\n}\r\ndataset = datasets.load_dataset(\...
https://api.github.com/repos/huggingface/datasets/issues/4200
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4200/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4200/comments
https://api.github.com/repos/huggingface/datasets/issues/4200/events
https://github.com/huggingface/datasets/pull/4200
1,211,980,110
PR_kwDODunzps42mz0w
4,200
Add to docs how to load from local script
[]
closed
false
null
1
2022-04-22T08:08:25Z
2022-05-06T08:39:25Z
2022-04-23T05:47:25Z
null
This option was missing from the docs guide (it was only explained in the docstring of `load_dataset`). Although this is an infrequent use case, there might be some users interested in it. Related to #4192 CC: @stevhliu
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4200/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4200/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4200.diff", "html_url": "https://github.com/huggingface/datasets/pull/4200", "merged_at": "2022-04-23T05:47:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/4200.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4200" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/5064
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5064/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5064/comments
https://api.github.com/repos/huggingface/datasets/issues/5064/events
https://github.com/huggingface/datasets/pull/5064
1,395,978,143
PR_kwDODunzps5AHsP0
5,064
Align signature of create/delete_repo with latest hfh
[]
closed
false
null
1
2022-10-04T09:54:53Z
2022-10-07T17:02:11Z
2022-10-07T16:59:30Z
null
This PR aligns the signature of `create_repo`/`delete_repo` with the current one in hfh, by removing deprecated `name` and `organization`, and using `repo_id` instead. Related to: - #5063 CC: @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5064/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5064/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5064.diff", "html_url": "https://github.com/huggingface/datasets/pull/5064", "merged_at": "2022-10-07T16:59:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/5064.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5064" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/186
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/186/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/186/comments
https://api.github.com/repos/huggingface/datasets/issues/186/events
https://github.com/huggingface/datasets/issues/186
623,595,180
MDU6SXNzdWU2MjM1OTUxODA=
186
Weird-ish: Not creating unique caches for different phases
[]
closed
false
null
2
2020-05-23T06:40:58Z
2020-05-23T20:22:18Z
2020-05-23T20:22:17Z
null
Sample code: ```python import nlp dataset = nlp.load_dataset('boolq') def func1(x): return x def func2(x): return None train_output = dataset["train"].map(func1) valid_output = dataset["validation"].map(func1) print() print(len(train_output), len(valid_output)) # Output: 9427 9427 ``` The map method in both cases seems to be pointing to the same cache, so the latter call based on the validation data will return the processed train data cache. What's weird is that the following doesn't seem to be an issue: ```python train_output = dataset["train"].map(func2) valid_output = dataset["validation"].map(func2) print() print(len(train_output), len(valid_output)) # 9427 3270 ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/186/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/186/timeline
null
completed
null
null
false
[ "Looks like a duplicate of #120.\r\nThis is already fixed on master. We'll do a new release on pypi soon", "Good catch, it looks fixed.\r\n" ]
https://api.github.com/repos/huggingface/datasets/issues/2578
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2578/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2578/comments
https://api.github.com/repos/huggingface/datasets/issues/2578/events
https://github.com/huggingface/datasets/pull/2578
935,187,497
MDExOlB1bGxSZXF1ZXN0NjgyMTQ0OTY2
2,578
Support Zstandard compressed files
[]
closed
false
null
8
2021-07-01T20:22:34Z
2021-08-11T14:46:24Z
2021-07-05T10:50:27Z
null
Close #2572. cc: @thomwolf
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2578/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2578/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2578.diff", "html_url": "https://github.com/huggingface/datasets/pull/2578", "merged_at": "2021-07-05T10:50:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/2578.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2578" }
true
[ "> What if people want to run some tests without having zstandard ?\r\n> Usually what we do is add a decorator @require_zstandard for example\r\n\r\n@lhoestq I think I'm missing something here...\r\n\r\nTests are a *development* tool (to ensure we deliver a good quality lib), not something we offer to the end users...
https://api.github.com/repos/huggingface/datasets/issues/1849
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1849/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1849/comments
https://api.github.com/repos/huggingface/datasets/issues/1849/events
https://github.com/huggingface/datasets/issues/1849
804,292,971
MDU6SXNzdWU4MDQyOTI5NzE=
1,849
Add TIMIT
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b",...
closed
false
null
3
2021-02-09T07:29:41Z
2021-03-15T05:59:37Z
2021-03-15T05:59:37Z
null
## Adding a Dataset - **Name:** *TIMIT* - **Description:** *The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems* - **Paper:** *Homepage*: http://groups.inf.ed.ac.uk/ami/corpus/ / *Wikipedia*: https://en.wikipedia.org/wiki/TIMIT - **Data:** *https://deepai.org/dataset/timit* - **Motivation:** Important speech dataset If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1849/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1849/timeline
null
completed
null
null
false
[ "@patrickvonplaten Could you please help me with how the output text has to be represented in the data? TIMIT has Words, Phonemes and texts. Also has lot on info on the speaker and the dialect. Could you please help me? An example of how to arrange it would be super helpful!\r\n\r\n", "Hey @vrindaprabhu - sure I'...
https://api.github.com/repos/huggingface/datasets/issues/327
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/327/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/327/comments
https://api.github.com/repos/huggingface/datasets/issues/327/events
https://github.com/huggingface/datasets/pull/327
648,312,858
MDExOlB1bGxSZXF1ZXN0NDQyMTQyOTQw
327
set seed for shuffling tests
[]
closed
false
null
0
2020-06-30T16:21:34Z
2020-07-02T08:34:05Z
2020-07-02T08:34:04Z
null
Some tests were randomly failing because of a missing seed in a test for `train_test_split(shuffle=True)`
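For illustration, a deterministic version of such a test looks roughly like this (hypothetical toy data):

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))})

# A fixed seed makes the shuffled split reproducible, so assertions on
# the resulting rows no longer fail at random.
splits = ds.train_test_split(test_size=0.2, shuffle=True, seed=42)
print(len(splits["train"]), len(splits["test"]))  # 8 2
```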
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/327/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/327/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/327.diff", "html_url": "https://github.com/huggingface/datasets/pull/327", "merged_at": "2020-07-02T08:34:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/327.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/327" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/3933
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3933/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3933/comments
https://api.github.com/repos/huggingface/datasets/issues/3933/events
https://github.com/huggingface/datasets/pull/3933
1,170,253,605
PR_kwDODunzps40flNM
3,933
Update README.md
[]
closed
false
null
1
2022-03-15T20:52:05Z
2022-03-17T17:51:24Z
2022-03-17T17:47:37Z
null
Fixing missing triple quote
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3933/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3933/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3933.diff", "html_url": "https://github.com/huggingface/datasets/pull/3933", "merged_at": "2022-03-17T17:47:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/3933.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3933" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/5296
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5296/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5296/comments
https://api.github.com/repos/huggingface/datasets/issues/5296/events
https://github.com/huggingface/datasets/issues/5296
1,464,553,580
I_kwDODunzps5XS1Bs
5,296
Bug in xjoin with Windows pathnames
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
0
2022-11-25T13:29:33Z
2022-11-29T08:05:13Z
2022-11-29T08:05:13Z
null
Currently, the `xjoin` function has a bug with local Windows pathnames: instead of returning the OS-dependent joined pathname, it always returns it in POSIX format. ```python from datasets.download.streaming_download_manager import xjoin path = xjoin("C:\\Users\\USERNAME", "filename.txt") ``` The joined path should be: ```python "C:\\Users\\USERNAME\\filename.txt" ``` However, it is: ```python "C:/Users/USERNAME/filename.txt" ```
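One possible shape of a fix, shown purely as an illustration and not necessarily what was merged, is to dispatch on whether the first argument looks like a URL:

```python
import os
import posixpath


def xjoin_sketch(a: str, *p: str) -> str:
    # URLs keep POSIX-style joining; everything else goes through
    # os.path.join, so Windows paths keep their backslashes.
    if "://" in a:
        return posixpath.join(a, *p)
    return os.path.join(a, *p)


print(xjoin_sketch("C:\\Users\\USERNAME", "filename.txt"))
# On Windows: C:\Users\USERNAME\filename.txt
```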
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5296/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5296/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/1294
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1294/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1294/comments
https://api.github.com/repos/huggingface/datasets/issues/1294/events
https://github.com/huggingface/datasets/pull/1294
759,365,246
MDExOlB1bGxSZXF1ZXN0NTM0MzgzMjg5
1,294
adding opus_euconst
[]
closed
false
null
0
2020-12-08T11:24:16Z
2020-12-08T18:44:20Z
2020-12-08T18:41:23Z
null
Adding EUconst, a parallel corpus collected from the European Constitution. 21 languages, 210 bitexts
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1294/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1294/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1294.diff", "html_url": "https://github.com/huggingface/datasets/pull/1294", "merged_at": "2020-12-08T18:41:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/1294.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1294" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/872
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/872/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/872/comments
https://api.github.com/repos/huggingface/datasets/issues/872/events
https://github.com/huggingface/datasets/pull/872
747,653,697
MDExOlB1bGxSZXF1ZXN0NTI0ODM4NjEx
872
Add IndicGLUE dataset and Metrics
[]
closed
false
null
1
2020-11-20T17:09:34Z
2020-11-25T17:01:11Z
2020-11-25T15:26:07Z
null
Added IndicGLUE benchmark for evaluating models on 11 Indian Languages. The descriptions of the tasks and the corresponding paper can be found [here](https://indicnlp.ai4bharat.org/indic-glue/) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/872/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/872/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/872.diff", "html_url": "https://github.com/huggingface/datasets/pull/872", "merged_at": "2020-11-25T15:26:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/872.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/872" }
true
[ "thanks ! merging now" ]
https://api.github.com/repos/huggingface/datasets/issues/1734
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1734/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1734/comments
https://api.github.com/repos/huggingface/datasets/issues/1734/events
https://github.com/huggingface/datasets/pull/1734
784,956,707
MDExOlB1bGxSZXF1ZXN0NTU0MDYxMzMz
1,734
Fix empty token bug for `thainer` and `lst20`
[]
closed
false
null
0
2021-01-13T09:55:09Z
2021-01-14T10:42:18Z
2021-01-14T10:42:18Z
null
add a condition to check if tokens exist before yielding in `thainer` and `lst20`
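Roughly, the guard looks like this; the sentence structure is assumed for the example and is not the exact loader code:

```python
def generate_examples(sentences):
    # `sentences` is an assumed structure here: one list of
    # (token, tag) pairs per sentence.
    guid = 0
    for sentence in sentences:
        tokens = [tok for tok, _ in sentence]
        ner_tags = [tag for _, tag in sentence]
        if tokens:  # the fix: only yield when the sentence has tokens
            yield guid, {"tokens": tokens, "ner_tags": ner_tags}
            guid += 1


print(list(generate_examples([[("a", "O")], [], [("b", "B-PER")]])))
# [(0, {'tokens': ['a'], 'ner_tags': ['O']}), (1, {'tokens': ['b'], 'ner_tags': ['B-PER']})]
```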
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1734/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1734/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1734.diff", "html_url": "https://github.com/huggingface/datasets/pull/1734", "merged_at": "2021-01-14T10:42:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/1734.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1734" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1951
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1951/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1951/comments
https://api.github.com/repos/huggingface/datasets/issues/1951/events
https://github.com/huggingface/datasets/pull/1951
817,423,573
MDExOlB1bGxSZXF1ZXN0NTgwOTE4ODE2
1,951
Add cross-platform support for datasets-cli
[]
closed
false
null
1
2021-02-26T14:56:25Z
2021-03-11T02:18:26Z
2021-02-26T15:30:26Z
null
One thing I've noticed while going through the codebase is the usage of `scripts` in `setup.py`. This [answer](https://stackoverflow.com/a/28119736/14095927) on SO explains nicely why it's better to use `entry_points` instead of `scripts`. To add cross-platform support to the CLI, this PR replaces `scripts` with `entry_points` in `setup.py` and moves datasets-cli to src/datasets/commands/datasets_cli.py. All *.md and *.rst files are updated accordingly. The same changes were made in the transformers repo to add cross-platform support ([link to PR](https://github.com/huggingface/transformers/pull/4131)).
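The relevant `setup.py` fragment looks roughly like this (a sketch; see the PR diff for the exact entry):

```python
from setuptools import setup

setup(
    name="datasets",
    # `entry_points` generates a platform-appropriate launcher for the
    # console script (including an .exe shim on Windows), whereas
    # `scripts` merely copies a Unix-style script file into place.
    entry_points={
        "console_scripts": [
            "datasets-cli=datasets.commands.datasets_cli:main",
        ],
    },
)
```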
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1951/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1951/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1951.diff", "html_url": "https://github.com/huggingface/datasets/pull/1951", "merged_at": "2021-02-26T15:30:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/1951.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1951" }
true
[ "@mariosasko This is kinda cool! " ]
https://api.github.com/repos/huggingface/datasets/issues/631
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/631/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/631/comments
https://api.github.com/repos/huggingface/datasets/issues/631/events
https://github.com/huggingface/datasets/pull/631
701,711,255
MDExOlB1bGxSZXF1ZXN0NDg3MTE3OTA0
631
Fix text delimiter
[]
closed
false
null
5
2020-09-15T08:08:42Z
2020-09-22T15:03:06Z
2020-09-15T08:26:25Z
null
I changed the delimiter in the `text` dataset script. It should fix the `pyarrow.lib.ArrowInvalid: CSV parse error` from #622. I changed the delimiter to an unused ASCII character that is not present in text files: `\b`
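For illustration, this is roughly how a custom delimiter behaves with pyarrow's CSV reader; it is a standalone sketch, not the loader's actual code path:

```python
import io

import pyarrow.csv as pac

data = io.BytesIO(b"line one\nline two, with a comma\n")

# Using an unused control character as the delimiter means each physical
# line lands in a single column, commas and all.
table = pac.read_csv(
    data,
    read_options=pac.ReadOptions(column_names=["text"]),
    parse_options=pac.ParseOptions(delimiter="\b"),
)
print(table.column("text").to_pylist())
# ['line one', 'line two, with a comma']
```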
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/631/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/631/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/631.diff", "html_url": "https://github.com/huggingface/datasets/pull/631", "merged_at": "2020-09-15T08:26:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/631.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/631" }
true
[ "Which OS are you using ?@abhi1nandy2", "> Which OS are you using ?\r\n\r\nPRETTY_NAME=\"Debian GNU/Linux 9 (stretch)\"\r\nNAME=\"Debian GNU/Linux\"\r\nVERSION_ID=\"9\"\r\nVERSION=\"9 (stretch)\"\r\nVERSION_CODENAME=stretch\r\nID=debian\r\nHOME_URL=\"https://www.debian.org/\"\r\nSUPPORT_URL=\"https://www.debian.o...
https://api.github.com/repos/huggingface/datasets/issues/3374
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3374/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3374/comments
https://api.github.com/repos/huggingface/datasets/issues/3374/events
https://github.com/huggingface/datasets/issues/3374
1,070,426,462
I_kwDODunzps4_zWle
3,374
NonMatchingChecksumError for the CLUE:cluewsc2020, chid, c3 and tnews
[]
closed
false
null
2
2021-12-03T10:10:54Z
2021-12-08T14:14:41Z
2021-12-08T14:14:41Z
null
Hi, it seems like there are updates in cluewsc2020, chid, c3 and tnews, since I could not load them due to a checksum error.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3374/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3374/timeline
null
completed
null
null
false
[ "Seems like the issue still exists,:\r\n`Downloading and preparing dataset clue/chid (download: 127.15 MiB, generated: 259.71 MiB, post-processed: Unknown size, total: 386.86 MiB) to /mnt/cache/tanhaochen/.cache/huggingface/datasets/clue/chid/1.0.0/e55b490cb7809dcd8db31b9a87119f2e2ec87cdc060da8a9ac070b070ca3e379......
https://api.github.com/repos/huggingface/datasets/issues/920
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/920/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/920/comments
https://api.github.com/repos/huggingface/datasets/issues/920/events
https://github.com/huggingface/datasets/pull/920
753,445,747
MDExOlB1bGxSZXF1ZXN0NTI5NTIzMTgz
920
add dream dataset
[]
closed
false
null
6
2020-11-30T12:40:14Z
2020-12-03T16:45:12Z
2020-12-02T15:39:12Z
null
Adding Dream: a Dataset for Dialogue-Based Reading Comprehension. More details: https://dataset.org/dream/ and https://github.com/nlpdata/dream
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/920/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/920/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/920.diff", "html_url": "https://github.com/huggingface/datasets/pull/920", "merged_at": "2020-12-02T15:39:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/920.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/920" }
true
[ "> Awesome good job !\r\n> \r\n> Could you also add a dataset card using the template guide here : https://github.com/huggingface/datasets/blob/master/templates/README_guide.md\r\n> If you can't fill some fields then just leave `[N/A]`\r\n\r\nQuick amendment: `[N/A]` is for fields that are not relevant: if you can'...
https://api.github.com/repos/huggingface/datasets/issues/5995
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5995/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5995/comments
https://api.github.com/repos/huggingface/datasets/issues/5995/events
https://github.com/huggingface/datasets/pull/5995
1,777,088,925
PR_kwDODunzps5UCvYJ
5,995
Support returning dataframe in map transform
[]
closed
false
null
4
2023-06-27T14:15:08Z
2023-06-28T13:56:02Z
2023-06-28T13:46:33Z
null
Allow returning Pandas DataFrames in `map` transforms. (Plus, raise an error in the non-batched mode if a returned PyArrow table/Pandas DataFrame has more than one row)
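A minimal usage sketch of the new behavior, with hypothetical data:

```python
import pandas as pd
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})


def double(batch):
    # Returning a pandas DataFrame from a batched map is what this PR enables.
    return pd.DataFrame({"a": batch["a"], "a_doubled": [x * 2 for x in batch["a"]]})


ds2 = ds.map(double, batched=True)
print(ds2[0])  # {'a': 1, 'a_doubled': 2}
```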
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5995/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5995/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5995.diff", "html_url": "https://github.com/huggingface/datasets/pull/5995", "merged_at": "2023-06-28T13:46:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/5995.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5995" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
https://api.github.com/repos/huggingface/datasets/issues/2153
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2153/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2153/comments
https://api.github.com/repos/huggingface/datasets/issues/2153/events
https://github.com/huggingface/datasets/issues/2153
846,181,502
MDU6SXNzdWU4NDYxODE1MDI=
2,153
load_dataset ignoring features
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
3
2021-03-31T08:30:09Z
2022-10-05T13:29:12Z
2022-10-05T13:29:12Z
null
First of all, I'm sorry if it is a repeated issue or the changes are already in master, I searched and I didn't find anything. I'm using datasets 1.5.0 ![image](https://user-images.githubusercontent.com/37592763/113114369-8f376580-920b-11eb-900d-94365b59f04b.png) As you can see, when I load the dataset, the ClassLabels are ignored, I have to cast the dataset in order to make it work. Code to reproduce: ```python import datasets data_location = "/data/prueba_multiclase" features = datasets.Features( {"texto": datasets.Value("string"), "label": datasets.features.ClassLabel(names=["false", "true"])} ) dataset = datasets.load_dataset( "csv", data_files=data_location, delimiter="\t", features=features ) ``` Dataset I used: [prueba_multiclase.zip](https://github.com/huggingface/datasets/files/6235022/prueba_multiclase.zip) (it has to be unzipped) Thank you! ❤️
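For reference, the cast workaround the report mentions looks like this; the file name is assumed:

```python
import datasets

features = datasets.Features(
    {
        "texto": datasets.Value("string"),
        "label": datasets.features.ClassLabel(names=["false", "true"]),
    }
)
dataset = datasets.load_dataset(
    "csv", data_files="prueba_multiclase.tsv", delimiter="\t"
)
# Workaround: cast after loading so the label column picks up the
# ClassLabel type that load_dataset ignored.
dataset = dataset.cast(features)
```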
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2153/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2153/timeline
null
completed
null
null
false
[ "Hi ! Thanks for reporting. I opened a PR to fix this issue: #2201", "Nice question which helped me a lot! I have wasted a lot of time to the `DatasetDict` creation from a csv file. Hope the document of this module add some simple examples.", "Hi :) We're indeed working on tutorials that we will add to the docs...
https://api.github.com/repos/huggingface/datasets/issues/4451
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4451/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4451/comments
https://api.github.com/repos/huggingface/datasets/issues/4451/events
https://github.com/huggingface/datasets/pull/4451
1,262,103,323
PR_kwDODunzps45LkGc
4,451
Use newer version of multi-news with fixes
[]
closed
false
null
2
2022-06-06T16:57:08Z
2022-06-07T17:40:01Z
2022-06-07T17:14:44Z
null
Closes #4430.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4451/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4451/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4451.diff", "html_url": "https://github.com/huggingface/datasets/pull/4451", "merged_at": "2022-06-07T17:14:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/4451.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4451" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Awesome thanks @mariosasko!" ]
https://api.github.com/repos/huggingface/datasets/issues/1851
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1851/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1851/comments
https://api.github.com/repos/huggingface/datasets/issues/1851/events
https://github.com/huggingface/datasets/pull/1851
804,523,174
MDExOlB1bGxSZXF1ZXN0NTcwMjc2MTk5
1,851
set bert_score version dependency
[]
closed
false
null
0
2021-02-09T12:51:07Z
2021-02-09T14:21:48Z
2021-02-09T14:21:48Z
null
Set the bert_score version in requirements since previous versions of bert_score will fail with datasets (closes #843)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1851/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1851/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1851.diff", "html_url": "https://github.com/huggingface/datasets/pull/1851", "merged_at": "2021-02-09T14:21:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/1851.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1851" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/6016
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6016/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6016/comments
https://api.github.com/repos/huggingface/datasets/issues/6016/events
https://github.com/huggingface/datasets/pull/6016
1,798,968,033
PR_kwDODunzps5VNEvn
6,016
Dataset string representation enhancement
[]
open
false
null
2
2023-07-11T13:38:25Z
2023-07-16T10:26:18Z
null
null
My attempt at #6010. Not sure if this is the right way to go about it; I will wait for your feedback.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6016/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6016/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/6016.diff", "html_url": "https://github.com/huggingface/datasets/pull/6016", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6016.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6016" }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6016). All of your documentation changes will be reflected on that endpoint.", "It we could have something similar to Polars, that would be great.\r\n\r\nThis is what Polars outputs: \r\n* `__repr__`/`__str__` :\r\n```\r\nshape...
https://api.github.com/repos/huggingface/datasets/issues/2498
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2498/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2498/comments
https://api.github.com/repos/huggingface/datasets/issues/2498/events
https://github.com/huggingface/datasets/issues/2498
920,411,285
MDU6SXNzdWU5MjA0MTEyODU=
2,498
Improve torch formatting performance
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
17
2021-06-14T13:25:24Z
2022-07-15T17:12:04Z
null
null
**Is your feature request related to a problem? Please describe.** It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors. A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia and BookCorpus datasets. The training machines are similar to DGX-1 workstations. We use HF trainer torch.distributed training approach on a single machine with 8 GPUs. The current performance is about 30% slower than NVidia optimized BERT [examples](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/LanguageModeling) baseline. Quite a bit of customized code and training loop tricks were used to achieve the baseline performance. It would be great to achieve the same performance while using nothing more than off the shelf HF ecosystem. Perhaps, in the future, with @stas00 work on deepspeed integration, it could even be exceeded. **Describe the solution you'd like** Using profiling tools we've observed that appx. 25% of cumulative run time is spent on data loader next call. ![dataloader_next](https://user-images.githubusercontent.com/458335/121895543-59742a00-ccee-11eb-85fb-f07715e3f1f6.png) As you can observe most of the data loader next call is spent in HF datasets torch_formatter.py format_batch call. Digging a bit deeper into format_batch we can see the following profiler data: ![torch_formatter](https://user-images.githubusercontent.com/458335/121895944-c7b8ec80-ccee-11eb-95d5-5875c5716c30.png) Once again, a lot of time is spent in pyarrow table conversion to pandas which seems like an intermediary step. Offline @lhoestq told me that this approach was, for some unknown reason, faster than direct to numpy conversion. **Describe alternatives you've considered** I am not familiar with pyarrow and have not yet considered the alternatives to the current approach. Most of the online advice around data loader performance improvements revolve around increasing number of workers, using pin memory for copying tensors from host device to gpus but we've already tried these avenues without much performance improvement. Weights & Biases dashboard for the pre-training task reports CPU utilization of ~ 10%, GPUs are completely saturated (GPU utilization is above 95% on all GPUs), while disk utilization is above 90%.
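For context, the kind of micro-benchmark used to compare the two conversion paths looks roughly like this (a sketch with synthetic data; see the maintainer reply below for actual measurements):

```python
import time

import numpy as np
import pyarrow as pa

# Build a table shaped like tokenized text: n rows of 512 token ids.
n, length = 10_000, 512
arr = pa.array(np.random.randint(0, 30_000, size=(n, length)).tolist())
table = pa.table({"input_ids": arr})

t0 = time.perf_counter()
_ = table.to_pandas()["input_ids"]  # current path: Arrow -> pandas
t1 = time.perf_counter()
_ = arr.to_numpy(zero_copy_only=False)  # direct path: Arrow -> numpy
t2 = time.perf_counter()
print(f"to_pandas: {t1 - t0:.3f}s  to_numpy: {t2 - t1:.3f}s")
```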
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2498/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2498/timeline
null
null
null
null
false
[ "That’s interesting thanks, let’s see what we can do. Can you detail your last sentence? I’m not sure I understand it well.", "Hi ! I just re-ran a quick benchmark and using `to_numpy()` seems to be faster now:\r\n\r\n```python\r\nimport pyarrow as pa # I used pyarrow 3.0.0\r\nimport numpy as np\r\n\r\nn, max_le...
https://api.github.com/repos/huggingface/datasets/issues/3105
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3105/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3105/comments
https://api.github.com/repos/huggingface/datasets/issues/3105/events
https://github.com/huggingface/datasets/issues/3105
1,029,098,843
I_kwDODunzps49Vs1b
3,105
download_mode=`force_redownload` does not work on removed datasets
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "E5583E", "default": false, "descrip...
open
false
null
0
2021-10-18T13:12:38Z
2021-10-22T09:36:10Z
null
null
## Describe the bug If a cached dataset is removed from the library, I don't see how to delete it programmatically. I thought that using `force_redownload` would try to refresh the cache, then raise an exception, but it reuses the cache instead. ## Steps to reproduce the bug _requires to already have `wit` in the cache_: see https://github.com/huggingface/datasets/pull/2981 ```python import datasets as ds dataset = ds.load_dataset("wit", split="train", download_mode='force_redownload') ``` ## Expected results It should raise an exception, since the dataset does not exist anymore. ## Actual results It uses the cached result ``` Using the latest cached version of the module from /home/slesage/.cache/huggingface/modules/datasets_modules/datasets/wit/107afbffd48e058b19101bddc47fbee25fa68eb6d50a733e262875f1285a5171 (last modified on Wed Sep 29 08:21:10 2021) since it couldn't be found locally at wit, or remotely on the Hugging Face Hub. ``` ## Environment info - `datasets` version: 1.13.4.dev0 - Platform: Linux-5.11.0-1019-aws-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 4.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3105/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3105/timeline
null
null
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/3268
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3268/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3268/comments
https://api.github.com/repos/huggingface/datasets/issues/3268/events
https://github.com/huggingface/datasets/issues/3268
1,052,992,681
I_kwDODunzps4-w2Sp
3,268
Dataset viewer issue for 'liweili/c4_200m'
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
5
2021-11-14T17:18:46Z
2021-12-21T10:25:20Z
2021-12-21T10:24:51Z
null
## Dataset viewer issue for '*liweili/c4_200m*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/liweili/c4_200m)* *Server Error* ``` Status code: 404 Exception: Status404Error Message: Not found. Maybe the cache is missing, or maybe the ressource does not exist. ``` Am I the one who added this dataset? Yes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3268/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3268/timeline
null
completed
null
null
false
[ "Hi ! I think the issue comes from this [line](https://huggingface.co/datasets/liweili/c4_200m/blob/main/c4_200m.py#L87):\r\n```python\r\npath = filepath + \"/*.tsv*\"\r\n```\r\n\r\nYou can fix this by doing this instead:\r\n```python\r\npath = os.path.join(filepath, \"/*.tsv*\")\r\n```\r\n\r\nHere is why:\r\n\r\nL...
https://api.github.com/repos/huggingface/datasets/issues/6048
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6048/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6048/comments
https://api.github.com/repos/huggingface/datasets/issues/6048/events
https://github.com/huggingface/datasets/issues/6048
1,809,629,346
I_kwDODunzps5r3MCi
6,048
when i use datasets.load_dataset, i encounter the http connect error!
[]
closed
false
null
1
2023-07-18T10:16:34Z
2023-07-18T16:18:39Z
2023-07-18T16:18:39Z
null
### Describe the bug `common_voice_test = load_dataset("audiofolder", data_dir="./dataset/",cache_dir="./cache",split=datasets.Split.TEST)` When I run the code above, I get the error below: -------------------------------------------- ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f299ed082e0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))"))) -------------------------------------------------- All my data is on the local machine, so why does it need to connect to the internet? How can I fix this? My machine cannot connect to the internet. ### Steps to reproduce the bug 1 ### Expected behavior No error when I use the load_dataset func ### Environment info python=3.8.15
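As the reply below notes, `audiofolder` does not exist in `datasets` 2.3.2, so upgrading (`pip install -U datasets`) is the actual fix. Independently of that, offline mode can be forced so that nothing is fetched over the network; a sketch:

```python
import os

# Must be set before `datasets` is imported to take effect.
os.environ["HF_DATASETS_OFFLINE"] = "1"

import datasets

# With offline mode enabled, packaged loaders and the local cache are
# used without any attempt to reach the Hub or GitHub.
common_voice_test = datasets.load_dataset(
    "audiofolder",
    data_dir="./dataset/",
    cache_dir="./cache",
    split=datasets.Split.TEST,
)
```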
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6048/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6048/timeline
null
completed
null
null
false
[ "The `audiofolder` loader is not available in version `2.3.2`, hence the error. Please run the `pip install -U datasets` command to update the `datasets` installation to make `load_dataset(\"audiofolder\", ...)` work." ]
https://api.github.com/repos/huggingface/datasets/issues/918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/918/comments
https://api.github.com/repos/huggingface/datasets/issues/918/events
https://github.com/huggingface/datasets/pull/918
753,397,440
MDExOlB1bGxSZXF1ZXN0NTI5NDgzOTk4
918
Add conll2002
[]
closed
false
null
0
2020-11-30T11:29:35Z
2020-11-30T18:34:30Z
2020-11-30T18:34:29Z
null
Adding the Conll2002 dataset for NER. More info here : https://www.clips.uantwerpen.be/conll2002/ner/ ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [x] Fill the `_DESCRIPTION` and `_CITATION` variables - [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [x] Generate the metadata file `dataset_infos.json` for all configurations - [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [x] Both tests for the real data and the dummy data pass.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/918/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/918/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/918.diff", "html_url": "https://github.com/huggingface/datasets/pull/918", "merged_at": "2020-11-30T18:34:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/918.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/918" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4791
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4791/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4791/comments
https://api.github.com/repos/huggingface/datasets/issues/4791/events
https://github.com/huggingface/datasets/issues/4791
1,328,571,064
I_kwDODunzps5PMGK4
4,791
Dataset Viewer issue for Team-PIXEL/rendered-wikipedia-english
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
1
2022-08-04T12:49:16Z
2022-08-04T13:43:16Z
2022-08-04T13:43:16Z
null
### Link https://huggingface.co/datasets/Team-PIXEL/rendered-wikipedia-english/viewer/rendered-wikipedia-en/train ### Description The dataset can be loaded fine but the viewer shows this error: ``` Server Error Status code: 400 Exception: Status400Error Message: The dataset does not exist. ``` I'm guessing this is because I recently renamed the dataset. Based on related issues (e.g. https://github.com/huggingface/datasets/issues/4759), is there something server-side that needs to be refreshed? ### Owner Yes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4791/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4791/timeline
null
completed
null
null
false
[ "Thanks for reporting. It's a known issue that should be fixed soon. Meanwhile, I had to manually trigger the dataset viewer. It's OK now.\r\nNote that the extreme aspect ratio of the images generates another issue, that we're inspecting." ]
https://api.github.com/repos/huggingface/datasets/issues/1713
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1713/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1713/comments
https://api.github.com/repos/huggingface/datasets/issues/1713/events
https://github.com/huggingface/datasets/issues/1713
782,337,723
MDU6SXNzdWU3ODIzMzc3MjM=
1,713
Installation using conda
[]
closed
false
null
5
2021-01-08T19:12:15Z
2021-09-17T12:47:40Z
2021-09-17T12:47:40Z
null
Will a conda package for installing datasets be added to the huggingface conda channel? I have installed transformers using conda and would like to use the datasets library to use some of the scripts in the transformers/examples folder but am unable to do so at the moment as datasets can only be installed using pip and using pip in a conda environment is generally a bad idea in my experience.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1713/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1713/timeline
null
completed
null
null
false
[ "Yes indeed the idea is to have the next release on conda cc @LysandreJik ", "Great! Did you guys have a timeframe in mind for the next release?\r\n\r\nThank you for all the great work in developing this library.", "I think we can have `datasets` on conda by next week. Will see what I can do!", "Thank you. Lo...
https://api.github.com/repos/huggingface/datasets/issues/2499
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2499/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2499/comments
https://api.github.com/repos/huggingface/datasets/issues/2499/events
https://github.com/huggingface/datasets/issues/2499
920,413,021
MDU6SXNzdWU5MjA0MTMwMjE=
2,499
Python Programming Puzzles
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
2
2021-06-14T13:27:18Z
2021-06-15T18:14:14Z
null
null
## Adding a Dataset - **Name:** Python Programming Puzzles - **Description:** Programming challenge called programming puzzles, as an objective and comprehensive evaluation of program synthesis - **Paper:** https://arxiv.org/pdf/2106.05784.pdf - **Data:** https://github.com/microsoft/PythonProgrammingPuzzles ([Scrolling through the data](https://github.com/microsoft/PythonProgrammingPuzzles/blob/main/problems/README.md)) - **Motivation:** Spans a large range of difficulty, problems, and domains. A useful resource for evaluation as we don't have a clear understanding of the abilities and skills of extremely large LMs. Note: it's a growing dataset (contributions are welcome), so we'll need careful versioning for this dataset. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2499/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2499/timeline
null
null
null
null
false
[ "👀 @TalSchuster", "Thanks @VictorSanh!\r\nThere's also a [notebook](https://aka.ms/python_puzzles) and [demo](https://aka.ms/python_puzzles_study) available now to try out some of the puzzles" ]
https://api.github.com/repos/huggingface/datasets/issues/3187
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3187/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3187/comments
https://api.github.com/repos/huggingface/datasets/issues/3187/events
https://github.com/huggingface/datasets/pull/3187
1,040,412,869
PR_kwDODunzps4t44Ab
3,187
Add ChrF(++) (as implemented in sacrebleu)
[]
closed
false
null
0
2021-10-31T08:53:58Z
2021-11-02T14:50:50Z
2021-11-02T14:31:26Z
null
Similar to my [PR for TER](https://github.com/huggingface/datasets/pull/3153), it feels only right to also include ChrF and friends. These are present in Sacrebleu and are therefore very similar to implement as TER and sacrebleu. I tested the implementation with sacrebleu's tests to verify. You can try this below for yourself ```python import datasets EPSILON = 1e-4 chrf = datasets.load_metric(r"path\to\datasets\metrics\chrf") test_cases = [ (["abcdefg"], ["hijklmnop"], 0.0), (["a"], ["b"], 0.0), ([""], ["b"], 0.0), ([""], ["ref"], 0.0), ([""], ["reference"], 0.0), (["aa"], ["ab"], 8.3333), (["a", "b"], ["a", "c"], 8.3333), (["a"], ["a"], 16.6667), (["a b c"], ["a b c"], 50.0), (["a b c"], ["abc"], 50.0), ([" risk assessment must be made of those who are qualified and expertise in the sector - these are the scientists ."], ["risk assessment has to be undertaken by those who are qualified and expert in that area - that is the scientists ."], 63.361730), ([" Die Beziehung zwischen Obama und Netanjahu ist nicht gerade freundlich. "], ["Das Verhältnis zwischen Obama und Netanyahu ist nicht gerade freundschaftlich."], 64.1302698), (["Niemand hat die Absicht, eine Mauer zu errichten"], ["Niemand hat die Absicht, eine Mauer zu errichten"], 100.0), ] for hyp, ref, score in test_cases: # Note the reference transformation which is different from scarebleu's input format results = chrf.compute(predictions=hyp, references=[[r] for r in ref], char_order=6, word_order=0, beta=3, eps_smoothing=True) if abs(score - results["score"]) > EPSILON: print(f"expected {score}, got {results['score']} for {hyp} - {ref}") test_cases_effective_order = [ (["a"], ["a"], 100.0), ([""], ["reference"], 0.0), (["a b c"], ["a b c"], 100.0), (["a b c"], ["abc"], 100.0), ([""], ["c"], 0.0), (["a", "b"], ["a", "c"], 50.0), (["aa"], ["ab"], 25.0), ] for hyp, ref, score in test_cases_effective_order: # Note the reference transformation which is different from scarebleu's input format results = chrf.compute(predictions=hyp, references=[[r] for r in ref], char_order=6, word_order=0, beta=3, eps_smoothing=False) if abs(score - results["score"]) > EPSILON: print(f"expected {score}, got {results['score']} for {hyp} - {ref}") test_cases_keep_whitespace = [ ( ["Die Beziehung zwischen Obama und Netanjahu ist nicht gerade freundlich."], ["Das Verhältnis zwischen Obama und Netanyahu ist nicht gerade freundschaftlich."], 67.3481606, ), ( ["risk assessment must be made of those who are qualified and expertise in the sector - these are the scientists ."], ["risk assessment has to be undertaken by those who are qualified and expert in that area - that is the scientists ."], 65.2414427, ), ] for hyp, ref, score in test_cases_keep_whitespace: # Note the reference transformation which is different from scarebleu's input format results = chrf.compute(predictions=hyp, references=[[r] for r in ref], char_order=6, word_order=0, beta=3, whitespace=True) if abs(score - results["score"]) > EPSILON: print(f"expected {score}, got {results['score']} for {hyp} - {ref}") predictions = ["The relationship between Obama and Netanyahu is not exactly friendly."] references = [["The ties between Obama and Netanyahu are not particularly friendly."]] print(chrf.compute(predictions=predictions, references=references)) ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3187/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3187/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3187.diff", "html_url": "https://github.com/huggingface/datasets/pull/3187", "merged_at": "2021-11-02T14:31:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/3187.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3187" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4662
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4662/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4662/comments
https://api.github.com/repos/huggingface/datasets/issues/4662/events
https://github.com/huggingface/datasets/pull/4662
1,298,845,369
PR_kwDODunzps47GTEc
4,662
Fix: conll2003 - fix empty example
[]
closed
false
null
1
2022-07-08T10:49:13Z
2022-07-08T14:14:53Z
2022-07-08T14:02:42Z
null
As reported in https://huggingface.co/datasets/conll2003/discussions/2#62c45a14f93fc97e8260532f, there was an extra empty example at the end of the dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4662/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4662/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4662.diff", "html_url": "https://github.com/huggingface/datasets/pull/4662", "merged_at": "2022-07-08T14:02:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/4662.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4662" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4156
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4156/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4156/comments
https://api.github.com/repos/huggingface/datasets/issues/4156/events
https://github.com/huggingface/datasets/pull/4156
1,202,220,531
PR_kwDODunzps42HySw
4,156
Adding STSb-TR dataset
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
1
2022-04-12T18:10:05Z
2022-10-03T09:36:25Z
2022-10-03T09:36:25Z
null
Semantic Textual Similarity benchmark Turkish (STSb-TR) dataset introduced in our paper [Semantic Similarity Based Evaluation for Abstractive News Summarization](https://aclanthology.org/2021.gem-1.3.pdf) added.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4156/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4156/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4156.diff", "html_url": "https://github.com/huggingface/datasets/pull/4156", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4156.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4156" }
true
[ "Thanks for your contribution, @figenfikri.\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help." ]
https://api.github.com/repos/huggingface/datasets/issues/2005
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2005/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2005/comments
https://api.github.com/repos/huggingface/datasets/issues/2005/events
https://github.com/huggingface/datasets/issues/2005
824,275,035
MDU6SXNzdWU4MjQyNzUwMzU=
2,005
Setting to torch format not working with torchvision and MNIST
[]
closed
false
null
9
2021-03-08T07:38:11Z
2021-03-09T17:58:13Z
2021-03-09T17:58:13Z
null
Hi, I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do:

```python
def prepare_features(examples):
    images = []
    labels = []
    for example_idx, example in enumerate(examples["image"]):
        if transform is not None:
            images.append(transform(
                np.array(examples["image"][example_idx], dtype=np.uint8)
            ))
        else:
            images.append(torch.tensor(np.array(examples["image"][example_idx], dtype=np.uint8)))
        labels.append(torch.tensor(examples["label"][example_idx]))
    output = {"label": labels, "image": images}
    return output

raw_dataset = load_dataset('mnist')
train_dataset = raw_dataset.map(prepare_features, batched=True, batch_size=10000)
train_dataset.set_format("torch", columns=["image", "label"])
```

After this, I check the type of the following:

```python
print(type(train_dataset["train"]["label"]))
print(type(train_dataset["train"]["image"][0]))
```

This leads to the following output:

```python
<class 'torch.Tensor'>
<class 'list'>
```

When I use `torch.utils.data.DataLoader` for batching, the type of `batch["train"]["image"]` is also `<class 'list'>`. I don't understand why only the `label` is converted to a torch tensor; why does the image not get converted? How can I fix this issue? Thanks, Gunjan EDIT: I just checked the shapes and the types: `batch["image"]` is actually a list of lists of tensors. The shape is (1,28,2,28), where `batch_size` is 2. I don't understand why this is happening. Ideally it should be a tensor of shape (2,1,28,28). EDIT 2: Inside `prepare_features`, the shape of `images[0]` is `torch.Size([1,28,28])`, so the conversion is working. However, the output of the `map` is a list of lists of lists of lists.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2005/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2005/timeline
null
completed
null
null
false
[ "Adding to the previous information, I think `torch.utils.data.DataLoader` is doing some conversion. \r\nWhat I tried:\r\n```python\r\ntrain_dataset = load_dataset('mnist')\r\n```\r\nI don't use any `map` or `set_format` or any `transform`. I use this directly, and try to load batches using the `DataLoader` with ba...
https://api.github.com/repos/huggingface/datasets/issues/4236
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4236/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4236/comments
https://api.github.com/repos/huggingface/datasets/issues/4236/events
https://github.com/huggingface/datasets/pull/4236
1,217,115,691
PR_kwDODunzps423MOc
4,236
Replace data URL in big_patent dataset and support streaming
[]
closed
false
null
5
2022-04-27T10:01:13Z
2022-06-10T08:10:55Z
2022-05-02T18:21:15Z
null
This PR replaces the Google Drive URL with our Hub one, once the data owners have approved to host their data on the Hub. Moreover, this PR makes the dataset streamable. Fix #4217.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4236/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4236/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4236.diff", "html_url": "https://github.com/huggingface/datasets/pull/4236", "merged_at": "2022-05-02T18:21:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/4236.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4236" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "I first uploaded the data files to the Hub: I think it is a good option because we have git lfs to track versions and changes. Moreover people will be able to make PRs to propose updates on the data files.\r\n- I would have preferred...
https://api.github.com/repos/huggingface/datasets/issues/3679
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3679/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3679/comments
https://api.github.com/repos/huggingface/datasets/issues/3679/events
https://github.com/huggingface/datasets/issues/3679
1,124,062,133
I_kwDODunzps5C_9O1
3,679
Download datasets from a private hub
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "A929D8", "default": fals...
closed
false
null
3
2022-02-04T10:49:06Z
2022-02-22T11:08:07Z
2022-02-22T11:08:07Z
null
In the context of a private hub deployment, customers would like to use load_dataset() to load datasets from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature. The obvious workaround is to clone the repo first and then load it from local storage, but this adds an extra step. It'd be great to have the same experience regardless of where the hub is hosted. The same issue exists with the transformers library and the CLI. I'm going to create issues there as well, and I'll reference them below.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3679/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3679/timeline
null
completed
null
null
false
[ "For reference:\r\nhttps://github.com/huggingface/transformers/issues/15514\r\nhttps://github.com/huggingface/huggingface_hub/issues/650", "Hi ! For information one can set the environment variable `HF_ENDPOINT` (default is `https://huggingface.co`) if they want to use a private hub.\r\n\r\nWe may need to coordin...
https://api.github.com/repos/huggingface/datasets/issues/3066
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3066/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3066/comments
https://api.github.com/repos/huggingface/datasets/issues/3066/events
https://github.com/huggingface/datasets/pull/3066
1,024,005,311
PR_kwDODunzps4tFObl
3,066
Add iter_archive
[]
closed
false
null
0
2021-10-12T16:17:16Z
2022-09-21T14:10:10Z
2021-10-18T09:12:46Z
null
Added the `iter_archive` method for the StreamingDownloadManager. It was already implemented in the regular DownloadManager. Now it can be used to stream from TAR archives as mentioned in https://github.com/huggingface/datasets/issues/2829 I also updated the `food101` dataset as an example. Any image/audio dataset using TAR archives can be updated to use `iter_archive` in order to be streamable :) cc @severo Fix #2829.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3066/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3066/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3066.diff", "html_url": "https://github.com/huggingface/datasets/pull/3066", "merged_at": "2021-10-18T09:12:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/3066.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3066" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/280
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/280/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/280/comments
https://api.github.com/repos/huggingface/datasets/issues/280/events
https://github.com/huggingface/datasets/issues/280
640,677,615
MDU6SXNzdWU2NDA2Nzc2MTU=
280
Error with SquadV2 Metrics
[]
closed
false
null
0
2020-06-17T19:10:54Z
2020-06-19T08:33:41Z
2020-06-19T08:33:41Z
null
I can't seem to import squad v2 metrics. **squad_metric = nlp.load_metric('squad_v2')** **This throws me an error.:** ``` ImportError Traceback (most recent call last) <ipython-input-8-170b6a170555> in <module> ----> 1 squad_metric = nlp.load_metric('squad_v2') ~/env/lib64/python3.6/site-packages/nlp/load.py in load_metric(path, name, process_id, num_process, data_dir, experiment_id, in_memory, download_config, **metric_init_kwargs) 426 """ 427 module_path = prepare_module(path, download_config=download_config, dataset=False) --> 428 metric_cls = import_main_class(module_path, dataset=False) 429 metric = metric_cls( 430 name=name, ~/env/lib64/python3.6/site-packages/nlp/load.py in import_main_class(module_path, dataset) 55 """ 56 importlib.invalidate_caches() ---> 57 module = importlib.import_module(module_path) 58 59 if dataset: /usr/lib64/python3.6/importlib/__init__.py in import_module(name, package) 124 break 125 level += 1 --> 126 return _bootstrap._gcd_import(name[level:], package, level) 127 128 /usr/lib64/python3.6/importlib/_bootstrap.py in _gcd_import(name, package, level) /usr/lib64/python3.6/importlib/_bootstrap.py in _find_and_load(name, import_) /usr/lib64/python3.6/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_) /usr/lib64/python3.6/importlib/_bootstrap.py in _load_unlocked(spec) /usr/lib64/python3.6/importlib/_bootstrap_external.py in exec_module(self, module) /usr/lib64/python3.6/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds) ~/env/lib64/python3.6/site-packages/nlp/metrics/squad_v2/a15e787c76889174874386d3def75321f0284c11730d2a57e28fe1352c9b5c7a/squad_v2.py in <module> 16 17 import nlp ---> 18 from .evaluate import evaluate 19 20 _CITATION = """\ ImportError: cannot import name 'evaluate' ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/280/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/280/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/5195
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5195/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5195/comments
https://api.github.com/repos/huggingface/datasets/issues/5195/events
https://github.com/huggingface/datasets/pull/5195
1,434,290,689
PR_kwDODunzps5CHhF2
5,195
[wip testing docs]
[]
closed
false
null
1
2022-11-03T08:37:34Z
2023-04-04T15:10:37Z
2023-04-04T15:10:33Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5195/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5195/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5195.diff", "html_url": "https://github.com/huggingface/datasets/pull/5195", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5195.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5195" }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5195). All of your documentation changes will be reflected on that endpoint." ]
https://api.github.com/repos/huggingface/datasets/issues/4110
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4110/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4110/comments
https://api.github.com/repos/huggingface/datasets/issues/4110/events
https://github.com/huggingface/datasets/pull/4110
1,194,581,375
PR_kwDODunzps41u4Je
4,110
Matthews Correlation Metric Card
[]
closed
false
null
1
2022-04-06T12:59:35Z
2022-05-03T13:43:17Z
2022-05-03T13:36:13Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4110/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4110/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4110.diff", "html_url": "https://github.com/huggingface/datasets/pull/4110", "merged_at": "2022-05-03T13:36:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/4110.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4110" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4394
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4394/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4394/comments
https://api.github.com/repos/huggingface/datasets/issues/4394/events
https://github.com/huggingface/datasets/issues/4394
1,245,221,657
I_kwDODunzps5KOJMZ
4,394
trainer became extremely slow after reloading the dataset with `load_from_disk`
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
4
2022-05-23T14:04:37Z
2022-06-06T16:08:01Z
null
null
## Describe the bug Due to a memory problem, I need to save my tokenized datasets locally on CPU and reload them on multiple GPUs to run the training script. However, after I reload them with `load_from_disk` and start training, the speed is extremely slow. It says I need about 1500 hours with 8 A100 cards. Before this, I could run the whole script in one day with a single A100 card. Since I am trying to pre-train a BERT, **my dataset is very large (29058165 rows)**. ## Steps to reproduce the bug

```python
tokenized_datasets.save_to_disk(
    "/pathto/dataset"
)

tokenized_datasets = load_from_disk(
    "/pathto/dataset"
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"] if training_args.do_train else None,
    eval_dataset=tokenized_datasets["validation"] if training_args.do_eval else None,
    tokenizer=tokenizer,
    data_collator=data_collator,
)

train_result = trainer.train(resume_from_checkpoint=checkpoint)
```

## Expected results Without the save and reload process, I only need about one day to run the whole script with one A100 card. ## Actual results

```
[INFO|trainer.py:1290] 2022-05-23 22:49:46,266 >> ***** Running training *****
[INFO|trainer.py:1291] 2022-05-23 22:49:46,266 >> Num examples = 29058165
[INFO|trainer.py:1292] 2022-05-23 22:49:46,266 >> Num Epochs = 5
[INFO|trainer.py:1293] 2022-05-23 22:49:46,266 >> Instantaneous batch size per device = 16
[INFO|trainer.py:1294] 2022-05-23 22:49:46,266 >> Total train batch size (w. parallel, distributed & accumulation) = 256
[INFO|trainer.py:1295] 2022-05-23 22:49:46,266 >> Gradient Accumulation steps = 2
[INFO|trainer.py:1296] 2022-05-23 22:49:46,266 >> Total optimization steps = 567540
0%| | 1/567540 [00:09<1544:49:04, 9.80s/it]
0%| | 2/567540 [00:17<1320:00:17, 8.37s/it]
0%| | 3/567540 [00:26<1393:10:17, 8.84s/it]
0%| | 4/567540 [00:34<1344:56:33, 8.53s/it]
0%| | 5/567540 [00:43<1359:36:12, 8.62s/it]
```

## Environment info

```
torch 1.11.0+cu113
torchaudio 0.11.0+cu113
torchvision 0.12.0+cu113
transformers 4.18.0
datasets 2.2.2
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4394/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4394/timeline
null
null
null
null
false
[ "I tried to make the dataset much more smaller (100000 rows) , then the speed became `33.88it/s` from`8.62s/it`. It's nearly 200 times... Do you have any idea? Thank you!", "Similar issue: https://github.com/huggingface/transformers/issues/8818\r\n\r\nI changed `RandomSampler` to `SequentialSampler` in the `tra...
https://api.github.com/repos/huggingface/datasets/issues/2289
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2289/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2289/comments
https://api.github.com/repos/huggingface/datasets/issues/2289/events
https://github.com/huggingface/datasets/pull/2289
871,118,573
MDExOlB1bGxSZXF1ZXN0NjI2MTg5MDU3
2,289
Allow collaborators to self-assign issues
[]
closed
false
null
2
2021-04-29T15:07:06Z
2021-04-30T18:28:16Z
2021-04-30T18:28:16Z
null
Allow collaborators (without write access to the repository) to self-assign issues. In order to self-assign an issue, they have to comment it with the word: `#take` or `#self-assign`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2289/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2289/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2289.diff", "html_url": "https://github.com/huggingface/datasets/pull/2289", "merged_at": "2021-04-30T18:28:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/2289.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2289" }
true
[ "What do you think, @lhoestq? 😉 \r\n\r\nI think this could be another step to facilitate community contributions.", "@lhoestq, it doesn't exist in `transformers`... I picked the idea from `scikit-learn`, where I have previously collaborated...\r\n\r\nAnd sure, this must be documented! I just wanted first to know...
https://api.github.com/repos/huggingface/datasets/issues/4643
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4643/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4643/comments
https://api.github.com/repos/huggingface/datasets/issues/4643/events
https://github.com/huggingface/datasets/pull/4643
1,295,852,650
PR_kwDODunzps468Cqk
4,643
Rename master to main
[]
closed
false
null
3
2022-07-06T13:34:30Z
2022-07-06T15:36:46Z
2022-07-06T15:25:08Z
null
This PR replaces mentions of "master" with "main" in the code base for several cases: - set the default dataset script version to "main" if the local installation of `datasets` is a dev installation - update URLs to this GitHub repository to use "main" - update the DVC benchmark - update the GitHub workflows - update docstrings - update tests to compare the changes in dataset cards against "main"
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4643/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4643/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4643.diff", "html_url": "https://github.com/huggingface/datasets/pull/4643", "merged_at": "2022-07-06T15:25:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/4643.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4643" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "All the mentions I found on google were simple URLs that will be redirected, so it's fine. I also checked the spaces and we should be good:\r\n- dalle-mini used to install the master branch but [it's no longer the case](https://huggi...
https://api.github.com/repos/huggingface/datasets/issues/1162
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1162/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1162/comments
https://api.github.com/repos/huggingface/datasets/issues/1162/events
https://github.com/huggingface/datasets/pull/1162
757,707,085
MDExOlB1bGxSZXF1ZXN0NTMzMDM1MzEw
1,162
Add Mocha dataset
[]
closed
false
null
0
2020-12-05T15:45:14Z
2020-12-07T10:09:39Z
2020-12-07T10:09:39Z
null
More information: https://allennlp.org/mocha
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1162/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1162/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1162.diff", "html_url": "https://github.com/huggingface/datasets/pull/1162", "merged_at": "2020-12-07T10:09:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/1162.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1162" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4908
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4908/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4908/comments
https://api.github.com/repos/huggingface/datasets/issues/4908/events
https://github.com/huggingface/datasets/pull/4908
1,353,995,574
PR_kwDODunzps499FDS
4,908
Fix missing tags in dataset cards
[]
closed
false
null
1
2022-08-29T09:41:53Z
2022-09-22T14:35:56Z
2022-08-29T16:13:07Z
null
Fix missing tags in dataset cards: - asnq - clue - common_gen - cosmos_qa - guardian_authorship - hindi_discourse - py_ast - x_stance This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833 - #4891 - #4896
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4908/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4908/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4908.diff", "html_url": "https://github.com/huggingface/datasets/pull/4908", "merged_at": "2022-08-29T16:13:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/4908.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4908" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4496
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4496/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4496/comments
https://api.github.com/repos/huggingface/datasets/issues/4496/events
https://github.com/huggingface/datasets/pull/4496
1,271,945,704
PR_kwDODunzps45sUnW
4,496
Replace `assertEqual` with `assertTupleEqual` in unit tests for verbosity
[]
closed
false
null
2
2022-06-15T09:29:16Z
2022-07-07T17:06:51Z
2022-07-07T16:55:48Z
null
As detailed in #4419 and as suggested by @mariosasko, we could replace the `assertEqual` assertions with `assertTupleEqual` when the assertion is between Tuples, in order to make the tests more verbose.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4496/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4496/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4496.diff", "html_url": "https://github.com/huggingface/datasets/pull/4496", "merged_at": "2022-07-07T16:55:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/4496.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4496" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "FYI I used the following regex to look for the `assertEqual` statements where the assertion was being done over a Tuple: `self.assertEqual(.*, \\(.*,)(\\)\\))$`, hope this is useful!" ]
https://api.github.com/repos/huggingface/datasets/issues/2227
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2227/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2227/comments
https://api.github.com/repos/huggingface/datasets/issues/2227/events
https://github.com/huggingface/datasets/pull/2227
859,771,526
MDExOlB1bGxSZXF1ZXN0NjE2Nzk1NjMx
2,227
Use update_metadata_with_features decorator in class_encode_column method
[]
closed
false
null
0
2021-04-16T12:31:41Z
2021-04-16T13:49:40Z
2021-04-16T13:49:39Z
null
Following @mariosasko 's comment
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2227/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2227/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2227.diff", "html_url": "https://github.com/huggingface/datasets/pull/2227", "merged_at": "2021-04-16T13:49:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/2227.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2227" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1350
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1350/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1350/comments
https://api.github.com/repos/huggingface/datasets/issues/1350/events
https://github.com/huggingface/datasets/pull/1350
759,879,789
MDExOlB1bGxSZXF1ZXN0NTM0ODA1OTY3
1,350
add LeNER-Br dataset
[]
closed
false
null
4
2020-12-09T00:06:38Z
2020-12-10T14:11:33Z
2020-12-10T14:11:33Z
null
Adding the LeNER-Br dataset, a Portuguese language dataset for named entity recognition
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1350/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1350/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1350.diff", "html_url": "https://github.com/huggingface/datasets/pull/1350", "merged_at": "2020-12-10T14:11:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/1350.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1350" }
true
[ "I don't know what happened, my first commit passed on all checks, but after just a README.md update one of the scripts failed, is it normal? 😕 ", "Looks like a flaky connection error, I've launched a re-run, it should be fine :)", "The RemoteDatasetTest error in the CI is just a connection error, we can ignor...
https://api.github.com/repos/huggingface/datasets/issues/1833
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1833/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1833/comments
https://api.github.com/repos/huggingface/datasets/issues/1833/events
https://github.com/huggingface/datasets/pull/1833
803,120,978
MDExOlB1bGxSZXF1ZXN0NTY5MDk5MTUx
1,833
Add OSCAR dataset card
[]
closed
false
null
10
2021-02-08T01:39:49Z
2021-02-12T14:09:25Z
2021-02-12T14:08:24Z
null
I added more information and completed the dataset card for OSCAR which was started by @lhoestq in his previous [PR](https://github.com/huggingface/datasets/pull/1824).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1833/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1833/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1833.diff", "html_url": "https://github.com/huggingface/datasets/pull/1833", "merged_at": "2021-02-12T14:08:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/1833.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1833" }
true
[ "@lhoestq Thanks for the suggestions! I agree with all of them. Should I accept them one by one or can I accept them all at once? When I try to load the whole diff GitHub is complaining and it does no render them well (probably my browser?) 😅 ", "I just merged the tables as suggested 😄 . However I noticed somet...
https://api.github.com/repos/huggingface/datasets/issues/3064
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3064/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3064/comments
https://api.github.com/repos/huggingface/datasets/issues/3064/events
https://github.com/huggingface/datasets/issues/3064
1,023,900,075
I_kwDODunzps49B3mr
3,064
Make `interleave_datasets` more robust
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
3
2021-10-12T14:34:53Z
2022-07-30T08:47:26Z
null
null
**Is your feature request related to a problem? Please describe.** Right now there are a few hiccups when using `interleave_datasets`. The interleaved dataset iterates until the smallest dataset exhausts its iterator, so larger datasets may never complete a full epoch of iteration. This also creates new problems for epoch accounting, since there is no way to track how many epochs each dataset in `interleave_datasets` has completed. **Describe the solution you'd like** For the `interleave_datasets` module: - [ ] Add a boolean argument `--stop-iter` to `interleave_datasets` that controls whether the datasets can iterate indefinitely. That means it should not raise a `StopIteration` exception when `--stop-iter=False`. - [ ] An internal list variable `iter_cnt` that records how many times (in steps/epochs) each dataset has iterated at a given point. - [ ] Add an argument `--max-iter` (list type) that specifies the maximum number of times each dataset can iterate. After one dataset completes its `--max-iter`, the other datasets should continue sampling, and only when all datasets have finished their respective `--max-iter` should a `StopIteration` be raised. Note: I'm new to the `datasets` API. Maybe these features already exist in the library. Since multitask training is among the latest trends, I believe this feature would make the `datasets` API more popular. @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3064/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3064/timeline
null
null
null
null
false
[ "Hi @lhoestq Any response on this issue?", "Hi ! Sorry for the late response\r\n\r\nI agree `interleave_datasets` would benefit a lot from having more flexibility. If I understand correctly it would be nice to be able to define stopping strategies like `stop=\"first_exhausted\"` (default) or `stop=\"all_exhauste...
https://api.github.com/repos/huggingface/datasets/issues/5292
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5292/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5292/comments
https://api.github.com/repos/huggingface/datasets/issues/5292/events
https://github.com/huggingface/datasets/issues/5292
1,463,053,832
I_kwDODunzps5XNG4I
5,292
Missing documentation build for versions 2.7.1 and 2.6.2
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
closed
false
null
1
2022-11-24T09:42:10Z
2022-11-24T10:10:02Z
2022-11-24T10:10:02Z
null
After the patch releases [2.7.1](https://github.com/huggingface/datasets/releases/tag/2.7.1) and [2.6.2](https://github.com/huggingface/datasets/releases/tag/2.6.2), the online docs were not properly built (the build_documentation workflow was not triggered). There was a fix by: - #5291 However, both documentations were built from main branch, instead of their corresponding version branch. We are rebuilding them.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5292/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5292/timeline
null
completed
null
null
false
[ "- Build docs for 2.6.2:\r\n - Commit: a6a5a1cf4cdf1e0be65168aed5a327f543001fe8\r\n - Build docs GH Action: https://github.com/huggingface/datasets/actions/runs/3539470622/jobs/5941404044\r\n- Build docs for 2.7.1:\r\n - Commit: 5ef1ab1cc06c2b7a574bf2df454cd9fcb071ccb2\r\n - Build docs GH Action: https://github...
https://api.github.com/repos/huggingface/datasets/issues/5091
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5091/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5091/comments
https://api.github.com/repos/huggingface/datasets/issues/5091/events
https://github.com/huggingface/datasets/pull/5091
1,401,112,552
PR_kwDODunzps5AZCm9
5,091
Allow connection objects in `from_sql` + small doc improvement
[]
closed
false
null
1
2022-10-07T12:39:44Z
2022-10-09T13:19:15Z
2022-10-09T13:16:57Z
null
Allow connection objects in `from_sql` (emit a warning that they are cachable) and add a tip that explains the format of the con parameter when provided as a URI string. PS: ~~This PR contains a parameter link, so https://github.com/huggingface/doc-builder/pull/311 needs to be merged before it's "ready for review".~~ Done!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5091/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5091/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5091.diff", "html_url": "https://github.com/huggingface/datasets/pull/5091", "merged_at": "2022-10-09T13:16:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/5091.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5091" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/2179
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2179/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2179/comments
https://api.github.com/repos/huggingface/datasets/issues/2179/events
https://github.com/huggingface/datasets/issues/2179
852,237,957
MDU6SXNzdWU4NTIyMzc5NTc=
2,179
Load small datasets in-memory instead of using memory map
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "c5def5", "default": fals...
closed
false
null
0
2021-04-07T09:58:16Z
2021-04-20T10:04:04Z
2021-04-20T10:04:03Z
null
Currently all datasets are loaded using memory mapping by default in `load_dataset`. However this might not be necessary for small datasets. If a dataset is small enough, then it can be loaded in-memory and: - its memory footprint would be small so it's ok - in-memory computations/queries would be faster - the caching on-disk would be disabled, making computations even faster (no I/O bound because of the disk) - but running the same computation a second time would recompute everything since there would be no cached results on-disk. But this is probably fine since computations would be fast anyway + users should be able to provide a cache filename if needed. Therefore, maybe the default behavior of `load_dataset` should be to load small datasets in-memory and big datasets using memory mapping.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2179/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2179/timeline
null
completed
null
null
false
[]
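The feature described above eventually landed as a `keep_in_memory` flag plus an environment-variable threshold; a minimal sketch, with the dataset name chosen arbitrarily:

```python
from datasets import load_dataset

# Explicitly opt in to in-memory loading rather than memory mapping.
ds = load_dataset("rotten_tomatoes", keep_in_memory=True)

# Alternatively, HF_DATASETS_IN_MEMORY_MAX_SIZE (in bytes) makes datasets
# below the threshold load in memory automatically.
```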
https://api.github.com/repos/huggingface/datasets/issues/5745
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5745/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5745/comments
https://api.github.com/repos/huggingface/datasets/issues/5745/events
https://github.com/huggingface/datasets/pull/5745
1,667,086,143
PR_kwDODunzps5ORE2n
5,745
[BUG FIX] Issue 5744
[]
open
false
null
3
2023-04-13T20:29:55Z
2023-04-21T15:22:43Z
null
null
A temporary fix for https://github.com/huggingface/datasets/issues/5744.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5745/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5745/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5745.diff", "html_url": "https://github.com/huggingface/datasets/pull/5745", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5745.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5745" }
true
[ "Have met the same problem with datasets==2.8.0, pandas==2.0.0. It could be solved by installing the latest version of datasets or using datasets==2.8.0, pandas==1.5.3.", "Pandas 2.0.0 has removed support to passing `mangle_dupe_cols`.\r\n\r\nHowever, our `datasets` library does not use this parameter: it only pa...
https://api.github.com/repos/huggingface/datasets/issues/3992
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3992/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3992/comments
https://api.github.com/repos/huggingface/datasets/issues/3992/events
https://github.com/huggingface/datasets/issues/3992
1,177,946,153
I_kwDODunzps5GNggp
3,992
Image column is not decoded in map when used with with_transform
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
1
2022-03-23T10:51:13Z
2022-12-13T16:59:06Z
2022-12-13T16:59:06Z
null
## Describe the bug The image column is not _decoded_ in **map** when the dataset is used with `with_transform`. ## Steps to reproduce the bug

```python
from datasets import Image, Dataset

def add_C(batch):
    batch["C"] = batch["A"]
    return batch

ds = Dataset.from_dict({"A": ["image.png"]}).cast_column("A", Image())
ds = ds.with_transform(lambda x: x)  # <= This line causes the problem
ds = ds.map(add_C, batched=True)
print(ds[0])
```

## Expected results

```
{'C': <PIL.PngImagePlugin.PngImageFile>, ...}
```

## Actual results

```
{'C': {'bytes': None, 'path': 'image.png'}, ...}
```

If we remove the `with_transform` line, we get the expected result. ## Environment info - `datasets` version: 2.0.0 - Platform: Mac OSX - Python version: 3.8.12 - PyArrow version: 7.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3992/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3992/timeline
null
completed
null
null
false
[ "Hi! This behavior stems from this line: https://github.com/huggingface/datasets/blob/799b817d97590ddc97cbd38d07469403e030de8c/src/datasets/arrow_dataset.py#L1919\r\nBasically, the `Image`/`Audio` columns are decoded only if the `format_type` attribute is `None` (`set_format`/`with_format` and `set_transform`/`with...
https://api.github.com/repos/huggingface/datasets/issues/4862
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4862/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4862/comments
https://api.github.com/repos/huggingface/datasets/issues/4862/events
https://github.com/huggingface/datasets/issues/4862
1,343,464,699
I_kwDODunzps5QE6T7
4,862
Got "AttributeError: 'xPath' object has no attribute 'read'" when loading an excel dataset with my own code
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
5
2022-08-18T18:36:14Z
2022-08-31T09:25:08Z
2022-08-31T09:25:08Z
null
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug # The dataset function is as follows: from pathlib import Path from typing import Dict, List, Tuple import datasets import pandas as pd _CITATION = """\ """ _DATASETNAME = "jadi_ide" _DESCRIPTION = """\ """ _HOMEPAGE = "" _LICENSE = "Unknown" _URLS = { _DATASETNAME: "https://github.com/fathanick/Javanese-Dialect-Identification-from-Twitter-Data/raw/main/Update 16K_Dataset.xlsx", } _SOURCE_VERSION = "1.0.0" class JaDi_Ide(datasets.GeneratorBasedBuilder): SOURCE_VERSION = datasets.Version(_SOURCE_VERSION) BUILDER_CONFIGS = [ NusantaraConfig( name="jadi_ide_source", version=SOURCE_VERSION, description="JaDi-Ide source schema", schema="source", subset_id="jadi_ide", ), ] DEFAULT_CONFIG_NAME = "source" def _info(self) -> datasets.DatasetInfo: if self.config.schema == "source": features = datasets.Features( { "id": datasets.Value("string"), "text": datasets.Value("string"), "label": datasets.Value("string") } ) return datasets.DatasetInfo( description=_DESCRIPTION, features=features, homepage=_HOMEPAGE, license=_LICENSE, citation=_CITATION, ) def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]: """Returns SplitGenerators.""" # Dataset does not have predetermined split, putting all as TRAIN urls = _URLS[_DATASETNAME] base_dir = Path(dl_manager.download_and_extract(urls)) data_files = {"train": base_dir} return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, gen_kwargs={ "filepath": data_files["train"], "split": "train", }, ), ] def _generate_examples(self, filepath: Path, split: str) -> Tuple[int, Dict]: """Yields examples as (key, example) tuples.""" df = pd.read_excel(filepath, engine='openpyxl') df.columns = ["id", "text", "label"] if self.config.schema == "source": for row in df.itertuples(): ex = { "id": str(row.id), "text": row.text, "label": row.label, } yield row.id, ex ``` ## Expected results Expecting to load the dataset smoothly. 
## Actual results File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 1751, in load_dataset use_auth_token=use_auth_token, File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 705, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1227, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 793, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1216, in _prepare_split desc=f"Generating {split_info.name} split", File "/home/xuyan/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__ for obj in iterable: File "/home/xuyan/.cache/huggingface/modules/datasets_modules/datasets/jadi_ide/7a539f2b6f726defea8fbe36ceda17bae66c370f6d6c418e3a08d760ebef7519/jadi_ide.py", line 107, in _generate_examples df = pd.read_excel(filepath, engine='openpyxl') File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/download/streaming_download_manager.py", line 701, in xpandas_read_excel return pd.read_excel(BytesIO(filepath_or_buffer.read()), **kwargs) AttributeError: 'xPath' object has no attribute 'read' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: Linux-4.15.0-142-generic-x86_64-with-debian-stretch-sid - Python version: 3.7.4 - PyArrow version: 9.0.0 - Pandas version: 0.25.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4862/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4862/timeline
null
completed
null
null
false
[ "What's more, the downloaded data is actually a folder instead of an excel file.", "Hi hi, instead of using `download_and_extract` function, I only use `download` function: `base_dir = Path(dl_manager.download(urls))`. It turns out that the code works for `datasets==2.2.2`, however, it doesn't work with `datasets...
https://api.github.com/repos/huggingface/datasets/issues/1238
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1238/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1238/comments
https://api.github.com/repos/huggingface/datasets/issues/1238/events
https://github.com/huggingface/datasets/pull/1238
758,321,688
MDExOlB1bGxSZXF1ZXN0NTMzNTEzODUw
1,238
adding poem_sentiment
[]
closed
false
null
0
2020-12-07T09:11:52Z
2020-12-09T16:36:10Z
2020-12-09T16:02:45Z
null
Adding poem_sentiment dataset. https://github.com/google-research-datasets/poem-sentiment
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1238/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1238/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1238.diff", "html_url": "https://github.com/huggingface/datasets/pull/1238", "merged_at": "2020-12-09T16:02:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/1238.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1238" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5414
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5414/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5414/comments
https://api.github.com/repos/huggingface/datasets/issues/5414/events
https://github.com/huggingface/datasets/issues/5414
1,525,733,818
I_kwDODunzps5a8Nm6
5,414
Sharding error with Multilingual LibriSpeech
[]
closed
false
null
4
2023-01-09T14:45:31Z
2023-01-18T14:09:04Z
2023-01-18T14:09:04Z
null
### Describe the bug Loading the German Multilingual LibriSpeech dataset results in a RuntimeError regarding sharding with the following stacktrace: ``` Downloading and preparing dataset multilingual_librispeech/german to /home/nithin/datadrive/cache/huggingface/datasets/facebook___multilingual_librispeech/german/2.1.0/1904af50f57a5c370c9364cc337699cfe496d4e9edcae6648a96be23086362d0... Downloading data files: 100% 3/3 [00:00<00:00, 107.23it/s] Downloading data files: 100% 1/1 [00:00<00:00, 35.08it/s] Downloading data files: 100% 6/6 [00:00<00:00, 303.36it/s] Downloading data files: 100% 3/3 [00:00<00:00, 130.37it/s] Downloading data files: 100% 1049/1049 [00:00<00:00, 4491.40it/s] Downloading data files: 100% 37/37 [00:00<00:00, 1096.78it/s] Downloading data files: 100% 40/40 [00:00<00:00, 1003.93it/s] Extracting data files: 100% 3/3 [00:11<00:00, 2.62s/it] Generating train split: 469942/0 [34:13<00:00, 273.21 examples/s] Output exceeds the size limit. Open the full output data in a text editor --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-14-74fa6d092bdc> in <module> ----> 1 mls = load_dataset(MLS_DATASET, 2 LANGUAGE, 3 cache_dir="~/datadrive/cache/huggingface/datasets", 4 ignore_verifications=True) /anaconda/envs/py38_default/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs) 1755 1756 # Download and prepare data -> 1757 builder_instance.download_and_prepare( 1758 download_config=download_config, 1759 download_mode=download_mode, /anaconda/envs/py38_default/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 858 if num_proc is not None: 859 prepare_split_kwargs["num_proc"] = num_proc --> 860 self._download_and_prepare( 861 dl_manager=dl_manager, 862 verify_infos=verify_infos, /anaconda/envs/py38_default/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs) 1609 1610 def _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs): ... RuntimeError: Sharding is ambiguous for this dataset: we found several data sources lists of different lengths, and we don't know over which list we should parallelize: - key audio_archives has length 1049 - key local_extracted_archive has length 1049 - key limited_ids_paths has length 1 To fix this, check the 'gen_kwargs' and make sure to use lists only for data sources, and use tuples otherwise. In the end there should only be one single list, or several lists with the same length. ``` ### Steps to reproduce the bug Here is the code to reproduce it: ```python from datasets import load_dataset MLS_DATASET = "facebook/multilingual_librispeech" LANGUAGE = "german" mls = load_dataset(MLS_DATASET, LANGUAGE, cache_dir="~/datadrive/cache/huggingface/datasets", ignore_verifications=True) ``` ### Expected behavior The expected behaviour is that the dataset is successfully loaded. 
### Environment info - `datasets` version: 2.8.0 - Platform: Linux-5.4.0-1094-azure-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyArrow version: 10.0.1 - Pandas version: 1.2.4
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5414/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5414/timeline
null
completed
null
null
false
[ "Thanks for reporting, @Nithin-Holla.\r\n\r\nThis is a known issue for multiple datasets and we are investigating it:\r\n- See e.g.: https://huggingface.co/datasets/ami/discussions/3", "Main issue:\r\n- #5415", "@albertvillanova Thanks! As a workaround for now, can I use the dataset in streaming mode?", "Yes,...
https://api.github.com/repos/huggingface/datasets/issues/1861
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1861/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1861/comments
https://api.github.com/repos/huggingface/datasets/issues/1861/events
https://github.com/huggingface/datasets/pull/1861
805,631,215
MDExOlB1bGxSZXF1ZXN0NTcxMjAwNjA1
1,861
Fix Limit url
[]
closed
false
null
0
2021-02-10T15:44:56Z
2021-02-10T16:15:00Z
2021-02-10T16:14:59Z
null
The test.json file of the Literal-Motion-in-Text (LiMiT) dataset was recently removed from the master branch of the repo at https://github.com/ilmgut/limit_dataset. This PR uses the previous commit SHA to download the file instead, as suggested by @Paethon. Close #1836
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1861/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1861/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1861.diff", "html_url": "https://github.com/huggingface/datasets/pull/1861", "merged_at": "2021-02-10T16:14:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/1861.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1861" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2492/comments
https://api.github.com/repos/huggingface/datasets/issues/2492/events
https://github.com/huggingface/datasets/pull/2492
919,718,102
MDExOlB1bGxSZXF1ZXN0NjY4OTkxODk4
2,492
Eduge
[]
closed
false
null
0
2021-06-13T05:10:59Z
2021-06-22T09:49:04Z
2021-06-16T10:41:46Z
null
Hi, awesome folks behind the huggingface! Here is my PR for the text classification dataset in Mongolian. Please do let me know in case you have anything to clarify. Thanks & Regards, Enod
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2492/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2492/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2492.diff", "html_url": "https://github.com/huggingface/datasets/pull/2492", "merged_at": "2021-06-16T10:41:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/2492.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2492" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2715
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2715/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2715/comments
https://api.github.com/repos/huggingface/datasets/issues/2715/events
https://github.com/huggingface/datasets/pull/2715
952,845,229
MDExOlB1bGxSZXF1ZXN0Njk2OTc5MjQ1
2,715
Update PAN-X data URL in XTREME dataset
[]
closed
false
null
1
2021-07-26T12:21:17Z
2021-07-26T13:27:59Z
2021-07-26T13:27:59Z
null
Related to #2710, #2691.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2715/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2715/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2715.diff", "html_url": "https://github.com/huggingface/datasets/pull/2715", "merged_at": "2021-07-26T13:27:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/2715.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2715" }
true
[ "Merging since the CI is just about missing infos in the dataset card" ]
https://api.github.com/repos/huggingface/datasets/issues/6064
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6064/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6064/comments
https://api.github.com/repos/huggingface/datasets/issues/6064/events
https://github.com/huggingface/datasets/pull/6064
1,818,703,725
PR_kwDODunzps5WPzAv
6,064
set dev version
[]
closed
false
null
3
2023-07-24T15:56:00Z
2023-07-24T16:05:19Z
2023-07-24T15:56:10Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6064/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6064/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/6064.diff", "html_url": "https://github.com/huggingface/datasets/pull/6064", "merged_at": "2023-07-24T15:56:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/6064.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6064" }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6064). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma...