| url (str, 58–61) | repository_url (str, 1 value) | labels_url (str, 72–75) | comments_url (str, 67–70) | events_url (str, 65–68) | html_url (str, 46–51) | id (int64, 599M–1.83B) | node_id (str, 18–32) | number (int64, 1–6.09k) | title (str, 1–290) | labels (list) | state (str, 2 values) | locked (bool, 1 class) | milestone (dict) | comments (int64, 0–54) | created_at (str, 20) | updated_at (str, 20) | closed_at (str, 20, nullable) | active_lock_reason (null) | body (str, 0–228k, nullable) | reactions (dict) | timeline_url (str, 67–70) | performed_via_github_app (null) | state_reason (str, 3 values) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) | comments_text (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4008 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4008/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4008/comments | https://api.github.com/repos/huggingface/datasets/issues/4008/events | https://github.com/huggingface/datasets/pull/4008 | 1,179,591,068 | PR_kwDODunzps409Ixp | 4,008 | Support streaming daily_dialog dataset | [] | closed | false | null | 1 | 2022-03-24T14:23:23Z | 2022-03-24T15:29:01Z | 2022-03-24T14:46:58Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4008/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4008/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4008.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4008",
"merged_at": "2022-03-24T14:46:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4008.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4008"
} | true | [
"Yay! I love this dataset!"
] |
https://api.github.com/repos/huggingface/datasets/issues/4176 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4176/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4176/comments | https://api.github.com/repos/huggingface/datasets/issues/4176/events | https://github.com/huggingface/datasets/issues/4176 | 1,206,515,563 | I_kwDODunzps5H6fdr | 4,176 | Very slow between two operations | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2022-04-17T23:52:29Z | 2022-04-18T00:03:00Z | 2022-04-18T00:03:00Z | null | Hello, in the processing stage, I use two operations. The first one, map + filter, is very fast and uses all the cores, while the second step is very slow and does not use all the cores.
Also, there is a significant lag between them. Am I missing something?
```
raw_datasets = raw_datasets.map(
    split_func,
    batched=False,
    num_proc=args.preprocessing_num_workers,
    load_from_cache_file=not args.overwrite_cache,
    desc="running split para ==>",
).filter(
    lambda example: example['text1'] != '' and example['text2'] != '',
    num_proc=args.preprocessing_num_workers,
    desc="filtering ==>",
)

processed_datasets = raw_datasets.map(
    preprocess_function,
    batched=True,
    num_proc=args.preprocessing_num_workers,
    remove_columns=column_names,
    load_from_cache_file=not args.overwrite_cache,
    desc="Running tokenizer on dataset===>",
)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4176/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4176/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1275 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1275/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1275/comments | https://api.github.com/repos/huggingface/datasets/issues/1275/events | https://github.com/huggingface/datasets/pull/1275 | 758,958,066 | MDExOlB1bGxSZXF1ZXN0NTM0MDM2NjIw | 1,275 | Yoruba GV NER added | [] | closed | false | null | 1 | 2020-12-08T00:31:38Z | 2020-12-08T23:25:28Z | 2020-12-08T23:25:28Z | null | I just added Yoruba GV NER dataset from this paper https://www.aclweb.org/anthology/2020.lrec-1.335/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1275/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1275/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1275.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1275",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1275.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1275"
} | true | [
"Thank you. Okay, I will add the dataset card."
] |
https://api.github.com/repos/huggingface/datasets/issues/5802 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5802/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5802/comments | https://api.github.com/repos/huggingface/datasets/issues/5802/events | https://github.com/huggingface/datasets/pull/5802 | 1,686,509,799 | PR_kwDODunzps5PR199 | 5,802 | Validate non-empty data_files | [] | closed | false | null | 2 | 2023-04-27T09:51:36Z | 2023-04-27T14:59:47Z | 2023-04-27T14:51:40Z | null | This PR adds validation of `data_files`, so that they are non-empty (str, list, or dict) or `None` (default).
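A minimal sketch of the kind of check this adds (function name and message are illustrative, not necessarily the merged code):
```python
def _check_non_empty_data_files(data_files):
    # Allowed: None (the default) or a non-empty str / list / dict.
    if data_files is not None and not data_files:
        raise ValueError(
            f"Empty `data_files`: {data_files}. It should be either non-empty or None (default)."
        )
```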
See: https://github.com/huggingface/datasets/pull/5787#discussion_r1178862327 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5802/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5802/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5802.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5802",
"merged_at": "2023-04-27T14:51:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5802.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5802"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/3445 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3445/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3445/comments | https://api.github.com/repos/huggingface/datasets/issues/3445/events | https://github.com/huggingface/datasets/issues/3445 | 1,082,370,968 | I_kwDODunzps5Ag6uY | 3,445 | question | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 1 | 2021-12-16T15:57:00Z | 2022-01-03T10:09:00Z | 2022-01-03T10:09:00Z | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3445/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3445/timeline | null | completed | null | null | false | [
"Hi ! What's your question ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/5462 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5462/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5462/comments | https://api.github.com/repos/huggingface/datasets/issues/5462/events | https://github.com/huggingface/datasets/pull/5462 | 1,556,572,144 | PR_kwDODunzps5Iglqu | 5,462 | Concatenate on axis=1 with misaligned blocks | [] | closed | false | null | 4 | 2023-01-25T12:33:22Z | 2023-01-26T09:37:00Z | 2023-01-26T09:27:19Z | null | Allow to concatenate on axis 1 two tables made of misaligned blocks.
For example, the first table may have 2 row blocks of 3 rows each, while the second table has 3 row blocks of 2 rows each.
To do that, I slice the row blocks to re-align the blocks.
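A rough sketch of the re-alignment idea (illustrative only; the actual PR works on `datasets`' internal table classes, but the same logic applies to any blocks exposing `slice` and `__len__`, such as pyarrow tables):
```python
def align_blocks(blocks_a, blocks_b):
    """Slice two lists of row blocks so both share the same row boundaries.

    Assumes both lists hold the same total number of rows (required anyway
    for concatenating on axis=1).
    """
    out_a, out_b = [], []
    a, b = list(blocks_a), list(blocks_b)
    while a and b:
        n = min(len(a[0]), len(b[0]))  # rows shared by the two leading blocks
        out_a.append(a[0].slice(0, n))
        out_b.append(b[0].slice(0, n))
        a[0], b[0] = a[0].slice(n), b[0].slice(n)
        if len(a[0]) == 0:
            a.pop(0)
        if len(b[0]) == 0:
            b.pop(0)
    return out_a, out_b
```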
Fix https://github.com/huggingface/datasets/issues/5413 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5462/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5462/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5462.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5462",
"merged_at": "2023-01-26T09:27:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5462.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5462"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
https://api.github.com/repos/huggingface/datasets/issues/3986 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3986/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3986/comments | https://api.github.com/repos/huggingface/datasets/issues/3986/events | https://github.com/huggingface/datasets/issues/3986 | 1,176,429,565 | I_kwDODunzps5GHuP9 | 3,986 | Dataset loads indefinitely after modifying default cache path (~/.cache/huggingface) | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 5 | 2022-03-22T08:23:21Z | 2023-03-06T16:55:04Z | null | null | ## Describe the bug
Dataset loads indefinitely after modifying cache path (~/.cache/huggingface)
If none of the environment variables are set, this custom dataset loads fine (a JSON-based dataset with a custom dataset loading script).
**Update:** `transformers` modules face the same issue as well during loading.
## A clear and concise description of what the bug is.
Issue:
- Dataset loading stalls / freezes indefinitely when HF_HOME is changed to a custom directory
- No error code, had to terminate the process
- There are some files created in the cache directory:
```
custom_cache_dir
| -- modules
| -- __init__.py
| -- datasets_modules
| -- __init__.py
| -- datasets
| -- __init__.py
| -- script.py (Dataset loading script)
| -- script.lock
```
There's no error nor any logs thrown, so I'm out of ideas on how to debug this. The custom dataset works fine if the default ~/.cache dir is used, but unfortunately it's out of space and we do not have permissions to modify the disk.
## Steps to reproduce the bug
What I've tried:
- Modifying HF_HOME (https://github.com/huggingface/transformers/issues/8703); a shell sketch is shown after this list
- Modifying HF_DATASETS_CACHE (https://huggingface.co/docs/datasets/v1.12.0/cache.html)
- Modifying cache_dir param during runtime
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset('test_dataset', cache_dir='/path/to/new/cache')
```
- Disabling dataset cache
```python
>>> from datasets import set_caching_enabled
>>> set_caching_enabled(False)
```
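For the first item, a minimal shell sketch of the attempted change (path and script name are illustrative):
```bash
export HF_HOME=/path/to/custom_cache_dir
python load_my_dataset.py  # hangs while loading the dataset
```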
## Expected results
Datasets should load/cache as usual, with the only exception that the cache directory is different.
## Actual results
Any of the above actions to change the cache directory result in loading indefinitely, without terminating.
## Environment info
- `transformers` version: 4.18.0.dev0
- Platform: Linux-4.15.0-54-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3986/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3986/timeline | null | null | null | null | false | [
"Hi ! I didn't managed to reproduce the issue. When you kill the process, is there any stacktrace that shows at what point in the code python is hanging ?",
"Hi @lhoestq , I've traced the issue back to file locking. It's similar to this thread, using Lustre filesystem as well. https://github.com/huggingface/datas... |
https://api.github.com/repos/huggingface/datasets/issues/3335 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3335/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3335/comments | https://api.github.com/repos/huggingface/datasets/issues/3335/events | https://github.com/huggingface/datasets/pull/3335 | 1,066,064,126 | PR_kwDODunzps4vISGy | 3,335 | add Speech commands dataset | [] | closed | false | null | 11 | 2021-11-29T13:52:47Z | 2021-12-10T10:37:21Z | 2021-12-10T10:30:15Z | null | closes #3283 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3335/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3335/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3335.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3335",
"merged_at": "2021-12-10T10:30:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3335.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3335"
} | true | [
"@anton-l ping",
"@lhoestq \r\nHi Quentin! Thank you for your feedback and suggestions! 🤗\r\n\r\nYes, that was actually what I wanted to do next - I mean the steaming stuff :)\r\nAlso, I need to make some changes to the readme (to account for the updated features set).\r\n\r\nHopefully, I will be done by tomorro... |
https://api.github.com/repos/huggingface/datasets/issues/2347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2347/comments | https://api.github.com/repos/huggingface/datasets/issues/2347/events | https://github.com/huggingface/datasets/issues/2347 | 887,404,868 | MDU6SXNzdWU4ODc0MDQ4Njg= | 2,347 | Add an API to access the language and pretty name of a dataset | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 6 | 2021-05-11T14:10:08Z | 2022-10-05T17:16:54Z | 2022-10-05T17:16:53Z | null | It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Transformers examples scripts. | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2347/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2347/timeline | null | completed | null | null | false | [
"Hi ! With @bhavitvyamalik we discussed about having something like\r\n```python\r\nfrom datasets import load_dataset_card\r\n\r\ndataset_card = load_dataset_card(\"squad\")\r\nprint(dataset_card.metadata.pretty_name)\r\n# Stanford Question Answering Dataset (SQuAD)\r\nprint(dataset_card.metadata.languages)\r\n# [\... |
https://api.github.com/repos/huggingface/datasets/issues/5585 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5585/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5585/comments | https://api.github.com/repos/huggingface/datasets/issues/5585/events | https://github.com/huggingface/datasets/issues/5585 | 1,602,190,030 | I_kwDODunzps5ff3rO | 5,585 | Cache is not transportable | [] | closed | false | null | 2 | 2023-02-28T00:53:06Z | 2023-02-28T21:26:52Z | 2023-02-28T21:26:52Z | null | ### Describe the bug
I would like to share cache between two machines (a Windows host machine and a WSL instance).
I run most of my code in WSL. I have just run out of space in the virtual drive. Rather than expand the drive size, I plan to move the cache to the host Windows machine, thereby sharing the downloads.
I'm hoping that I can just copy/paste the cache files, but I notice that a lot of the file names start with the path name, e.g. `_home_davidg_.cache_huggingface_datasets_conll2003_default-451...98.lock` where `home/davidg` is where the cache is in WSL.
This seems to suggest that the cache is not portable/cannot be centralised or shared. Is this the case, or are the files that start with path names not integral to the caching mechanism? Because copying the cache files _seems_ to work, but I'm not filled with confidence that something isn't going to break.
A related issue, when trying to load a dataset that should come from cache (running in WSL, pointing to cache on the Windows host) it seemed to work fine, but it still uses a WSL directory for `.cache\huggingface\modules\datasets_modules`. I see nothing in the docs about this, or how to point it to a different place.
I have asked a related question on the forum: https://discuss.huggingface.co/t/is-datasets-cache-operating-system-agnostic/32656
### Steps to reproduce the bug
View the cache directory in WSL/Windows.
### Expected behavior
Cache can be shared between (virtual) machines and be transportable.
It would be nice to have a simple way to say "Dear Hugging Face packages, please put ALL your cache in `blah/de/blah`" and have all the Hugging Face packages respect that single location.
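As far as I can tell, the closest existing knob is the `HF_HOME` environment variable, which the Hugging Face libraries use as the root for their default cache locations, e.g. (path illustrative):
```bash
export HF_HOME=/path/to/shared/cache
```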
### Environment info
```
- `datasets` version: 2.9.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5585/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5585/timeline | null | completed | null | null | false | [
"Hi ! No the cache is not transportable in general. It will work on a shared filesystem if you use the same python environment, but not across machines/os/environments.\r\n\r\nIn particular, reloading cached datasets does work, but reloading cached processed datasets (e.g. from `map`) may not work. This is because ... |
https://api.github.com/repos/huggingface/datasets/issues/5181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5181/comments | https://api.github.com/repos/huggingface/datasets/issues/5181/events | https://github.com/huggingface/datasets/issues/5181 | 1,431,027,102 | I_kwDODunzps5VS72e | 5,181 | Add a guide for semantic segmentation | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 2 | 2022-11-01T07:54:50Z | 2022-11-04T18:23:36Z | 2022-11-04T18:23:36Z | null | Currently, we have these guides for object detection and image classification:
* https://huggingface.co/docs/datasets/object_detection
* https://huggingface.co/docs/datasets/image_classification
I am proposing adding a similar guide for semantic segmentation.
I am happy to contribute a PR for it.
Cc: @osanseviero @nateraw | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5181/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5181/timeline | null | completed | null | null | false | [
"Sure this sounds great! Would this be pure torchvision, albumentations, or something else?",
"I am considering `torchvision` and `albumentations`. Also [works with TensorFlow](https://github.com/deep-diver/segformer-tf-transformers/blob/main/notebooks/TFSegFormer_Finetune.ipynb). \r\n\r\nI am assigning the issue... |
https://api.github.com/repos/huggingface/datasets/issues/2762 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2762/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2762/comments | https://api.github.com/repos/huggingface/datasets/issues/2762/events | https://github.com/huggingface/datasets/issues/2762 | 961,652,046 | MDU6SXNzdWU5NjE2NTIwNDY= | 2,762 | Add RVL-CDIP dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",... | closed | false | null | 3 | 2021-08-05T09:57:05Z | 2022-04-21T17:15:41Z | 2022-04-21T17:15:41Z | null | ## Adding a Dataset
- **Name:** RVL-CDIP
- **Description:** The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
- **Paper:** https://www.cs.cmu.edu/~aharley/icdar15/
- **Data:** https://www.cs.cmu.edu/~aharley/rvl-cdip/
- **Motivation:** I'm currently adding LayoutLMv2 and LayoutXLM to HuggingFace Transformers. LayoutLM (v1) already exists in the library. This dataset has a large value for document image classification (i.e. classifying scanned documents). LayoutLM models obtain SOTA on this dataset, so would be great to directly use it in notebooks.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2762/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2762/timeline | null | completed | null | null | false | [
"cc @nateraw ",
"#self-assign",
"[labels_only.tar.gz](https://docs.google.com/uc?authuser=0&id=0B0NKIRwUL9KYcXo3bV9LU0t3SGs&export=download) on the RVL-CDIP website does not work for me.\r\n\r\n> 404. That’s an error. The requested URL was not found on this server.\r\n\r\nI contacted the author ( Adam Harley) r... |
https://api.github.com/repos/huggingface/datasets/issues/3557 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3557/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3557/comments | https://api.github.com/repos/huggingface/datasets/issues/3557/events | https://github.com/huggingface/datasets/pull/3557 | 1,097,946,034 | PR_kwDODunzps4wvIHl | 3,557 | Fix bug in `ImageClassification` task template | [] | closed | false | null | 3 | 2022-01-10T14:09:59Z | 2022-01-11T15:47:52Z | 2022-01-11T15:47:52Z | null | Fixes a bug in the `ImageClassification` task template which requires specifying class labels twice in dataset scripts. Additionally, this PR refactors the API around the classification task templates for cleaner `labels` handling.
CC: @lewtun @nateraw | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3557/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3557/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3557.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3557",
"merged_at": "2022-01-11T15:47:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3557.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3557"
} | true | [
"The CI failures are unrelated to the changes in this PR.",
"> The CI failures are unrelated to the changes in this PR.\r\n\r\nIt seems that some of the failures are due to the tests on the dataset cards (e.g. CIFAR, MNIST, FASHION_MNIST). Perhaps it's worth addressing those in this PR to avoid confusing downstre... |
https://api.github.com/repos/huggingface/datasets/issues/4125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4125/comments | https://api.github.com/repos/huggingface/datasets/issues/4125/events | https://github.com/huggingface/datasets/pull/4125 | 1,196,633,936 | PR_kwDODunzps411qeR | 4,125 | BIG-bench | [] | closed | false | null | 21 | 2022-04-07T22:33:30Z | 2022-06-08T17:57:48Z | 2022-06-08T17:32:32Z | null | This PR adds all BIG-bench json tasks to huggingface/datasets. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4125/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4125/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4125.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4125",
"merged_at": "2022-06-08T17:32:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4125.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4125"
} | true | [
"> It looks like the CI is failing on windows because our windows CI is unable to clone the bigbench repository (maybe it has to do with filenames that are longer than 256 characters, which windows don't like). Could the smaller installation of bigbench via pip solve this issue ?\r\n> Otherwise we can see how to re... |
https://api.github.com/repos/huggingface/datasets/issues/2926 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2926/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2926/comments | https://api.github.com/repos/huggingface/datasets/issues/2926/events | https://github.com/huggingface/datasets/issues/2926 | 997,463,277 | I_kwDODunzps47dBTt | 2,926 | Error when downloading datasets to non-traditional cache directories | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 1 | 2021-09-15T19:59:46Z | 2021-11-24T21:42:31Z | null | null | ## Describe the bug
When the cache directory is linked (soft link) to a directory on a NetApp device, the download fails.
## Steps to reproduce the bug
```bash
ln -s /path/to/netapp/.cache ~/.cache
```
```python
load_dataset("imdb")
```
## Expected results
Successfully loading IMDB dataset
## Actual results
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=33432835,
num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0,
dataset_name='imdb')}, {'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'),
'recorded': SplitInfo(name='test', num_bytes=659932, num_examples=503, dataset_name='imdb')}, {'expected':
SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), 'recorded':
SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.1.2
- Platform: Ubuntu
- Python version: 3.8
## Extra notes
Stranger yet, trying to debug the phenomenon, I found the range of results to vary a lot without clear direction:
- With `cache_dir="/path/to/netapp/.cache"` the same thing happens.
- However, when linking `~/netapp/` to `/path/to/netapp` *and* setting `cache_dir="~/netapp/.cache/huggingface/datasets"` - it does work
- On the other hand, when linking `~/.cache` to `~/netapp/.cache` without using `cache_dir`, it doesn't work anymore.
While I could only test it on a NetApp device, it might also affect other mounted filesystems.
Thanks :)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2926/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2926/timeline | null | null | null | null | false | [
"Same here !"
] |
https://api.github.com/repos/huggingface/datasets/issues/822 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/822/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/822/comments | https://api.github.com/repos/huggingface/datasets/issues/822/events | https://github.com/huggingface/datasets/issues/822 | 739,579,314 | MDU6SXNzdWU3Mzk1NzkzMTQ= | 822 | datasets freezes | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 2 | 2020-11-10T05:10:19Z | 2023-07-20T16:08:14Z | 2023-07-20T16:08:13Z | null | Hi, I want to load these two datasets and convert them to torch Dataset format, but the code freezes for me; could you have a look please? Thanks.
```python
from datasets import load_dataset

dataset1 = load_dataset("squad", split="train[:10]")
dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question'])

dataset2 = load_dataset("imdb", split="train[:10]")
dataset2 = dataset2.set_format(type="torch", columns=["text", "label"])

print(len(dataset1))
```
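(Per the first reply below, PyTorch cannot convert string columns to tensors, so a sketch of a working call would restrict the format to tensor-convertible columns:)
```python
dataset2.set_format(type="torch", columns=["label"])  # "text" is a string column and cannot become a tensor
```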
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/822/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/822/timeline | null | completed | null | null | false | [
"Pytorch is unable to convert strings to tensors unfortunately.\r\nYou can use `set_format(type=\"torch\")` on columns that can be converted to tensors, such as token ids.\r\n\r\nThis makes me think that we should probably raise an error or at least a warning when one tries to create pytorch tensors out of text col... |
https://api.github.com/repos/huggingface/datasets/issues/4910 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4910/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4910/comments | https://api.github.com/repos/huggingface/datasets/issues/4910/events | https://github.com/huggingface/datasets/issues/4910 | 1,354,374,328 | I_kwDODunzps5Quhy4 | 4,910 | Identical keywords in build_kwargs and config_kwargs lead to TypeError in load_dataset_builder() | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"descript... | open | false | null | 7 | 2022-08-29T14:11:48Z | 2022-09-13T11:58:46Z | null | null | ## Describe the bug
In `load_dataset_builder()`, `builder_kwargs` and `config_kwargs` can contain the same keywords, leading to a `TypeError` ("type object got multiple values for keyword argument 'xyz'").
I ran into this problem with the keyword: `base_path`. It might happen with other kwargs as well. I think a quickfix would be
```python
builder_cls = import_main_class(dataset_module.module_path)
builder_kwargs = dataset_module.builder_kwargs
data_files = builder_kwargs.pop("data_files", data_files)
config_name = builder_kwargs.pop("config_name", name)
hash = builder_kwargs.pop("hash")
base_path = builder_kwargs.pop("base_path")
```
and then pass base_path into `builder_cls`.
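A minimal illustration of how the clash arises (hypothetical function, same call pattern as `builder_cls(**builder_kwargs, **config_kwargs)`):
```python
def build(hash=None, base_path=None, **config_kwargs):
    pass

builder_kwargs = {"hash": "abc", "base_path": "./default"}
config_kwargs = {"base_path": "./sample_data"}

# Both dicts carry `base_path`, so this call raises:
# TypeError: build() got multiple values for keyword argument 'base_path'
build(**builder_kwargs, **config_kwargs)
```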
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("rotten_tomatoes", base_path="./sample_data")
```
## Expected results
The docs state: `**config_kwargs` — Keyword arguments to be passed to the [BuilderConfig](https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/builder_classes#datasets.BuilderConfig) and used in the [DatasetBuilder](https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/builder_classes#datasets.DatasetBuilder).
So I would expect to be able to pass the base_path into `load_dataset()`.
## Actual results
TypeError("type object got multiple values for keyword argument "base_path").
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: macOS-12.5-arm64-arm-64bit
- Python version: 3.8.9
- PyArrow version: 9.0.0
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4910/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4910/timeline | null | null | null | null | false | [
"I am getting similar error - `TypeError: type object got multiple values for keyword argument 'name'` while following this [tutorial](https://huggingface.co/docs/datasets/dataset_script#create-a-dataset-loading-script). I am getting this error with the `dataset-cli test` command.\r\n\r\n`datasets` version: 2.4.0",... |
https://api.github.com/repos/huggingface/datasets/issues/4355 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4355/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4355/comments | https://api.github.com/repos/huggingface/datasets/issues/4355/events | https://github.com/huggingface/datasets/pull/4355 | 1,236,797,490 | PR_kwDODunzps433EgP | 4,355 | Fix warning in upload_file | [] | closed | false | null | 1 | 2022-05-16T08:21:31Z | 2022-05-16T11:28:02Z | 2022-05-16T11:19:57Z | null | Fix warning:
```
FutureWarning: Pass path_or_fileobj='...' as keyword args. From version 0.7 passing these as positional arguments will result in an error
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4355/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4355/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4355.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4355",
"merged_at": "2022-05-16T11:19:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4355.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4355"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3373 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3373/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3373/comments | https://api.github.com/repos/huggingface/datasets/issues/3373/events | https://github.com/huggingface/datasets/issues/3373 | 1,070,406,391 | I_kwDODunzps4_zRr3 | 3,373 | Support streaming zipped CSV dataset repo by passing only repo name | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 0 | 2021-12-03T09:48:24Z | 2021-12-16T18:03:31Z | 2021-12-16T18:03:31Z | null | Given a community 🤗 dataset repository containing only a zipped CSV file (only raw data, no loading script), I would like to load it in streaming mode without passing `data_files`:
```
ds_name = "bigscience-catalogue-data/vietnamese_poetry_from_fsoft_ai_lab"
ds = load_dataset(ds_name, split="train", streaming=True, use_auth_token=True)
item = next(iter(ds))
```
Currently, it gives a `FileNotFoundError` because there is no glob (no "*" after "zip://", i.e. "zip://*") in the passed URL:
```
'zip://::https://huggingface.co/datasets/bigscience-catalogue-data/vietnamese_poetry_from_fsoft_ai_lab/resolve/e5d45f1bd9a8a798cc14f0a45ebc1ce91907c792/poems_dataset.zip'
```
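A sketch of the default this request implies (illustrative, not the actual resolution code):
```python
def default_streaming_pattern(archive_url: str) -> str:
    # With no explicit data_files, glob every member of the archive.
    return f"zip://*::{archive_url}"
```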
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3373/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3373/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5551 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5551/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5551/comments | https://api.github.com/repos/huggingface/datasets/issues/5551/events | https://github.com/huggingface/datasets/pull/5551 | 1,592,140,836 | PR_kwDODunzps5KXCof | 5,551 | Suggest scikit-learn instead of sklearn | [] | closed | false | null | 4 | 2023-02-20T16:16:57Z | 2023-02-21T13:27:57Z | 2023-02-21T13:21:07Z | null | This is a kinda unimportant fix, but the suggested `pip install sklearn` does not work.
The current error message if sklearn is not installed:
```
ImportError: To be able to use [dataset name], you need to install the following dependency: sklearn.
Please install it using 'pip install sklearn' for instance.
```
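For reference, the package name on PyPI differs from the import name, so the working command is:
```bash
pip install scikit-learn
```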
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5551/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5551/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5551.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5551",
"merged_at": "2023-02-21T13:21:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5551.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5551"
} | true | [
"good catch!",
"_The documentation is not available anymore as the PR was closed or merged._",
"The test fail is unrelated to this PR and fixed on `main` - merging :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: ... |
https://api.github.com/repos/huggingface/datasets/issues/805 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/805/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/805/comments | https://api.github.com/repos/huggingface/datasets/issues/805/events | https://github.com/huggingface/datasets/issues/805 | 737,019,360 | MDU6SXNzdWU3MzcwMTkzNjA= | 805 | On loading a metric from datasets, I get the following error | [] | closed | false | null | 1 | 2020-11-05T15:14:38Z | 2022-02-14T15:32:59Z | 2022-02-14T15:32:59Z | null | `from datasets import load_metric`
`metric = load_metric('bleurt')`
Traceback:
```
210 class _ArrayXDExtensionType(pa.PyExtensionType):
211
212     ndims: int = None

AttributeError: module 'pyarrow' has no attribute 'PyExtensionType'
```
Any help will be appreciated. Thank you. | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/805/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/805/timeline | null | completed | null | null | false | [
"Hi ! We support only pyarrow > 0.17.1 so that we have access to the `PyExtensionType` object.\r\nCould you update pyarrow and try again ?\r\n```\r\npip install --upgrade pyarrow\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/2755 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2755/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2755/comments | https://api.github.com/repos/huggingface/datasets/issues/2755/events | https://github.com/huggingface/datasets/pull/2755 | 959,115,888 | MDExOlB1bGxSZXF1ZXN0NzAyMjgwMjI4 | 2,755 | Fix metadata JSON for turkish_movie_sentiment dataset | [] | closed | false | null | 0 | 2021-08-03T13:25:44Z | 2021-08-04T09:06:54Z | 2021-08-04T09:06:53Z | null | Related to #2743. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2755/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2755/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2755.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2755",
"merged_at": "2021-08-04T09:06:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2755.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2755"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1834/comments | https://api.github.com/repos/huggingface/datasets/issues/1834/events | https://github.com/huggingface/datasets/pull/1834 | 803,517,094 | MDExOlB1bGxSZXF1ZXN0NTY5NDMzNDA4 | 1,834 | Fixes base_url of limit dataset | [] | closed | false | null | 1 | 2021-02-08T12:26:35Z | 2021-02-08T12:42:50Z | 2021-02-08T12:42:50Z | null | `test.json` is not available in the master branch of the repository anymore. Linking to a specific commit. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1834/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1834/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1834.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1834",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1834.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1834"
} | true | [
"OK, apparently it is a lot more complicated than simply changing the URL? Going to make an issue."
] |
https://api.github.com/repos/huggingface/datasets/issues/5976 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5976/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5976/comments | https://api.github.com/repos/huggingface/datasets/issues/5976/events | https://github.com/huggingface/datasets/pull/5976 | 1,768,503,913 | PR_kwDODunzps5TmAFp | 5,976 | Avoid stuck map operation when subprocesses crashes | [] | closed | false | null | 11 | 2023-06-21T21:18:31Z | 2023-07-10T09:58:39Z | 2023-07-10T09:50:07Z | null | I've been using Dataset.map() with `num_proc=os.cpu_count()` to leverage multicore processing for my datasets, but from time to time I get stuck processes waiting forever. Apparently, when one of the subprocesses is abruptly killed (OOM killer, segfault, SIGKILL, etc), the main process keeps waiting for the async task sent to that child process to finish.
It seems to be easy to reproduce the issue with the following script:
```python
import os

from datasets import Dataset, Features, Value


def do_stuck(item):
    os.kill(os.getpid(), 9)


data = {
    "col1": list(range(5)),
    "col2": list(range(5)),
}

ds = Dataset.from_dict(
    data,
    features=Features({
        "col1": Value("int64"),
        "col2": Value("int64"),
    }),
)

print(ds.map(do_stuck, num_proc=4))
```
This is an old behavior in Python, which apparently was fixed a few years ago in `concurrent.futures.ProcessPoolExecutor` ([ref](https://bugs.python.org/issue9205)), but not in `multiprocessing.pool.Pool` / `multiprocess.pool.Pool`, which is used by `Dataset.map` ([ref](https://bugs.python.org/issue22393)).
This PR is an idea to try to detect when a child process gets killed, and raises a `RuntimeError` warning the dataset.map() caller.
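A sketch of the detection idea (illustrative; it relies on the pool's internal worker list and is not necessarily the PR's code):
```python
import multiprocess

def wait_checking_workers(async_result, pool, poll_seconds=0.05):
    """Wait for an async map task, raising if a worker died abruptly."""
    while True:
        try:
            return async_result.get(timeout=poll_seconds)
        except multiprocess.TimeoutError:
            for worker in pool._pool:  # internal attribute, for illustration only
                if not worker.is_alive():
                    raise RuntimeError(
                        f"One of the subprocesses died abruptly (exitcode {worker.exitcode})."
                    )
```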
EDIT: Related proposal for future improvement: https://github.com/huggingface/datasets/discussions/5977 | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5976/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5976/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5976.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5976",
"merged_at": "2023-07-10T09:50:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5976.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5976"
} | true | [
"Hi ! Do you think this can be fixed at the Pool level ? Ideally it should be the Pool responsibility to handle this, not the `map` code. We could even subclass Pool if needed (at least the one from `multiprocess`)",
"@lhoestq it makes sense to me. Just pushed a refactoring creating a `class ProcessPool(multiproc... |
https://api.github.com/repos/huggingface/datasets/issues/4279 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4279/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4279/comments | https://api.github.com/repos/huggingface/datasets/issues/4279/events | https://github.com/huggingface/datasets/pull/4279 | 1,225,300,273 | PR_kwDODunzps43SXw5 | 4,279 | Update minimal PyArrow version warning | [] | closed | false | null | 1 | 2022-05-04T12:26:09Z | 2022-05-05T08:50:58Z | 2022-05-05T08:43:47Z | null | Update the minimal PyArrow version warning (should've been part of #4250). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4279/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4279/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4279.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4279",
"merged_at": "2022-05-05T08:43:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4279.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4279"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2222 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2222/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2222/comments | https://api.github.com/repos/huggingface/datasets/issues/2222/events | https://github.com/huggingface/datasets/pull/2222 | 857,847,231 | MDExOlB1bGxSZXF1ZXN0NjE1MTk5MTM5 | 2,222 | Fix too long WindowsFileLock name | [
{
"color": "ffffff",
"default": true,
"description": "This will not be worked on",
"id": 1935892913,
"name": "wontfix",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix"
}
] | closed | false | null | 3 | 2021-04-14T12:26:52Z | 2021-04-14T15:00:25Z | 2021-04-14T14:46:19Z | null | Fix WindowsFileLock name longer than allowed MAX_PATH by shortening the basename. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2222/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2222/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2222.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2222",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2222.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2222"
} | true | [
"Windows users should disable the max path length limit. It's a nightmare to handle it.\r\nAlso the lock path must not be changed in a random way. Otherwise from another process the lock path might not be the same and the locking mechanism won't work.",
"Do you agree with handling the case where MAX_PATH is not d... |
https://api.github.com/repos/huggingface/datasets/issues/6017 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6017/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6017/comments | https://api.github.com/repos/huggingface/datasets/issues/6017/events | https://github.com/huggingface/datasets/issues/6017 | 1,799,309,132 | I_kwDODunzps5rP0dM | 6,017 | Switch to huggingface_hub's HfFileSystem | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 0 | 2023-07-11T16:24:40Z | 2023-07-17T17:01:01Z | 2023-07-17T17:01:01Z | null | instead of the current `datasets.filesystems.hffilesystem.HfFileSystem`, which can be slow in some cases
related to https://github.com/huggingface/datasets/issues/5846 and https://github.com/huggingface/datasets/pull/5919 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6017/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6017/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1570 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1570/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1570/comments | https://api.github.com/repos/huggingface/datasets/issues/1570/events | https://github.com/huggingface/datasets/pull/1570 | 766,830,545 | MDExOlB1bGxSZXF1ZXN0NTM5NzM1MDY2 | 1,570 | Documentation for loading CSV datasets misleads the user | [] | closed | false | null | 0 | 2020-12-14T19:04:37Z | 2020-12-22T19:30:12Z | 2020-12-21T13:47:09Z | null | Documentation for loading CSV datasets misleads the user into thinking that setting `quote_char` to `False` will disable quoting.
There are two problems here:
i) `quote_char` is misspelled; it must be `quotechar`
ii) the documentation should mention `quoting` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1570/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1570/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1570.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1570",
"merged_at": "2020-12-21T13:47:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1570.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1570"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2007 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2007/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2007/comments | https://api.github.com/repos/huggingface/datasets/issues/2007/events | https://github.com/huggingface/datasets/issues/2007 | 824,518,158 | MDU6SXNzdWU4MjQ1MTgxNTg= | 2,007 | How to not load huggingface datasets into memory | [] | closed | false | null | 2 | 2021-03-08T12:35:26Z | 2021-08-04T18:02:25Z | 2021-08-04T18:02:25Z | null | Hi
I am running this example from the `transformers` library, version 4.3.3:
(Here is the full documentation: https://github.com/huggingface/transformers/issues/8771, but the running command should work out of the box.)
```bash
USE_TF=0 deepspeed run_seq2seq.py --model_name_or_path google/mt5-base --dataset_name wmt16 --dataset_config_name ro-en --source_prefix "translate English to Romanian: " --task translation_en_to_ro --output_dir /test/test_large --do_train --do_eval --predict_with_generate --max_train_samples 500 --max_val_samples 500 --max_source_length 128 --max_target_length 128 --sortish_sampler --per_device_train_batch_size 8 --val_max_target_length 128 --deepspeed ds_config.json --num_train_epochs 1 --eval_steps 25000 --warmup_steps 500 --overwrite_output_dir
```
(Here please find the script: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py)
If you do not pass `max_train_samples` in the above command (i.e. the full dataset is loaded), I get a memory issue on a GPU with 24 GB of memory.
I need to train a large-scale mt5 model on large-scale datasets such as Wikipedia (several of them concatenated, or other datasets in multiple languages like OPUS). Could you help me avoid loading the full data into memory, so the scripts do not depend on the data size?
In the above example, I was hoping the script could work without relying on the dataset size, so I can still train the model without subsampling the training set.
thank you so much @lhoestq for your great help in advance
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2007/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2007/timeline | null | completed | null | null | false | [
"So maybe a summary here: \r\nIf I could fit a large model with batch_size = X into memory, is there a way I could train this model for huge datasets with keeping setting the same? thanks ",
"The `datastets` library doesn't load datasets into memory. Therefore you can load a dataset that is terabytes big without ... |
https://api.github.com/repos/huggingface/datasets/issues/2591 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2591/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2591/comments | https://api.github.com/repos/huggingface/datasets/issues/2591/events | https://github.com/huggingface/datasets/issues/2591 | 936,957,975 | MDU6SXNzdWU5MzY5NTc5NzU= | 2,591 | Cached dataset overflowing disk space | [] | closed | false | null | 4 | 2021-07-05T10:43:19Z | 2021-07-19T09:08:19Z | 2021-07-19T09:08:19Z | null | I'm training a Swedish Wav2vec2 model on a Linux GPU and having issues with the Hugging Face cached dataset folder completely filling up my disk space (I'm training on a dataset of around 500 GB).
The cache folder is 500gb (and now my disk space is full).
Is there a way to toggle caching or set the caching to be stored on a different device (I have another drive with 4 tb that could hold the caching files).
This might not technically be a bug, but I was unsure and I felt that the bug was the closest one.
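To make the question concrete, a sketch of the kind of control I am looking for (`cache_dir` and the `HF_DATASETS_CACHE` environment variable do exist in `datasets`; the paths are illustrative):
```python
import os

# Option 1: point the whole datasets cache at the 4 TB drive (set before importing datasets)
os.environ["HF_DATASETS_CACHE"] = "/mnt/bigdrive/hf_datasets_cache"

# Option 2: a per-dataset cache directory
from datasets import load_dataset

common_voice_train = load_dataset(
    "common_voice", "sv-SE", split="train", cache_dir="/mnt/bigdrive/hf_datasets_cache"
)
```
For completeness, the traceback when the disk fills up: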
Traceback (most recent call last):
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/multiprocess/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 186, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/fingerprint.py", line 397, in wrapper
    out = func(self, *args, **kwargs)
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1983, in _map_single
    writer.finalize()
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/arrow_writer.py", line 418, in finalize
    self.pa_writer.close()
  File "pyarrow/ipc.pxi", line 402, in pyarrow.lib._CRecordBatchWriter.close
  File "pyarrow/error.pxi", line 97, in pyarrow.lib.check_status
OSError: [Errno 28] Error writing bytes to file. Detail: [errno 28] No space left on device
"""
The above exception was the direct cause of the following exception:
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2591/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2591/timeline | null | completed | null | null | false | [
"Hi! I'm transferring this issue over to `datasets`",
"I'm using the datasets concatenate dataset to combine the datasets and then train.\r\ntrain_dataset = concatenate_datasets([dataset1, dataset2, common_voice_train])\r\n\r\n",
"Hi @BirgerMoell.\r\n\r\nYou have several options:\r\n- to set caching to be store... |
https://api.github.com/repos/huggingface/datasets/issues/1711 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1711/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1711/comments | https://api.github.com/repos/huggingface/datasets/issues/1711/events | https://github.com/huggingface/datasets/pull/1711 | 782,129,083 | MDExOlB1bGxSZXF1ZXN0NTUxNzQxODA2 | 1,711 | Fix windows path scheme in cached path | [] | closed | false | null | 0 | 2021-01-08T13:45:56Z | 2021-01-11T09:23:20Z | 2021-01-11T09:23:19Z | null | As noticed in #807, there's currently an issue with `cached_path` not raising `FileNotFoundError` on Windows for absolute paths. This is due to the way we check whether a path is local or not: the scheme check using `urlparse` was incomplete.
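For context, a small sketch of the pitfall (plain `urllib.parse`, nothing datasets-specific):
```python
from urllib.parse import urlparse

# A Windows drive letter parses as a URL scheme, so a naive
# `urlparse(path).scheme != ""` test misclassifies local absolute paths as remote URLs
print(urlparse(r"C:\Users\me\file.txt").scheme)  # "c"  -> wrongly looks like a remote URL
print(urlparse("/home/me/file.txt").scheme)      # ""   -> correctly treated as local
```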
I fixed this and added tests | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1711/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1711/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1711.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1711",
"merged_at": "2021-01-11T09:23:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1711.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1711"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/620 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/620/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/620/comments | https://api.github.com/repos/huggingface/datasets/issues/620/events | https://github.com/huggingface/datasets/issues/620 | 699,815,135 | MDU6SXNzdWU2OTk4MTUxMzU= | 620 | map/filter multiprocessing raises errors and corrupts datasets | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 22 | 2020-09-11T22:30:06Z | 2020-10-08T16:31:47Z | 2020-10-08T16:31:46Z | null | After upgrading to 1.0, I started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
rel_ds_dict["validation"] = rel_ds_dict["test"]
return ner_ds_dict, rel_ds_dict
```
The first train_test_split, `ner_ds`/`ner_ds_dict`, returns a `train` and `test` split that are iterable.
The second, `rel_ds`/`rel_ds_dict` in this case, returns a Dataset dict that has rows, but selecting from or slicing into it returns an empty dictionary, e.g. `rel_ds_dict['train'][0] == {}` and `rel_ds_dict['train'][0:100] == {}`.
OK, I think I know the problem: `rel_ds` was mapped through a mapper with `num_proc=12`. If I remove `num_proc`, the dataset loads.
I also see errors with other map and filter functions when `num_proc` is set.
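A sketch of the pattern that triggers it (the dataset and functions here are placeholders, not my actual script; `num_proc` is the relevant knob):
```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
ds = ds.map(lambda ex: {"n_chars": len(ex["text"])}, num_proc=12)  # errors appear with num_proc set
ds = ds.filter(lambda ex: ex["n_chars"] > 0, num_proc=12)          # same for filter
```
Sample output from those runs: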
```
Done writing 67 indices in 536 bytes .
Done writing 67 indices in 536 bytes .
Fatal Python error: PyCOND_WAIT(gil_cond) failed
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/620/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/620/timeline | null | completed | null | null | false | [
"It seems that I ran into the same problem\r\n```\r\ndef tokenize(cols, example):\r\n for in_col, out_col in cols.items():\r\n example[out_col] = hf_tokenizer.convert_tokens_to_ids(hf_tokenizer.tokenize(example[in_col]))\r\n return example\r\ncola = datasets.load_dataset('glue', 'cola')\r\ntokenized_cola = col... |
https://api.github.com/repos/huggingface/datasets/issues/154 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/154/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/154/comments | https://api.github.com/repos/huggingface/datasets/issues/154/events | https://github.com/huggingface/datasets/pull/154 | 620,059,066 | MDExOlB1bGxSZXF1ZXN0NDE5Mzc4Mzgw | 154 | add Ubuntu Dialogs Corpus datasets | [] | closed | false | null | 0 | 2020-05-18T09:34:48Z | 2020-05-18T10:12:28Z | 2020-05-18T10:12:27Z | null | This PR adds the Ubuntu Dialog Corpus datasets version 2.0. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/154/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/154/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/154.diff",
"html_url": "https://github.com/huggingface/datasets/pull/154",
"merged_at": "2020-05-18T10:12:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/154.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/154"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4755 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4755/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4755/comments | https://api.github.com/repos/huggingface/datasets/issues/4755/events | https://github.com/huggingface/datasets/issues/4755 | 1,319,687,044 | I_kwDODunzps5OqNOE | 4,755 | Datasets.map causes incorrect overflow_to_sample_mapping when used with tokenizers and small batch size | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 2 | 2022-07-27T14:54:11Z | 2022-07-27T17:57:28Z | null | null | ## Describe the bug
When using `tokenizer`, we can retrieve the field `overflow_to_sample_mapping`, since long samples overflow into multiple token sequences.
However, when tokenizing is done via `Dataset.map`, with `n_proc > 1`, the `overflow_to_sample_mapping` field is wrong. This seems to be because each tokenizer only looks at its share of the samples, and maps to the index _within its share_, but then `Dataset.map` collates them together.
## Steps to reproduce the bug
1. Make a dataset of 3 strings.
2. Tokenize via Dataset.map with n_proc = 8
3. Inspect the `overflow_to_sample_mapping` field
## Expected results
`[0, 1, 2]`
## Actual results
`[0, 0, 0]`
Notes:
1. I have not yet extracted a minimal example, but the behavior above happens reliably
2. If the dataset is large, I have yet to determine whether this bug happens (a) not at all, (b) always, or (c) only on the small leftover batch at the end.
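A sketch of the reproduction I have in mind (model name and strings are illustrative; `batched=True` with `batch_size=1` means each batch sees a single sample):
```python
from datasets import Dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
ds = Dataset.from_dict({"text": ["first sample", "second sample", "third sample"]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=8, return_overflowing_tokens=True)

out = ds.map(tokenize, batched=True, batch_size=1, remove_columns=["text"])
print(out["overflow_to_sample_mapping"])  # expected [0, 1, 2]; the bug yields [0, 0, 0]
```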
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4755/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4755/timeline | null | null | null | null | false | [
"I've built a minimal example that shows this bug without `n_proc`. It seems like it's a problem any way of using **tokenizers, `overflow_to_sample_mapping`, and Dataset.map, with a small batch size**:\r\n\r\n```\r\nimport datasets\r\nimport transformers\r\npretrained = 'deepset/tinyroberta-squad2'\r\ntokenizer = t... |
https://api.github.com/repos/huggingface/datasets/issues/504 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/504/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/504/comments | https://api.github.com/repos/huggingface/datasets/issues/504/events | https://github.com/huggingface/datasets/pull/504 | 678,756,211 | MDExOlB1bGxSZXF1ZXN0NDY3NjUxOTA5 | 504 | Added downloading to Hyperpartisan news detection | [] | closed | false | null | 2 | 2020-08-13T21:53:46Z | 2020-08-27T08:18:41Z | 2020-08-27T08:18:41Z | null | Following the discussion on Slack and #349, I've updated the hyperpartisan dataset to pull directly from Zenodo rather than manual install, which should make this dataset much more accessible. Many thanks to @johanneskiesel !
Currently doesn't pass `test_load_real_dataset` - I'm using `self.config.name` which is `default` in this test. Might be related to #474 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/504/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/504/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/504.diff",
"html_url": "https://github.com/huggingface/datasets/pull/504",
"merged_at": "2020-08-27T08:18:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/504.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/504"
} | true | [
"Thank you @ghomasHudson for making our dataset available! This is great!",
"The test passes since #527 :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/4213 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4213/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4213/comments | https://api.github.com/repos/huggingface/datasets/issues/4213/events | https://github.com/huggingface/datasets/pull/4213 | 1,214,510,010 | PR_kwDODunzps42uft_ | 4,213 | ETT time series dataset | [] | closed | false | null | 2 | 2022-04-25T13:26:18Z | 2022-05-05T12:19:21Z | 2022-05-05T12:10:35Z | null | Ready for review. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4213/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4213/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4213.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4213",
"merged_at": "2022-05-05T12:10:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4213.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4213"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"thank you!\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3255 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3255/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3255/comments | https://api.github.com/repos/huggingface/datasets/issues/3255/events | https://github.com/huggingface/datasets/issues/3255 | 1,051,783,129 | I_kwDODunzps4-sO_Z | 3,255 | SciELO dataset ConnectionError | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2021-11-12T09:57:14Z | 2021-11-16T17:55:22Z | 2021-11-16T17:55:22Z | null | ## Describe the bug
I get `ConnectionError` when I am trying to load the SciELO dataset.
When I try the URL with `requests` I get:
```
>>> requests.head("https://ndownloader.figstatic.com/files/14019287")
<Response [302]>
```
And as far as I understand redirections in `datasets` are not supported for downloads.
https://github.com/huggingface/datasets/blob/807341d0db0728073ab605c812c67f927d148f38/datasets/scielo/scielo.py#L45
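For reference, following the redirect manually does resolve (a sketch with plain `requests`):
```python
import requests

r = requests.head("https://ndownloader.figstatic.com/files/14019287", allow_redirects=True)
print(r.status_code, r.url)  # 200 and the final figshare download URL
```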
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("scielo", "en-es")
```
## Expected results
Download SciELO dataset and load Dataset object
## Actual results
```
Downloading and preparing dataset scielo/en-es (download: 21.90 MiB, generated: 68.45 MiB, post-processed: Unknown size, total: 90.35 MiB) to /Users/test/.cache/huggingface/datasets/scielo/en-es/1.0.0/7e05d55a20257efeb9925ff5de65bd4884fc6ddb6d765f1ea3e8860449d90e0e...
Traceback (most recent call last):
File "scielo.py", line 3, in <module>
dataset = load_dataset("scielo", "en-es")
File "../lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File "../lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "../lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/Users/test/.cache/huggingface/modules/datasets_modules/datasets/scielo/7e05d55a20257efeb9925ff5de65bd4884fc6ddb6d765f1ea3e8860449d90e0e/scielo.py", line 77, in _split_generators
data_dir = dl_manager.download_and_extract(_URLS[self.config.name])
File "../lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract
return self.extract(self.download(url_or_urls))
File "../lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download
downloaded_path_or_paths = map_nested(
File "../lib/python3.8/site-packages/datasets/utils/py_utils.py", line 206, in map_nested
return function(data_struct)
File "../lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "../lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path
output_path = get_from_cache(
File "../lib/python3.8/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://ndownloader.figstatic.com/files/14019287
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.15.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.12
- PyArrow version: 6.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3255/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3255/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4650 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4650/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4650/comments | https://api.github.com/repos/huggingface/datasets/issues/4650/events | https://github.com/huggingface/datasets/issues/4650 | 1,296,680,037 | I_kwDODunzps5NScRl | 4,650 | Add SPECTER dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | 1 | 2022-07-07T01:41:32Z | 2022-07-14T02:07:49Z | null | null | ## Adding a Dataset
- **Name:** *SPECTER*
- **Description:** *SPECTER: Document-level Representation Learning using Citation-informed Transformers*
- **Paper:** *https://doi.org/10.18653/v1/2020.acl-main.207*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/specter_train_triples.jsonl.gz*
- **Motivation:** *Dataset of citation-informed triples for training and evaluating document-level representation models*
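In the meantime, the triples file can presumably be loaded directly as JSON lines (a sketch, untested):
```python
from datasets import load_dataset

url = "https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/specter_train_triples.jsonl.gz"
ds = load_dataset("json", data_files=url, split="train")
```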
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4650/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4650/timeline | null | null | null | null | false | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/SPECTER)"
] |
https://api.github.com/repos/huggingface/datasets/issues/3206 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3206/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3206/comments | https://api.github.com/repos/huggingface/datasets/issues/3206/events | https://github.com/huggingface/datasets/pull/3206 | 1,044,216,270 | PR_kwDODunzps4uEZJe | 3,206 | [WIP] Allow user-defined hash functions via a registry | [] | closed | false | null | 13 | 2021-11-03T23:25:42Z | 2021-11-05T12:38:11Z | 2021-11-05T12:38:04Z | null | Inspired by the discussion on hashing in https://github.com/huggingface/datasets/issues/3178#issuecomment-959016329, @lhoestq suggested that it would be neat to allow users more control over the hashing process. Specifically, it would be great if users can specify specific hashing functions depending on the **class** of the object.
As an example, we found in the linked topic that loaded spaCy models (`Language` objects) have different hashes when `dump`'d, but their byte representation with `Language.to_bytes()` _is_ deterministic. It would therefore be great if we could specify that for `Language` objects, the hasher should hash the object's `to_bytes()` return value instead of the object itself.
This PR adds a new, but tiny, dependency to manage the registry, namely [`catalogue`](https://github.com/explosion/catalogue).
Two files have been changed (apart from the added dependency in `setup.py`) and one file has been added.
**utils.registry** (added)
This file defines our custom Registry and builds a registry called "hashers". A Registry is basically a dictionary from names (str) to functions. A function can be added to the registry by a decorator, e.g.
```python
@hashers.register(spacy.Language)
def hash_spacy_language(nlp):
return Hasher.hash(nlp.to_bytes())
```
You'll notice that `spacy.Language` is not a string, even though the registry holds a str->func mapping. To accomplish this with classes in a dynamic way, catalogue.Registry needed to be subclassed and modified as `DatasetsRegistry`. All methods that use a name as an input are now modified so that classes are deterministically converted into strings in such a way that we can later retrieve the actual class from the string (below).
**utils.py_utils** (modified)
Added two functions to deal with classes and their qualified names, that is, their full descriptive name including the module. On the one hand this allows us to retrieve a string from a given class, e.g. given the `Module` class, return the `torch.nn.Module` str. Conversely, a function is added to convert such a fully qualified name into a class. For instance, given the string `torch.nn.Module`, return the `Module` class. These straightforward methods allow us to interchangeably use classes and strings without any needed user interaction - they can just register a class, and behind the scenes `DatasetsRegistry` converts these to deterministic strings.
**fingerprint** (modified)
Updated Hasher.hash so that if the object to hash is an instance of a class in the registry, the registered function is used to hash the object instead of the default behavior. To do so we iterate over the registry `hashers` and convert its keys (strings) into classes, and then we can use `isinstance`.
```python
# Check if the current object is an instance that is
# applicable to the user-defined hashers. If so, hash
# with the user-defined function
for full_module_name, func in hashers.get_all().items():
registered_cls = get_cls_from_qualname(full_module_name)
if isinstance(value, registered_cls):
return func(value)
```
**Putting it all together**
To test this, you can try the following example with spaCy. First install spaCy from source and check out a specific commit.
```shell
git clone https://github.com/explosion/spaCy.git
cd spaCy/
git checkout cab9209c3dfcd1b75dfe5657f10e52c4d847a3cf
cd ..
git clone https://github.com/BramVanroy/datasets.git
cd datasets
git checkout registry
pip install -e .
pip install ../spaCy
spacy download en_core_web_sm
```
Now you can run the following script. By default it will use the custom hasher function for the Language object. You can enable the default behavior by commenting out `@hashers.register...`.
```python
import spacy
from datasets.fingerprint import Hasher
from datasets.utils.registry import hashers
# Register a function so that when the Hasher encounters a spacy.Language object
# it uses this custom function to hash instead of the default
@hashers.register(spacy.Language)
def hash_spacy_language(nlp):
return Hasher.hash(nlp.to_bytes())
def main():
print(hashers.get_all())
nlp = spacy.load("en_core_web_sm")
dump1 = Hasher.hash(nlp)
nlp = spacy.load("en_core_web_sm")
dump2 = Hasher.hash(nlp)
print(dump1)
# succeeds when using the registered custom function
# fails if using the default
assert dump1 == dump2
if __name__ == '__main__':
main()
```
To do
====
- The above is just a proof-of-concept. I am open to changes/suggestions
- Tests still need to be written
- We should consider whether we can make `DatasetsRegistry` very restrictive and ONLY allow classes. That would make testing easier - otherwise we also need to test for other sorts of objects.
- Maybe the `hashers` definition is better suited in `fingerprint`?
- Documentation/examples need to be updated
- Not sure why the logger is not working in `hash()`
- `get_cls_from_qualname` might need a fail-safe: is it possible for a full_qualname to not have a module, and if so how do we deal with that?
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3206/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3206/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3206.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3206",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3206.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3206"
} | true | [
"Hi @BramVanroy, thanks for your PR.\r\n\r\nThere was a bug in TensorFlow/Keras. We have made a temporary fix in master branch. Please, merge master into your PR branch, so that the CI tests pass.\r\n\r\n```\r\ngit checkout registry\r\ngit fetch upstream master\r\ngit merge upstream/master\r\n```",
"@albertvillan... |
https://api.github.com/repos/huggingface/datasets/issues/5 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5/comments | https://api.github.com/repos/huggingface/datasets/issues/5/events | https://github.com/huggingface/datasets/issues/5 | 600,295,889 | MDU6SXNzdWU2MDAyOTU4ODk= | 5 | ValueError when a split is empty | [] | closed | false | null | 3 | 2020-04-15T13:25:13Z | 2020-04-29T09:23:05Z | 2020-04-29T09:23:05Z | null | When a split (either TEST, VALIDATION or TRAIN) is empty, I get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/dev/jplu/datasets/src/nlp/load.py", line 295, in load
ds = dbuilder.as_dataset(**as_dataset_kwargs)
File "/home/jplu/dev/jplu/datasets/src/nlp/builder.py", line 587, in as_dataset
datasets = utils.map_nested(build_single_dataset, split, map_tuple=True)
File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 158, in map_nested
for k, v in data_struct.items()
File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 158, in <dictcomp>
for k, v in data_struct.items()
File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 172, in map_nested
return function(data_struct)
File "/home/jplu/dev/jplu/datasets/src/nlp/builder.py", line 601, in _build_single_dataset
split=split,
File "/home/jplu/dev/jplu/datasets/src/nlp/builder.py", line 625, in _as_dataset
split_infos=self.info.splits.values(),
File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 200, in read
return py_utils.map_nested(_read_instruction_to_ds, instructions)
File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 172, in map_nested
return function(data_struct)
File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 191, in _read_instruction_to_ds
file_instructions = make_file_instructions(name, split_infos, instruction)
File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 104, in make_file_instructions
absolute_instructions=absolute_instructions,
File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 122, in _make_file_instructions_from_absolutes
'Split empty. This might means that dataset hasn\'t been generated '
ValueError: Split empty. This might means that dataset hasn't been generated yet and info not restored from GCS, or that legacy dataset is used.
```
How to reproduce:
```python
import csv
import nlp


class Bbc(nlp.GeneratorBasedBuilder):
    VERSION = nlp.Version("1.0.0")

    def __init__(self, **config):
        self.train = config.pop("train", None)
        self.validation = config.pop("validation", None)
        super(Bbc, self).__init__(**config)

    def _info(self):
        return nlp.DatasetInfo(builder=self, description="bla", features=nlp.features.FeaturesDict({"id": nlp.int32, "text": nlp.string, "label": nlp.string}))

    def _split_generators(self, dl_manager):
        return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepath": self.train}),
                nlp.SplitGenerator(name=nlp.Split.VALIDATION, gen_kwargs={"filepath": self.validation}),
                nlp.SplitGenerator(name=nlp.Split.TEST, gen_kwargs={"filepath": None})]

    def _generate_examples(self, filepath):
        if not filepath:
            return None, {}
        with open(filepath) as f:
            reader = csv.reader(f, delimiter=',', quotechar="\"")
            lines = list(reader)[1:]
            for idx, line in enumerate(lines):
                yield idx, {"id": idx, "text": line[1], "label": line[0]}
```
```python
import nlp
dataset = nlp.load("bbc", builder_kwargs={"train": "bbc/data/train.csv", "validation": "bbc/data/test.csv"})
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5/timeline | null | completed | null | null | false | [
"To fix this I propose to modify only the file `arrow_reader.py` with few updates. First update, the following method:\r\n```python\r\ndef _make_file_instructions_from_absolutes(\r\n name,\r\n name2len,\r\n absolute_instructions,\r\n):\r\n \"\"\"Returns the files instructions from the absolu... |
https://api.github.com/repos/huggingface/datasets/issues/1723 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1723/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1723/comments | https://api.github.com/repos/huggingface/datasets/issues/1723/events | https://github.com/huggingface/datasets/pull/1723 | 783,982,100 | MDExOlB1bGxSZXF1ZXN0NTUzMjQ4MzU1 | 1,723 | ADD S3 support for downloading and uploading processed datasets | [] | closed | false | null | 1 | 2021-01-12T07:17:34Z | 2021-01-26T17:02:08Z | 2021-01-26T17:02:08Z | null | # What does this PR do?
This PR adds the functionality to load and save `datasets` from and to s3.
You can save `datasets` with either `Dataset.save_to_disk()` or `DatasetDict.save_to_disk`.
You can load `datasets` with `load_from_disk`, `Dataset.load_from_disk()`, or `DatasetDict.load_from_disk()`.
Loading `csv` or `json` datasets from s3 is not implemented.
To save/load datasets to/from S3, you either need to provide an `aws_profile` that is set up on your machine (by default the `default` profile is used), or you have to pass an `aws_access_key_id` and `aws_secret_access_key`.
The implementation is based on `fsspec` and `boto3`.
### Example `aws_profile` :
<details>
```python
dataset.save_to_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_profile="hf-sm")
load_from_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_profile="hf-sm")
```
</details>
### Example `aws_access_key_id` and `aws_secret_access_key` :
<details>
```python
dataset.save_to_disk(
    "s3://moto-mock-s3-bucket/datasets/sdk",
    aws_access_key_id="fake_access_key",
    aws_secret_access_key="fake_secret_key",
)
load_from_disk(
    "s3://moto-mock-s3-bucket/datasets/sdk",
    aws_access_key_id="fake_access_key",
    aws_secret_access_key="fake_secret_key",
)
```
</details>
If you want to load a dataset from a public s3 bucket you can pass `anon=True`
### Example `anon=True` :
<details>
```python
dataset.save_to_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_profile="hf-sm")
load_from_disk("s3://moto-mock-s3-bucketdatasets/sdk",anon=True)
```
</details>
### Full Example
```python
import datasets
dataset = datasets.load_dataset("imdb")
print(f"DatasetDict contains {len(dataset)} datasets")
print(f"train Dataset has the size of: {len(dataset['train'])}")
dataset.save_to_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_profile="hf-sm")
remote_dataset = datasets.load_from_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_profile="hf-sm")
print(f"DatasetDict contains {len(remote_dataset)} datasets")
print(f"train Dataset has the size of: {len(remote_dataset['train'])}")
```
Related to #878
I will also adjust the documentation once the code has been reviewed; until then I'll leave the PR in "draft" status. Something we could consider is renaming the functions, maybe changing `_disk` to `_filesystem`.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 3,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1723/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1723/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1723.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1723",
"merged_at": "2021-01-26T17:02:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1723.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1723"
} | true | [
"I created the documentation for `FileSystem Integration for cloud storage` with loading and saving datasets to/from a filesystem with an example of using `datasets.filesystem.S3Filesystem`. I added a note on the `Saving a processed dataset on disk and reload` saying that it is also possible to use other filesystem... |
https://api.github.com/repos/huggingface/datasets/issues/4275 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4275/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4275/comments | https://api.github.com/repos/huggingface/datasets/issues/4275/events | https://github.com/huggingface/datasets/issues/4275 | 1,224,943,414 | I_kwDODunzps5JAyc2 | 4,275 | CommonSenseQA has missing and inconsistent field names | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | open | false | null | 1 | 2022-05-04T05:38:59Z | 2022-05-04T11:41:18Z | null | null | ## Describe the bug
In short, the CommonSenseQA implementation is inconsistent with the original dataset.
More precisely, we need to:
1. Add the dataset matching "id" field. The current dataset, instead, regenerates monotonically increasing id.
2. The `["question"]["stem"]` field is flattened into `"question"`. We should match the original dataset and unflatten it
3. Add the missing "question_concept" field in the question tree node
4. Anything else? Go over the data structure of the newly repaired CommonSenseQA and make sure it matches the original (see the sketch of the original record structure below)
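For reference, a sketch of one record in the original CommonsenseQA JSONL (shown as a Python dict; the values are illustrative):
```python
{
    "id": "some-unique-hash",             # point 1: the matching "id" field
    "question": {
        "question_concept": "punishing",  # point 3: the missing concept field
        "stem": "The sanctions against the school were a punishing blow, ...",  # point 2: nested stem
        "choices": [
            {"label": "A", "text": "ignore"},
            {"label": "B", "text": "enforce"},
            # ... remaining choices
        ],
    },
    "answerKey": "A",
}
```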
## Expected results
Every data item of the CommonSenseQA should structurally and data-wise match the original CommonSenseQA dataset.
## Actual results
TBD
## Environment info
- `datasets` version: 2.1.0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.13
- PyArrow version: 7.0.0
- Pandas version: 1.4.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4275/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4275/timeline | null | null | null | null | false | [
"Thanks for reporting, @vblagoje.\r\n\r\nI'm opening a PR to address this. "
] |
https://api.github.com/repos/huggingface/datasets/issues/3747 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3747/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3747/comments | https://api.github.com/repos/huggingface/datasets/issues/3747/events | https://github.com/huggingface/datasets/issues/3747 | 1,141,688,854 | I_kwDODunzps5EDMoW | 3,747 | Passing invalid subset should throw an error | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 0 | 2022-02-17T18:16:11Z | 2022-02-17T18:16:11Z | null | null | ## Describe the bug
Only some datasets have a subset (as in `load_dataset(name, subset)`). If you pass an invalid subset, an error should be thrown.
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('rotten_tomatoes', 'asdfasdfa')
```
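For comparison, the valid config names can be checked explicitly, which is roughly the check `load_dataset` could perform (a sketch; `get_dataset_config_names` exists in recent `datasets` versions):
```python
import datasets

configs = datasets.get_dataset_config_names("rotten_tomatoes")
print(configs)                 # ["default"]
assert "asdfasdfa" in configs  # fails, as it should
```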
## Expected results
This should break, since `'asdfasdfa'` isn't a subset of the `rotten_tomatoes` dataset.
## Actual results
This API call silently succeeds. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3747/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3747/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/781 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/781/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/781/comments | https://api.github.com/repos/huggingface/datasets/issues/781/events | https://github.com/huggingface/datasets/pull/781 | 733,168,609 | MDExOlB1bGxSZXF1ZXN0NTEyOTkyMzQw | 781 | Add XNLI train set | [] | closed | false | null | 5 | 2020-10-30T13:21:53Z | 2022-06-09T23:26:46Z | 2020-11-09T18:22:49Z | null | I added the train set that was built using the translated MNLI.
Now you can load the dataset specifying one language:
```python
from datasets import load_dataset
xnli_en = load_dataset("xnli", "en")
print(xnli_en["train"][0])
# {'hypothesis': 'Product and geography are what make cream skimming work .', 'label': 1, 'premise': 'Conceptually cream skimming has two basic dimensions - product and geography .'}
print(xnli_en["test"][0])
# {'hypothesis': 'I havent spoken to him again.', 'label': 2, 'premise': "Well, I wasn't even thinking about that, but I was so frustrated, and, I ended up talking to him again."}
```
Cc @sgugger | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 2,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/781/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/781/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/781.diff",
"html_url": "https://github.com/huggingface/datasets/pull/781",
"merged_at": "2020-11-09T18:22:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/781.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/781"
} | true | [
"Hi! Thanks for adding the translated MNLI! Do you know what translations system / model you used when you created the datasets in the other languages?",
"According to the [paper](https://arxiv.org/pdf/1809.05053.pdf) it's the result of the work of professional translators ;)",
"Thanks for getting back to me.\n... |
https://api.github.com/repos/huggingface/datasets/issues/3618 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3618/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3618/comments | https://api.github.com/repos/huggingface/datasets/issues/3618/events | https://github.com/huggingface/datasets/issues/3618 | 1,112,123,365 | I_kwDODunzps5CSafl | 3,618 | TIMIT Dataset not working with GPU | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2022-01-24T03:26:03Z | 2023-07-25T15:20:20Z | 2023-07-25T15:20:20Z | null | ## Describe the bug
I am working trying to use the TIMIT dataset in order to fine-tune Wav2Vec2 model and I am unable to load the "audio" column from the dataset when working with a GPU.
I am working on Amazon Sagemaker Studio, on the Python 3 (PyTorch 1.8 Python 3.6 GPU Optimized) environment, with a single ml.g4dn.xlarge instance (corresponds to a Tesla T4 GPU).
I don't know if the issue is GPU related or Python environment related because everything works when I work off of the CPU Optimized environment with a non-GPU instance. My code also works on Google Colab with a GPU instance.
This issue is blocking because I cannot get the 'audio' column in any way due to this error, which means that I can't pass it to any functions. I later use the dataset.map function and that is where I originally noticed this error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
timit_train = load_dataset('timit_asr', split='train')
print(timit_train['audio'])
```
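For comparison, a sketch that accesses single rows or small slices instead of materializing the whole column (this may sidestep the error, and avoids decoding the full dataset either way):
```python
from datasets import load_dataset

timit_train = load_dataset("timit_asr", split="train")
sample = timit_train[0]["audio"]    # decodes one example: dict with "path", "array", "sampling_rate"
batch = timit_train[:10]["audio"]   # decodes a small slice only
```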
## Expected results
Expected to see inside the 'audio' column, which contains an 'array' nested field with the array data I actually need.
## Actual results
Traceback
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-ceeac555e921> in <module>
----> 1 timit_train['audio']
/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key)
1917 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
1918 return self._getitem(
-> 1919 key,
1920 )
1921
/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs)
1902 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
1903 formatted_output = format_table(
-> 1904 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1905 )
1906 return formatted_output
/opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns)
529 python_formatter = PythonFormatter(features=None)
530 if format_columns is None:
--> 531 return formatter(pa_table, query_type=query_type)
532 elif query_type == "column":
533 if key in format_columns:
/opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type)
280 return self.format_row(pa_table)
281 elif query_type == "column":
--> 282 return self.format_column(pa_table)
283 elif query_type == "batch":
284 return self.format_batch(pa_table)
/opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_column(self, pa_table)
315 column = self.python_arrow_extractor().extract_column(pa_table)
316 if self.decoded:
--> 317 column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
318 return column
319
/opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_column(self, column, column_name)
221
222 def decode_column(self, column: list, column_name: str) -> list:
--> 223 return self.features.decode_column(column, column_name) if self.features else column
224
225 def decode_batch(self, batch: dict) -> dict:
/opt/conda/lib/python3.6/site-packages/datasets/features/features.py in decode_column(self, column, column_name)
1337 return (
1338 [self[column_name].decode_example(value) if value is not None else None for value in column]
-> 1339 if self._column_requires_decoding[column_name]
1340 else column
1341 )
/opt/conda/lib/python3.6/site-packages/datasets/features/features.py in <listcomp>(.0)
1336 """
1337 return (
-> 1338 [self[column_name].decode_example(value) if value is not None else None for value in column]
1339 if self._column_requires_decoding[column_name]
1340 else column
/opt/conda/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value)
85 dict
86 """
---> 87 path, file = (value["path"], BytesIO(value["bytes"])) if value["bytes"] is not None else (value["path"], None)
88 if path is None and file is None:
89 raise ValueError(f"An audio sample should have one of 'path' or 'bytes' but both are None in {value}.")
TypeError: string indices must be integers
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.0
- Platform: Linux-4.14.256-197.484.amzn2.x86_64-x86_64-with-debian-buster-sid
- Python version: 3.6.13
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3618/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3618/timeline | null | completed | null | null | false | [
"Hi ! I think you should avoid calling `timit_train['audio']`. Indeed by doing so you're **loading all the audio column in memory**. This is problematic in your case because the TIMIT dataset is huge.\r\n\r\nIf you want to access the audio data of some samples, you should do this instead `timit_train[:10][\"train\"... |
https://api.github.com/repos/huggingface/datasets/issues/5945 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5945/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5945/comments | https://api.github.com/repos/huggingface/datasets/issues/5945/events | https://github.com/huggingface/datasets/issues/5945 | 1,754,084,577 | I_kwDODunzps5ojTTh | 5,945 | Failing to upload dataset to the hub | [] | closed | false | null | 3 | 2023-06-13T05:46:46Z | 2023-07-24T11:56:40Z | 2023-07-24T11:56:40Z | null | ### Describe the bug
I am trying to upload a dataset of hundreds of thousands of audio samples (the total volume is not very large, 60 GB) to the Hub with `push_to_hub`, but it doesn't work.
From time to time one piece of the data (a parquet file) gets pushed, and then I get `RemoteDisconnected`, even though my internet connection is stable.
Please help.
I've been trying to upload the dataset for almost a week.
Thanks
### Steps to reproduce the bug
not relevant
### Expected behavior
Be able to upload the dataset.
### Environment info
python: 3.9 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5945/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5945/timeline | null | completed | null | null | false | [
"Hi ! Feel free to re-run your code later, it will resume automatically where you left",
"Tried many times in the last 2 weeks, problem remains.",
"Alternatively you can save your dataset in parquet files locally and upload them to the hub manually\r\n\r\n```python\r\nfrom tqdm import tqdm\r\nnum_shards = 60\r\... |
https://api.github.com/repos/huggingface/datasets/issues/2441 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2441/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2441/comments | https://api.github.com/repos/huggingface/datasets/issues/2441/events | https://github.com/huggingface/datasets/issues/2441 | 908,554,713 | MDU6SXNzdWU5MDg1NTQ3MTM= | 2,441 | DuplicatedKeysError on personal dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-06-01T17:59:41Z | 2021-06-04T23:50:03Z | 2021-06-04T23:50:03Z | null | ## Describe the bug
Since today, I have been getting a `DuplicatedKeysError` while trying to load my dataset from my own script.
Error returned when running this line: `dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_trimmed_and_nered/goodreadsnered.py')`
Note that my script was working fine with earlier versions of the Datasets library. I cannot say with 100% certainty whether I have been doing something wrong with my dataset script this whole time, or whether this is simply a bug in the new version of datasets.
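For context, a sketch of the key pattern `_generate_examples` is expected to follow now that keys are enforced to be unique (this is illustrative, not my actual script):
```python
def _generate_examples(self, filepath):
    with open(filepath, encoding="utf-8") as f:
        for idx, line in enumerate(f):
            # enumerate guarantees a unique, deterministic key per example
            yield idx, {"text": line.strip()}
```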
## Steps to reproduce the bug
I cannot provide code to reproduce the error as I am working with my own dataset. I can however provide my script if requested.
## Expected results
For my data to be loaded.
## Actual results
**DuplicatedKeysError** exception is raised
```
Downloading and preparing dataset good_reads_practice_dataset/main_domain (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/good_reads_practice_dataset/main_domain/1.1.0/64ff7c3fee2693afdddea75002eb6887d4fedc3d812ae3622128c8504ab21655...
---------------------------------------------------------------------------
DuplicatedKeysError Traceback (most recent call last)
<ipython-input-6-c342ea0dae9d> in <module>()
----> 1 dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_trimmed_and_nered/goodreadsnered.py')
5 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, **config_kwargs)
749 try_from_hf_gcs=try_from_hf_gcs,
750 base_path=base_path,
--> 751 use_auth_token=use_auth_token,
752 )
753
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
573 if not downloaded_from_gcs:
574 self._download_and_prepare(
--> 575 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
576 )
577 # Sync info
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
650 try:
651 # Prepare split will record examples associated to the split
--> 652 self._prepare_split(split_generator, **prepare_split_kwargs)
653 except OSError as e:
654 raise OSError(
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator)
990 writer.write(example, key)
991 finally:
--> 992 num_examples, num_bytes = writer.finalize()
993
994 split_generator.split_info.num_examples = num_examples
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in finalize(self, close_stream)
407 # In case current_examples < writer_batch_size, but user uses finalize()
408 if self._check_duplicates:
--> 409 self.check_duplicate_keys()
410 # Re-intializing to empty list for next batch
411 self.hkey_record = []
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in check_duplicate_keys(self)
347 for hash, key in self.hkey_record:
348 if hash in tmp_record:
--> 349 raise DuplicatedKeysError(key)
350 else:
351 tmp_record.add(hash)
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 0
Keys should be unique and deterministic in nature
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.7.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.9
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2441/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2441/timeline | null | completed | null | null | false | [
"Hi ! In your dataset script you must be yielding examples like\r\n```python\r\nfor line in file:\r\n ...\r\n yield key, {...}\r\n```\r\n\r\nSince `datasets` 1.7.0 we enforce the keys to be unique.\r\nHowever it looks like your examples generator creates duplicate keys: at least two examples have key 0.\r\n\r... |
https://api.github.com/repos/huggingface/datasets/issues/4155 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4155/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4155/comments | https://api.github.com/repos/huggingface/datasets/issues/4155/events | https://github.com/huggingface/datasets/pull/4155 | 1,202,183,608 | PR_kwDODunzps42Hqam | 4,155 | Make HANS dataset streamable | [] | closed | false | null | 1 | 2022-04-12T17:34:13Z | 2022-04-13T12:03:46Z | 2022-04-13T11:57:35Z | null | Fix #4133 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4155/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4155/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4155.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4155",
"merged_at": "2022-04-13T11:57:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4155.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4155"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5865 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5865/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5865/comments | https://api.github.com/repos/huggingface/datasets/issues/5865/events | https://github.com/huggingface/datasets/pull/5865 | 1,710,455,738 | PR_kwDODunzps5QiHnw | 5,865 | Deprecate task api | [] | closed | false | null | 9 | 2023-05-15T16:48:24Z | 2023-07-10T12:33:59Z | 2023-07-10T12:24:01Z | null | The task API is not well adopted in the ecosystem, so this PR deprecates it. The `train_eval_index` is a newer, more flexible solution that should be used instead (I think?).
These are the projects that still use the task API :
* the image classification example in Transformers: [here](https://github.com/huggingface/transformers/blob/8f76dc8e5aaad58f2df7748b6d6970376f315a9a/examples/pytorch/image-classification/run_image_classification_no_trainer.py#L262) and [here](https://github.com/huggingface/transformers/blob/8f76dc8e5aaad58f2df7748b6d6970376f315a9a/examples/tensorflow/image-classification/run_image_classification.py#L277)
* autotrain: [here](https://github.com/huggingface/autotrain-backend/blob/455e274004b56f9377d64db4ab03671508fcc4cd/zeus/zeus/run/utils.py#L666)
* api-inference-community: [here](https://github.com/huggingface/api-inference-community/blob/fb8fb29d577a5bf01c82944db745489a6d6ed3d4/manage.py#L64) (but the rest of the code does not call the `resolve_dataset` function)
So we need to update these files after the merge.
cc @lewtun | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5865/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5865/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5865.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5865",
"merged_at": "2023-07-10T12:24:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5865.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5865"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"If it's easy to keep supporting it we can keep it no ? There are many datasets on the hub that implement the tasks templates in dataset scripts and it's maybe easier to keep task templates than opening PRs to those datasets.",
"do ... |
https://api.github.com/repos/huggingface/datasets/issues/570 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/570/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/570/comments | https://api.github.com/repos/huggingface/datasets/issues/570/events | https://github.com/huggingface/datasets/pull/570 | 691,846,397 | MDExOlB1bGxSZXF1ZXN0NDc4NTI3OTQz | 570 | add reuters21578 dataset | [] | closed | false | null | 0 | 2020-09-03T10:25:47Z | 2020-09-03T10:46:52Z | 2020-09-03T10:46:51Z | null | Reopening the PR after the merge. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/570/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/570/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/570.diff",
"html_url": "https://github.com/huggingface/datasets/pull/570",
"merged_at": "2020-09-03T10:46:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/570.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/570"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5253 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5253/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5253/comments | https://api.github.com/repos/huggingface/datasets/issues/5253/events | https://github.com/huggingface/datasets/pull/5253 | 1,452,588,206 | PR_kwDODunzps5DE2io | 5,253 | typo | [] | closed | false | null | 0 | 2022-11-17T02:22:58Z | 2022-11-18T10:53:11Z | 2022-11-18T10:53:10Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5253/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5253/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5253.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5253",
"merged_at": "2022-11-18T10:53:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5253.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5253"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1366/comments | https://api.github.com/repos/huggingface/datasets/issues/1366/events | https://github.com/huggingface/datasets/pull/1366 | 760,205,506 | MDExOlB1bGxSZXF1ZXN0NTM1MDc1ODU2 | 1,366 | Adding Hope EDI dataset | [] | closed | false | null | 1 | 2020-12-09T10:30:23Z | 2020-12-14T14:27:57Z | 2020-12-14T14:27:57Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1366/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1366/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1366.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1366",
"merged_at": "2020-12-14T14:27:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1366.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1366"
} | true | [
"@lhoestq Have addressed your comments. Please review. Thanks."
] | |
https://api.github.com/repos/huggingface/datasets/issues/1945 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1945/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1945/comments | https://api.github.com/repos/huggingface/datasets/issues/1945/events | https://github.com/huggingface/datasets/issues/1945 | 816,421,966 | MDU6SXNzdWU4MTY0MjE5NjY= | 1,945 | AttributeError: 'DatasetDict' object has no attribute 'concatenate_datasets' | [] | closed | false | null | 1 | 2021-02-25T13:09:45Z | 2021-02-25T13:20:35Z | 2021-02-25T13:20:26Z | null | Hi
I am trying to concatenate a list of Hugging Face datasets as:
```python
train_dataset = datasets.concatenate_datasets(train_datasets)
```
Here is what `train_datasets` looks like when I print it:
```
[Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 120361
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 2670
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 6944
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 38140
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 173711
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 1655
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 4274
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 2019
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 2109
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 11963
})]
```
I am getting the following error:
```
AttributeError: 'DatasetDict' object has no attribute 'concatenate_datasets'
```
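As a hedged sketch of the intended call (toy data; this assumes the name `datasets` still refers to the library module and has not been shadowed by another variable, which turned out to be the cause here):

```python
import datasets

# Two toy in-memory datasets standing in for the real tokenized splits.
train_datasets = [
    datasets.Dataset.from_dict({"input_ids": [[101, 2023]], "label": [0]}),
    datasets.Dataset.from_dict({"input_ids": [[101, 2003]], "label": [1]}),
]

# `concatenate_datasets` is a top-level function of the library module,
# so this fails if `datasets` was rebound to a DatasetDict earlier on.
train_dataset = datasets.concatenate_datasets(train_datasets)
print(train_dataset.num_rows)  # 2
```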
I was wondering if you could help me with this issue. Thanks a lot! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1945/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1945/timeline | null | completed | null | null | false | [
"sorry my mistake, datasets were overwritten closing now, thanks a lot"
] |
https://api.github.com/repos/huggingface/datasets/issues/3846 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3846/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3846/comments | https://api.github.com/repos/huggingface/datasets/issues/3846/events | https://github.com/huggingface/datasets/pull/3846 | 1,161,810,226 | PR_kwDODunzps40D-uh | 3,846 | Update faiss device docstring | [] | closed | false | null | 1 | 2022-03-07T19:06:59Z | 2022-03-07T19:21:23Z | 2022-03-07T19:21:22Z | null | Following https://github.com/huggingface/datasets/pull/3721 I updated the docstring of the `device` argument of the FAISS related methods of `Dataset` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3846/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3846/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3846.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3846",
"merged_at": "2022-03-07T19:21:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3846.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3846"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3846). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/2201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2201/comments | https://api.github.com/repos/huggingface/datasets/issues/2201/events | https://github.com/huggingface/datasets/pull/2201 | 854,499,563 | MDExOlB1bGxSZXF1ZXN0NjEyNDM1NTE3 | 2,201 | Fix ArrowWriter overwriting features in ArrowBasedBuilder | [] | closed | false | null | 0 | 2021-04-09T12:56:19Z | 2021-04-12T13:32:17Z | 2021-04-12T13:32:16Z | null | This should fix the issues with CSV loading experienced in #2153 and #2200.
The CSV builder is an `ArrowBasedBuilder` that had an issue with the `ArrowWriter` used to write the Arrow file from the CSV data.
The writer wasn't initialized with the features passed by the user. Therefore the writer was inferring the features from the arrow data, discarding the features passed by the user.
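For illustration, a minimal sketch of the user-facing behavior at stake (toy CSV file created inline; with the fix, the explicit features are kept rather than re-inferred):

```python
from pathlib import Path
from datasets import ClassLabel, Features, Value, load_dataset

# Toy two-column CSV file with an explicit schema.
Path("train.csv").write_text("text,label\ngood movie,1\nbad movie,0\n")
features = Features({"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])})

ds = load_dataset("csv", data_files="train.csv", features=features)["train"]
assert ds.features == features  # previously, the features inferred from the Arrow data won out
```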
I fixed that and updated the tests. | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2201/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2201/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2201.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2201",
"merged_at": "2021-04-12T13:32:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2201.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2201"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4767 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4767/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4767/comments | https://api.github.com/repos/huggingface/datasets/issues/4767/events | https://github.com/huggingface/datasets/pull/4767 | 1,321,843,538 | PR_kwDODunzps48TCpI | 4,767 | Add 2.4.0 version added to docstrings | [] | closed | false | null | 1 | 2022-07-29T07:01:56Z | 2022-07-29T11:16:49Z | 2022-07-29T11:03:58Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4767/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4767/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4767.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4767",
"merged_at": "2022-07-29T11:03:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4767.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4767"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/6063 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6063/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6063/comments | https://api.github.com/repos/huggingface/datasets/issues/6063/events | https://github.com/huggingface/datasets/pull/6063 | 1,818,679,485 | PR_kwDODunzps5WPtxi | 6,063 | Release: 2.14.0 | [] | closed | false | null | 4 | 2023-07-24T15:41:19Z | 2023-07-24T16:05:16Z | 2023-07-24T15:47:51Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6063/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6063/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6063.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6063",
"merged_at": "2023-07-24T15:47:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6063.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6063"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/1863 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1863/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1863/comments | https://api.github.com/repos/huggingface/datasets/issues/1863/events | https://github.com/huggingface/datasets/issues/1863 | 806,171,311 | MDU6SXNzdWU4MDYxNzEzMTE= | 1,863 | Add WikiCREM | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | 2 | 2021-02-11T08:16:00Z | 2021-03-07T07:27:13Z | null | null | ## Adding a Dataset
- **Name:** WikiCREM
- **Description:** A large unsupervised corpus for coreference resolution.
- **Paper:** https://arxiv.org/abs/1905.06290
- **Github repo:**: https://github.com/vid-koci/bert-commonsense
- **Data:** https://ora.ox.ac.uk/objects/uuid:c83e94bb-7584-41a1-aef9-85b0e764d9e3
- **Motivation:** Coreference resolution, common sense reasoning
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1863/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1863/timeline | null | null | null | null | false | [
"Hi @NielsRogge I would like to work on this dataset.\r\n\r\nThanks!",
"Hi @udapy, are you working on this?"
] |
https://api.github.com/repos/huggingface/datasets/issues/5673 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5673/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5673/comments | https://api.github.com/repos/huggingface/datasets/issues/5673/events | https://github.com/huggingface/datasets/pull/5673 | 1,641,066,352 | PR_kwDODunzps5M6wc3 | 5,673 | Pass down storage options | [] | closed | false | null | 5 | 2023-03-26T20:09:37Z | 2023-03-28T15:03:38Z | 2023-03-28T14:54:17Z | null | Remove implementation-specific kwargs from `file_utils.fsspec_get` and `file_utils.fsspec_head`, instead allowing them to be passed down via `storage_options`. This fixes an issue where s3fs did not recognize a timeout argument, and it also addresses the issue mentioned in https://github.com/huggingface/datasets/issues/5281 by allowing users to pass `storage_options` all the way down from `datasets.load_dataset` to support implementation-specific credentials.
This supports something like the following, to provide credentials explicitly instead of relying on boto's methods of locating them:
```python
load_dataset(..., data_files=["s3://..."], storage_options={"profile": "..."})
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5673/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5673/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5673.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5673",
"merged_at": "2023-03-28T14:54:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5673.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5673"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> download_and_prepare is not called when streaming a dataset, so we may need to have storage_options in the DatasetBuilder.__init__ ? This way it could also be passed later to as_streaming_dataset and the StreamingDownloadManager\r\... |
https://api.github.com/repos/huggingface/datasets/issues/5033 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5033/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5033/comments | https://api.github.com/repos/huggingface/datasets/issues/5033/events | https://github.com/huggingface/datasets/pull/5033 | 1,388,842,236 | PR_kwDODunzps4_wGSE | 5,033 | Remove redundant code from some dataset module factories | [] | closed | false | null | 1 | 2022-09-28T07:06:26Z | 2022-09-28T16:57:51Z | 2022-09-28T16:55:12Z | null | This PR removes some redundant code introduced by mistake after a refactoring in:
- #4576 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5033/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5033/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5033.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5033",
"merged_at": "2022-09-28T16:55:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5033.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5033"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1467 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1467/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1467/comments | https://api.github.com/repos/huggingface/datasets/issues/1467/events | https://github.com/huggingface/datasets/pull/1467 | 761,557,290 | MDExOlB1bGxSZXF1ZXN0NTM2MjA3NDcx | 1,467 | adding snow_simplified_japanese_corpus | [] | closed | false | null | 2 | 2020-12-10T19:45:03Z | 2020-12-17T13:22:48Z | 2020-12-17T11:25:34Z | null | Adding simplified Japanese corpus "SNOW T15" and "SNOW T23".
They contain original Japanese, simplified Japanese, and original English (the original text is gotten from en-ja translation corpus). Hence, it can be used not only for Japanese simplification but also for en-ja translation.
- http://www.jnlp.org/SNOW/T15
- http://www.jnlp.org/SNOW/T23 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1467/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1467/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1467.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1467",
"merged_at": "2020-12-17T11:25:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1467.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1467"
} | true | [
"merging since the CI is fixed on master",
"Thank you for the updates and merging!"
] |
https://api.github.com/repos/huggingface/datasets/issues/945 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/945/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/945/comments | https://api.github.com/repos/huggingface/datasets/issues/945/events | https://github.com/huggingface/datasets/pull/945 | 754,273,920 | MDExOlB1bGxSZXF1ZXN0NTMwMjAyMDM1 | 945 | Adding Babi dataset - English version | [] | closed | false | null | 1 | 2020-12-01T10:35:36Z | 2020-12-04T15:43:05Z | 2020-12-04T15:42:54Z | null | Adding the English version of bAbI.
Samples are taken from ParlAI for consistency with the main users at the moment. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/945/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/945/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/945.diff",
"html_url": "https://github.com/huggingface/datasets/pull/945",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/945.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/945"
} | true | [
"Replaced by #1126"
] |
https://api.github.com/repos/huggingface/datasets/issues/211 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/211/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/211/comments | https://api.github.com/repos/huggingface/datasets/issues/211/events | https://github.com/huggingface/datasets/issues/211 | 626,565,994 | MDU6SXNzdWU2MjY1NjU5OTQ= | 211 | [Arrow writer, Trivia_qa] Could not convert TagMe with type str: converting to null type | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 7 | 2020-05-28T14:38:14Z | 2020-07-23T10:15:16Z | 2020-07-23T10:15:16Z | null | Running the following code
```
import nlp
ds = nlp.load_dataset("trivia_qa", "rc", split="validation[:1%]") # this might take 2.3 min to download but it's cached afterwards...
ds.map(lambda x: x, load_from_cache_file=False)
```
triggers an `ArrowInvalid: Could not convert TagMe with type str: converting to null type` error.
On the other hand if we remove a certain column of `trivia_qa` which seems responsible for the bug, it works:
```
import nlp
ds = nlp.load_dataset("trivia_qa", "rc", split="validation[:1%]") # this might take 2.3 min to download but it's cached afterwards...
ds.map(lambda x: x, remove_columns=["entity_pages"], load_from_cache_file=False)
```
It seems quite hard to debug what's going on here... @lhoestq @thomwolf - do you have a good first guess as to what the problem could be?
**Note**: I think this could be a good test to check that the datasets work correctly: take a tiny portion of the dataset and check that it can be written correctly. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/211/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/211/timeline | null | completed | null | null | false | [
"Here the full error trace:\r\n\r\n```\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-1-7aaf3f011358> in <module>\r\n 1 import nlp\r\n 2 ds = nlp.load_dataset(\"trivia_qa\", \"rc\", split=\"validation[:1%]\") # this might take 2.3 min to download but it's... |
https://api.github.com/repos/huggingface/datasets/issues/1208 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1208/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1208/comments | https://api.github.com/repos/huggingface/datasets/issues/1208/events | https://github.com/huggingface/datasets/pull/1208 | 757,961,368 | MDExOlB1bGxSZXF1ZXN0NTMzMjIyMzQ4 | 1,208 | Add HKCanCor | [] | closed | false | null | 0 | 2020-12-06T16:14:43Z | 2020-12-06T20:23:17Z | 2020-12-06T20:21:54Z | null | (Apologies, didn't manage the branches properly and the PR got too messy. Going to open a new PR with everything in order) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1208/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1208/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1208.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1208",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1208.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1208"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2749 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2749/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2749/comments | https://api.github.com/repos/huggingface/datasets/issues/2749/events | https://github.com/huggingface/datasets/issues/2749 | 958,968,748 | MDU6SXNzdWU5NTg5Njg3NDg= | 2,749 | Raise a proper exception when trying to stream a dataset that requires to manually download files | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-08-03T10:26:27Z | 2021-08-09T08:53:35Z | 2021-08-04T11:36:30Z | null | ## Describe the bug
At least for 'reclor', 'telugu_books', 'turkish_movie_sentiment', 'ubuntu_dialogs_corpus', 'wikihow', trying to `load_dataset` in streaming mode raises a `TypeError` without any detail about why it fails.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("reclor", streaming=True)
```
## Expected results
Ideally: raise a specific exception, something like `ManualDownloadError`.
Or at least give the reason in the message, as when we load in normal mode:
```python
from datasets import load_dataset
dataset = load_dataset("reclor")
```
```
AssertionError: The dataset reclor with config default requires manual data.
Please follow the manual download instructions: to use ReClor you need to download it manually. Please go to its homepage (http://whyu.me/reclor/) fill the google
form and you will receive a download link and a password to extract it.Please extract all files in one folder and use the path folder in datasets.load_dataset('reclor', data_dir='path/to/folder/folder_name')
.
Manual data can be loaded with `datasets.load_dataset(reclor, data_dir='<path/to/manual/data>')
```
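A sketch of the kind of guard that could raise such an exception (the class name follows the suggestion above; `manual_download_instructions` is the builder attribute carrying the message, and its usage here is illustrative):

```python
class ManualDownloadError(Exception):
    """Raised when a dataset requires manually downloaded files."""

def check_manual_download(builder):
    # Builders that need manual data define `manual_download_instructions`.
    if builder.manual_download_instructions is not None:
        raise ManualDownloadError(
            f"The dataset {builder.name} with config {builder.config.name} requires manual data.\n"
            f"Please follow the manual download instructions: {builder.manual_download_instructions}"
        )
```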
## Actual results
```
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
## Environment info
- `datasets` version: 1.11.0
- Platform: macOS-11.5-x86_64-i386-64bit
- Python version: 3.8.11
- PyArrow version: 4.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2749/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2749/timeline | null | completed | null | null | false | [
"Hi @severo, thanks for reporting.\r\n\r\nAs discussed, datasets requiring manual download should be:\r\n- programmatically identifiable\r\n- properly handled with more clear error message when trying to load them with streaming\r\n\r\nIn relation with programmatically identifiability, note that for datasets requir... |
https://api.github.com/repos/huggingface/datasets/issues/5902 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5902/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5902/comments | https://api.github.com/repos/huggingface/datasets/issues/5902/events | https://github.com/huggingface/datasets/pull/5902 | 1,727,342,194 | PR_kwDODunzps5RbPS9 | 5,902 | Fix `Overview.ipynb` & detach Jupyter Notebooks from `datasets` repository | [] | closed | false | null | 13 | 2023-05-26T10:25:01Z | 2023-07-25T13:50:06Z | 2023-07-25T13:38:33Z | null | ## What's in this PR?
This PR solves #5887, where there was a mismatch between the tokenizer and the model used: the tokenizer was `bert-base-cased` while the model was `distilbert-base-cased`, for both the PyTorch and TensorFlow alternatives. Since DistilBERT doesn't use/need `token_type_ids`, the `**batch` call was failing, as the batch contained `input_ids`, `attention_mask`, `token_type_ids`, `start_positions`, and `end_positions`, while `token_type_ids` was not expected.
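A minimal sketch of the alignment (checkpoint name illustrative; the point is simply to take the tokenizer and model from the same checkpoint):

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

checkpoint = "distilbert-base-cased"  # same checkpoint for both components
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)
# DistilBERT takes no token_type_ids and its tokenizer does not emit them,
# so `model(**batch)` receives exactly the inputs it expects.
```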
Besides that, `seqeval` was being used at the end to evaluate the model predictions while only `evaluate` was being installed, so I've also included the `seqeval` installation.
Finally, I've re-run everything in Google Colab, and every cell was successfully executed!
## What was done on top of the original PR?
Based on the comments from @mariosasko and @stevhliu, I've updated the contents of this PR to also review the `quickstart.mdx` and update what was needed. Besides that, we may eventually move the `Overview.ipynb` notebook to `huggingface/notebooks`, following @stevhliu's suggestions. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5902/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5902/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5902.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5902",
"merged_at": "2023-07-25T13:38:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5902.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5902"
} | true | [
"Random fact: previous run was showing that the Hub was hosting 13336 datasets, while the most recent run shows 36662 👀🎉",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks! \r\n\r\nHowever, I think we should stop linking this notebook and use the notebook version of the ... |
https://api.github.com/repos/huggingface/datasets/issues/3644 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3644/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3644/comments | https://api.github.com/repos/huggingface/datasets/issues/3644/events | https://github.com/huggingface/datasets/issues/3644 | 1,116,519,670 | I_kwDODunzps5CjLz2 | 3,644 | Add a GROUP BY operator | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 9 | 2022-01-27T16:57:54Z | 2023-03-14T14:45:59Z | null | null | **Is your feature request related to a problem? Please describe.**
Using batch mapping, we can easily split examples. However, we lack an appropriate option for merging them back together by some key. Consider this example:
```python
# features:
# {
# "example_id": datasets.Value("int32"),
# "text": datasets.Value("string")
# }
ds = datasets.Dataset.from_dict({"example_id": [0], "text": ["First sentence. Second sentence."]})
def split(examples):
sentences = [text.split(".") for text in examples["text"]]
return {
"example_id": [
example_id
for example_id, sents in zip(examples["example_id"], sentences)
for _ in sents
],
"sentence": [sent for sents in sentences for sent in sents],
"sentence_id": [i for sents in sentences for i in range(len(sents))],
}
split_ds = ds.map(split, batched=True)
def process(examples):
outputs = some_neural_network_that_works_on_sentences(examples["sentence"])
return {"outputs": outputs}
split_ds = split_ds.map(process, batched=True)
```
I have a dataset consisting of texts that I would like to process sentence by sentence in a batched way. Afterwards, I would like to put it back together as it was, merging the outputs together.
**Describe the solution you'd like**
Ideally, it would look something like this:
```python
def join(examples):
order = np.argsort(examples["sentence_id"])
text = ".".join(examples["text"][i] for i in order)
outputs = [examples["outputs"][i] for i in order]
return {"text": text, "outputs": outputs}
ds = split_ds.group_by("example_id", join)
```
**Describe alternatives you've considered**
Right now, we can do this:
```python
def merge(example):
    example_id = example["example_id"]
    parts = split_ds.filter(lambda x: x["example_id"] == example_id).sort("sentence_id")
return {"outputs": list(parts["outputs"])}
ds = ds.map(merge)
```
Of course, we could process the dataset like this:
```python
def process(example):
outputs = some_neural_network_that_works_on_sentences(example["text"].split("."))
return {"outputs": outputs}
ds = ds.map(process, batched=True)
```
However, that does not allow using an arbitrary batch size and may lead to very inefficient use of resources if the batch size is much larger than the number of sentences in one example.
I would very much appreciate some kind of group by operator to merge examples based on the value of one column.
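In the meantime, a hedged sketch of a pandas-based workaround (assuming the dataset fits in memory):

```python
import datasets

ds = datasets.Dataset.from_dict(
    {"example_id": [0, 0, 1], "sentence_id": [1, 0, 0], "outputs": ["b", "a", "c"]}
)

# Group in pandas and convert back; per-group order is restored by
# sorting on sentence_id before aggregating.
df = ds.to_pandas().sort_values(["example_id", "sentence_id"])
grouped = df.groupby("example_id").agg({"outputs": list}).reset_index()
ds_grouped = datasets.Dataset.from_pandas(grouped)
print(ds_grouped[0])  # {'example_id': 0, 'outputs': ['a', 'b']}
```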
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3644/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3644/timeline | null | null | null | null | false | [
"Hi ! At the moment you can use `to_pandas()` to get a pandas DataFrame that supports `group_by` operations (make sure your dataset fits in memory though)\r\n\r\nWe use Arrow as a back-end for `datasets` and it doesn't have native group by (see https://github.com/apache/arrow/issues/2189) unfortunately.\r\n\r\nI ju... |
https://api.github.com/repos/huggingface/datasets/issues/1948 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1948/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1948/comments | https://api.github.com/repos/huggingface/datasets/issues/1948/events | https://github.com/huggingface/datasets/issues/1948 | 816,689,329 | MDU6SXNzdWU4MTY2ODkzMjk= | 1,948 | dataset loading logger level | [] | closed | false | null | 3 | 2021-02-25T18:33:37Z | 2023-07-12T17:19:30Z | 2023-07-12T17:19:30Z | null | on master I get this with `--dataset_name wmt16 --dataset_config ro-en`:
```
WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-2e01bead8cf42e26.arrow
WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-ac3bebaf4f91f776.arrow
WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-810c3e61259d73a9.arrow
```
why are those WARNINGs? Should be INFO, no?
Warnings should only be used when a user needs to pay attention to something; this is just informative. I'd even say it should be DEBUG, but definitely not WARNING.
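In the meantime, a small sketch for silencing these notices on the user side (standard `datasets.logging` helpers):

```python
import datasets

# Only show errors from `datasets`; the cache-reload notices go away.
datasets.logging.set_verbosity_error()
```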
Thank you.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1948/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1948/timeline | null | completed | null | null | false | [
"These warnings are showed when there's a call to `.map` to say to the user that a dataset is reloaded from the cache instead of being recomputed.\r\nThey are warnings since we want to make sure the users know that it's not recomputed.",
"Thank you for explaining the intention, @lhoestq \r\n\r\n1. Could it be the... |
https://api.github.com/repos/huggingface/datasets/issues/2245 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2245/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2245/comments | https://api.github.com/repos/huggingface/datasets/issues/2245/events | https://github.com/huggingface/datasets/pull/2245 | 863,191,655 | MDExOlB1bGxSZXF1ZXN0NjE5NjQzMjQ3 | 2,245 | Add `key` type and duplicates verification with hashing | [] | closed | false | null | 17 | 2021-04-20T20:03:19Z | 2021-05-10T18:04:37Z | 2021-05-10T17:31:22Z | null | Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hashes
- [x] Add a `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of a certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5` (see the sketch after this list)
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in the future for `ArrowBasedBuilder`]
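A minimal sketch of the hashing idea from the checklist above (names and details hypothetical, not the final implementation):

```python
import hashlib

def hash_key(key, salt: str = "") -> int:
    """Map a str/int key to a 128-bit integer via md5, salted per split."""
    if not isinstance(key, (str, int)):
        raise TypeError(f"Key must be str or int, got {type(key)}")
    md5 = hashlib.md5(salt.encode("utf-8"))
    md5.update(str(key).encode("utf-8"))
    return int(md5.hexdigest(), 16)
```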
@lhoestq Thank you for the feedback. It would be great to have your guidance on this! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2245/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2245/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2245.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2245",
"merged_at": "2021-05-10T17:31:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2245.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2245"
} | true | [
"@lhoestq The tests for key type and duplicate keys have been added and verified successfully.\r\nAfter generating with an intentionally faulty `mnist` script, when there is an incompatible key type, it shows:\r\n\r\n```\r\nDownloading and preparing dataset mnist/mnist (download: 11.06 MiB, generated: 60.62 MiB, po... |
https://api.github.com/repos/huggingface/datasets/issues/1370 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1370/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1370/comments | https://api.github.com/repos/huggingface/datasets/issues/1370/events | https://github.com/huggingface/datasets/pull/1370 | 760,264,132 | MDExOlB1bGxSZXF1ZXN0NTM1MTI1MTc3 | 1,370 | Add OPUS PHP Dataset | [] | closed | false | null | 0 | 2020-12-09T11:53:30Z | 2020-12-10T15:37:25Z | 2020-12-10T15:37:24Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1370/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1370/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1370.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1370",
"merged_at": "2020-12-10T15:37:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1370.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1370"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/2069 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2069/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2069/comments | https://api.github.com/repos/huggingface/datasets/issues/2069/events | https://github.com/huggingface/datasets/pull/2069 | 833,768,926 | MDExOlB1bGxSZXF1ZXN0NTk0NzA5ODYw | 2,069 | Add and fix docstring for NamedSplit | [] | closed | false | null | 1 | 2021-03-17T13:19:28Z | 2021-03-18T10:27:40Z | 2021-03-18T10:27:40Z | null | Add and fix docstring for `NamedSplit`, which was missing. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2069/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2069/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2069.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2069",
"merged_at": "2021-03-18T10:27:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2069.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2069"
} | true | [
"Maybe we should add some other split classes?"
] |
https://api.github.com/repos/huggingface/datasets/issues/2276 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2276/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2276/comments | https://api.github.com/repos/huggingface/datasets/issues/2276/events | https://github.com/huggingface/datasets/issues/2276 | 870,010,511 | MDU6SXNzdWU4NzAwMTA1MTE= | 2,276 | concatenate_datasets loads all the data into memory | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 7 | 2021-04-28T14:27:21Z | 2021-05-03T08:41:55Z | 2021-05-03T08:41:55Z | null | ## Describe the bug
When I try to concatenate 2 datasets (10GB each), the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or when concatenating it again.

## Steps to reproduce the bug
```python
from datasets import concatenate_datasets, load_from_disk
test_sampled_pro = load_from_disk("test_sampled_pro")
val_sampled_pro = load_from_disk("val_sampled_pro")
big_set = concatenate_datasets([test_sampled_pro, val_sampled_pro])
# Loaded to memory
big_set.save_to_disk("big_set")
# Loaded to memory
big_set = concatenate_datasets([big_set, val_sampled_pro])
```
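For reference, a hedged sketch of the memory-mapped path (`keep_in_memory=False` is the standard flag; whether the concatenation itself stays zero-copy depends on the fix):

```python
from datasets import concatenate_datasets, load_from_disk

# Memory-map the Arrow files instead of materializing them in RAM.
test_sampled_pro = load_from_disk("test_sampled_pro", keep_in_memory=False)
val_sampled_pro = load_from_disk("val_sampled_pro", keep_in_memory=False)
big_set = concatenate_datasets([test_sampled_pro, val_sampled_pro])
```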
## Expected results
The data should be loaded into memory in batches and then saved directly to disk.
## Actual results
The entire data set is loaded into the memory and then saved to the hard disk.
## Versions
Paste the output of the following code:
```python
- Datasets: 1.6.1
- Python: 3.8.8 (default, Apr 13 2021, 19:58:26)
[GCC 7.3.0]
- Platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.10
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2276/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2276/timeline | null | completed | null | null | false | [
"Therefore, when I try to concatenate larger datasets (5x 35GB data sets) I also get an out of memory error, since over 90GB of swap space was used at the time of the crash:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nMemoryError Traceba... |
https://api.github.com/repos/huggingface/datasets/issues/4523 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4523/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4523/comments | https://api.github.com/repos/huggingface/datasets/issues/4523/events | https://github.com/huggingface/datasets/pull/4523 | 1,275,002,639 | PR_kwDODunzps452hgh | 4,523 | Update download url and improve card of `cats_vs_dogs` dataset | [] | closed | false | null | 1 | 2022-06-17T12:59:44Z | 2022-06-21T14:23:26Z | 2022-06-21T14:13:08Z | null | Improve the download URL (reported here: https://huggingface.co/datasets/cats_vs_dogs/discussions/1), remove the `image_file_path` column (not used in Transformers, so it should be safe) and add more info to the card. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4523/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4523/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4523.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4523",
"merged_at": "2022-06-21T14:13:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4523.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4523"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4426 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4426/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4426/comments | https://api.github.com/repos/huggingface/datasets/issues/4426/events | https://github.com/huggingface/datasets/issues/4426 | 1,253,887,311 | I_kwDODunzps5KvM1P | 4,426 | Add loading variable number of columns for different splits | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 1 | 2022-05-31T13:40:16Z | 2022-06-03T16:25:25Z | 2022-06-03T16:25:25Z | null | **Is your feature request related to a problem? Please describe.**
The original dataset `blended_skill_talk` consists of different sets of columns for the different splits: the (test/valid) splits have an additional data column, `label_candidates`, that the (train) split doesn't have.
When loading such data, an exception occurs at `table.py:cast_table_to_schema` because of mismatched columns. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4426/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4426/timeline | null | completed | null | null | false | [
"Hi! Indeed the column is missing, but you shouldn't get an error? Have you made some modifications (locally) to the loading script? I've opened a PR to add the missing columns to the script. "
] |
https://api.github.com/repos/huggingface/datasets/issues/2262 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2262/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2262/comments | https://api.github.com/repos/huggingface/datasets/issues/2262/events | https://github.com/huggingface/datasets/issues/2262 | 867,325,351 | MDU6SXNzdWU4NjczMjUzNTE= | 2,262 | NewsPH NLI dataset script fails to access test data. | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 1 | 2021-04-26T06:44:41Z | 2021-04-29T09:32:03Z | 2021-04-29T09:30:20Z | null | In the NewsPH-NLI dataset (#1192), the script fails to access the test data.
According to the script below, the download manager will download the train data when trying to download the test data.
https://github.com/huggingface/datasets/blob/2a2dd6316af2cc7fdf24e4779312e8ee0c7ed98b/datasets/newsph_nli/newsph_nli.py#L71
If you download it according to the script above, you can see that train and test receive the same data as shown below.
```python
>>> from datasets import load_dataset
>>> newsph_nli = load_dataset(path="./datasets/newsph_nli.py")
>>> newsph_nli
DatasetDict({
train: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 420000
})
test: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 420000
})
validation: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 90000
})
})
>>> newsph_nli["train"][0]
{'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).',
'label': 1,
'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'}
>>> newsph_nli["test"][0]
{'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).',
'label': 1,
'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'}
```
Locally, I modified the source code as below and got the correct result.
```python
71 test_path = os.path.join(download_path, "test.csv")
```
```python
>>> from datasets import load_dataset
>>> newsph_nli = load_dataset(path="./datasets/newsph_nli.py")
>>> newsph_nli
DatasetDict({
train: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 420000
})
test: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 9000
})
validation: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 90000
})
})
>>> newsph_nli["train"][0]
{'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).',
'label': 1,
'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'}
>>> newsph_nli["test"][0]
{'hypothesis': '-- JAI (@JaiPaller) September 13, 2019',
'label': 1,
'premise': 'Pinag-iingat ng Konsulado ng Pilipinas sa Dubai ang publiko, partikular ang mga donor, laban sa mga scam na gumagamit ng mga charitable organization.'}
```
I don't have experience with open-source pull requests, so I suggest that you apply these changes to the source.
Thank you for reading :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2262/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2262/timeline | null | completed | null | null | false | [
"Thanks @bhavitvyamalik for the fix !\r\nThe fix will be available in the next release.\r\nIt's already available on the `master` branch. For now you can either install `datasets` from source or use `script_version=\"master\"` in `load_dataset` to use the fixed version of this dataset."
] |
https://api.github.com/repos/huggingface/datasets/issues/2217 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2217/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2217/comments | https://api.github.com/repos/huggingface/datasets/issues/2217/events | https://github.com/huggingface/datasets/pull/2217 | 857,011,314 | MDExOlB1bGxSZXF1ZXN0NjE0NTAxNjIz | 2,217 | Revert breaking change in cache_files property | [] | closed | false | null | 0 | 2021-04-13T14:20:04Z | 2021-04-14T14:24:24Z | 2021-04-14T14:24:23Z | null | #2025 changed the format of `Dataset.cache_files`.
Before it was formatted like
```python
[{"filename": "path/to/file.arrow", "start": 0, "end": 1337}]
```
and it was changed to
```python
["path/to/file.arrow"]
```
since there are no start/end offsets available anymore.
To make this less breaking, I'm setting the format back to a list of dicts:
```python
[{"filename": "path/to/file.arrow"}]
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2217/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2217/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2217.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2217",
"merged_at": "2021-04-14T14:24:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2217.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2217"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3993 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3993/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3993/comments | https://api.github.com/repos/huggingface/datasets/issues/3993/events | https://github.com/huggingface/datasets/issues/3993 | 1,178,201,495 | I_kwDODunzps5GOe2X | 3,993 | Streaming dataset + interleave + DataLoader hangs with multiple workers | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 5 | 2022-03-23T14:27:29Z | 2023-02-28T14:14:24Z | null | null | ## Describe the bug
Interleaving multiple iterable datasets that use `load_dataset` in streaming mode hangs when passed to `torch.utils.data.DataLoader` with multiple workers.
## Steps to reproduce the bug
```python
from datasets import interleave_datasets, load_dataset
from torch.utils.data import DataLoader
en_dataset = load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True)
fr_dataset = load_dataset('oscar', "unshuffled_deduplicated_fr", split='train', streaming=True)
it_dataset = load_dataset('oscar', "unshuffled_deduplicated_it", split='train', streaming=True)
de_dataset = load_dataset('oscar', "unshuffled_deduplicated_de", split='train', streaming=True)
multilingual_dataset = interleave_datasets([en_dataset, fr_dataset, de_dataset, it_dataset])
multilingual_dataset = multilingual_dataset.with_format('torch')
next(iter(multilingual_dataset)) # works fairly fast
dataloader = DataLoader(multilingual_dataset, batch_size=8, num_workers=4)
for batch in dataloader:
print(len(batch)) # prints nothing after 30 min of waiting
dataloader = DataLoader(multilingual_dataset, batch_size=8, num_workers=0)
for batch in dataloader:
print(len(batch)) # prints right away
```
## Expected results
It should be able to iterate the dataset with multiple workers.
## Actual results
It prints results with `next(iter(multilingual_dataset))` and `num_workers=0`, but it prints nothing with `num_workers=4` or any number above 0.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.1.dev0
- `pytorch` version: 1.10.0+cu113
- Python version: 3.7
- PyArrow version: 6.0.1
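For reference, on later releases that do support multi-worker streaming, a hedged sketch of sizing workers to shards (the `n_shards` attribute is assumed available on the interleaved dataset):

```python
# Each DataLoader worker needs at least one shard to read from; asking
# for more workers than shards leaves the extras idle.
n_workers = min(4, multilingual_dataset.n_shards)
dataloader = DataLoader(multilingual_dataset, batch_size=8, num_workers=n_workers)
```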
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3993/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3993/timeline | null | null | null | null | false | [
"Same thing occurs when streaming files loaded from disk.",
"Hi ! Thanks for reporting, could this be related to https://github.com/huggingface/datasets/issues/3950 ?\r\n\r\nCurrently streaming datasets only works in single process, but we're working on having in work in distributed setups as well :) (EDIT: done)... |
https://api.github.com/repos/huggingface/datasets/issues/2520 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2520/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2520/comments | https://api.github.com/repos/huggingface/datasets/issues/2520/events | https://github.com/huggingface/datasets/issues/2520 | 925,015,004 | MDU6SXNzdWU5MjUwMTUwMDQ= | 2,520 | Datasets with tricky task templates | [
{
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets",
"id": 2067401494,
"name": "Dataset discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion"
}
] | closed | false | null | 1 | 2021-06-18T15:33:57Z | 2023-07-20T13:20:32Z | 2023-07-20T13:20:32Z | null | I'm collecting a list of datasets here that don't follow the "standard" taxonomy and require further investigation to implement task templates for.
## Text classification
* [hatexplain](https://huggingface.co/datasets/hatexplain): ostensibly a form of text classification, but not in the standard `(text, target)` format and each sample appears to be tokenized.
* [muchocine](https://huggingface.co/datasets/muchocine): contains two candidate text columns (long-form and summary), which in principle requires two `TextClassification` templates; this is not currently supported | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2520/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2520/timeline | null | completed | null | null | false | [
"The `task_templates` API is deprecated in favor of the `train-eval-index` YAML field, so I'm closing this issue."
] |
https://api.github.com/repos/huggingface/datasets/issues/3879 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3879/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3879/comments | https://api.github.com/repos/huggingface/datasets/issues/3879/events | https://github.com/huggingface/datasets/pull/3879 | 1,164,311,612 | PR_kwDODunzps40MP7f | 3,879 | SQuAD v2 metric: create README.md | [] | closed | false | null | 1 | 2022-03-09T18:47:56Z | 2022-03-10T16:48:59Z | 2022-03-10T16:48:59Z | null | Proposing SQuAD v2 metric card | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3879/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3879/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3879.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3879",
"merged_at": "2022-03-10T16:48:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3879.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3879"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3879). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/5367 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5367/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5367/comments | https://api.github.com/repos/huggingface/datasets/issues/5367/events | https://github.com/huggingface/datasets/pull/5367 | 1,499,174,749 | PR_kwDODunzps5FlevK | 5,367 | Fix remove columns from lazy dict | [] | closed | false | null | 1 | 2022-12-15T22:04:12Z | 2022-12-15T22:27:53Z | 2022-12-15T22:24:50Z | null | This was introduced in https://github.com/huggingface/datasets/pull/5252 and is causing the transformers CI to break: https://app.circleci.com/pipelines/github/huggingface/transformers/53886/workflows/522faf2e-a053-454c-94f8-a617fde33393/jobs/648597
Basically this code should return a dataset with only one column:
```python
from datasets import *
ds = Dataset.from_dict({"a": range(5)})
def f(x):
x["b"] = x["a"]
return x
ds = ds.map(f, remove_columns=["a"])
assert ds.column_names == ["b"]
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5367/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5367/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5367.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5367",
"merged_at": "2022-12-15T22:24:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5367.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5367"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3609 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3609/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3609/comments | https://api.github.com/repos/huggingface/datasets/issues/3609/events | https://github.com/huggingface/datasets/pull/3609 | 1,109,579,112 | PR_kwDODunzps4xVrsG | 3,609 | Fixes to pubmed dataset download function | [] | closed | false | null | 3 | 2022-01-20T17:31:35Z | 2022-03-03T16:18:52Z | 2022-03-03T14:23:35Z | null | Pubmed has updated its settings for 2022 and thus existing download script does not work. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3609/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3609/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3609.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3609",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3609.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3609"
} | true | [
"Hi ! I think we can simply add a new configuration for the 2022 data instead of replacing them.\r\nYou can add the new configuration here:\r\n```python\r\n BUILDER_CONFIGS = [\r\n datasets.BuilderConfig(name=\"2021\", description=\"The 2021 annual record\", version=datasets.Version(\"1.0.0\")),\r\n ... |
https://api.github.com/repos/huggingface/datasets/issues/1791 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1791/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1791/comments | https://api.github.com/repos/huggingface/datasets/issues/1791/events | https://github.com/huggingface/datasets/pull/1791 | 796,924,519 | MDExOlB1bGxSZXF1ZXN0NTY0MDE5OTk3 | 1,791 | Small fix with corrected logging of train vectors | [] | closed | false | null | 0 | 2021-01-29T14:26:06Z | 2021-01-29T18:51:10Z | 2021-01-29T17:05:07Z | null | Now you can set `train_size` to the whole dataset size via `train_size = -1` and login writes not `Training the index with the first -1 vectors` but (for example) `Training the index with the first 16123 vectors`. And maybe more than dataset length. Logging will be correct | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1791/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1791/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1791.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1791",
"merged_at": "2021-01-29T17:05:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1791.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1791"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/50 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/50/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/50/comments | https://api.github.com/repos/huggingface/datasets/issues/50/events | https://github.com/huggingface/datasets/pull/50 | 612,583,126 | MDExOlB1bGxSZXF1ZXN0NDEzNTAwMjE0 | 50 | [Tests] test only for fast test as a default | [] | closed | false | null | 1 | 2020-05-05T12:59:22Z | 2020-05-05T13:02:18Z | 2020-05-05T13:02:16Z | null | Test only for one config on circle ci to speed up testing. Add all config test as a slow test.
@mariamabarham @thomwolf | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/50/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/50/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/50.diff",
"html_url": "https://github.com/huggingface/datasets/pull/50",
"merged_at": "2020-05-05T13:02:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/50.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/50"
} | true | [
"Test failure is not related to change in test file.\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/887 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/887/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/887/comments | https://api.github.com/repos/huggingface/datasets/issues/887/events | https://github.com/huggingface/datasets/issues/887 | 750,868,831 | MDU6SXNzdWU3NTA4Njg4MzE= | 887 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 14 | 2020-11-25T14:32:21Z | 2021-09-09T17:03:40Z | null | null | I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/887/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/887/timeline | null | null | null | null | false | [
"Yes right now `ArrayXD` can only be used as a column feature type, not a subtype.\r\nWith the current Arrow limitations I don't think we'll be able to make it work as a subtype, however it should be possible to allow dimensions of dynamic sizes (`Array3D(shape=(None, 137, 2), dtype=\"float32\")` for example since ... |
https://api.github.com/repos/huggingface/datasets/issues/3729 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3729/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3729/comments | https://api.github.com/repos/huggingface/datasets/issues/3729/events | https://github.com/huggingface/datasets/issues/3729 | 1,139,398,442 | I_kwDODunzps5D6dcq | 3,729 | Wrong number of examples when loading a text dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2022-02-16T01:13:31Z | 2022-03-15T16:16:09Z | 2022-03-15T16:16:09Z | null | ## Describe the bug
When I use `load_dataset` to read a txt file, I find that the number of samples is incorrect.
## Steps to reproduce the bug
```
fr = open('train.txt','r',encoding='utf-8').readlines()
print(len(fr)) # 1199637
datasets = load_dataset('text', data_files={'train': ['train.txt']}, streaming=False)
print(len(datasets['train'])) # 1199649
```
I also used a command-line operation to verify it:
```
$ wc -l train.txt
1199637 train.txt
```
## Expected results
The number of examples should match the line count of the file (1199637).
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.3
- Platform:windows&linux
- Python version:3.7
- PyArrow version:6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3729/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3729/timeline | null | completed | null | null | false | [
"Hi @kg-nlp, thanks for reporting.\r\n\r\nThat is weird... I guess we would need some sample data file where this behavior appears to reproduce the bug for further investigation... ",
"ok, I found the reason why that two results are not same.\r\nthere is /u2029 in the text, the datasets will split sentence accord... |
https://api.github.com/repos/huggingface/datasets/issues/4806 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4806/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4806/comments | https://api.github.com/repos/huggingface/datasets/issues/4806/events | https://github.com/huggingface/datasets/pull/4806 | 1,332,664,038 | PR_kwDODunzps482yiS | 4,806 | Fix opus_gnome dataset card | [] | closed | false | null | 20 | 2022-08-09T03:40:15Z | 2022-08-09T12:06:46Z | 2022-08-09T11:52:04Z | null | I fixed issue #4805.
I changed `"gnome"` to `"opus_gnome"` in[ README.md](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary).
Fix #4805 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 2,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4806/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4806/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4806.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4806",
"merged_at": "2022-08-09T11:52:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4806.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4806"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@gojiteji why have you closed this PR and created an identical one?\r\n- #4807 ",
"@albertvillanova \r\nI forgot to follow \"How to create a Pull\" in CONTRIBUTING.md in this branch.",
"Both are identical. And you can push additi... |
https://api.github.com/repos/huggingface/datasets/issues/5409 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5409/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5409/comments | https://api.github.com/repos/huggingface/datasets/issues/5409/events | https://github.com/huggingface/datasets/pull/5409 | 1,520,374,219 | PR_kwDODunzps5Gs3nL | 5,409 | Fix deprecation warning when use_auth_token passed to download_and_prepare | [] | closed | false | null | 2 | 2023-01-05T09:10:58Z | 2023-01-06T11:06:16Z | 2023-01-06T10:59:13Z | null | The `DatasetBuilder.download_and_prepare` argument `use_auth_token` was deprecated in:
- #5302
However, `use_auth_token` is still passed to `download_and_prepare` in our built-in `io` readers (csv, json, parquet,...).
This PR fixes it, so that no deprecation warning is raised.
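A sketch of the shape of the fix (hedged: per the deprecation introduced in #5302, the token is passed when the builder is created rather than to `download_and_prepare`):
```python
from datasets import load_dataset_builder

token = "hf_xxx"  # placeholder for a real Hub token

# Before: builder.download_and_prepare(use_auth_token=token) emitted a warning
builder = load_dataset_builder("csv", data_files={"train": "data.csv"}, use_auth_token=token)
builder.download_and_prepare()  # no deprecation warning after this PR
```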
Fix #5407. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5409/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5409/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5409.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5409",
"merged_at": "2023-01-06T10:59:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5409.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5409"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/1516 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1516/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1516/comments | https://api.github.com/repos/huggingface/datasets/issues/1516/events | https://github.com/huggingface/datasets/pull/1516 | 764,032,327 | MDExOlB1bGxSZXF1ZXN0NTM4MjkzOTMw | 1,516 | adding wrbsc | [] | closed | false | null | 2 | 2020-12-12T16:38:40Z | 2020-12-18T09:41:33Z | 2020-12-18T09:41:33Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1516/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1516/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1516.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1516",
"merged_at": "2020-12-18T09:41:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1516.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1516"
} | true | [
"@lhoestq thanks for the comments! Should be fixed in the latest commit, I assume the CI errors are unrelated. ",
"merging since the CI is fixed on master"
] | |
https://api.github.com/repos/huggingface/datasets/issues/2291 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2291/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2291/comments | https://api.github.com/repos/huggingface/datasets/issues/2291/events | https://github.com/huggingface/datasets/pull/2291 | 871,216,757 | MDExOlB1bGxSZXF1ZXN0NjI2MjcyNzE5 | 2,291 | Don't copy recordbatches in memory during a table deepcopy | [] | closed | false | null | 0 | 2021-04-29T16:26:05Z | 2021-04-29T16:34:35Z | 2021-04-29T16:34:34Z | null | Fix issue #2276 and hopefully #2134
The recordbatches of the `IndexedTableMixin` used to speed up queries to the table were copied in memory during a table deepcopy.
This resulted in `concatenate_datasets`, `load_from_disk` and other methods to always bring the data in memory.
I fixed the copy similarly to #2287 and updated the test to make sure it doesn't happen again (added a test for deepcopy + make sure that the immutable arrow objects are passed to the copied table without being copied).
The issue was not caught by our tests because the total allocated bytes value in PyArrow isn't updated when deepcopying recordbatches: the copy in memory wasn't detected. This behavior looks like a bug in PyArrow; I'll open a ticket on JIRA.
Thanks @samsontmr , @TaskManager91 and @mariosasko for the help
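A sketch of the kind of check the added test performs (assuming `pa.total_allocated_bytes()` reflects new buffer allocations, with the caveat noted above about deepcopied recordbatches):
```python
import copy
import pyarrow as pa
from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(100_000))})
before = pa.total_allocated_bytes()
ds_copy = copy.deepcopy(ds)
# With the fix, the immutable Arrow buffers are shared rather than duplicated,
# so no significant new allocation should show up here
print(pa.total_allocated_bytes() - before)
```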
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2291/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2291/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2291.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2291",
"merged_at": "2021-04-29T16:34:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2291.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2291"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2660 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2660/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2660/comments | https://api.github.com/repos/huggingface/datasets/issues/2660/events | https://github.com/huggingface/datasets/pull/2660 | 946,316,180 | MDExOlB1bGxSZXF1ZXN0NjkxNTA4NzE0 | 2,660 | Move checks from _map_single to map | [] | closed | false | null | 3 | 2021-07-16T13:53:33Z | 2021-09-06T14:12:23Z | 2021-09-06T14:12:23Z | null | The goal of this PR is to remove duplicated checks in the `map` logic to execute them only once whenever possible (`fn_kwargs`, `input_columns`, ...). Additionally, this PR improves the consistency (to align it with `input_columns`) of the `remove_columns` check by adding support for a single string value, which is then wrapped into a list. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2660/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2660/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2660.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2660",
"merged_at": "2021-09-06T14:12:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2660.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2660"
} | true | [
"@lhoestq This one has been open for a while. Could you please take a look?",
"@lhoestq Ready for the final review!",
"I forgot to update the signature of `DatasetDict.map`, so did that now."
] |
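To illustrate the `remove_columns` consistency change described in the PR above, a small sketch:
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": [3, 4]})

# After this PR, both forms are accepted, mirroring input_columns:
ds1 = ds.map(lambda x: x, remove_columns="a")    # single string, wrapped into a list
ds2 = ds.map(lambda x: x, remove_columns=["a"])  # explicit list
assert ds1.column_names == ds2.column_names == ["b"]
```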
https://api.github.com/repos/huggingface/datasets/issues/2048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2048/comments | https://api.github.com/repos/huggingface/datasets/issues/2048/events | https://github.com/huggingface/datasets/issues/2048 | 830,953,431 | MDU6SXNzdWU4MzA5NTM0MzE= | 2,048 | github is not always available - probably need a back up | [] | closed | false | null | 0 | 2021-03-13T18:03:32Z | 2022-04-01T15:27:10Z | 2022-04-01T15:27:10Z | null | Yesterday morning github wasn't working:
```
:/tmp$ wget https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
--2021-03-12 18:35:59--  https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2021-03-12 18:36:11 ERROR 500: Internal Server Error.
```
Suggestion: have a failover system that replicates the data on another system and falls back to it if GitHub isn't reachable. Perhaps GitHub could be the master and the replica a slave, so there is only one true source (a hedged sketch of this idea follows this row). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2048/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2048/timeline | null | completed | null | null | false | [] |
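A minimal sketch of the failover idea suggested in the issue above (the mirror URL is hypothetical):
```python
import requests

URLS = [
    "https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py",
    "https://mirror.example.org/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py",  # hypothetical mirror
]

def fetch_first_available(urls):
    # Try each source in order; fall back to the next on any HTTP/network error
    for url in urls:
        try:
            response = requests.get(url, timeout=10)
            response.raise_for_status()
            return response.text
        except requests.RequestException:
            continue
    raise RuntimeError("no source reachable")

script = fetch_first_available(URLS)
```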
https://api.github.com/repos/huggingface/datasets/issues/2756 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2756/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2756/comments | https://api.github.com/repos/huggingface/datasets/issues/2756/events | https://github.com/huggingface/datasets/pull/2756 | 959,255,646 | MDExOlB1bGxSZXF1ZXN0NzAyMzk4Mjk1 | 2,756 | Fix metadata JSON for ubuntu_dialogs_corpus dataset | [] | closed | false | null | 0 | 2021-08-03T15:48:59Z | 2021-08-04T09:43:25Z | 2021-08-04T09:43:25Z | null | Related to #2743. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2756/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2756/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2756.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2756",
"merged_at": "2021-08-04T09:43:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2756.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2756"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/258 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/258/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/258/comments | https://api.github.com/repos/huggingface/datasets/issues/258/events | https://github.com/huggingface/datasets/issues/258 | 635,859,525 | MDU6SXNzdWU2MzU4NTk1MjU= | 258 | Why is the dataset after tokenization far larger than the original one? | [] | closed | false | null | 4 | 2020-06-10T01:27:07Z | 2020-06-10T12:46:34Z | 2020-06-10T12:46:34Z | null | I tokenize the wiki dataset with `map` and cache the results.
```
def tokenize_tfm(example):
example['input_ids'] = hf_fast_tokenizer.convert_tokens_to_ids(hf_fast_tokenizer.tokenize(example['text']))
return example
wiki = nlp.load_dataset('wikipedia', '20200501.en', cache_dir=cache_dir)['train']
wiki.map(tokenize_tfm, cache_file_name=cache_dir/"wikipedia/20200501.en/1.0.0/tokenized_wiki.arrow")
```
and when I see their size
```
ls -l --block-size=M
17460M wikipedia-train.arrow
47511M tokenized_wiki.arrow
```
The tokenized one is over 2x the size of the original one.
Is there something I did wrong?
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/258/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/258/timeline | null | completed | null | null | false | [
"Hi ! This is because `.map` added the new column `input_ids` to the dataset, and so all the other columns were kept. Therefore the dataset size increased a lot.\r\n If you want to only keep the `input_ids` column, you can stash the other ones by specifying `remove_columns=[\"title\", \"text\"]` in the arguments of... |
https://api.github.com/repos/huggingface/datasets/issues/4211 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4211/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4211/comments | https://api.github.com/repos/huggingface/datasets/issues/4211/events | https://github.com/huggingface/datasets/issues/4211 | 1,214,361,837 | I_kwDODunzps5IYbDt | 4,211 | DatasetDict containing Datasets with different features when pushed to hub gets remapped features | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 10 | 2022-04-25T11:22:54Z | 2023-04-06T19:25:50Z | 2022-05-20T15:15:30Z | null | Hi there,
I am trying to upload a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the `DatasetDict` preserves the individual features, but if I `push_to_hub` and then `load_dataset`, the features are all the same.
Dataset and code to reproduce available [here](https://huggingface.co/datasets/pietrolesci/robust_nli).
In short:
I have 3 feature mappings:
```python
Tri_features = Features(
{
"idx": Value(dtype="int64"),
"premise": Value(dtype="string"),
"hypothesis": Value(dtype="string"),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
}
)
Ent_features = Features(
{
"idx": Value(dtype="int64"),
"premise": Value(dtype="string"),
"hypothesis": Value(dtype="string"),
"label": ClassLabel(num_classes=2, names=["non-entailment", "entailment"]),
}
)
Con_features = Features(
{
"idx": Value(dtype="int64"),
"premise": Value(dtype="string"),
"hypothesis": Value(dtype="string"),
"label": ClassLabel(num_classes=2, names=["non-contradiction", "contradiction"]),
}
)
```
Then I create different datasets
```python
dataset_splits = {}
for split in df["split"].unique():
print(split)
df_split = df.loc[df["split"] == split].copy()
if split in Tri_dataset:
df_split["label"] = df_split["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
ds = Dataset.from_pandas(df_split, features=Tri_features)
elif split in Ent_bin_dataset:
df_split["label"] = df_split["label"].map({"non-entailment": 0, "entailment": 1})
ds = Dataset.from_pandas(df_split, features=Ent_features)
elif split in Con_bin_dataset:
df_split["label"] = df_split["label"].map({"non-contradiction": 0, "contradiction": 1})
ds = Dataset.from_pandas(df_split, features=Con_features)
else:
print("ERROR:", split)
dataset_splits[split] = ds
datasets = DatasetDict(dataset_splits)
```
I then push to the Hub:
```python
datasets.push_to_hub("pietrolesci/robust_nli", token="<token>")
```
Finally, I load it from the Hub:
```python
datasets_loaded_from_hub = load_dataset("pietrolesci/robust_nli")
```
And I get that
```python
datasets["LI_TS"].features != datasets_loaded_from_hub["LI_TS"].features
```
since
```python
"label": ClassLabel(num_classes=2, names=["non-contradiction", "contradiction"])
```
gets remapped to
```python
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"])
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4211/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4211/timeline | null | completed | null | null | false | [
"Hi @pietrolesci, thanks for reporting.\r\n\r\nPlease note that this is a design purpose: a `DatasetDict` has the same features for all its datasets. Normally, a `DatasetDict` is composed of several sub-datasets each corresponding to a different **split**.\r\n\r\nTo handle sub-datasets with different features, we u... |
https://api.github.com/repos/huggingface/datasets/issues/1778 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1778/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1778/comments | https://api.github.com/repos/huggingface/datasets/issues/1778/events | https://github.com/huggingface/datasets/pull/1778 | 793,474,507 | MDExOlB1bGxSZXF1ZXN0NTYxMTU2Mzk1 | 1,778 | Narrative QA Manual | [] | closed | false | null | 6 | 2021-01-25T15:22:31Z | 2021-01-29T09:35:14Z | 2021-01-29T09:34:51Z | null | Submitting the manual version of Narrative QA script which requires a manual download from the original repository | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1778/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1778/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1778.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1778",
"merged_at": "2021-01-29T09:34:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1778.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1778"
} | true | [
"@lhoestq sorry I opened a new pull request because of some issues with the previous code base. This pull request is originally from #1364",
"Excellent comments. Thanks for those valuable suggestions. I changed everything as you have pointed out :) ",
"I've copied the same template as NarrativeQA now. Please le... |
https://api.github.com/repos/huggingface/datasets/issues/1789 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1789/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1789/comments | https://api.github.com/repos/huggingface/datasets/issues/1789/events | https://github.com/huggingface/datasets/pull/1789 | 796,229,721 | MDExOlB1bGxSZXF1ZXN0NTYzNDQyMTc2 | 1,789 | [BUG FIX] typo in the import path for metrics | [] | closed | false | null | 0 | 2021-01-28T18:01:37Z | 2021-01-28T18:13:56Z | 2021-01-28T18:13:56Z | null | This tiny PR fixes a typo introduced in https://github.com/huggingface/datasets/pull/1726 which prevents loading new metrics | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1789/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1789/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1789.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1789",
"merged_at": "2021-01-28T18:13:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1789.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1789"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4123 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4123/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4123/comments | https://api.github.com/repos/huggingface/datasets/issues/4123/events | https://github.com/huggingface/datasets/issues/4123 | 1,196,367,512 | I_kwDODunzps5HTx6Y | 4,123 | Building C4 takes forever | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-04-07T17:41:30Z | 2023-06-26T22:01:29Z | 2023-06-26T22:01:29Z | null | ## Describe the bug
C4-en is a 300 GB dataset. However, when I try to download it through the Hub, it takes over _six hours_ to generate the train/test split from the downloaded files. This is an absurd amount of time and an unnecessary waste of resources.
## Steps to reproduce the bug
```python
c4 = datasets.load_dataset("c4", "en")
```
## Expected results
I would like to be able to download pre-split data.
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-5.13.0-35-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4123/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4123/timeline | null | completed | null | null | false | [
"Hi @StellaAthena, thanks for reporting.\r\n\r\nPlease note, that our `datasets` library performs several operations in order to load a dataset, among them:\r\n- it downloads all the required files: for C4 \"en\", 378.69 GB of JSON GZIPped files\r\n- it parses their content to generate the dataset\r\n- it caches th... |
https://api.github.com/repos/huggingface/datasets/issues/41 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/41/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/41/comments | https://api.github.com/repos/huggingface/datasets/issues/41/events | https://github.com/huggingface/datasets/pull/41 | 611,739,219 | MDExOlB1bGxSZXF1ZXN0NDEyODQzNDQy | 41 | [Load module] allow kwargs into load module | [] | closed | false | null | 0 | 2020-05-04T09:42:11Z | 2020-05-04T19:39:07Z | 2020-05-04T19:39:06Z | null | Currently it is not possible to force a re-download of the dataset script.
This simple change allows passing ``force_reload=True`` in ``builder_kwargs`` to the ``load.py`` function (a hedged usage sketch follows this row). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/41/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/41/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/41.diff",
"html_url": "https://github.com/huggingface/datasets/pull/41",
"merged_at": "2020-05-04T19:39:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/41.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/41"
} | true | [] |
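A hypothetical usage sketch for the kwarg described in the PR above (hedged: the exact call shape, and even the top-level function name, are assumptions based on the library's early API, when it was still named `nlp`):
```python
import nlp  # the library's name at the time of this PR

# force_reload is forwarded through builder_kwargs to re-fetch the dataset script
dataset = nlp.load("squad", builder_kwargs={"force_reload": True})
```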
https://api.github.com/repos/huggingface/datasets/issues/3051 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3051/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3051/comments | https://api.github.com/repos/huggingface/datasets/issues/3051/events | https://github.com/huggingface/datasets/issues/3051 | 1,021,852,234 | I_kwDODunzps486DpK | 3,051 | Non-Matching Checksum Error with crd3 dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-10-10T01:32:43Z | 2022-03-15T15:54:26Z | 2022-03-15T15:54:26Z | null | ## Describe the bug
When I try loading the crd3 dataset (https://huggingface.co/datasets/crd3), an error is thrown.
## Steps to reproduce the bug
```python
dataset = load_dataset('crd3', split='train')
```
## Expected results
I expect no error to be thrown.
## Actual results
A non-matching checksum error is thrown.
```
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/RevanthRameshkumar/CRD3/archive/master.zip']
```
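A common workaround at the time (a sketch; it bypasses the stale checksum rather than fixing it):
```python
from datasets import load_dataset

# ignore_verifications skips checksum/size verification (datasets 1.x API)
dataset = load_dataset("crd3", split="train", ignore_verifications=True)
```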
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-4.4.0-173-generic-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.6.10
- PyArrow version: 5.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3051/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3051/timeline | null | completed | null | null | false | [
"I got the same error for another dataset (`multi_woz_v22`):\r\n\r\n```\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/... |
https://api.github.com/repos/huggingface/datasets/issues/6005 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6005/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6005/comments | https://api.github.com/repos/huggingface/datasets/issues/6005/events | https://github.com/huggingface/datasets/pull/6005 | 1,788,103,576 | PR_kwDODunzps5UoJ91 | 6,005 | Drop Python 3.7 support | [] | closed | false | null | 7 | 2023-07-04T15:02:37Z | 2023-07-06T15:32:41Z | 2023-07-06T15:22:43Z | null | `hfh` and `transformers` have dropped Python 3.7 support, so we should do the same :).
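For context, the core of such a change is usually a one-line `python_requires` bump in `setup.py` (an illustrative excerpt, not the actual diff):
```python
from setuptools import setup

setup(
    name="example-package",     # stand-in name, not the real setup.py
    version="0.0.1",
    python_requires=">=3.8.0",  # the key change: drop Python 3.7
)
```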
(Based on the stats, it seems fewer than 10% of users use `datasets` with Python 3.7.) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6005/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6005/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6005.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6005",
"merged_at": "2023-07-06T15:22:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6005.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6005"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |