| Column | Type | Min | Max |
|---|---|---|---|
| id | int64 | 599M | 3.26B |
| number | int64 | 1 | 7.7k |
| title | string (lengths) | 1 | 290 |
| body | string (lengths) | 0 | 228k |
| state | string (2 classes) | | |
| html_url | string (lengths) | 46 | 51 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-07-23 08:04:53 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-07-23 18:53:44 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-07-23 16:44:42 |
| user | dict | | |
| labels | list (lengths) | 0 | 4 |
| is_pull_request | bool (2 classes) | | |
| comments | list (lengths) | 0 | 0 |
988,276,859
2,870
Fix three typos in two files for documentation
Changed "bacth_size" to "batch_size" (2x) Changed "intsructions" to "instructions"
closed
https://github.com/huggingface/datasets/pull/2870
2021-09-04T11:49:43
2021-09-06T08:21:21
2021-09-06T08:19:35
{ "login": "leny-mi", "id": 25124853, "type": "User" }
[]
true
[]
987,676,420
2,869
TypeError: 'NoneType' object is not callable
## Describe the bug TypeError: 'NoneType' object is not callable ## Steps to reproduce the bug ```python from datasets import load_dataset, load_metric dataset = load_dataset("glue", "cola") ``` ## Expected results A clear and concise description of the expected results. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: - Python version: 3.7 - PyArrow version:
closed
https://github.com/huggingface/datasets/issues/2869
2021-09-03T11:27:39
2025-02-19T09:57:34
2021-09-08T09:24:55
{ "login": "Chenfei-Kang", "id": 40911446, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
987,139,146
2,868
Add Common Objects in 3D (CO3D)
## Adding a Dataset - **Name:** *Common Objects in 3D (CO3D)* - **Description:** *See blog post [here](https://ai.facebook.com/blog/common-objects-in-3d-dataset-for-3d-reconstruction)* - **Paper:** *[link to paper](https://arxiv.org/abs/2109.00512)* - **Data:** *[link to data](https://ai.facebook.com/datasets/co3d-downloads/)* - **Motivation:** *excerpt from above blog post:* > As the first data set of its kind, CO3D will aptly enable reconstruction of real-life 3D objects. Indeed, CO3D already provides training data to enable our NeRFormer to tackle the new-view synthesis (NVS) task. Here, photorealistic NVS is a major step on the path to fully immersive AR/VR effects, where objects can be virtually transported across different environments, which will allow connecting users by sharing or recollecting their experiences. > > Besides practical applications in AR/VR, we hope that the data set will become a standard testbed for the recent proliferation of methods (including NeRFormer, Implicit Differentiable Renderer, NeRF, and others) that reconstruct 3D scenes by means of an implicit shape model. > Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
open
https://github.com/huggingface/datasets/issues/2868
2021-09-02T20:36:12
2024-01-17T12:03:59
null
{ "login": "nateraw", "id": 32437151, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "vision", "color": "bfdadc" } ]
false
[]
986,971,224
2,867
Add CaSiNo dataset
Hi. I request you to add our dataset to the repository. This data was recently published at NAACL 2021: https://aclanthology.org/2021.naacl-main.254.pdf
closed
https://github.com/huggingface/datasets/pull/2867
2021-09-02T17:06:23
2021-09-16T15:12:54
2021-09-16T09:23:44
{ "login": "kushalchawla", "id": 8416863, "type": "User" }
[]
true
[]
986,706,676
2,866
"counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug `counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode. ## Steps to reproduce the bug ```python >>> import datasets as ds >>> a = ds.load_dataset('counter', split="train", streaming=False) Using custom data configuration default Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9... Traceback (most recent call last): File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split for key, record in utils.tqdm( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__ for obj in iterable: File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples with derived_file.open(encoding="utf-8") as f: File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open return io.open(self, mode, buffering, encoding, errors, newline, File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener return self._accessor.open(self, flags, mode) FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset builder_instance.download_and_prepare( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare self._download_and_prepare( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare raise OSError( OSError: Cannot find data file. Original error: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml' ``` ```python >>> import datasets as ds >>> b = ds.load_dataset('counter', split="train", streaming=True) Using custom data configuration default >>> list(b) [] ``` ## Expected results An exception should be raised in streaming mode ## Actual results No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty. ## Environment info - `datasets` version: 1.11.1.dev0 - Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29 - Python version: 3.8.11 - PyArrow version: 4.0.1
closed
https://github.com/huggingface/datasets/issues/2866
2021-09-02T13:10:53
2021-10-14T09:24:09
2021-10-14T09:24:09
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
986,460,698
2,865
Add MultiEURLEX dataset
**Add new MultiEURLEX Dataset** MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of the EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is a multi-label classification task (given the text, predict multiple labels).
closed
https://github.com/huggingface/datasets/pull/2865
2021-09-02T09:42:24
2021-09-10T11:50:06
2021-09-10T11:50:06
{ "login": "iliaschalkidis", "id": 1626984, "type": "User" }
[]
true
[]
986,159,438
2,864
Fix data URL in ToTTo dataset
Data source host changed their data URL: google-research-datasets/ToTTo@cebeb43. Fix #2860.
closed
https://github.com/huggingface/datasets/pull/2864
2021-09-02T05:25:08
2021-09-02T06:47:40
2021-09-02T06:47:40
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
986,156,755
2,863
Update dataset URL
null
closed
https://github.com/huggingface/datasets/pull/2863
2021-09-02T05:22:18
2021-09-02T08:10:50
2021-09-02T08:10:50
{ "login": "mrm8488", "id": 3653789, "type": "User" }
[]
true
[]
985,081,871
2,861
fix: 🐛 be more specific when catching exceptions
The same specific exception is caught in other parts of the same function.
closed
https://github.com/huggingface/datasets/pull/2861
2021-09-01T12:18:12
2021-09-02T09:53:36
2021-09-02T09:52:03
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
985,013,339
2,860
Cannot download TOTTO dataset
Error: Couldn't find file at https://storage.googleapis.com/totto/totto_data.zip `datasets version: 1.11.0` # How to reproduce: ```py from datasets import load_dataset dataset = load_dataset('totto') ```
closed
https://github.com/huggingface/datasets/issues/2860
2021-09-01T11:04:10
2021-09-02T06:47:40
2021-09-02T06:47:40
{ "login": "mrm8488", "id": 3653789, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
984,324,500
2,859
Loading allenai/c4 in streaming mode does too many HEAD requests
This does 60,000+ HEAD requests to get all the ETags of all the data files: ```python from datasets import load_dataset load_dataset("allenai/c4", streaming=True) ``` It makes loading the dataset completely impractical. The ETags are used to compute the config id (it must depend on the data files being used). Instead of using the ETags, we could simply use the commit hash of the dataset repository on the Hub, as well as the glob pattern used to resolve the files (here it's `*` by default, to load all the files of the repository).
closed
https://github.com/huggingface/datasets/issues/2859
2021-08-31T21:11:04
2021-10-12T07:35:52
2021-10-11T11:05:51
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "streaming", "color": "fef2c0" } ]
false
[]
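A minimal sketch of the alternative proposed in the issue above: derive the config id from the repository commit sha plus the glob pattern instead of one ETag per data file. The function name and hashing scheme are illustrative assumptions, not the library's API, and the commit sha is hypothetical.

```python
from hashlib import sha256

# Illustrative only: combine the repo commit sha and the glob pattern into a stable id,
# avoiding one HEAD request per data file.
def config_id(dataset_name: str, commit_sha: str, pattern: str = "*") -> str:
    return f"{dataset_name}--" + sha256(f"{commit_sha}:{pattern}".encode()).hexdigest()[:16]

print(config_id("allenai/c4", "0123abcd"))  # hypothetical commit sha
```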
984,145,568
2,858
Fix s3fs version in CI
The latest s3fs version has new constraints on aiobotocore, and therefore on boto3 and botocore. This PR changes the constraints to avoid the new conflicts. In particular, it pins the version of s3fs.
closed
https://github.com/huggingface/datasets/pull/2858
2021-08-31T18:05:43
2021-09-06T13:33:35
2021-08-31T21:29:51
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
984,093,938
2,857
Update: Openwebtext - update size
Update the size of the Openwebtext dataset. I also regenerated the dataset_infos.json, but neither the data file checksum nor the number of examples changed (8013769 examples). Close #2839, close #726.
closed
https://github.com/huggingface/datasets/pull/2857
2021-08-31T17:11:03
2022-02-15T10:38:03
2021-09-07T09:44:32
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
983,876,734
2,856
fix: 🐛 remove URL's query string only if it's ?dl=1
A lot of URLs use query strings, for example http://opus.nlpl.eu/download.php?f=Bianet/v1/moses/en-ku.txt.zip; we must not remove them when trying to detect the protocol. We thus remove the query string only when it is ?dl=1, which occurs on Dropbox and dl.orangedox.com. Also: add unit tests. See https://github.com/huggingface/datasets/pull/2843 for the original discussion.
closed
https://github.com/huggingface/datasets/pull/2856
2021-08-31T13:40:07
2021-08-31T14:22:12
2021-08-31T14:22:12
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
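A minimal sketch of the rule described in the PR above: keep query strings in general and only drop the suffix when it is exactly `?dl=1`. The helper name is an illustrative assumption, not the function used in the PR.

```python
# Illustrative only: strip the query string solely for Dropbox / dl.orangedox.com style links.
def strip_dl_query(url: str) -> str:
    return url[: -len("?dl=1")] if url.endswith("?dl=1") else url

print(strip_dl_query("https://foo.bar/file.txt.gz?dl=1"))
# -> https://foo.bar/file.txt.gz
print(strip_dl_query("http://opus.nlpl.eu/download.php?f=Bianet/v1/moses/en-ku.txt.zip"))
# -> unchanged: here the query string is part of the real file location
```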
983,858,229
2,855
Fix windows CI CondaError
From this thread: https://github.com/conda/conda/issues/6057 We can fix the conda error ``` CondaError: Cannot link a source that does not exist. C:\Users\...\Anaconda3\Scripts\conda.exe ``` by doing ```bash conda update conda ``` before doing any install in the windows CI
closed
https://github.com/huggingface/datasets/pull/2855
2021-08-31T13:22:02
2021-08-31T13:35:34
2021-08-31T13:35:33
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
983,726,084
2,854
Fix caching when moving script
When caching the result of a `map` function, the hash that is computed depends on many properties of this function, such as all the python objects it uses, its code and also the location of this code. Using the full path of the python script for the location of the code makes the hash change if a script like `run_mlm.py` is moved. I changed this by simply using the base name of the script instead of the full path. Note that this change also affects the hash of the code used from imported modules, but I think it's fine. Indeed it hashes the code of the imported modules anyway, so the location of the python files of the imported modules doesn't matter when computing the hash. Close https://github.com/huggingface/datasets/issues/2825
closed
https://github.com/huggingface/datasets/pull/2854
2021-08-31T10:58:35
2021-08-31T13:13:36
2021-08-31T13:13:36
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
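A minimal sketch of the idea in the PR above: hash the script's base name rather than its absolute path, so moving `run_mlm.py` to another directory keeps the same fingerprint. The helper is an illustrative assumption; the real change lives in the hashing code of `datasets`.

```python
import os

# Illustrative only: the location component fed to the hash no longer depends on the directory.
def code_location_for_hashing(script_path: str) -> str:
    return os.path.basename(script_path)

assert code_location_for_hashing("/old/dir/run_mlm.py") == code_location_for_hashing("/new/dir/run_mlm.py")
```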
983,692,026
2,853
Add AMI dataset
This is an initial commit for the AMI dataset.
closed
https://github.com/huggingface/datasets/pull/2853
2021-08-31T10:19:01
2021-09-29T09:19:19
2021-09-29T09:19:19
{ "login": "cahya-wirawan", "id": 7669893, "type": "User" }
[]
true
[]
983,609,352
2,852
Fix: linnaeus - fix url
The url was causing a `ConnectionError` because of the "/" at the end Close https://github.com/huggingface/datasets/issues/2821
closed
https://github.com/huggingface/datasets/pull/2852
2021-08-31T08:51:13
2021-08-31T13:12:10
2021-08-31T13:12:09
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
982,789,593
2,851
Update `column_names` showed as `:func:` in exploring.st
Hi, One mention of `column_names` in exploring.st was showing it as `:func:` instead of `:attr:`.
closed
https://github.com/huggingface/datasets/pull/2851
2021-08-30T13:21:46
2021-09-01T08:42:11
2021-08-31T14:45:46
{ "login": "ClementRomac", "id": 8899812, "type": "User" }
[]
true
[]
982,654,644
2,850
Wound segmentation datasets
## Adding a Dataset - **Name:** Wound segmentation datasets - **Description:** annotated wound image dataset - **Paper:** https://www.nature.com/articles/s41598-020-78799-w - **Data:** https://github.com/uwm-bigdata/wound-segmentation - **Motivation:** Interesting simple image dataset, useful for segmentation, with visibility due to http://www.miccai.org/special-interest-groups/challenges/ and https://fusc.grand-challenge.org/ Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
open
https://github.com/huggingface/datasets/issues/2850
2021-08-30T10:44:32
2021-12-08T12:02:00
null
{ "login": "osanseviero", "id": 7246357, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "vision", "color": "bfdadc" } ]
false
[]
982,631,420
2,849
Add Open Catalyst Project Dataset
## Adding a Dataset - **Name:** Open Catalyst 2020 (OC20) Dataset - **Website:** https://opencatalystproject.org/ - **Data:** https://github.com/Open-Catalyst-Project/ocp/blob/master/DATASET.md Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
open
https://github.com/huggingface/datasets/issues/2849
2021-08-30T10:14:39
2021-08-30T10:14:39
null
{ "login": "osanseviero", "id": 7246357, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
981,953,908
2,848
Update README.md
Changed 'Tain' to 'Train'.
closed
https://github.com/huggingface/datasets/pull/2848
2021-08-28T23:58:26
2021-09-07T09:40:32
2021-09-07T09:40:32
{ "login": "odellus", "id": 4686956, "type": "User" }
[]
true
[]
981,589,693
2,847
fix regex to accept negative timezone
fix #2846
closed
https://github.com/huggingface/datasets/pull/2847
2021-08-27T20:54:05
2021-09-13T20:39:50
2021-09-07T09:34:23
{ "login": "jadermcs", "id": 7156771, "type": "User" }
[]
true
[]
981,587,590
2,846
Negative timezone
## Describe the bug The load_dataset method does not accept a parquet file with a negative timezone, as it has the following regex: ``` "^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+:]*)$" ``` So a valid timestamp ```timestamp[us, tz=-03:00]``` returns an error when loading parquet files. ## Steps to reproduce the bug ```python # Where the timestamp column has a tz of -03:00 datasets = load_dataset('parquet', data_files={'train': train_files, 'validation': validation_files, 'test': test_files}, cache_dir="./cache_teste/") ``` ## Expected results The -03:00 is a valid tz so the regex should accept this without raising an error. ## Actual results As this regex rejects a valid tz, it raises the following error: ```python raise ValueError( f"{datasets_dtype} is not a validly formatted string representation of a pyarrow timestamp." f"Examples include timestamp[us] or timestamp[us, tz=America/New_York]" f"See: https://arrow.apache.org/docs/python/generated/pyarrow.timestamp.html#pyarrow.timestamp" ) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: Ubuntu 20.04 - Python version: 3.8 - PyArrow version: 5.0.0
closed
https://github.com/huggingface/datasets/issues/2846
2021-08-27T20:50:33
2021-09-10T11:51:07
2021-09-10T11:51:07
{ "login": "jadermcs", "id": 7156771, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
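A minimal sketch of the bug in the issue above: the quoted pattern has no "-" in the tz character class, so a negative offset like "-03:00" never matches. The "fixed" pattern below is only a guess at what the linked fix does, added for illustration.

```python
import re

# The pattern quoted in the issue vs. a variant that also allows a leading minus sign.
original = re.compile(r"^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+:]*)$")
fixed = re.compile(r"^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+:-]*)$")

for spec in ("us, tz=America/New_York", "us, tz=-03:00"):
    print(spec, bool(original.match(spec)), bool(fixed.match(spec)))
# us, tz=America/New_York True True
# us, tz=-03:00 False True
```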
981,487,861
2,845
[feature request] adding easy to remember `datasets.cache_dataset()` + `datasets.is_dataset_cached()`
Often, there is a need to prepare a dataset but not use it immediately, e.g. think test suite setup, so it'd be really useful to be able to do: ``` if not datasets.is_dataset_cached(ds): datasets.cache_dataset(ds) ``` This can already be done with: ``` builder = load_dataset_builder(ds) if not os.path.isdir(builder.cache_dir): builder.download_and_prepare() ``` but the current way is way less intuitive and much harder to remember than the proposed API, IMHO. One more way is to do: ``` _ = load_dataset(ds) ``` but it wastes resources loading the dataset when it's not needed. This has been discussed at https://huggingface.slack.com/archives/C01229B19EX/p1630021912025800 Thank you! @lhoestq
open
https://github.com/huggingface/datasets/issues/2845
2021-08-27T18:21:51
2021-08-27T18:24:05
null
{ "login": "stas00", "id": 10676103, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
981,382,806
2,844
Fix: wikicorpus - fix keys
As mentioned in https://github.com/huggingface/datasets/issues/2552, there is a duplicate keys error in `wikicorpus`. I fixed that by taking into account the file index in the keys
closed
https://github.com/huggingface/datasets/pull/2844
2021-08-27T15:56:06
2021-09-06T14:07:28
2021-09-06T14:07:27
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
981,317,775
2,843
Fix extraction protocol inference from urls with params
Previously it was unable to infer the compression protocol for files at URLs like ``` https://foo.bar/train.json.gz?dl=1 ``` because of the query parameters. I fixed that, this should allow 10+ datasets to work in streaming mode: ``` "discovery", "emotion", "grail_qa", "guardian_authorship", "pragmeval", "simple_questions_v2", "versae/adobo", "w-nicole/childes_data", "w-nicole/childes_data_no_tags_", "w-nicole/childes_data_with_tags", "w-nicole/childes_data_with_tags_" ``` cc @severo
closed
https://github.com/huggingface/datasets/pull/2843
2021-08-27T14:40:57
2021-08-30T17:11:49
2021-08-30T13:12:01
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
980,725,899
2,842
always requiring the username in the dataset name when there is one
Another person and I have now been bitten by `datasets`' non-strictness about requiring a dataset creator's username when it's due. So both of us started with `stas/openwebtext-10k`, somewhere along the line lost `stas/` and continued using `openwebtext-10k`, and all was good until we published the software and things broke, since there is no `openwebtext-10k`. So this feature request is asking to tighten the checking and not allow dataset loading if it was downloaded with the user prefix, but then attempted to be used w/o it. The same in code: ``` # first run python -c "from datasets import load_dataset; load_dataset('stas/openwebtext-10k')" # now run immediately python -c "from datasets import load_dataset; load_dataset('openwebtext-10k')" # the second command should fail, but it doesn't fail now. ``` Please let me know if I explained myself clearly. Thank you!
closed
https://github.com/huggingface/datasets/issues/2842
2021-08-26T23:31:53
2021-10-22T09:43:35
2021-10-22T09:43:35
{ "login": "stas00", "id": 10676103, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
980,497,321
2,841
Adding GLUECoS Hinglish and Spanglish code-switching benchmark
## Adding a Dataset - **Name:** GLUECoS - **Description:** a Microsoft Benchmark to evaluate code-switching for only two language pairs but a variety of tasks - **Paper:** https://aclanthology.org/2020.acl-main.329/ - **Data:** https://github.com/microsoft/GLUECoS - **Motivation:** We currently only have [one other](https://huggingface.co/datasets/lince) dataset for code-switching Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
open
https://github.com/huggingface/datasets/issues/2841
2021-08-26T17:47:39
2021-10-20T18:41:20
null
{ "login": "yjernite", "id": 10469459, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
980,489,074
2,840
How can I compute BLEU-4 score use `load_metric` ?
I have found the sacrebleu metric, but I do not know the difference between it and BLEU-4. If I want to compute the BLEU-4 score, what can I do?
closed
https://github.com/huggingface/datasets/issues/2840
2021-08-26T17:36:37
2021-08-27T08:13:24
2021-08-27T08:13:24
{ "login": "Doragd", "id": 26213546, "type": "User" }
[]
false
[]
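A hedged sketch related to the question above: to my understanding, the built-in "bleu" metric works on pre-tokenized text and computes 4-gram BLEU by default (max_order=4), which is what "BLEU-4" usually refers to, while sacrebleu takes raw, untokenized strings. The tokenized toy sentences are made up for illustration.

```python
from datasets import load_metric

# Predictions are lists of tokens; references are lists of lists of tokens (one list per prediction).
bleu = load_metric("bleu")
predictions = [["the", "cat", "sat", "on", "the", "mat"]]
references = [[["the", "cat", "is", "on", "the", "mat"]]]
print(bleu.compute(predictions=predictions, references=references, max_order=4))
```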
980,271,715
2,839
OpenWebText: NonMatchingSplitsSizesError
## Describe the bug When downloading `openwebtext`, I'm getting: ``` datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', num_bytes=39611023912, num_examples=7982430, dataset_name='openwebtext')}] ``` I suspect that the file we download from has changed, since the size doesn't seem to match the documentation: `Downloading: 0%| | 0.00/12.9G [00:00<?, ?B/s]` This suggests the total size is 12.9GB, whereas the documentation mentions `Size of downloaded dataset files: 12283.35 MB`. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("openwebtext", download_mode="force_redownload") ``` ## Expected results Loading is successful. ## Actual results Loading throws the above error. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.10.2 - Platform: linux (Redhat version 8.1) - Python version: 3.8 - PyArrow version: 4.0.1
closed
https://github.com/huggingface/datasets/issues/2839
2021-08-26T13:50:26
2021-09-21T14:12:40
2021-09-21T14:09:43
{ "login": "thomasw21", "id": 24695242, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
980,067,186
2,838
Add error_bad_chunk to the JSON loader
Add the `error_bad_chunk` parameter to the JSON loader. Setting `error_bad_chunk=False` allows skipping an unparsable chunk of JSON data without raising an error. Additional note: In case of an unparsable JSON chunk, the JSON loader no longer tries to load the full JSON (which could take a lot of time in streaming mode) to get the JSON fields that the user may have forgotten to pass. Ex: for SQuAD-like data, the user has to pass `field="data"` to tell the loader to get the list of examples from this field. TODO: update docs. cc @lvwerra
open
https://github.com/huggingface/datasets/pull/2838
2021-08-26T10:07:32
2023-09-25T09:06:42
null
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
979,298,297
2,837
prepare_module issue when loading from read-only fs
## Describe the bug When we use prepare_module from a readonly file system, we create a FileLock using the `local_path`. This path is not necessarily writable. `lock_path = local_path + ".lock"` ## Steps to reproduce the bug Run `load_dataset` on a readonly python loader file. ```python ds = load_dataset( python_loader, data_files={"train": train_path, "test": test_path} ) ``` where `python_loader` is a path to a file located in a readonly folder. ## Expected results This should work I think? ## Actual results ```python return load_dataset( File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 711, in load_dataset module_path, hash, resolved_file_path = prepare_module( File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 465, in prepare_module with FileLock(lock_path): File "/usr/local/lib/python3.8/dist-packages/datasets/utils/filelock.py", line 314, in __enter__ self.acquire() File "/usr/local/lib/python3.8/dist-packages/datasets/utils/filelock.py", line 263, in acquire self._acquire() File "/usr/local/lib/python3.8/dist-packages/datasets/utils/filelock.py", line 378, in _acquire fd = os.open(self._lock_file, open_mode) OSError: [Errno 30] Read-only file system: 'YOUR_FILE.py.lock' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.7.0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.8 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/2837
2021-08-25T15:21:26
2021-10-05T17:58:22
2021-10-05T17:58:22
{ "login": "Dref360", "id": 8976546, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
979,230,142
2,836
Optimize Dataset.filter to only compute the indices to keep
Optimize `Dataset.filter` to only compute the indices of the rows to keep, instead of creating a new Arrow table with the rows to keep. Creating a new table was an issue because it could take a lot of disk space. This will be useful to process audio datasets for example cc @patrickvonplaten
closed
https://github.com/huggingface/datasets/pull/2836
2021-08-25T14:41:22
2021-09-14T14:51:53
2021-09-13T15:50:21
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
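A minimal sketch of the behaviour described in the PR above: keeping only the indices of the rows to keep is what `Dataset.select` already does, so a filter can be expressed as a select over the matching row ids instead of writing a new Arrow table. The toy column and predicate are made up for illustration.

```python
from datasets import Dataset

ds = Dataset.from_dict({"value": list(range(10))})
keep = [i for i, v in enumerate(ds["value"]) if v % 2 == 0]
filtered = ds.select(keep)  # stores an indices mapping, not a copy of the data
print(filtered["value"])  # [0, 2, 4, 6, 8]
```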
979,209,394
2,835
Update: timit_asr - make the dataset streamable
The TIMIT ASR dataset had two issues that was preventing it from being streamable: 1. it was missing a call to `open` before `pd.read_csv` 2. it was using `os.path.dirname` which is not supported for streaming I made the dataset streamable by using `open` to load the CSV, and by adding the support for `os.path.dirname` in dataset scripts to stream data You can now do ```python from datasets import load_dataset timit_asr = load_dataset("timit_asr", streaming=True) print(next(iter(timit_asr["train"]))) ``` prints: ```json {"file": "zip://data/TRAIN/DR4/MMDM0/SI681.WAV::https://data.deepai.org/timit.zip", "phonetic_detail": {"start": [0, 1960, 2466, 3480, 4000, 5960, 7480, 7880, 9400, 9960, 10680, 13480, 15680, 15880, 16920, 18297, 18882, 19480, 21723, 22516, 24040, 25190, 27080, 28160, 28560, 30120, 31832, 33240, 34640, 35968, 37720], "utterance": ["h#", "w", "ix", "dcl", "s", "ah", "tcl", "ch", "ix", "n", "ae", "kcl", "t", "ix", "v", "r", "ix", "f", "y", "ux", "zh", "el", "bcl", "b", "iy", "y", "ux", "s", "f", "el", "h#"], "stop": [1960, 2466, 3480, 4000, 5960, 7480, 7880, 9400, 9960, 10680, 13480, 15680, 15880, 16920, 18297, 18882, 19480, 21723, 22516, 24040, 25190, 27080, 28160, 28560, 30120, 31832, 33240, 34640, 35968, 37720, 39920]}, "sentence_type": "SI", "id": "SI681", "speaker_id": "MMDM0", "dialect_region": "DR4", "text": "Would such an act of refusal be useful?", "word_detail": { "start": [1960, 4000, 9400, 10680, 15880, 18297, 27080, 30120], "utterance": ["would", "such", "an", "act", "of", "refusal", "be", "useful"], "stop": [4000, 9400, 10680, 15880, 18297, 27080, 30120, 37720] }} ``` cc @patrickvonplaten @vrindaprabhu
closed
https://github.com/huggingface/datasets/pull/2835
2021-08-25T14:22:49
2021-09-07T13:15:47
2021-09-07T13:15:46
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
978,309,749
2,834
Fix IndexError by ignoring empty RecordBatch
We need to ignore the empty record batches for the interpolation search to work correctly when querying arrow tables Close #2833 cc @SaulLu
closed
https://github.com/huggingface/datasets/pull/2834
2021-08-24T17:06:13
2021-08-24T17:21:18
2021-08-24T17:21:18
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
978,296,140
2,833
IndexError when accessing first element of a Dataset if first RecordBatch is empty
The computation of the offsets of the underlying Table of a Dataset has some issues if the first RecordBatch is empty. ```python from datasets import Dataset import pyarrow as pa pa_table = pa.Table.from_pydict({"a": [1]}) pa_table2 = pa.Table.from_pydict({"a": []}, schema=pa_table.schema) ds_table = pa.concat_tables([pa_table2, pa_table]) dataset = Dataset(ds_table) print([len(b) for b in dataset.data._batches]) # [0, 1] print(dataset.data._offsets) # [0 0 1] (should be [0, 1]) dataset[0] ``` raises ```python --------------------------------------------------------------------------- IndexError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/datasets/table.py in _interpolation_search(arr, x) 90 else: 91 i, j = i, k ---> 92 raise IndexError(f"Invalid query '{x}' for size {arr[-1] if len(arr) else 'none'}.") 93 94 IndexError: Invalid query '0' for size 1. ``` This can be fixed by ignoring empty batches when computing `table._batches` and `table._offsets` cc @SaulLu
closed
https://github.com/huggingface/datasets/issues/2833
2021-08-24T16:49:20
2021-08-24T17:21:17
2021-08-24T17:21:17
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
978,012,800
2,832
Logging levels not taken into account
## Describe the bug The `logging` module isn't working as intended relative to the levels to set. ## Steps to reproduce the bug ```python from datasets import logging logging.set_verbosity_debug() logger = logging.get_logger() logger.error("ERROR") logger.warning("WARNING") logger.info("INFO") logger.debug("DEBUG") ``` ## Expected results I expect all logs to be output since I'm setting the `debug` level. ## Actual results Only the first two logs are output. ## Environment info - `datasets` version: 1.11.0 - Platform: Linux-5.13.9-arch1-1-x86_64-with-glibc2.33 - Python version: 3.9.6 - PyArrow version: 5.0.0 ## To go further This logging issue appears in `datasets` but not in `transformers`. It happens because there is no handler defined for the logger. When no handler is defined, the `logging` library will output a one-off error to stderr, using a `StderrHandler` with level `WARNING`. `transformers` sets a default `StreamHandler` [here](https://github.com/huggingface/transformers/blob/5c6eca71a983bae2589eed01e5c04fcf88ba5690/src/transformers/utils/logging.py#L86)
closed
https://github.com/huggingface/datasets/issues/2832
2021-08-24T11:50:41
2023-07-12T17:19:30
2023-07-12T17:19:29
{ "login": "LysandreJik", "id": 30755778, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
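A hedged workaround sketch for the issue above: since no default handler is attached to the `datasets` logger, attaching one explicitly makes the INFO/DEBUG records visible.

```python
import logging as py_logging
from datasets import logging

logging.set_verbosity_debug()
logger = logging.get_logger()
logger.addHandler(py_logging.StreamHandler())  # attach a handler ourselves
logger.debug("DEBUG")  # now emitted to stderr
```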
977,864,600
2,831
ArrowInvalid when mapping dataset with missing values
## Describe the bug I encountered an `ArrowInvalid` when mapping dataset with missing values. Here are the files for a minimal example. The exception is only thrown when the first line in the csv has a missing value (if you move the last line to the top it isn't thrown). [data_small.csv](https://github.com/huggingface/datasets/files/7037838/data_small.csv) [data.csv](https://github.com/huggingface/datasets/files/7037842/data.csv) ## Steps to reproduce the bug ```python from datasets import load_dataset datasets = load_dataset("csv", data_files=['data_small.csv']) datasets = datasets.map(lambda e: {'labels': e['match']}, remove_columns=['id']) ``` ## Expected results No error ## Actual results ``` File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Invalid null value ``` ## Environment info - `datasets` version: 1.5.0 - Platform: Linux-5.11.0-25-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.7.1+cpu (False) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no
open
https://github.com/huggingface/datasets/issues/2831
2021-08-24T08:50:42
2021-08-31T14:15:34
null
{ "login": "uniquefine", "id": 12694730, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
977,563,947
2,830
Add imagefolder dataset
A generic imagefolder dataset inspired by `torchvision.datasets.ImageFolder`. Resolves #2508 --- Example Usage: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/954fa8cba4ff806f6147a782fa9efd1a/imagefolder-official-example.ipynb)
closed
https://github.com/huggingface/datasets/pull/2830
2021-08-23T23:34:06
2022-03-01T16:29:44
2022-03-01T16:29:44
{ "login": "nateraw", "id": 32437151, "type": "User" }
[]
true
[]
977,233,360
2,829
Optimize streaming from TAR archives
Hi ! As you know TAR has some constraints for data streaming. While it is optimized for buffering, the files in the TAR archive **need to be streamed in order**. It means that we can't choose which file to stream from, and this notation is to be avoided for TAR archives: ``` tar://books_large_p1.txt::https://storage.googleapis.com/huggingface-nlp/datasets/bookcorpus/bookcorpus.tar.bz2 ``` Instead, I suggest we implement `iter_archive` for the `StreamingDownloadManager`. The regular `DownloadManager` already has it. Then we will have to update the json/txt/csv/etc. loaders to make them use `iter_archive` on TAR archives. That's also what Tensorflow Datasets is doing in this case. See this [dataset](https://github.com/tensorflow/datasets/blob/93895059c80a9e05805e8f32a2e310f66a23fc98/tensorflow_datasets/image_classification/flowers.py) for example. Therefore instead of doing ```python uncompressed = dl_manager.extract(tar_archive) filename = "books_large_p1.txt" with open(os.path.join(uncompressed, filename)) as f: for line in f: ... ``` we'll do ```python for filename, f in dl_manager.iter_archive(tar_archive): for line in f: ... ```
closed
https://github.com/huggingface/datasets/issues/2829
2021-08-23T16:56:40
2022-09-21T14:29:46
2022-09-21T14:08:39
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "streaming", "color": "fef2c0" } ]
false
[]
977,181,517
2,828
Add code-mixed Kannada Hope speech dataset
## Adding a Dataset - **Name:** *KanHope* - **Description:** *A code-mixed English-Kannada dataset for Hope speech detection* - **Paper:** *https://arxiv.org/abs/2108.04616* - **Data:** *https://github.com/adeepH/KanHope/tree/main/dataset* - **Motivation:** *The dataset is amongst the very few resources available for code-mixed low-resourced Dravidian languages of India*
closed
https://github.com/huggingface/datasets/pull/2828
2021-08-23T15:55:09
2021-10-01T17:21:03
2021-10-01T17:21:03
{ "login": "adeepH", "id": 46108405, "type": "User" }
[]
true
[]
976,976,552
2,827
add a text classification dataset
null
closed
https://github.com/huggingface/datasets/pull/2827
2021-08-23T12:24:41
2021-08-23T15:51:18
2021-08-23T15:51:18
{ "login": "adeepH", "id": 46108405, "type": "User" }
[]
true
[]
976,974,254
2,826
Add a Text Classification dataset: KanHope
## Adding a Dataset - **Name:** *KanHope* - **Description:** *A code-mixed English-Kannada dataset for Hope speech detection* - **Paper:** *https://arxiv.org/abs/2108.04616* (I am the author of the paper} - **Author:** *[AdeepH](https://github.com/adeepH)* - **Data:** *https://github.com/adeepH/KanHope/tree/main/dataset* - **Motivation:** *The dataset is amongst the very few resources available for code-mixed Dravidian languages* - I tried following the steps as per the instructions. However, could not resolve an error. Any help would be appreciated. - The dataset card and the scripts for the dataset *https://github.com/adeepH/datasets/tree/multilingual-hope-speech/datasets/mhs_eval* ``` Using custom data configuration default Downloading and preparing dataset bn_hate_speech/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/bn_hate_speech/default/0.0.0/5f417ddc89777278abd29988f909f39495f0ec802090f7d8fa63b5bffb121762... --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-114-4a9cdb519e4c> in <module>() 1 from datasets import load_dataset 2 ----> 3 data = load_dataset('/content/bn') 9 frames /usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs) 850 ignore_verifications=ignore_verifications, 851 try_from_hf_gcs=try_from_hf_gcs, --> 852 use_auth_token=use_auth_token, 853 ) 854 /usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 614 if not downloaded_from_gcs: 615 self._download_and_prepare( --> 616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 617 ) 618 # Sync info /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 691 try: 692 # Prepare split will record examples associated to the split --> 693 self._prepare_split(split_generator, **prepare_split_kwargs) 694 except OSError as e: 695 raise OSError( /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator) 1107 disable=bool(logging.get_verbosity() == logging.NOTSET), 1108 ): -> 1109 example = self.info.features.encode_example(record) 1110 writer.write(example, key) 1111 finally: /usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_example(self, example) 1015 """ 1016 example = cast_to_python_objects(example) -> 1017 return encode_nested_example(self, example) 1018 1019 def encode_batch(self, batch): /usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_nested_example(schema, obj) 863 if isinstance(schema, dict): 864 return { --> 865 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) 866 } 867 elif isinstance(schema, (list, tuple)): /usr/local/lib/python3.7/dist-packages/datasets/features.py in <dictcomp>(.0) 863 if isinstance(schema, dict): 864 return { --> 865 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) 866 } 867 elif isinstance(schema, (list, tuple)): 
/usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_nested_example(schema, obj) 890 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks 891 elif isinstance(schema, (ClassLabel, TranslationVariableLanguages, Value, _ArrayXD)): --> 892 return schema.encode_example(obj) 893 # Other object should be directly convertible to a native Arrow type (like Translation and Translation) 894 return obj /usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_example(self, example_data) 665 # If a string is given, convert to associated integer 666 if isinstance(example_data, str): --> 667 example_data = self.str2int(example_data) 668 669 # Allowing -1 to mean no label. /usr/local/lib/python3.7/dist-packages/datasets/features.py in str2int(self, values) 623 if value not in self._str2int: 624 value = str(value).strip() --> 625 output.append(self._str2int[str(value)]) 626 else: 627 # No names provided, try to integerize KeyError: ' ' ```
closed
https://github.com/huggingface/datasets/issues/2826
2021-08-23T12:21:58
2021-10-01T18:06:59
2021-10-01T18:06:59
{ "login": "adeepH", "id": 46108405, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
976,584,926
2,825
The datasets.map function does not load cached dataset after moving python script
## Describe the bug The datasets.map function caches the processed data to a certain directory. When the map function is called another time with exactly the same parameters, the cached data are supposed to be reloaded instead of re-processed. However, sometimes it doesn't reuse the cached data. I use the same data processing in different tasks, but the datasets are processed again; the only difference is that I run them in different files. ## Steps to reproduce the bug Just run the following code in different .py files. ```python if __name__ == '__main__': from datasets import load_dataset from transformers import AutoTokenizer raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1") tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") def tokenize_function(examples): return tokenizer(examples["text"], padding="max_length", truncation=True) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) ``` ## Expected results The map function should reload the data in the second or any later run. ## Actual results The processing happens in each run. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: linux - Python version: 3.7.6 - PyArrow version: 3.0.0 This is the first time I have reported a bug. If there is any problem or confusing description, please let me know 😄.
closed
https://github.com/huggingface/datasets/issues/2825
2021-08-23T03:23:37
2024-07-29T11:25:50
2021-08-31T13:13:36
{ "login": "hobbitlzy", "id": 35392624, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
976,394,721
2,824
Fix defaults in cache_dir docstring in load.py
Fix defaults in the `cache_dir` docstring.
closed
https://github.com/huggingface/datasets/pull/2824
2021-08-22T14:48:37
2021-08-26T13:23:32
2021-08-26T11:55:16
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
976,135,355
2,823
HF_DATASETS_CACHE variable in Windows
I can't seem to use a custom Cache directory in Windows. I have tried: set HF_DATASETS_CACHE = "C:\Datasets" set HF_DATASETS_CACHE = "C:/Datasets" set HF_DATASETS_CACHE = "C:\\Datasets" set HF_DATASETS_CACHE = "r'C:\Datasets'" set HF_DATASETS_CACHE = "\Datasets" set HF_DATASETS_CACHE = "/Datasets" In each instance I get the "[WinError 123] The filename, directory name, or volume label syntax is incorrect" error when attempting to load a dataset
closed
https://github.com/huggingface/datasets/issues/2823
2021-08-21T13:17:44
2021-08-21T13:20:11
2021-08-21T13:20:11
{ "login": "rp2839", "id": 8453798, "type": "User" }
[]
false
[]
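A hedged sketch related to the issue above: setting the variable from Python sidesteps the cmd.exe quoting pitfalls (with `set`, the spaces around "=" and the surrounding quotes become part of the variable name/value). The path and the dataset are hypothetical examples.

```python
import os

os.environ["HF_DATASETS_CACHE"] = r"C:\Datasets"  # hypothetical path; no quotes inside the value

from datasets import load_dataset  # import after setting the variable so it is picked up
ds = load_dataset("glue", "cola")  # now cached under C:\Datasets
```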
975,744,463
2,822
Add url prefix convention for many compression formats
## Intro When doing dataset streaming, the uncompression of compressed files is done on the fly using `fsspec`. In particular, the download manager method `download_and_extract` doesn't return a path to the locally downloaded and extracted file, but instead a chained URL so that the uncompression can be done when the file is opened. A few examples of chained URLs: - `gz://file.txt::https://foo.bar/file.txt.gz` - `bz2://file.txt::https://foo.bar/file.txt.bz2` - `zip://::https://foo.bar/archive.zip` - `tar://::https://foo.bar/archive.tar.gz` (the TAR uncompression includes gz, bz2 etc. uncompression in `fsspec`) This syntax is highly inspired by the `fsspec` URL chaining syntax from https://filesystem-spec.readthedocs.io/en/latest/features.html#url-chaining This URL prefixing allows `open` to know what kind of uncompression to do in a dataset script when doing ```python def _generate_examples(self, urlpath): with open(urlpath) as f: .... ``` ## What it changes This changes the previous behavior from https://github.com/huggingface/datasets/pull/2786 , in which `open` was trying to infer the compression automatically. Inferring the compression made it impossible to know whether the user wanted `open` to return compressed data (as is the default behavior of the builtin open), or the uncompressed data. By adding uncompression prefixes to the URL, `open` knows directly if it has to uncompress or not, and also which protocol to use. ## Additional notes This PR should close https://github.com/huggingface/datasets/issues/2813. It should also close this PR https://github.com/huggingface/datasets/pull/2811 since the oscar dataset script won't try to uncompress twice anymore. Note that I had to temporarily remove the support for passing tar and zip files to `data_files` for streaming to make it work, since it makes it ambiguous whether a zip file passed as `data_files` should be uncompressed or not. IMO we can make it work again by changing the syntax to make the glob explicit: ```python load_dataset("json", data_files="zip://*.jsonl::https://foo.bar/archive.zip") ``` This is the exact same convention as fsspec, and it removes all ambiguities. cc @albertvillanova @lewtun
closed
https://github.com/huggingface/datasets/pull/2822
2021-08-20T16:11:23
2021-08-23T15:59:16
2021-08-23T15:59:14
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
975,556,032
2,821
Cannot load linnaeus dataset
## Describe the bug The [linnaeus](https://huggingface.co/datasets/linnaeus) dataset cannot be loaded. To reproduce: ``` from datasets import load_dataset datasets = load_dataset("linnaeus") ``` This results in: ``` Downloading and preparing dataset linnaeus/linnaeus (download: 17.36 MiB, generated: 8.74 MiB, post-processed: Unknown size, total: 26.10 MiB) to /root/.cache/huggingface/datasets/linnaeus/linnaeus/1.0.0/2ff05dbc256108233262f596e09e322dbc3db067202de14286913607cd9cb704... --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) <ipython-input-4-7ef3a88f6276> in <module>() 1 from datasets import load_dataset 2 ----> 3 datasets = load_dataset("linnaeus") 11 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token) 603 raise FileNotFoundError("Couldn't find file at {}".format(url)) 604 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") --> 605 raise ConnectionError("Couldn't reach {}".format(url)) 606 607 # Try a second time ConnectionError: Couldn't reach https://drive.google.com/u/0/uc?id=1OletxmPYNkz2ltOr9pyT0b0iBtUWxslh&export=download/ ```
closed
https://github.com/huggingface/datasets/issues/2821
2021-08-20T12:15:15
2021-08-31T13:13:02
2021-08-31T13:12:09
{ "login": "NielsRogge", "id": 48327001, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
975,210,712
2,820
Downloading “reddit” dataset keeps timing out.
## Describe the bug A clear and concise description of what the bug is. Every time I try to download the reddit dataset, it times out before finishing and I have to try again. There is some timeout error that I will post once it happens again. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("reddit", ignore_verifications=True, cache_dir="/Volumes/My Passport for Mac/og-chat-data") ``` ## Expected results A clear and concise description of the expected results. I would expect the download to finish, or at least provide a parameter to extend the read timeout window. ## Actual results Specify the actual results or traceback. Shown below in the error message. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: macOS - Python version: 3.9.6 (conda env) - PyArrow version: N/A
closed
https://github.com/huggingface/datasets/issues/2820
2021-08-20T02:52:36
2021-09-08T14:52:02
2021-09-08T14:52:02
{ "login": "smeyerhot", "id": 43877130, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
974,683,155
2,819
Added XL-Sum dataset
Added the XL-Sum dataset published in ACL-IJCNLP 2021 (https://aclanthology.org/2021.findings-acl.413/). The default timeout values in `src/datasets/utils/file_utils.py` were increased to enable downloading from the original Google Drive links.
closed
https://github.com/huggingface/datasets/pull/2819
2021-08-19T13:47:45
2021-09-29T08:13:44
2021-09-23T17:49:05
{ "login": "abhik1505040", "id": 49608995, "type": "User" }
[]
true
[]
974,552,009
2,818
cannot load data from my local path
## Describe the bug I just want to load data directly from my local path, but I found a bug. I compared it with pandas to verify that my local path is valid. Here is my code ```python3 # print my local path print(config.train_path) # read data and print data length tarin=pd.read_csv(config.train_path) print(len(tarin)) # loading data by load_dataset data = load_dataset('csv',data_files=config.train_path) print(len(data)) ``` ## Steps to reproduce the bug ```python C:\Users\wie\Documents\项目\文本分类\data\train.csv 7613 Traceback (most recent call last): File "c:/Users/wie/Documents/项目/文本分类/lib/DataPrecess.py", line 17, in <module> data = load_dataset('csv',data_files=config.train_path) File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\load.py", line 830, in load_dataset **config_kwargs, File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\load.py", line 710, in load_dataset_builder **config_kwargs, File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\builder.py", line 271, in __init__ **config_kwargs, File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\builder.py", line 386, in _create_builder_config config_kwargs, custom_features=custom_features, use_auth_token=self.use_auth_token File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\builder.py", line 156, in create_config_id raise ValueError("Please provide a valid `data_files` in `DatasetBuilder`") ValueError: Please provide a valid `data_files` in `DatasetBuilder` ``` ## Expected results A clear and concise description of the expected results. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: win10 - Python version: 3.7.9 - PyArrow version: 5.0.0
closed
https://github.com/huggingface/datasets/issues/2818
2021-08-19T11:13:30
2023-07-25T17:42:15
2023-07-25T17:42:15
{ "login": "yang-collect", "id": 46920280, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
974,486,051
2,817
Rename The Pile subsets
After discussing with @yjernite we think it's better to have the subsets of The Pile explicitly have "the_pile" in their names. I'm doing the changes for the subsets that @richarddwang added: - [x] books3 -> the_pile_books3 https://github.com/huggingface/datasets/pull/2801 - [x] stack_exchange -> the_pile_stack_exchange https://github.com/huggingface/datasets/pull/2803 - [x] openwebtext2 -> the_pile_openwebtext2 https://github.com/huggingface/datasets/pull/2802 For consistency we should also rename `bookcorpusopen` to `the_pile_bookcorpus` IMO, but let me know what you think. (we can just add a deprecation message to `bookcorpusopen` for now and add `the_pile_bookcorpus`)
closed
https://github.com/huggingface/datasets/pull/2817
2021-08-19T09:56:22
2021-08-23T16:24:10
2021-08-23T16:24:09
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
974,031,404
2,816
Add Mostly Basic Python Problems Dataset
## Adding a Dataset - **Name:** Mostly Basic Python Problems Dataset - **Description:** The benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry level programmers, covering programming fundamentals, standard library functionality, and so on. Each problem consists of a task description, code solution and 3 automated test cases. - **Paper:** *link to the dataset paper if available* - **Data:** https://github.com/google-research/google-research/tree/master/mbpp - **Motivation:** Simple, small dataset related to coding problems. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
open
https://github.com/huggingface/datasets/issues/2816
2021-08-18T20:28:39
2021-09-10T08:04:20
null
{ "login": "osanseviero", "id": 7246357, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
973,862,024
2,815
Tiny typo fixes of "fo" -> "of"
Noticed a few of these when reading docs- feel free to ignore the PR and just fix on some main contributor branch if more helpful. Thanks for the great library! :)
closed
https://github.com/huggingface/datasets/pull/2815
2021-08-18T16:36:11
2021-08-19T08:03:02
2021-08-19T08:03:02
{ "login": "aronszanto", "id": 9934829, "type": "User" }
[]
true
[]
973,632,645
2,814
Bump tqdm version
The recently released tqdm 4.62.1 includes a fix for PermissionError on Windows (submitted by me in https://github.com/tqdm/tqdm/pull/1207), which means we can remove expensive `gc.collect` calls by bumping tqdm to that version. This PR does exactly that and, additionally, fixes a `disable_tqdm` definition that would previously, if used, raise a PermissionError on Windows.
closed
https://github.com/huggingface/datasets/pull/2814
2021-08-18T12:51:29
2021-08-18T13:44:11
2021-08-18T13:39:50
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
973,470,580
2,813
Remove compression from xopen
We implemented support for streaming with 2 requirements: - transparent use for the end user: just needs to pass the parameter `streaming=True` - no additional work for the contributors: previous loading scripts should also work in streaming mode with no (or minor) changes; and new loading scripts should not involve additional code to support streaming In order to fulfill these requirements, streaming implementation patched some Python functions: - the `open(urlpath)` function was patched with `fsspec.open(urlpath)` - the `os.path.join(urlpath, *others)` function was patched in order to add to `urlpath` hops (`::`) and extractor protocols (`zip://`), which are required by `fsspec.open` Recently, we implemented support for streaming all archive+compression formats: zip, tar, gz, bz2, lz4, xz, zst; tar.gz, tar.bz2,... Under the hood, the implementation: - passes an additional parameter `compression` to `fsspec.open`, so that it performs the decompression on the fly: `fsspec.open(urlpath, compression=...)` Some concerns have been raised about passing the parameter `compression` to `fsspec.open`: - https://github.com/huggingface/datasets/pull/2786#discussion_r689550254 - #2811 The main argument is that if `open` decompresses the file and afterwards we call `gzip.open` on it, that will raise an error in `oscar` dataset: ```python gzip.open(open(urlpath ``` While this is true: - it is not natural/usual to call `open` inside `gzip.open` (never seen this before) - indeed, this was recently (2 months ago) coded that way in `datasets` in order to allow streaming support (with previous implementation of streaming) In this particular case, there is a natural fix solution: #2811: - Revert the `open` inside the `gzip.open` (change done 2 months ago): `gzip.open(open(urlpath` => `gzip.open(urlpath` - Patch `gzip.open(urlpath` with `fsspec.open(urlpath, compression="gzip"` Are there other issues apart from this? Note that there is an issue just because the open inside of the gzip.open. There is no issue in the other cases where datasets loading scripts use just - `gzip.open` - `open` (after having called dl_manager.download_and_extract) TODO: - [ ] Is this really an issue? Please enumerate the `datasets` loading scripts where this is problematic. - For the moment, there are only 3 datasets where we have an `open` inside a `gzip.open`: - oscar (since 23 June), mc4 (since 2 July) and c4 (since 2 July) - In the 3 datasets, the only reason to put an open inside a gzip.open was indeed to force supporting streaming - [ ] If this is indeed an issue, which are the possible alternatives? Pros/cons?
closed
https://github.com/huggingface/datasets/issues/2813
2021-08-18T09:35:59
2021-08-23T15:59:14
2021-08-23T15:59:14
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "generic discussion", "color": "c5def5" } ]
false
[]
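A minimal sketch of the mechanism discussed in the issue above: `fsspec` can open a remote gzip file with on-the-fly decompression when the compression is stated explicitly rather than inferred. The URL is a hypothetical example, and this is not the exact patch applied in the linked PR.

```python
import fsspec

# Illustrative only: explicit compression avoids guessing from the file extension.
with fsspec.open("https://foo.bar/file.txt.gz", "rt", compression="gzip") as f:
    for line in f:
        print(line.rstrip())
```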
972,936,889
2,812
arXiv Dataset verification problem
## Describe the bug `dataset_infos.json` for `arxiv_dataset` contains a fixed number of training examples, however the data (downloaded from an external source) is updated every week with additional examples. Therefore, loading the dataset without `ignore_verifications=True` results in a verification error.
open
https://github.com/huggingface/datasets/issues/2812
2021-08-17T18:01:48
2022-01-19T14:15:35
null
{ "login": "eladsegal", "id": 13485709, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "dataset bug", "color": "2edb81" } ]
false
[]
972,522,480
2,811
Fix stream oscar
Previously, an additional `open` was added to oscar to make it stream-compatible: 587bbb94e891b22863b312b99696e32708c379f4. This was argued that might be problematic: https://github.com/huggingface/datasets/pull/2786#discussion_r690045921 This PR: - removes that additional `open` - patches `gzip.open` with `xopen` + `compression="gzip"`
closed
https://github.com/huggingface/datasets/pull/2811
2021-08-17T10:10:59
2021-08-26T10:26:15
2021-08-26T10:26:14
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
972,040,022
2,810
Add WIT Dataset
Adds Google's [WIT](https://github.com/google-research-datasets/wit) dataset.
closed
https://github.com/huggingface/datasets/pull/2810
2021-08-16T19:34:09
2022-05-06T12:27:29
2022-05-06T12:26:16
{ "login": "hassiahk", "id": 13920778, "type": "User" }
[]
true
[]
971,902,613
2,809
Add Beans Dataset
Adds the [beans](https://github.com/AI-Lab-Makerere/ibean/) image classification dataset.
closed
https://github.com/huggingface/datasets/pull/2809
2021-08-16T16:22:33
2021-08-26T11:42:27
2021-08-26T11:42:27
{ "login": "nateraw", "id": 32437151, "type": "User" }
[]
true
[]
971,882,320
2,808
Enable streaming for Wikipedia corpora
**Is your feature request related to a problem? Please describe.** Several of the [Wikipedia corpora](https://huggingface.co/datasets?search=wiki) on the Hub involve quite large files that would be a good candidate for streaming. Currently it is not possible to stream these corpora: ```python from datasets import load_dataset # Throws ValueError: Builder wikipedia is not streamable. wiki_dataset_streamed = load_dataset("wikipedia", "20200501.en", split="train", streaming=True) ``` Given that these corpora are derived from Wikipedia dumps in XML format which are then processed with Apache Beam, I am not sure whether streaming is possible in principle. The goal of this issue is to discuss whether this feature even makes sense :) **Describe the solution you'd like** It would be nice to be able to stream Wikipedia corpora from the Hub with something like ```python from datasets import load_dataset wiki_dataset_streamed = load_dataset("wikipedia", "20200501.en", split="train", streaming=True) ```
closed
https://github.com/huggingface/datasets/issues/2808
2021-08-16T15:59:12
2023-07-20T13:45:30
2023-07-20T13:45:30
{ "login": "lewtun", "id": 26859204, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
971,849,863
2,807
Add cats_vs_dogs dataset
Adds Microsoft's [Cats vs. Dogs](https://www.microsoft.com/en-us/download/details.aspx?id=54765) dataset.
closed
https://github.com/huggingface/datasets/pull/2807
2021-08-16T15:21:11
2021-08-30T16:35:25
2021-08-30T16:35:24
{ "login": "nateraw", "id": 32437151, "type": "User" }
[]
true
[]
971,625,449
2,806
Fix streaming tar files from canonical datasets
Previous PR #2800 implemented support for streaming remote tar files when passing the parameter `data_files`: it required a glob string `"*"`. However, this glob string creates an error when streaming canonical datasets (with a `join` after the `open`). This PR fixes this issue and allows streaming tar files both from: - canonical datasets scripts and - data files. This PR also adds support for compressed tar files: `.tar.gz`, `.tar.bz2`,...
closed
https://github.com/huggingface/datasets/pull/2806
2021-08-16T11:10:28
2021-10-13T09:04:03
2021-10-13T09:04:02
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
971,436,456
2,805
Fix streaming zip files from canonical datasets
Previous PR #2798 fixed streaming remote zip files when passing the parameter `data_files`. However, that broke streaming zip files used in canonical `datasets` scripts, which normally have a subsequent `join()` (patched with `xjoin()`) after the `StreamingDownloadManager.download_and_extract()` is called. This PR fixes this issue and allows streaming zip files both from: - canonical datasets scripts and - data files.
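For illustration, a hedged sketch of roughly the chained-URL form that the patched `xjoin()` ends up producing for a file inside a remote zip archive (the archive URL and member name are hypothetical):

```python
import fsspec

# "zip://<member>::<url>" tells fsspec to fetch the remote archive and open one member of it
with fsspec.open("zip://train.jsonl::https://example.com/archive.zip", "rb") as f:
    first_line = f.readline()
```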
closed
https://github.com/huggingface/datasets/pull/2805
2021-08-16T07:11:40
2021-08-16T10:34:00
2021-08-16T10:34:00
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
971,353,437
2,804
Add Food-101
Adds image classification dataset [Food-101](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/).
closed
https://github.com/huggingface/datasets/pull/2804
2021-08-16T04:26:15
2021-08-20T14:31:33
2021-08-19T12:48:06
{ "login": "nateraw", "id": 32437151, "type": "User" }
[]
true
[]
970,858,928
2,803
add stack exchange
Stack Exchange is part of EleutherAI/The Pile, but AFAIK The Pile dataset blends all sub-datasets together, so we are not able to use just one of its sub-datasets from The Pile data. So I create an independent dataset using The Pile preliminary components. I also change the default `timeout` to 100 seconds instead of 10 seconds, otherwise I keep getting read timeouts when downloading the source data of the stack exchange and cc100 datasets. While creating the dataset card, I found there is room for improvement in the card creation/editing workflow, so I've opened an issue about it: #2797 Also, I am wondering whether the import of The Pile dataset is actively being worked on (because I may need it soon)? #1675
closed
https://github.com/huggingface/datasets/pull/2803
2021-08-14T08:11:02
2021-08-19T10:07:33
2021-08-19T08:07:38
{ "login": "richarddwang", "id": 17963619, "type": "User" }
[]
true
[]
970,848,302
2,802
add openwebtext2
openwebtext2 is part of EleutherAI/The Pile, but AFAIK The Pile dataset blends all sub-datasets together, so we are not able to use just one of its sub-datasets from The Pile data. So I create an independent dataset using The Pile preliminary components. While creating the dataset card, I found there is room for improvement in the card creation/editing workflow, so I've opened an issue about it: #2797 Also, I am wondering whether the import of The Pile dataset is actively being worked on (because I may need it soon)? #1675
closed
https://github.com/huggingface/datasets/pull/2802
2021-08-14T07:09:03
2021-08-23T14:06:14
2021-08-23T14:06:14
{ "login": "richarddwang", "id": 17963619, "type": "User" }
[]
true
[]
970,844,617
2,801
add books3
books3 is part of EleutherAI/The Pile, but AFAIK The Pile dataset blends all sub-datasets together, so we are not able to use just one of its sub-datasets from The Pile data. So I create an independent dataset using The Pile preliminary components. While creating the dataset card, I found there is room for improvement in the card creation/editing workflow, so I've opened an issue about it: #2797 Also, I am wondering whether the import of The Pile dataset is actively being worked on (because I may need it soon)? #1675
closed
https://github.com/huggingface/datasets/pull/2801
2021-08-14T07:04:25
2021-08-19T16:43:09
2021-08-18T15:36:59
{ "login": "richarddwang", "id": 17963619, "type": "User" }
[]
true
[]
970,819,988
2,800
Support streaming tar files
This PR adds support to stream tar files by using the `fsspec` tar protocol. It also uses the custom `readline` implemented in PR #2786. The corresponding test is implemented in PR #2786.
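As a rough illustration of the fsspec tar protocol mentioned above (the archive URL and member path below are hypothetical, and this is a sketch rather than the PR's implementation):

```python
import fsspec

# the "tar" protocol exposes members of a (possibly remote) tar archive as individual files
with fsspec.open("tar://data/part-0.txt::https://example.com/archive.tar", "rb") as f:
    chunk = f.read(1024)  # read lazily from one member of the streamed archive
```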
closed
https://github.com/huggingface/datasets/pull/2800
2021-08-14T04:40:17
2021-08-26T10:02:30
2021-08-14T04:55:57
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
970,507,351
2,799
Loading JSON throws ArrowNotImplementedError
## Describe the bug I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below). Curiously, there is no problem loading the dataset with `pandas` which suggests some incorrect type inference is being made on the `datasets` side. For example, the stack trace indicates that some URL fields are being parsed as timestamps. You can find a Colab notebook which reproduces the error [here](https://colab.research.google.com/drive/1YUCM0j1vx5ZrouQbYSzal6RwB4-Aoh4o?usp=sharing). **Edit:** If one repeatedly tries to load the dataset, it _eventually_ works but I think it would still be good to understand why it fails in the first place :) ## Steps to reproduce the bug ```python from datasets import load_dataset from huggingface_hub import hf_hub_url import pandas as pd # returns https://huggingface.co/datasets/lewtun/github-issues-test/resolve/main/issues-datasets.jsonl data_files = hf_hub_url(repo_id="lewtun/github-issues-test", filename="issues-datasets.jsonl", repo_type="dataset") # throws ArrowNotImplementedError dset = load_dataset("json", data_files=data_files, split="test") # no problem with pandas ... df = pd.read_json(data_files, orient="records", lines=True) df.head() ``` ## Expected results I can load any line-separated JSON file, similar to `pandas`. ## Actual results ``` --------------------------------------------------------------------------- ArrowNotImplementedError Traceback (most recent call last) <ipython-input-7-5b8e82b6c3a2> in <module>() ----> 1 dset = load_dataset("json", data_files=data_files, split="test") 9 frames /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowNotImplementedError: JSON conversion to struct<url: timestamp[s], html_url: timestamp[s], labels_url: timestamp[s], id: int64, node_id: timestamp[s], number: int64, title: timestamp[s], description: timestamp[s], creator: struct<login: timestamp[s], id: int64, node_id: timestamp[s], avatar_url: timestamp[s], gravatar_id: timestamp[s], url: timestamp[s], html_url: timestamp[s], followers_url: timestamp[s], following_url: timestamp[s], gists_url: timestamp[s], starred_url: timestamp[s], subscriptions_url: timestamp[s], organizations_url: timestamp[s], repos_url: timestamp[s], events_url: timestamp[s], received_events_url: timestamp[s], type: timestamp[s], site_admin: bool>, open_issues: int64, closed_issues: int64, state: timestamp[s], created_at: timestamp[s], updated_at: timestamp[s], due_on: timestamp[s], closed_at: timestamp[s]> is not supported ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.11 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/2799
2021-08-13T15:31:48
2022-01-10T18:59:32
2022-01-10T18:59:32
{ "login": "lewtun", "id": 26859204, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
970,493,126
2,798
Fix streaming zip files
Currently, streaming remote zip data files gives `FileNotFoundError` message: ```python data_files = f"https://huggingface.co/datasets/albertvillanova/datasets-tests-compression/resolve/main/sample.zip" ds = load_dataset("json", split="train", data_files=data_files, streaming=True) next(iter(ds)) ``` This PR fixes it by adding a glob string. The corresponding test is implemented in PR #2786.
closed
https://github.com/huggingface/datasets/pull/2798
2021-08-13T15:17:01
2021-08-16T14:16:50
2021-08-13T15:38:28
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
970,331,634
2,797
Make creating/editing dataset cards easier, by editing on site and dumping info from test command.
**Is your feature request related to a problem? Please describe.** Creating and editing dataset cards should be easy, but currently it is not: - If someone else knows information I don't (bias of the dataset, dataset curation, supported tasks, ...), they first need to know that the description on hf.co comes from the README.md under github huggingface/datasets/datasets/<the dataset>, and be willing to make a PR to add or fix the information. - Much information is also saved in `dataset_info.json` (citation, description), but it still needs to be written down again in README.md. - A contributor needs to pip install and start a local server just for tagging the dataset's size. And the contributor may be creating the dataset on a lab server, which can't open a browser. - If anyone proposes a new tag, it doesn't show up in the list that another creator sees (a Stack Overflow-style approach may be ideal). - The dataset card generator web app doesn't generate the necessary subsection `Contributions` for us. **Describe the solution you'd like** - Everyone (or at least the author/contributor) can edit the description, information and tags of the dataset on the hf.co website, just like Wikipedia + Stack Overflow. - We can infer the actual data size, citation, data instances, ... from `dataset_info.json` and `dataset.arrow` via `datasets-cli test`.
open
https://github.com/huggingface/datasets/issues/2797
2021-08-13T11:54:49
2021-08-14T08:42:09
null
{ "login": "richarddwang", "id": 17963619, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
970,235,846
2,796
add cedr dataset
null
closed
https://github.com/huggingface/datasets/pull/2796
2021-08-13T09:37:35
2021-08-27T16:01:36
2021-08-27T16:01:36
{ "login": "naumov-al", "id": 22640075, "type": "User" }
[]
true
[]
969,728,545
2,794
Warnings and documentation about pickling incorrect
## Describe the bug I have a docs bug and a closely related docs enhancement suggestion! ### Bug The warning and documentation say "either `dill` or `pickle`" for fingerprinting. But it seems that `dill`, which is installed by `datasets` by default, _must_ work, or else the fingerprinting fails. Warning: https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L262 Docs: > For a transform to be hashable, it needs to be pickleable using dill or pickle. > – [docs](https://huggingface.co/docs/datasets/processing.html#fingerprinting) For my code, `pickle` works, but `dill` fails. The `dill` failure has already been reported in https://github.com/huggingface/datasets/issues/2643. However, the `dill` failure causes a hashing failure in the datasets library, without any backing off to `pickle`. This implies that it's not the case that either `dill` **or** `pickle` can work, but that `dill` must work if it is installed. I think this is more accurate wording, since it is installed and used by default: https://github.com/huggingface/datasets/blob/c93525dc291346e54212567fa72d7d607befe937/setup.py#L83 ... and the hashing will fail if it fails. ### Enhancement I think it'd be very helpful to add to the documentation how to debug hashing failures. It took me a while to figure out how to diagnose this. There is a very nice two-liner by @lhoestq in https://github.com/huggingface/datasets/issues/2516#issuecomment-865173139: ```python from datasets.fingerprint import Hasher Hasher.hash(my_object) ``` I think add this to the docs will help future users quickly debug any hashing troubles of their own :-) ## Steps to reproduce the bug `dill` but not `pickle` hashing failure in https://github.com/huggingface/datasets/issues/2643 ## Expected results If either `dill` or `pickle` can successfully hash, the hashing will succeed. ## Actual results If `dill` or `pickle` cannot hash, the hashing fails. ## Environment info - `datasets` version: 1.9.0 - Platform: Linux-5.8.0-1038-gcp-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 4.0.1
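As a complement to the two-liner above, a small hedged sketch (not from the issue) for checking whether `dill` and `pickle` can each serialize a transform, which helps pin down which of the two is failing; `my_transform` below is a stand-in for the actual preprocessing function:

```python
import pickle
import dill

def my_transform(example):  # placeholder for the real preprocessing function
    return example

for name, dumps in [("dill", dill.dumps), ("pickle", pickle.dumps)]:
    try:
        dumps(my_transform)
        print(f"{name}: ok")
    except Exception as err:  # the two libraries fail on different kinds of objects
        print(f"{name}: failed with {err!r}")
```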
open
https://github.com/huggingface/datasets/issues/2794
2021-08-12T23:09:13
2021-08-12T23:09:31
null
{ "login": "mbforbes", "id": 1170062, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
968,967,773
2,793
Fix type hint for data_files
Fix type hint for `data_files` in signatures and docstrings.
closed
https://github.com/huggingface/datasets/pull/2793
2021-08-12T14:42:37
2021-08-12T15:35:29
2021-08-12T15:35:29
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
968,650,274
2,792
Update: GooAQ - add train/val/test splits
[GooAQ](https://github.com/allenai/gooaq) dataset was recently updated after splits were added for the same. This PR contains new updated GooAQ with train/val/test splits and updated README as well.
closed
https://github.com/huggingface/datasets/pull/2792
2021-08-12T11:40:18
2021-08-27T15:58:45
2021-08-27T15:58:14
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
968,360,314
2,791
Fix typo in cnn_dailymail
null
closed
https://github.com/huggingface/datasets/pull/2791
2021-08-12T08:38:42
2021-08-12T11:17:59
2021-08-12T11:17:59
{ "login": "omaralsayed", "id": 42531544, "type": "User" }
[]
true
[]
967,772,181
2,790
Fix typo in test_dataset_common
null
closed
https://github.com/huggingface/datasets/pull/2790
2021-08-12T01:10:29
2021-08-12T11:31:29
2021-08-12T11:31:29
{ "login": "nateraw", "id": 32437151, "type": "User" }
[]
true
[]
967,361,934
2,789
Updated dataset description of DaNE
null
closed
https://github.com/huggingface/datasets/pull/2789
2021-08-11T19:58:48
2021-08-12T16:10:59
2021-08-12T16:06:01
{ "login": "KennethEnevoldsen", "id": 23721977, "type": "User" }
[]
true
[]
967,149,389
2,788
How to sample every file in a list of files making up a split in a dataset when loading?
I am loading a dataset with multiple train, test, and validation files like this: ``` data_files_dict = { "train": [train_file1, train_file2], "test": [test_file1, test_file2], "val": [val_file1, val_file2] } dataset = datasets.load_dataset( "csv", data_files=data_files_dict, split=['train[:8]', 'test[:8]', 'val[:8]'] ) ``` However, this only selects the first 8 rows from train_file1, test_file1, val_file1, since they are the first files in the lists. I'm trying to formulate a split argument that can sample from each file specified in my list of files that make up each split. Is this type of splitting supported? If so, how can I do it?
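One possible workaround, sketched below under the assumption that per-file slicing is not directly supported by the `split` syntax: load each file on its own, slice it, and concatenate the pieces (the file names are placeholders standing in for `train_file1`, `train_file2`, etc. from the question):

```python
from datasets import load_dataset, concatenate_datasets

def sample_files(files, n=8):
    # take the first `n` rows of every file, then stitch the pieces back together
    parts = [load_dataset("csv", data_files=f, split=f"train[:{n}]") for f in files]
    return concatenate_datasets(parts)

files = ["train1.csv", "train2.csv"]  # placeholders for train_file1, train_file2
train_small = sample_files(files)
```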
closed
https://github.com/huggingface/datasets/issues/2788
2021-08-11T17:43:21
2023-07-25T17:40:50
2023-07-25T17:40:50
{ "login": "brijow", "id": 11220949, "type": "User" }
[]
false
[]
967,018,406
2,787
ConnectionError: Couldn't reach https://raw.githubusercontent.com
Hello, I am trying to run run_glue.py and it gives me this error - Traceback (most recent call last): File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module> main() File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 250, in main datasets = load_dataset("glue", data_args.task_name, cache_dir=model_args.cache_dir) File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\load.py", line 718, in load_dataset use_auth_token=use_auth_token, File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\load.py", line 320, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\utils\file_utils.py", line 291, in cached_path use_auth_token=download_config.use_auth_token, File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\utils\file_utils.py", line 623, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py Trying to do python run_glue.py --model_name_or_path bert-base-cased --task_name mrpc --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir ./tmp/mrpc/ Is this something on my end? From what I can tell, this was re-fixeded by @fullyz a few months ago. Thank you!
closed
https://github.com/huggingface/datasets/issues/2787
2021-08-11T16:19:01
2023-10-03T12:39:25
2021-08-18T15:09:18
{ "login": "jinec", "id": 39627475, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
966,282,934
2,786
Support streaming compressed files
Add support to stream compressed files (current options in fsspec): - bz2 - lz4 - xz - zstd cc: @lewtun
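For illustration, a hedged sketch of the kind of call this PR is meant to enable (the URL is a placeholder, not a real resource):

```python
from datasets import load_dataset

data_files = "https://example.com/corpus.jsonl.zst"  # hypothetical zstd-compressed JSON Lines file
ds = load_dataset("json", data_files=data_files, split="train", streaming=True)
print(next(iter(ds)))  # the file is decompressed on the fly while streaming
```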
closed
https://github.com/huggingface/datasets/pull/2786
2021-08-11T09:02:06
2021-08-17T05:28:39
2021-08-16T06:36:19
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
965,461,382
2,783
Add KS task to SUPERB
Add the KS (keyword spotting) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051). - [s3prl instructions](https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/README.md#ks-keyword-spotting) - [s3prl implementation](https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/speech_commands/dataset.py) - [TFDS implementation](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/audio/speech_commands.py) Some notable quirks: - The dataset is originally single-archive (train+val+test all in one), but the test set has a "canonical" distribution in a separate archive, which is also used here (see `_split_ks_files()`). - The `_background_noise_`/`_silence_` audio files are much longer than others, so they require some sort of slicing for downstream training. I decided to leave the implementation of that up to the users, since TFDS and s3prl take different approaches (either slicing wavs deterministically, or subsampling randomly at runtime) Related to #2619.
closed
https://github.com/huggingface/datasets/pull/2783
2021-08-10T22:14:07
2021-08-12T16:45:01
2021-08-11T20:19:17
{ "login": "anton-l", "id": 26864830, "type": "User" }
[]
true
[]
964,858,439
2,782
Fix renaming of corpus_bleu args
The last `sacrebleu` release (v2.0.0) has renamed the `sacrebleu.corpus_bleu` args from `(sys_stream, ref_streams)` to `(hypotheses, references)`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR passes the args without parameter names, so that the call is valid for all versions of `sacrebleu`. This is a partial hotfix of #2781. Close #2781.
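A minimal sketch of the positional call style this PR switches to (the strings are illustrative only), which is accepted both before and after the v2.0.0 rename:

```python
import sacrebleu

hypotheses = ["the cat sat on the mat"]
# one reference stream per available reference; each stream is aligned with `hypotheses`
references = [["the cat sat on the mat"], ["a cat sat on a mat"]]

# no keyword names, so the call works whether the parameters are named
# (sys_stream, ref_streams) or (hypotheses, references)
score = sacrebleu.corpus_bleu(hypotheses, references)
print(score.score)
```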
closed
https://github.com/huggingface/datasets/pull/2782
2021-08-10T11:02:34
2021-08-10T11:16:07
2021-08-10T11:16:07
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
964,805,351
2,781
Latest v2.0.0 release of sacrebleu has broken some metrics
## Describe the bug After the `sacrebleu` v2.0.0 release (see changes here: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15), some of the `datasets` metrics are broken: - The default tokenizer `sacrebleu.DEFAULT_TOKENIZER` no longer exists: - #2739 - #2778 - Bleu tokenizers are no longer accessible with `sacrebleu.TOKENIZERS`: - #2779 - `corpus_bleu` args have been renamed from `(sys_stream, ref_streams)` to `(hypotheses, references)`: - #2782
closed
https://github.com/huggingface/datasets/issues/2781
2021-08-10T09:59:41
2021-08-10T11:16:07
2021-08-10T11:16:07
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
964,794,764
2,780
VIVOS dataset for Vietnamese ASR
null
closed
https://github.com/huggingface/datasets/pull/2780
2021-08-10T09:47:36
2021-08-12T11:09:30
2021-08-12T11:09:30
{ "login": "binh234", "id": 57580923, "type": "User" }
[]
true
[]
964,775,085
2,779
Fix sacrebleu tokenizers
Last `sacrebleu` release (v2.0.0) has removed `sacrebleu.TOKENIZERS`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR makes a hot fix of the bug by using a private function in `sacrebleu`: `sacrebleu.metrics.bleu._get_tokenizer()`. Eventually, this should be further fixed in order to use only public functions. This is a partial hotfix of #2781.
closed
https://github.com/huggingface/datasets/pull/2779
2021-08-10T09:24:27
2021-08-10T11:03:08
2021-08-10T10:57:54
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
964,737,422
2,778
Do not pass tokenize to sacrebleu
Last `sacrebleu` release (v2.0.0) has removed `sacrebleu.DEFAULT_TOKENIZER`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR does not pass `tokenize` to `sacrebleu` (note that the user cannot pass it anyway) and `sacrebleu` will use its default, no matter where it is and how it is called. Related to #2739. This is a partial hotfix of #2781.
closed
https://github.com/huggingface/datasets/pull/2778
2021-08-10T08:40:37
2021-08-10T10:03:37
2021-08-10T10:03:37
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
964,696,380
2,777
Use packaging to handle versions
Use packaging module to handle/validate/check versions of Python packages. Related to #2769.
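For illustration, a small sketch of the style of check the `packaging` module enables (the minimum version below is made up for the example, not taken from the PR):

```python
from packaging import version
import pyarrow

MIN_PYARROW = version.parse("1.0.0")  # hypothetical minimum, for illustration only

# version.parse handles dev/rc suffixes such as "2.1.0.dev612" without string surgery
if version.parse(pyarrow.__version__) < MIN_PYARROW:
    raise ImportError(f"pyarrow>={MIN_PYARROW} is required, found {pyarrow.__version__}")
```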
closed
https://github.com/huggingface/datasets/pull/2777
2021-08-10T07:51:39
2021-08-18T13:56:27
2021-08-18T13:56:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
964,400,596
2,776
document `config.HF_DATASETS_OFFLINE` and precedence
https://github.com/huggingface/datasets/pull/1976 implemented `HF_DATASETS_OFFLINE`, but: 1. `config.HF_DATASETS_OFFLINE` is not documented 2. the precedence is not documented (env, config) I'm thinking it probably should be similar to what it says https://huggingface.co/docs/datasets/loading_datasets.html#from-the-huggingface-hub about `datasets.config.IN_MEMORY_MAX_SIZE`: Quote: > The default in 🤗 Datasets is to memory-map the dataset on disk unless you set datasets.config.IN_MEMORY_MAX_SIZE different from 0 bytes (default). In that case, the dataset will be copied in-memory if its size is smaller than datasets.config.IN_MEMORY_MAX_SIZE bytes, and memory-mapped otherwise. This behavior can be enabled by setting either the configuration option datasets.config.IN_MEMORY_MAX_SIZE (higher precedence) or the environment variable HF_DATASETS_IN_MEMORY_MAX_SIZE (lower precedence) to nonzero. Context: trying to use `config.HF_DATASETS_OFFLINE` here: https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/48 but are uncertain if it's safe, since it's not documented as a public API. Thank you! @lhoestq, @albertvillanova
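For reference, a small sketch of the two knobs whose precedence this issue asks to document (behaviour as I understand it, not yet confirmed by the docs):

```python
import os

# option 1: the environment variable, read when `datasets` is imported
os.environ["HF_DATASETS_OFFLINE"] = "1"

import datasets

# option 2: the module-level flag, which reflects (and could be set instead of) the env var
print(datasets.config.HF_DATASETS_OFFLINE)
```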
open
https://github.com/huggingface/datasets/issues/2776
2021-08-09T21:23:17
2021-08-09T21:23:17
null
{ "login": "stas00", "id": 10676103, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
964,303,626
2,775
`generate_random_fingerprint()` deterministic with 🤗Transformers' `set_seed()`
## Describe the bug **Update:** I dug into this to try to reproduce the underlying issue, and I believe it's that `set_seed()` from the `transformers` library makes the "random" fingerprint identical each time. I believe this is still a bug, because `datasets` is used exactly this way in `transformers` after `set_seed()` has been called, and I think that using `set_seed()` is a standard procedure to aid reproducibility. I've added more details to reproduce this below. Hi there! I'm using my own local dataset and custom preprocessing function. My preprocessing function seems to be unpickle-able, perhaps because it is from a closure (will debug this separately). I get this warning, which is expected: https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L260-L265 However, what's not expected is that the `datasets` actually _does_ seem to cache and reuse this dataset between runs! After that line, the next thing that's logged looks like: ```text Loading cached processed dataset at /home/xxx/.cache/huggingface/datasets/csv/default-xxx/0.0.0/xxx/cache-xxx.arrow ``` The path is exactly the same each run (e.g., last 26 runs). This becomes a problem because I'll pass in the `--max_eval_samples` flag to the HuggingFace example script I'm running off of ([run_swag.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/multiple-choice/run_swag.py)). The fact that the cached dataset is reused means this flag gets ignored. I'll try to load 100 examples, and it will load the full cached 1,000,000. I think that https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L248 ... is actually consistent because randomness is being controlled in HuggingFace/Transformers for reproducibility. I've added a demo of this below. ## Steps to reproduce the bug ```python # Contents of print_fingerprint.py from transformers import set_seed from datasets.fingerprint import generate_random_fingerprint set_seed(42) print(generate_random_fingerprint()) ``` ```bash for i in {0..10}; do python print_fingerprint.py done 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d ``` ## Expected results After the "random hash" warning is emitted, a random hash is generated, and no outdated cached datasets are reused. ## Actual results After the "random hash" warning is emitted, an identical hash is generated each time, and an outdated cached dataset is reused each run. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Linux-5.8.0-1038-gcp-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 4.0.1
closed
https://github.com/huggingface/datasets/issues/2775
2021-08-09T19:28:51
2024-01-26T15:05:36
2024-01-26T15:05:35
{ "login": "mbforbes", "id": 1170062, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
963,932,199
2,774
Prevent .map from using multiprocessing when loading from cache
## Context On our setup, we use different setup to train vs proprocessing datasets. Usually we are able to obtain a high number of cpus to preprocess, which allows us to use `num_proc` however we can't use as many during training phase. Currently if we use `num_proc={whatever the preprocessing value was}` we load from cache, but we get: ``` Traceback (most recent call last): File "lib/python3.8/site-packages/multiprocess/pool.py", line 131, in worker put((job, i, result)) File "lib/python3.8/site-packages/multiprocess/queues.py", line 371, in put self._writer.send_bytes(obj) File "lib/python3.8/site-packages/multiprocess/connection.py", line 203, in send_bytes self._send_bytes(m[offset:offset + size]) File "lib/python3.8/site-packages/multiprocess/connection.py", line 414, in _send_bytes self._send(header + buf) File "lib/python3.8/site-packages/multiprocess/connection.py", line 371, in _send n = write(self._handle, buf) BrokenPipeError: [Errno 32] Broken pipe ``` Our current guess, is that we're spawning too many processes compared to the number of cpus available, and it's running OOM. Also we're loading this in DDP setting which means that for each gpu, I need to spawn a high number of processes to match the preprocessing fingerprint. Instead what we suggest: - Allow loading shard sequentially, sharing the same fingerprint as the multiprocessed one, in order to leverage multiprocessing when we actually generate the cache, and remove it when loading from cache. ## Current issues ~I'm having a hard time making fingerprints match. For some reason, the multiprocessing and the sequential version generate two different hash.~ **EDIT**: Turns out multiprocessing and sequential have different `transform` value for fingerprinting (check `fingerprint_transform`) when running `_map_single`: - sequential : `datasets.arrow_dataset.Dataset._map_single` - multiprocessing: `datasets.arrow_dataset._map_single` This discrepancy is caused by multiprocessing pickling the transformer function, it doesn't seem to keep the `Dataset` hierarchy. I'm still unclear on why `func.__qual_name__` isn't handled correctly in multiprocessing. But replacing `__qualname__` by `__name__` fixes the issue. ## What was done ~We try to prevent the usage of multiprocessing when loading a dataset. Instead we load all cached shards sequentially.~ I couldn't find a nice way to obtain the cached_file_name and check they all exist before deciding to use the multiprocessing flow or not. Instead I expose an optional boolean `sequential` in `map` method. 
## TODO - [x] Check that the multiprocessed version and the sequential version output the same output - [x] Check that sequential can load multiprocessed - [x] Check that multiprocessed can load sequential ## Test ```python from datasets import load_dataset from multiprocessing import Pool import random def process(batch, rng): length = len(batch["text"]) return {**batch, "processed_text": [f"PROCESSED {rng.random()}" for _ in range(length)]} dataset = load_dataset("stas/openwebtext-10k", split="train") print(dataset.column_names) print(type(dataset)) rng = random.Random(42) dataset1 = dataset.map(process, batched=True, batch_size=50, num_proc=4, fn_kwargs={"rng": rng}) # This one should be loaded from cache rng = random.Random(42) dataset2 = dataset.map(process, batched=True, batch_size=50, num_proc=4, fn_kwargs={"rng": rng}, sequential=True) # Just to check that the random generator was correct print(dataset1[-1]["processed_text"]) print(dataset2[-1]["processed_text"]) ``` ## Other solutions I chose to load everything sequentially, but we can probably find a way to load shards in parallel using another number of workers (essentially this would be an argument not used for fingerprinting, allowing `m` shards to be loaded using `n` processes, which would be very useful when the same dataset has to be loaded on two different setups and we still want to leverage the cache). Also, we can use an env variable similarly to `TOKENIZERS_PARALLELISM`, as this seems generally setup related (though this changes slightly if we use multiprocessing). cc @lhoestq (since I had asked you previously about `num_proc` being used for fingerprinting). Don't know if this is acceptable.
closed
https://github.com/huggingface/datasets/pull/2774
2021-08-09T12:11:38
2021-09-09T10:20:28
2021-09-09T10:20:28
{ "login": "thomasw21", "id": 24695242, "type": "User" }
[]
true
[]
963,730,497
2,773
Remove dataset_infos.json
**Is your feature request related to a problem? Please describe.** As discussed, there are infos in the `dataset_infos.json` which are redundant and we could have them only in the README file. Others could be migrated to the README, like: "dataset_size", "size_in_bytes", "download_size", "splits.split_name.[num_bytes, num_examples]",... However, there are others that do not seem too meaningful in the README, like the checksums. **Describe the solution you'd like** Open a discussion to decide what to do with the `dataset_infos.json` files: which information to be migrated and/or which information to be kept. cc: @julien-c @lhoestq
closed
https://github.com/huggingface/datasets/issues/2773
2021-08-09T07:43:19
2024-05-04T14:52:10
2024-05-04T14:52:10
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "generic discussion", "color": "c5def5" } ]
false
[]
963,348,834
2,772
Remove returned feature constrain
In the current version, the returned value of the map function has to be a list or ndarray. However, this makes it unsuitable for many tasks. In NLP, many features are sparse (e.g., verb words or noun chunks): if we want to assign different values to different words, this results in a large sparse matrix when we only score useful words such as verbs. At large scale, saving such a matrix densely takes a lot of disk storage and makes it hard to read, so the usual method is to save it in sparse form. However, NumPy does not support sparse arrays, so I have to use PyTorch or scipy to transform the matrix into a special sparse form, which cannot be converted into a list or ndarray. This violates the feature constraints of the map function. I do appreciate the convenience of the Datasets package, but I do not think the compulsory datatype constraint is necessary; in some cases we simply cannot transform the value into a list or ndarray. Is there any way to fix this? Or what can I do to disable the compulsory datatype constraint?
open
https://github.com/huggingface/datasets/issues/2772
2021-08-08T04:01:30
2021-08-08T08:48:01
null
{ "login": "PosoSAgapo", "id": 33200481, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
963,257,036
2,771
[WIP][Common Voice 7] Add common voice 7.0
This PR allows to load the new common voice dataset manually as explained when doing: ```python from datasets import load_dataset ds = load_dataset("./datasets/datasets/common_voice_7", "ab") ``` => ``` Please follow the manual download instructions: You need to manually the dataset from `https://commonvoice.mozilla.org/en/datasets`. Make sure you choose the version `Common Voice Corpus 7.0`. Choose a language of your choice and find the corresponding language-id, *e.g.*, `Abkhaz` with language-id `ab`. The following language-ids are available: ['ab', 'ar', 'as', 'az', 'ba', 'bas', 'be', 'bg', 'br', 'ca', 'cnh', 'cs', 'cv', 'cy', 'de', 'dv', 'el', 'en', 'eo', 'es', 'et', 'eu', 'fa', 'fi', 'fr', 'fy-NL', 'ga-IE', 'gl', 'gn', 'ha', 'hi', 'hsb', 'hu', 'hy-AM', 'ia', 'id', 'it', 'ja', 'ka', 'kab', 'kk', 'kmr', 'ky', 'lg', 'lt', 'lv', 'mn', 'mt', 'nl', 'or', 'pa-IN', 'pl', 'pt', 'rm-sursilv', 'rm-vallader', 'ro', 'ru', 'rw', 'sah', 'sk', 'sl', 'sr', 'sv-SE', 'ta', 'th', 'tr', 'tt', 'ug', 'uk', 'ur', 'uz', 'vi', 'vot', 'zh-CN', 'zh-HK', 'zh-TW'] Next, you will have to enter your email address to download the dataset in the `tar.gz` format. Save the file under <path-to-file>. The file should then be extracted with: ``tar -xvzf <path-to-file>`` which will extract a folder called ``cv-corpus-7.0-2021-07-21``. The dataset can then be loaded with `datasets.load_dataset("common_voice", <language-id>, data_dir="<path-to-'cv-corpus-7.0-2021-07-21'-folder>", ignore_verifications=True). ``` Having followed those instructions one can then download the data as follows: ```python from datasets import load_dataset ds = load_dataset("./datasets/datasets/common_voice_7", "ab", data_dir="./cv-corpus-7.0-2021-07-21/", ignore_verifications=True) ``` ## TODO - [ ] Discuss naming. Is the name ok here "common_voice_7"? The dataset script differs only really in one point from `common_voice.py` in that all the metadata is different (more hours etc...) and that it has to use manual data dir for now - [ ] Ideally we should get a bundled download link. For `common_voice.py` there is a bundled download link: `https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/{}.tar.gz` that allows one to directly download the data. However such a link is missing for Common Voice 7. I guess we should try to contact common voice about it and ask whether we could host the data or help otherwise somehow. See: https://github.com/common-voice/common-voice-bundler/issues/15 cc @yjernite - [ ] I did not compute the dataset.json and it would mean that I'd have to download 76 datasets totalling around 1TB manually before running the checksum command. This just takes too much time. For now the user will have to add a `ignore_verifications=True` to download the data. This step would also be much easier if we could get a bundled link - [ ] Add dummy data
closed
https://github.com/huggingface/datasets/pull/2771
2021-08-07T16:01:10
2021-12-06T23:24:02
2021-12-06T23:24:02
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
963,246,512
2,770
Add support for fast tokenizer in BertScore
This PR adds support for a fast tokenizer in BertScore, which has been added recently to the lib. Fixes #2765
closed
https://github.com/huggingface/datasets/pull/2770
2021-08-07T15:00:03
2021-08-09T12:34:43
2021-08-09T11:16:25
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
963,240,802
2,769
Allow PyArrow from source
When installing pyarrow from source the version is: ```python >>> import pyarrow; pyarrow.__version__ '2.1.0.dev612' ``` -> however this breaks the install check at init of `datasets`. This PR makes sure that everything coming after the last `'.'` is removed.
closed
https://github.com/huggingface/datasets/pull/2769
2021-08-07T14:26:44
2021-08-09T15:38:39
2021-08-09T15:38:39
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
963,229,173
2,768
`ArrowInvalid: Added column's length must match table's length.` after using `select`
## Describe the bug I would like to add a column to a downsampled dataset. However I get an error message saying the length don't match with the length of the unsampled dataset indicated. I suspect that the dataset size is not updated when calling `select`. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("tweets_hate_speech_detection")['train'].select(range(128)) ds = ds.add_column('ones', [1]*128) ``` ## Expected results I would expect a new column named `ones` filled with `1`. When I check the length of `ds` it says `128`. Interestingly, it works when calling `ds = ds.map(lambda x: x)` before adding the column. ## Actual results Specify the actual results or traceback. ```python --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) /var/folders/l4/2905jygx4tx5jv8_kn03vxsw0000gn/T/ipykernel_6301/868709636.py in <module> 1 from datasets import load_dataset 2 ds = load_dataset("tweets_hate_speech_detection")['train'].select(range(128)) ----> 3 ds = ds.add_column('ones', [0]*128) ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 183 } 184 # apply actual function --> 185 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 186 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 187 # re-apply format to the output ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 395 # Call actual function 396 --> 397 out = func(self, *args, **kwargs) 398 399 # Update fingerprint of in-place transforms + update in-place history of transforms ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_column(self, name, column, new_fingerprint) 2965 column_table = InMemoryTable.from_pydict({name: column}) 2966 # Concatenate tables horizontally -> 2967 table = ConcatenationTable.from_tables([self._data, column_table], axis=1) 2968 # Update features 2969 info = self.info.copy() ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/table.py in from_tables(cls, tables, axis) 715 table_blocks = to_blocks(table) 716 blocks = _extend_blocks(blocks, table_blocks, axis=axis) --> 717 return cls.from_blocks(blocks) 718 719 @property ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/table.py in from_blocks(cls, blocks) 663 return cls(table, blocks) 664 else: --> 665 table = cls._concat_blocks_horizontally_and_vertically(blocks) 666 return cls(table, blocks) 667 ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/table.py in _concat_blocks_horizontally_and_vertically(cls, blocks) 623 if not tables: 624 continue --> 625 pa_table_horizontally_concatenated = cls._concat_blocks(tables, axis=1) 626 pa_tables_to_concat_vertically.append(pa_table_horizontally_concatenated) 627 return cls._concat_blocks(pa_tables_to_concat_vertically, axis=0) ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/table.py in _concat_blocks(blocks, axis) 612 else: 613 for name, col in zip(table.column_names, table.columns): --> 614 pa_table = pa_table.append_column(name, col) 615 return pa_table 616 else: ~/git/semantic-clustering/env/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.append_column() ~/git/semantic-clustering/env/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.add_column() 
~/git/semantic-clustering/env/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/git/semantic-clustering/env/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Added column's length must match table's length. Expected length 31962 but got length 128 ``` ## Environment info - `datasets` version: 1.11.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.5 - PyArrow version: 5.0.0
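A possible workaround sketch (assuming the cause is the indices mapping left behind by `select`, as suspected above): materialize the selection with `flatten_indices()` before adding the column.

```python
from datasets import load_dataset

ds = load_dataset("tweets_hate_speech_detection")["train"].select(range(128))
ds = ds.flatten_indices()            # writes the 128 selected rows into a fresh table
ds = ds.add_column("ones", [1] * 128)
```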
closed
https://github.com/huggingface/datasets/issues/2768
2021-08-07T13:17:29
2021-08-09T11:26:43
2021-08-09T11:26:43
{ "login": "lvwerra", "id": 8264887, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
963,002,120
2,767
equal operation to perform unbatch for huggingface datasets
Hi, I need the equivalent of tensorflow's "unbatch" operation on a huggingface dataset. I could not find this operation; could you kindly direct me on how I can do it? Here is the problem I am trying to solve: I am considering the "record" dataset in SuperGLUE and I need to replicate each entry of the dataset for each answer, to make it similar to what T5 originally did: https://github.com/google-research/text-to-text-transfer-transformer/blob/3c58859b8fe72c2dbca6a43bc775aa510ba7e706/t5/data/preprocessors.py#L925 Please find an example: a typical example from ReCoRD might look like { 'passage': 'This is the passage.', 'query': 'A @placeholder is a bird.', 'entities': ['penguin', 'potato', 'pigeon'], 'answers': ['penguin', 'pigeon'], } and I need a processor which would turn this example into the following two examples: { 'inputs': 'record query: A @placeholder is a bird. entities: penguin, ' 'potato, pigeon passage: This is the passage.', 'targets': 'penguin', } and { 'inputs': 'record query: A @placeholder is a bird. entities: penguin, ' 'potato, pigeon passage: This is the passage.', 'targets': 'pigeon', } To do this, one needs unbatch, as each entry can map to multiple samples depending on the number of answers. I am not sure how to perform this operation with the huggingface datasets library and would greatly appreciate your help @lhoestq Thank you very much.
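One hedged way to get an unbatch-like expansion with this library (there is no official `unbatch` API that I know of): a batched `map` may return more rows than it receives, so each ReCoRD entry can be emitted once per answer. Column names follow the example above; loading the validation split is just for illustration.

```python
from datasets import load_dataset

dataset = load_dataset("super_glue", "record", split="validation")

def explode_record(batch):
    # turn every example into one row per answer
    out = {"inputs": [], "targets": []}
    for passage, query, entities, answers in zip(
        batch["passage"], batch["query"], batch["entities"], batch["answers"]
    ):
        prompt = f"record query: {query} entities: {', '.join(entities)} passage: {passage}"
        for answer in answers:
            out["inputs"].append(prompt)
            out["targets"].append(answer)
    return out

# removing the original columns lets the number of rows change inside a batched map
expanded = dataset.map(explode_record, batched=True, remove_columns=dataset.column_names)
```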
closed
https://github.com/huggingface/datasets/issues/2767
2021-08-06T19:45:52
2022-03-07T13:58:00
2022-03-07T13:58:00
{ "login": "dorooddorood606", "id": 79288051, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]