html_url stringlengths 48 51 | title stringlengths 1 290 | comments listlengths 0 30 | body stringlengths 0 228k ⌀ | number int64 2 7.08k |
|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/5070 | Support default config name when no builder configs | [
"Thank you for creating this feature request, Albert.\r\n\r\nFor context this is the datatest where Albert has been helping me to switch to on-the-fly split config https://huggingface.co/datasets/HuggingFaceM4/cm4-synthetic-testing\r\n\r\nand the attempt to switch on-the-fly splits was here: https://huggingface.co/... | **Is your feature request related to a problem? Please describe.**
As discussed with @stas00, we could support defining a default config name, even if no predefined allowed config names are set. That is, support `DEFAULT_CONFIG_NAME`, even when `BUILDER_CONFIGS` is not defined.
**Additional context**
In order to support creating configs on the fly **by name** (not using kwargs), the list of allowed builder configs `BUILDER_CONFIGS` must not be set.
However, if so, then `DEFAULT_CONFIG_NAME` is not supported.
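For illustration, a minimal sketch of the kind of loading script this would enable. The class name, config name, and features below are made up; the point is only that `DEFAULT_CONFIG_NAME` is set while `BUILDER_CONFIGS` is not:
```python
import datasets


class MyOnTheFlyConfigsDataset(datasets.GeneratorBasedBuilder):
    # No BUILDER_CONFIGS on purpose: configs are created on the fly by name.
    # The request is for this default name to still be honored in that case.
    DEFAULT_CONFIG_NAME = "100.unique"  # hypothetical default config name

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN)]

    def _generate_examples(self):
        # self.config.name (default or on-the-fly) would drive what gets generated here
        for idx in range(10):
            yield idx, {"text": f"{self.config.name} example {idx}"}
```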
| 5,070 |
https://github.com/huggingface/datasets/issues/5061 | `_pickle.PicklingError: logger cannot be pickled` in multiprocessing `map` | [
"This is maybe related to python 3.10, do you think you could try on 3.8 ?\r\n\r\nIn the meantime we'll keep improving the support for 3.10. Let me add a dedicated CI",
"I did some binary search and seems like the root cause is either `multiprocess` or `dill`. python 3.10 is fine. Specifically:\r\n- `multiprocess... | ## Describe the bug
When I `map` with multiple processes, this error occurs. The `.name` of the `logger` that fails to pickle in the final line is `datasets.fingerprint`.
```
File "~/project/dataset.py", line 204, in <dictcomp>
split: dataset.map(
File ".../site-packages/datasets/arrow_dataset.py", line 2489, in map
transformed_shards[index] = async_result.get()
File ".../site-packages/multiprocess/pool.py", line 771, in get
raise self._value
File ".../site-packages/multiprocess/pool.py", line 537, in _handle_tasks
put(task)
File ".../site-packages/multiprocess/connection.py", line 214, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File ".../site-packages/multiprocess/reduction.py", line 54, in dumps
cls(buf, protocol, *args, **kwds).dump(obj)
File ".../site-packages/dill/_dill.py", line 620, in dump
StockPickler.dump(self, obj)
File ".../pickle.py", line 487, in dump
self.save(obj)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../pickle.py", line 902, in save_tuple
save(element)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1963, in save_function
_save_with_postproc(pickler, (_create_function, (
File ".../site-packages/dill/_dill.py", line 1140, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File ".../pickle.py", line 717, in save_reduce
save(state)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../pickle.py", line 887, in save_tuple
save(element)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1251, in save_module_dict
StockPickler.save_dict(pickler, obj)
File ".../pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File ".../pickle.py", line 998, in _batch_setitems
save(v)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1963, in save_function
_save_with_postproc(pickler, (_create_function, (
File ".../site-packages/dill/_dill.py", line 1140, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File ".../pickle.py", line 717, in save_reduce
save(state)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../pickle.py", line 887, in save_tuple
save(element)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1251, in save_module_dict
StockPickler.save_dict(pickler, obj)
File ".../pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File ".../pickle.py", line 998, in _batch_setitems
save(v)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1963, in save_function
_save_with_postproc(pickler, (_create_function, (
File ".../site-packages/dill/_dill.py", line 1154, in _save_with_postproc
pickler._batch_setitems(iter(source.items()))
File ".../pickle.py", line 998, in _batch_setitems
save(v)
File ".../pickle.py", line 578, in save
rv = reduce(self.proto)
File ".../logging/__init__.py", line 1774, in __reduce__
raise pickle.PicklingError('logger cannot be pickled')
_pickle.PicklingError: logger cannot be pickled
```
## Steps to reproduce the bug
Sorry I failed to have a minimal reproducible example, but the offending line on my end is
```python
dataset.map(
lambda examples: self.tokenize(examples), # this doesn't matter, lambda e: [1] * len(...) also breaks. In fact I'm pretty sure it breaks before executing this lambda
batched=True,
num_proc=4,
)
```
This does work when `num_proc=1`, so it's likely a multiprocessing thing.
## Expected results
`map` succeeds
## Actual results
The error trace above.
## Environment info
- `datasets` version: 1.16.1 and 2.5.1 both failed
- Platform: Ubuntu 20.04.4 LTS
- Python version: 3.10.4
- PyArrow version: 9.0.0
| 5,061 |
https://github.com/huggingface/datasets/issues/5060 | Unable to Use Custom Dataset Locally | [
"Hi ! I opened a PR in your repo to fix this :)\r\nhttps://huggingface.co/datasets/zpn/pubchem_selfies/discussions/7\r\n\r\nbasically you need to use `open` for streaming to work properly",
"Thank you so much for this! Naive question, is this a feature of `open` or have you all overloaded it to be able to read fr... | ## Describe the bug
I have uploaded a [dataset](https://huggingface.co/datasets/zpn/pubchem_selfies) and followed the instructions from the [dataset_loader](https://huggingface.co/docs/datasets/dataset_script#download-data-files-and-organize-splits) tutorial. In that tutorial, it says
```
If the data files live in the same folder or repository of the dataset script,
you can just pass the relative paths to the files instead of URLs.
```
Accordingly, I put the [relative path](https://huggingface.co/datasets/zpn/pubchem_selfies/blob/main/pubchem_selfies.py#L76) to the data to be used. I was able to test the dataset and generate the metadata locally with `datasets-cli test path/to/<your-dataset-loading-script> --save_infos --all_configs`
However, if I try to load the data using `load_dataset`, I get the following error
```
with gzip.open(filepath, mode="rt") as f:
File "/usr/local/Cellar/python@3.9/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 58, in open
binary_file = GzipFile(filename, gz_mode, compresslevel)
File "/usr/local/Cellar/python@3.9/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 173, in __init__
fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'https://huggingface.co/datasets/zpn/pubchem_selfies/resolve/main/data/Compound_021000001_021500000/Compound_021000001_021500000_SELFIES.jsonl.gz'
```
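For reference, the fix from the first comment is to go through the built-in `open` (which `datasets` patches in streaming mode) inside the script's `_generate_examples`, rather than handing the URL straight to `gzip.open`. A rough sketch of that pattern; the surrounding script code is assumed:
```python
import gzip
import json

# Sketch of _generate_examples inside the dataset script (surrounding class omitted).
# In streaming mode `datasets` patches the built-in `open`, so going through it lets
# hub URLs be resolved; gzip then wraps the returned file object.
def _generate_examples(self, filepath):
    with open(filepath, "rb") as f:
        with gzip.open(f, "rt") as lines:
            for idx, line in enumerate(lines):
                yield idx, json.loads(line)
```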
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("zpn/pubchem_selfies", streaming=True)
>>> t = dataset["train"]
>>> for item in t:
...... print(item)
...... break
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 723, in __iter__
for key, example in self._iter():
File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 713, in _iter
yield from ex_iterable
File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 113, in __iter__
yield from self.generate_examples_fn(**self.kwargs)
File "/Users/zachnussbaum/.cache/huggingface/modules/datasets_modules/datasets/zpn--pubchem_selfies/d2571f35996765aea70fd3f3f8e3882d59c401fb738615c79282e2eb1d9f7a25/pubchem_selfies.py", line 475, in _generate_examples
with gzip.open(filepath, mode="rt") as f:
File "/usr/local/Cellar/python@3.9/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 58, in open
binary_file = GzipFile(filename, gz_mode, compresslevel)
File "/usr/local/Cellar/python@3.9/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 173, in __init__
fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'https://huggingface.co/datasets/zpn/pubchem_selfies/resolve/main/data/Compound_021000001_021500000/Compound_021000001_021500000_SELFIES.jsonl.gz'
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.1
- Platform: macOS-12.5.1-x86_64-i386-64bit
- Python version: 3.9.7
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
| 5,060 |
https://github.com/huggingface/datasets/issues/5053 | Intermittent JSON parse error when streaming the Pile | [
"Maybe #2838 can help. In this PR we allow to skip bad chunks of JSON data to not crash the training\r\n\r\nDid you have warning messages before the error ?\r\n\r\nsomething like this maybe ?\r\n```\r\n03/24/2022 02:19:46 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host... | ## Describe the bug
I have an intermittent error when streaming the Pile, where I get a JSON parse error which causes my program to crash.
This is intermittent - when I rerun the program with the same random seed it does not crash in the same way. The exact point at which this happens also varies - it happened to me 11B tokens and 4 days into one training run, and just now happened 2 minutes into another, but I can't reliably reproduce it.
I'm using a remote machine with 8 A6000 GPUs via runpod.io
## Expected results
I have a DataLoader which can iterate through the whole Pile
## Actual results
Stack trace:
```
Failed to read file 'zstd://12.jsonl::https://the-eye.eu/public/AI/pile/train/12.jsonl.zst' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Invalid value. in row 0
```
I'm currently using HuggingFace accelerate, which also gave me the following stack trace, but I've also experienced this problem intermittently when using DataParallel, so I don't think it's to do with parallelisation
```
Traceback (most recent call last):
File "ddp_script.py", line 1258, in <module>
main()
File "ddp_script.py", line 1143, in main
for c, batch in tqdm.tqdm(enumerate(data_iter)):
File "/opt/conda/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/opt/conda/lib/python3.7/site-packages/accelerate/data_loader.py", line 503, in __iter__
next_batch, next_batch_info, next_skip = self._fetch_batches(main_iterator)
File "/opt/conda/lib/python3.7/site-packages/accelerate/data_loader.py", line 454, in _fetch_batches
broadcast_object_list(batch_info)
File "/opt/conda/lib/python3.7/site-packages/accelerate/utils/operations.py", line 333, in broadcast_object_list
torch.distributed.broadcast_object_list(object_list, src=from_process)
File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1900, in broadcast_object_list
object_list[i] = _tensor_to_object(obj_view, obj_size)
File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1571, in _tensor_to_object
return _unpickler(io.BytesIO(buf)).load()
_pickle.UnpicklingError: invalid load key, '@'.
```
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset(
cfg["dataset_name"], streaming=True, split="train")
dataset = dataset.remove_columns("meta")
dataset = dataset.map(tokenize_and_concatenate, batched=True)
dataset = dataset.with_format(type="torch")
train_data_loader = DataLoader(
dataset, batch_size=cfg["batch_size"], num_workers=3)
for batch in train_data_loader:
continue
```
`tokenize_and_concatenate` is a custom tokenization function I defined on the GPT-NeoX tokenizer to tokenize the text (separated by endoftext tokens) and reshape it to have length batch_size; I don't think this is related to tokenization:
```
import numpy as np
import einops
import torch
def tokenize_and_concatenate(examples):
texts = examples["text"]
full_text = tokenizer.eos_token.join(texts)
div = 20
length = len(full_text) // div
text_list = [full_text[i * length: (i + 1) * length]
for i in range(div)]
tokens = tokenizer(text_list, return_tensors="np", padding=True)[
"input_ids"
].flatten()
tokens = tokens[tokens != tokenizer.pad_token_id]
n = len(tokens)
curr_batch_size = n // (seq_len - 1)
tokens = tokens[: (seq_len - 1) * curr_batch_size]
tokens = einops.rearrange(
tokens,
"(batch_size seq) -> batch_size seq",
batch_size=curr_batch_size,
seq=seq_len - 1,
)
prefix = np.ones((curr_batch_size, 1), dtype=np.int64) * \
tokenizer.bos_token_id
return {
"text": np.concatenate([prefix, tokens], axis=1)
}
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-5.4.0-105-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.13
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
ZStandard data:
Version: 0.18.0
Summary: Zstandard bindings for Python
Home-page: https://github.com/indygreg/python-zstandard
Author: Gregory Szorc
Author-email: gregory.szorc@gmail.com
License: BSD
Location: /opt/conda/lib/python3.7/site-packages
Requires:
Required-by: | 5,053 |
https://github.com/huggingface/datasets/issues/5050 | Restore saved format state in `load_from_disk` | [
"Hi, can I work on this?",
"Hi, sure! Let us know if you need some pointers/help."
] | Even though we save the `format` state in `save_to_disk`, we don't restore it in `load_from_disk`. We should fix that.
Reported here: https://discuss.huggingface.co/t/save-to-disk-loses-formatting-information/23815 | 5,050 |
https://github.com/huggingface/datasets/issues/5046 | Audiofolder creates empty Dataset if files same level as metadata | [
"Hi! Unfortunately, I can't reproduce this behavior. Instead, I get `ValueError: audio at 2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav doesn't have metadata in /audio-data/metadata.csv`, which can be fixed by removing the `./` from the file name.\r\n\r\n(Link to a Colab that tries to reproduce this behavior: htt... | ## Describe the bug
When audio files are at the same level as the metadata (`metadata.csv` or `metadata.jsonl` ), the `load_dataset` returns a `DatasetDict` with no rows but the correct columns.
https://github.com/huggingface/datasets/blob/1ea4d091b7a4b83a85b2eeb8df65115d39af3766/docs/source/audio_dataset.mdx?plain=1#L88
## Steps to reproduce the bug
`metadata.csv`:
```csv
file_name,duration,transcription
./2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav,10.768,hello
```
```python
>>> audio_dataset = load_dataset("audiofolder", data_dir="/audio-data/")
>>> audio_dataset
DatasetDict({
train: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
validation: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
})
```
I've tried, with no success:
- setting `split` to something else so I don't get a `DatasetDict`,
- removing the `./`,
- using `.jsonl`.
## Expected results
```
Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 1
})
```
## Actual results
```
DatasetDict({
train: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
validation: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
})
```
## Environment info
- `datasets` version: 2.5.1
- Platform: Linux-5.13.0-1025-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
| 5,046 |
https://github.com/huggingface/datasets/issues/5045 | Automatically revert to last successful commit to hub when a push_to_hub is interrupted | [
"Could you share the error you got please ? Maybe the full stack trace if you have it ?\r\n\r\nMaybe `push_to_hub` be implemented as a single commit @Wauplin ? This way if it fails, the repo is still at the previous (valid) state instead of ending-up in an invalid/incimplete state.",
"> Maybe push_to_hub be imple... | **Is your feature request related to a problem? Please describe.**
I pushed a modification of a large dataset (removing a column) to the hub. The push was interrupted after some files were committed to the repo. This left the dataset raising an error on `load_dataset()` (ValueError: couldn’t cast … because column names don’t match). Only by specifying the previous (complete) commit as `revision=commit_hash` in `load_dataset()` was I able to repair this; after a successful, complete push, the dataset loads without error again.
**Describe the solution you'd like**
Would it make sense to detect an incomplete push_to_hub() and automatically revert to the previous commit/revision?
**Describe alternatives you've considered**
Leave everything as is, the revision parameter in load_dataset() allows to manually fix this problem.
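A sketch of that manual fix (the repo name and commit hash below are placeholders):
```python
from datasets import load_dataset

# Pin the last complete, known-good revision of the dataset repository
ds = load_dataset("my-namespace/my-dataset", revision="last_good_commit_hash")
```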
**Additional context**
Provide useful defaults
| 5,045 |
https://github.com/huggingface/datasets/issues/5044 | integrate `load_from_disk` into `load_dataset` | [
"I agree the situation is not ideal and it would be awesome to use `load_dataset` to reload a dataset saved locally !\r\n\r\nFor context:\r\n\r\n- `load_dataset` works in three steps: download the dataset, then prepare it as an arrow dataset, and finally return a memory mapped arrow dataset. In particular it create... | **Is your feature request related to a problem? Please describe.**
Is it possible to make `load_dataset` more universal, similar to `from_pretrained` in `transformers`, so that it can handle both the Hub and local-path datasets of all supported types?
Currently one has to choose a different loader depending on how the dataset has been created.
e.g. this won't work:
```
$ git clone https://huggingface.co/datasets/severo/test-parquet
$ python -c 'from datasets import load_dataset; ds=load_dataset("test-parquet"); \
ds.save_to_disk("my_dataset"); load_dataset("my_dataset")'
[...]
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/load.py", line 1746, in load_dataset
builder_instance.download_and_prepare(
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 1277, in _prepare_split
writer.write_table(table)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/arrow_writer.py", line 524, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/table.py", line 2005, in table_cast
return cast_table_to_schema(table, schema)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/table.py", line 1968, in cast_table_to_schema
raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: struct<filename: string>
child 0, filename: string
```
Both times the dataset is being loaded from disk. Why does it fail the second time?
Why can't `save_to_disk` generate a dataset that can be immediately loaded by `load_dataset`?
e.g. the simplest hack would be to have `save_to_disk` add some flag to the saved dataset that tells `load_dataset` to internally call `load_from_disk`, like having `save_to_disk` create a `load_me_with_load_from_disk.txt` file ;) and `load_dataset` would support that feature for datasets saved with new `datasets` versions. The old ones will still need to use `load_from_disk` explicitly. Unless the flag is not needed and one can immediately tell, by looking at the saved dataset, that it was saved via `save_to_disk` and thus use `load_from_disk` internally.
The use-case is defining a simple API where the user only ever needs to pass a `dataset_name_or_path` and it will always just work. Currently one needs to manually add additional switches telling the system whether to use one loading method or the other, which works but isn't smooth.
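In the meantime, a user-side dispatch helper is one way to approximate that single entry point. This is only a sketch, and it assumes the marker files `save_to_disk` currently writes (`state.json`/`dataset_info.json` for a `Dataset`, `dataset_dict.json` for a `DatasetDict`):
```python
from pathlib import Path

from datasets import load_dataset, load_from_disk


def load_any(dataset_name_or_path, **kwargs):
    """Load either a save_to_disk() directory or a hub/script dataset."""
    path = Path(dataset_name_or_path)
    markers = ("dataset_dict.json", "state.json", "dataset_info.json")
    if path.is_dir() and any((path / marker).is_file() for marker in markers):
        return load_from_disk(dataset_name_or_path)
    return load_dataset(dataset_name_or_path, **kwargs)
```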
Thank you! | 5,044 |
https://github.com/huggingface/datasets/issues/5039 | Hendrycks Checksum | [
"Thanks for reporting, @DanielHesslow. We are fixing it. ",
"@albertvillanova thanks for taking care of this so quickly!",
"The dataset metadata is fixed. You can download it normally."
] | Hi,
The checksum for [hendrycks_test](https://huggingface.co/datasets/hendrycks_test) does not compare correctly, I guess it has been updated on the remote.
```
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://people.eecs.berkeley.edu/~hendrycks/data.tar']
```
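As a stopgap while the metadata was broken, verification could be skipped; this is only a workaround (using the `ignore_verifications` flag of the `datasets` 2.x API of the time, and an arbitrary config name), not a fix:
```python
from datasets import load_dataset

# Skip checksum/size verification until the dataset metadata is repaired
ds = load_dataset("hendrycks_test", "abstract_algebra", ignore_verifications=True)
```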
| 5,039 |
https://github.com/huggingface/datasets/issues/5038 | `Dataset.unique` showing wrong output after filtering | [
"Hi! It seems like `flatten_indices` (called in `unique`) doesn't know how to handle empty indices mappings. I'm working on the fix.",
"Thanks, that was fast!"
] | ## Describe the bug
After filtering a dataset, and if no samples remain, `Dataset.unique` will return the unique values of the unfiltered dataset.
## Steps to reproduce the bug
```python
from datasets import Dataset
dataset = Dataset.from_dict({'id': [0]})
dataset = dataset.filter(lambda _: False)
print(dataset.unique('id'))
```
## Expected results
The above code should return an empty list since the dataset is empty.
## Actual results
```bash
[0]
```
## Environment info
- `datasets` version: 2.5.1
- Platform: Linux-5.18.19-100.fc35.x86_64-x86_64-with-glibc2.34
- Python version: 3.9.14
- PyArrow version: 7.0.0
- Pandas version: 1.3.5 | 5,038 |
https://github.com/huggingface/datasets/issues/5032 | new dataset type: single-label and multi-label video classification | [
"Hi ! You can in the `features` folder how we implemented the audio and image feature types.\r\n\r\nWe can have something similar to videos. What we need to decide:\r\n- the video loading library to use\r\n- the output format when a user accesses a video type object\r\n- what parameters a `Video()` feature type nee... | **Is your feature request related to a problem? Please describe.**
In my research, I am dealing with multi-modal (audio+text+frame sequence) video classification. It would be great if the datasets library supported generating multi-modal batches from a video dataset.
**Describe the solution you'd like**
Assume I have video files with single or multiple labels. I want to train a single-/multi-label video classification model. I want `datasets` to support generating multi-modal batches (audio + frame sequence) from video files. The audio waveform and frame sequence can be extracted from each video clip; then I can use any audio, image, or video model from the transformers library to extract features, which will be fed into my model.
**Describe alternatives you've considered**
Currently, I am using https://github.com/facebookresearch/pytorchvideo dataloaders. There seems to be not much alternative.
**Additional context**
I am willing to open a PR but don't know where to start.
| 5,032 |
https://github.com/huggingface/datasets/issues/5028 | passing parameters to the method passed to Dataset.from_generator() | [
"Hi! Yes, you can either use the `gen_kwargs` param in `Dataset.from_generator` (`ds = Dataset.from_generator(gen, gen_kwargs={\"param1\": val})`) or wrap the generator function with `functools.partial`\r\n(`ds = Dataset.from_generator(functools.partial(gen, param1=\"val\"))`) to pass custom parameters to it.\r\n"
... | Big thanks for providing dataset creation via a generator.
I want to ask whether there is any way that parameters can be passed to the generator used with the `Dataset.from_generator()` method, as follows.
```
from datasets import Dataset
def gen(param1):
     for idx in range(len(custom_dataset)):
yield custom_dataset[idx] + param1
ds = Dataset.from_generator(gen(param1))
```
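As the comment above notes, this is supported via `gen_kwargs` (or `functools.partial`); a small end-to-end sketch of the `gen_kwargs` route:
```python
from datasets import Dataset

custom_dataset = [10, 20, 30]

def gen(param1):
    for idx in range(len(custom_dataset)):
        # from_generator expects the generator to yield dictionaries
        yield {"data": custom_dataset[idx] + param1}

# Pass the parameter through gen_kwargs instead of calling gen() yourself
ds = Dataset.from_generator(gen, gen_kwargs={"param1": 1})
```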
| 5,028 |
https://github.com/huggingface/datasets/issues/5025 | Custom Json Dataset Throwing Error when batch is False | [
"Hi! Our processors are meant to be used in `batched` mode, so if `batched` is `False`, you need to drop the batch dimension (the error message warns you that the array has an extra dimension meaning it's 4D instead of 3D) to avoid the error:\r\n```python\r\ndef prepare_examples(examples):\r\n #Some preporcessin... | ## Describe the bug
A clear and concise description of what the bug is.
I tried to create my custom dataset using the code below:
```
from datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D
from torchvision import transforms
from transformers import AutoProcessor
# we'll use the Auto API here - it will load LayoutLMv3Processor behind the scenes,
# based on the checkpoint we provide from the hub
from datasets import load_dataset
def prepare_examples(examples):
#Some preporcessing for each image and text as all my data saved in cloud
#For this reason I couldn't set the batch to True.
encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,
truncation=True, padding="max_length")
# encoding['pixel_values']=np.array(encoding['pixel_values'])
return encoding
dataset = load_dataset("json", data_files='issues.jsonl')
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
features = dataset["train"].features
column_names = dataset["train"].column_names
# we need to define custom features for `set_format` (used later on) to work properly
features = Features({
'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)),
'input_ids': Sequence(feature=Value(dtype='int64')),
'attention_mask': Sequence(Value(dtype='int64')),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
'labels': Sequence(feature=Value(dtype='int64')),
})
train_dataset = dataset["train"].map(
prepare_examples,
batched=False,
remove_columns=column_names,
features=features
)
```
It throws the error below.
```
/opt/conda/lib/python3.7/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type)
172 storage = to_pyarrow_listarray(data, pa_type)
--> 173 return pa.ExtensionArray.from_storage(pa_type, storage)
174
/opt/conda/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.ExtensionArray.from_storage()
TypeError: Incompatible storage type list<item: list<item: list<item: list<item: float>>>> for extension type extension<arrow.py_extension_type<Array3DExtensionType>>
```
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D
from torchvision import transforms
from transformers import AutoProcessor
# we'll use the Auto API here - it will load LayoutLMv3Processor behind the scenes,
# based on the checkpoint we provide from the hub
from datasets import load_dataset
def prepare_examples(examples):
#Some preporcessing for each image and text as all my data saved in cloud
encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,
truncation=True, padding="max_length")
# encoding['pixel_values']=np.array(encoding['pixel_values'])
return encoding
dataset = load_dataset("json", data_files='issues.jsonl')
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
features = dataset["train"].features
column_names = dataset["train"].column_names
# we need to define custom features for `set_format` (used later on) to work properly
features = Features({
'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)),
'input_ids': Sequence(feature=Value(dtype='int64')),
'attention_mask': Sequence(Value(dtype='int64')),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
'labels': Sequence(feature=Value(dtype='int64')),
})
train_dataset = dataset["train"].map(
prepare_examples,
batched=False,
remove_columns=column_names,
features=features
)
```
## Expected results
A clear and concise description of the expected results.
Expected would be similar to all the other datasets, with no error.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Unix
- Python version: 3.9
- PyArrow version: 9.0.0
| 5,025 |
https://github.com/huggingface/datasets/issues/5023 | Text strings are split into lists of characters in xcsr dataset | [] | ## Describe the bug
Text strings are split into lists of characters.
Example for "X-CSQA-en":
```
{'id': 'd3845adc08414fda',
'lang': 'en',
'question': {'stem': ['T',
'h',
'e',
' ',
'd',
'e',
'n',
't',
'a',
'l',
' ',
'o',
'f',
'f',
'i',
'c',
'e',
' ',
'h',
'a',
'n',
'd',
'l',
'e',
'd',
' ',
'a',
' ',
'l',
'o',
't',
' ',
'o',
'f',
' ',
'p',
'a',
't',
'i',
'e',
'n',
't',
's',
' ',
'w',
'h',
'o',
' ',
'e',
'x',
'p',
'e',
'r',
'i',
'e',
'n',
'c',
'e',
'd',
' ',
't',
'r',
'a',
'u',
'm',
'a',
't',
'i',
'c',
' ',
'm',
'o',
'u',
't',
'h',
' ',
'i',
'n',
'j',
'u',
'r',
'y',
',',
' ',
'w',
'h',
'e',
'r',
'e',
' ',
'w',
'e',
'r',
'e',
' ',
't',
'h',
'e',
's',
'e',
' ',
'p',
'a',
't',
'i',
'e',
'n',
't',
's',
' ',
'c',
'o',
'm',
'i',
'n',
'g',
' ',
'f',
'r',
'o',
'm',
'?'],
'choices': [{'label': ['A'], 'text': ['t', 'o', 'w', 'n']},
{'label': ['B'], 'text': ['m', 'i', 'c', 'h', 'i', 'g', 'a', 'n']},
{'label': ['C'], 'text': ['h', 'o', 's', 'p', 'i', 't', 'a', 'l']},
{'label': ['D'], 'text': ['s', 'c', 'h', 'o', 'o', 'l', 's']},
{'label': ['E'],
'text': ['o',
'f',
'f',
'i',
'c',
'e',
' ',
'b',
'u',
'i',
'l',
'd',
'i',
'n',
'g']}]},
'answerKey': 'C'}
```
## Steps to reproduce the bug
```python
from datasets import load_dataset

ds = load_dataset("datasets/xcsr", "X-CSQA-en", split="validation", streaming=True)
item = next(iter(ds))
item
```
## Expected results
```
{'id': 'd3845adc08414fda',
'lang': 'en',
'question': {'stem': 'The dental office handled a lot of patients who experienced traumatic mouth injury, where were these patients coming from?',
'choices': {'label': ['A', 'B', 'C', 'D', 'E'],
'text': ['town', 'michigan', 'hospital', 'schools', 'office building']}},
'answerKey': 'C'}
```
| 5,023 |
https://github.com/huggingface/datasets/issues/5021 | Split is inferred from filename and overrides metadata.jsonl | [
"Hi! What's the structure of your image folder? `datasets` by default tries to infer to what split each file belongs based on directory/file names. If it's OK to load all the images inside the `dataset` folder in the `train` split, you can do the following:\r\n```python\r\ndataset = load_dataset(\"imagefolder\", da... | ## Describe the bug
Including the strings "test" or "train" anywhere in a filename causes `datasets` to infer the split and silently ignore all other files.
This behavior is documented for directory names but not filenames: https://huggingface.co/docs/datasets/image_dataset#imagefolder
## Steps to reproduce the bug
`metadata.jsonl`
```json
{"file_name": "photo of a cat.jpg", "text": "a photo of a cat"}
{"file_name": "photo of a dog.jpg", "text": "a photo of a dog"}
{"file_name": "photo of a train.jpg", "text": "a photo of a train"}
{"file_name": "photo of test tubes.jpg", "text": "a photo of test tubes"}
```
`bug.py`
```python
from datasets import load_dataset
dataset = load_dataset("dataset")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['image', 'text'],
# num_rows: 1
# })
# test: Dataset({
# features: ['image', 'text'],
# num_rows: 1
# })
# })
for split in dataset:
for n in dataset[split]:
print(n['text'])
# a photo of a train
# a photo of test tubes
```
## Expected results
One single dataset with all four images / a warning for unused files / documentation of this behavior
## Actual results
Only the images with "test" or "train" in the name are loaded
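One workaround along the lines of the first comment (whose exact argument is truncated above) is to map the files to a split explicitly, which bypasses filename-based split inference; a sketch, with the folder path as a placeholder:
```python
from datasets import load_dataset

# Mapping files to a split explicitly via data_files means nothing gets
# silently dropped based on "train"/"test" appearing in a file name
dataset = load_dataset("imagefolder", data_files={"train": "dataset/**"})
```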
## Environment info
- `datasets` version: 2.5.1
- Platform: macOS-12.5.1-x86_64-i386-64bit
- Python version: 3.10.4
- PyArrow version: 9.0.0
- Pandas version: 1.5.0 | 5,021 |
https://github.com/huggingface/datasets/issues/5017 | xcsr: X-CSQA simply uses english for all alleged non-english data | [
"Thanks for reporting, @thesofakillers. Good catch. We are fixing this. "
] | ## Describe the bug
All the alleged non-english subcollections for the X-CSQA task in the [xcsr benchmark dataset ](https://huggingface.co/datasets/xcsr) seem to be copies of the english subcollection, rather than translations. This is in contrast to the data description:
> we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR
## Steps to reproduce the bug
```python
# let's say you want to load the french X-CSQA subcollection
french = datasets.load_dataset("xcsr", "X-CSQA-fr")
# for good measure, let's load english too
english = datasets.load_dataset("xcsr", "X-CSQA-en")
# let's inspect
"".join(english['test'][0]['question']['stem'])
# output: 'The people wanted to stop the parade, so what did they set up to thwart it?'
"".join(french['test'][0]['question']['stem'])
# output: 'The people wanted to stop the parade, so what did they set up to thwart it?'
# what? Why are they both in english?
# I've checked this for validation and train splits too, across many datapoints. It's all the same english dataset
# maybe i need to look better?
french['test'].unique('lang')
# output: ['en']
# no, it's all english
```
## Expected results
Accessing a subcollection in language X should return a subcollection containg samples in language X
## Actual results
Accessing a subcollection in language X returns a subcollection containing samples in English.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.1
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
| 5,017 |
https://github.com/huggingface/datasets/issues/5015 | Transfer dataset scripts to Hub | [
"Sounds good ! Can I help with anything ?"
] | Before merging:
- #4974
TODO:
- [x] Create label: ["dataset contribution"](https://github.com/huggingface/datasets/pulls?q=label%3A%22dataset+contribution%22)
- [x] Create project: [Datasets: Transfer datasets to Hub](https://github.com/orgs/huggingface/projects/22/)
- [x] PRs:
  - [x] Add dataset: we should recommend transferring all dataset additions to the Hub, under the appropriate namespace; no more additions of datasets on GitHub
- [x] Update dataset: in general, we should merge bug fixes; enhancements should be considered on a case-by-case basis, depending on whether there is a more suitable namespace on the Hub
- [ ] Issues
Finally:
- [x] #4974
Let me know what you think! :hugs: | 5,015 |
https://github.com/huggingface/datasets/issues/5014 | I need to read the custom dataset in conll format | [
"Hi! We don't currently have a builder for parsing custom `conll` datasets, but I guess we could add one as a packaged module (similarly to what [TFDS](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/dataset_builders/conll/conll_dataset_builder.py) did). @lhoestq @albertvillanova WDYT?\r... | I need to read the custom dataset in conll format
| 5,014 |
https://github.com/huggingface/datasets/issues/5013 | would huggingface like publish cpp binding for datasets package ? | [
"Hi ! Can you share more information about your use case ? How could it help you to have cpp bindings versus using the python libraries ?",
"> Hi ! Can you share more information about your use case ? How could it help you to have cpp bindings versus using the python libraries ?\r\n\r\nfor example ,the huggingfac... | HI:
I use a C++ environment with libtorch. I like using Hugging Face, but `datasets` has no C++ bindings. Would you consider publishing C++ bindings for it?
Thanks. | 5,013 |
https://github.com/huggingface/datasets/issues/5012 | Force JSON format regardless of file naming on S3 | [
"Hi ! Support for URIs like `s3://...` is not implemented yet in `data_files=`. You can use the HTTP URL instead if your data is public in the meantime",
"Hi,\r\nI want to make sure I understand this response. I have a set of files on S3 that are private for security reasons. Because they are not public files I ... | I have a file on S3 created by Data Version Control, it looks like `s3://dvc/ac/badff5b134382a0f25248f1b45d7b2` but contains a json file. If I run
```python
dataset = load_dataset(
"json",
data_files='s3://dvc/ac/badff5b134382a0f25248f1b45d7b2'
)
```
It gives me
```
InvalidSchema: No connection adapters were found for 's3://dvc/ac/badff5b134382a0f25248f1b45d7b2'
```
However, I cannot go ahead and change the names of the S3 files. Is there a way to "force" loading an S3 URL with a certain decoder (JSON, CSV, etc.) regardless of the S3 URL naming? | 5,012 |
https://github.com/huggingface/datasets/issues/5011 | Audio: `encode_example` fails with IndexError | [
"Sorry bug on my part 😅 Closing "
] | ## Describe the bug
Loading the dataset [earnings-22](https://huggingface.co/datasets/sanchit-gandhi/earnings22_split) from the Hub yields an Index Error. I created this dataset locally and then pushed to hub at the specified URL. Thus, I expect the dataset should work out-of-the-box! Indeed, the dataset viewer functions correctly, and there were no issues when I had the dataset locally.
I don't think it's a `soundfile` bug, as the version matches what worked previously.
Update: the bug appeared for me on a GPU; mysteriously, on a TPU I can't reproduce it and the dataset downloads correctly...
## Steps to reproduce the bug
```python
from datasets import load_dataset
earnings22 = load_dataset("sanchit-gandhi/earnings22_split")
```
## Expected results
```
>>> earnings22
DatasetDict({
validation: Dataset({
features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'],
num_rows: 2650
})
train: Dataset({
features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'],
num_rows: 52006
})
test: Dataset({
features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'],
num_rows: 2735
})
})
```
## Actual results
```
Traceback (most recent call last):
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2764, in _map_single
writer.write(example)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 451, in write
self.write_examples_on_file()
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 409, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 508, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 231, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 197, in __arrow_array__
out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/table.py", line 1683, in wrapper
return func(array, *args, **kwargs)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/table.py", line 1795, in cast_array_to_feature
return feature.cast_storage(array)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 190, in cast_storage
storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()])
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 190, in <listcomp>
storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()])
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 92, in encode_example
sf.write(buffer, value["array"], value["sampling_rate"], format="wav")
File "/opt/conda/envs/hf/lib/python3.8/site-packages/soundfile.py", line 313, in write
channels = data.shape[1]
IndexError: tuple index out of range
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
Plus:
- SoundFile version: 0.10.3.post1
cc @lhoestq @polinaeterna | 5,011 |
https://github.com/huggingface/datasets/issues/5009 | Error loading StonyBrookNLP/tellmewhy dataset from hub even though local copy loads correctly | [
"I think this is because some columns are mostly empty lists. In particular the train and validation splits only have empty lists for `val_ann`. Therefore the type inference doesn't know which type is inside (or it would have to scan the other splits first before knowing).\r\n\r\nYou can fix that by specifying the ... | ## Describe the bug
I have added a new dataset with the identifier `StonyBrookNLP/tellmewhy` to the hub. When I load the individual files from my local copy using `dataset = datasets.load_dataset("json", data_files="data/train.jsonl")`, the dataset loads correctly. However, when I try to load it from the hub, I get an error (pasted below). Additionally, `dataset = datasets.load_dataset("json", data_dir="data/")` throws the same error.
## Steps to reproduce the bug
```python
dataset = datasets.load_dataset('StonyBrookNLP/tellmewhy')
```
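Following the suggestion in the quoted comment, passing explicit features avoids the mostly-empty columns being inferred as `null`. A sketch using the local files; the column names and types below are assumptions about the tellmewhy schema:
```python
import datasets

features = datasets.Features(
    {
        "narrative": datasets.Value("string"),
        "question": datasets.Value("string"),
        "answer": datasets.Value("string"),
        "val_ann": datasets.Sequence(datasets.Value("int64")),
    }
)
# Declaring the schema up front means empty val_ann lists don't break type inference
dataset = datasets.load_dataset("json", data_dir="data/", features=features)
```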
## Expected results
Successfully load the `StonyBrookNLP/tellmewhy` dataset.
## Actual results
```
Using custom data configuration StonyBrookNLP--tellmewhy-82712924092694ff
Downloading and preparing dataset json/StonyBrookNLP--tellmewhy to /home/yklal95/.cache/huggingface/datasets/StonyBrookNLP___json/StonyBrookNLP--tellmewhy-82712924092694ff/0.0.0/a3e658c4731e59120d44081ac10bf85dc7e1388126b92338344ce9661907f253...
Downloading data files: 100%|██████████████████████████████| 3/3 [00:00<00:00, 957.46it/s]
Extracting data files: 100%|███████████████████████████████| 3/3 [00:00<00:00, 299.14it/s]
Traceback (most recent call last):
File "/home/yklal95/tmw-generalization/src/load_datasets.py", line 17, in <module>
main(args)
File "/home/yklal95/tmw-generalization/src/load_datasets.py", line 11, in main
dataset = datasets.load_dataset(args.dataset_name)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/load.py", line 1746, in load_dataset
builder_instance.download_and_prepare(
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py", line 1277, in _prepare_split
writer.write_table(table)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/arrow_writer.py", line 524, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 2005, in table_cast
return cast_table_to_schema(table, schema)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1969, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1969, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1681, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1681, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1822, in cast_array_to_feature
casted_values = _c(array.values, feature.feature)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1683, in wrapper
return func(array, *args, **kwargs)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1853, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1683, in wrapper
return func(array, *args, **kwargs)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1761, in array_cast
raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}")
TypeError: Couldn't cast array of type int64 to null
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.15.0-121-generic-x86_64-with-glibc2.27
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
| 5,009 |
https://github.com/huggingface/datasets/issues/5005 | Release 2.5.0 breaks transformers CI | [
"Shall we revert https://github.com/huggingface/datasets/pull/4971 @mariosasko ?\r\n\r\nAnd for consistency we can update IterableDataset.map later"
] | ## Describe the bug
As reported by @lhoestq:
> see https://app.circleci.com/pipelines/github/huggingface/transformers/47634/workflows/b491886b-e66e-4edb-af96-8b459e72aa25/jobs/564563
this is used here: [https://github.com/huggingface/transformers/blob/3b19c0317b6909e2d7f11b5053895ac55[…]torch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py](https://github.com/huggingface/transformers/blob/3b19c0317b6909e2d7f11b5053895ac55250e7da/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py#L482-L488)
| 5,005 |
https://github.com/huggingface/datasets/issues/5002 | Dataset Viewer issue for loubnabnl/humaneval-x | [
"It's a bug! Thanks for reporting, I'm looking at it",
"Fixed."
] | ### Link
https://huggingface.co/datasets/loubnabnl/humaneval-x/viewer/
### Description
The dataset has subsets but the viewer gets stuck in the default subset even when I select another one (the data loading of the subsets works fine)
### Owner
Yes | 5,002 |
https://github.com/huggingface/datasets/issues/5000 | Dataset Viewer issue for asapp/slue | [
"<img width=\"519\" alt=\"Capture d’écran 2022-09-20 à 22 33 47\" src=\"https://user-images.githubusercontent.com/1676121/191358952-1220cb7d-745a-4203-a66b-3c707b25038f.png\">\r\n\r\n```\r\nNot found.\r\n\r\nError code: SplitsResponseNotFound\r\n```\r\n\r\nhttps://datasets-server.huggingface.co/splits?dataset=a... | ### Link
https://huggingface.co/datasets/asapp/slue/viewer/
### Description
Hi,
I wonder how to get the dataset viewer of our slue dataset to work.
Best,
Felix
### Owner
Yes | 5,000 |
https://github.com/huggingface/datasets/issues/4996 | Dataset Viewer issue for Jean-Baptiste/wikiner_fr | [
"The script uses `Dataset.load_from_disk`, which as you can expect, doesn't work in streaming mode.\r\n\r\nIt would probably be more practical to load the dataset locally using `Dataset.load_from_disk` first and then `push_to_hub` to upload it in Parquet on the Hub",
"I've transferred this issue to the Hub repo: ... | ### Link
https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr
### Description
```
Error code: StreamingRowsError
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json'
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/responses/first_rows.py", line 337, in get_first_rows_response
rows = get_rows(dataset, config, split, streaming=True, rows_max_number=rows_max_number, hf_token=hf_token)
File "/src/services/worker/src/worker/utils.py", line 123, in decorator
return func(*args, **kwargs)
File "/src/services/worker/src/worker/responses/first_rows.py", line 77, in get_rows
rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 718, in __iter__
for key, example in self._iter():
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 708, in _iter
yield from ex_iterable
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 112, in __iter__
yield from self.generate_examples_fn(**self.kwargs)
File "/tmp/modules-cache/datasets_modules/datasets/Jean-Baptiste--wikiner_fr/683a580ba6ec769d508f7dfc603a651667b0ed3817b1ae5bfd45f97cc024923f/wikiner_fr.py", line 165, in _generate_examples
dataset = Dataset.load_from_disk(filepath)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1210, in load_from_disk
with open(Path(dataset_path, config.DATASET_STATE_JSON_FILENAME).as_posix(), encoding="utf-8") as state_file:
FileNotFoundError: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json'
```
Is it an error with the dataset script, or the data itself, @huggingface/datasets?
https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr/tree/main
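For reference, a sketch of the fix suggested in the first comment: load the saved dataset locally with `load_from_disk`, then `push_to_hub` so it is stored as Parquet and can be streamed. The local path is an assumption about where the extracted `data.zip` contents live:
```python
from datasets import load_from_disk

# Directory produced by save_to_disk inside the extracted data.zip (assumed path)
ds = load_from_disk("data/train")
ds.push_to_hub("Jean-Baptiste/wikiner_fr")
```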
### Owner
No | 4,996 |
https://github.com/huggingface/datasets/issues/4995 | Get a specific Exception when the dataset has no data | [] | In the dataset viewer on the Hub (https://huggingface.co/datasets/glue/viewer), we would like (https://github.com/huggingface/moon-landing/issues/3882) to show a specific message when the repository lacks any data files.
In that case, instead of showing a complex traceback, we want to show a call to action to help the user upload data.
To do that, it would be very helpful to know for sure that the repository is missing any (supported) data files.
It could be done by raising a custom exception, for example, `NoDataError`. | 4,995 |
https://github.com/huggingface/datasets/issues/4994 | delete the hardcoded license list in `datasets` | [] | > Feel free to delete the license list in `datasets` [...]
>
> Also FYI in #4926 I also removed all the validation steps anyway (language, license, types etc.)
_Originally posted by @lhoestq in https://github.com/huggingface/datasets/issues/4930#issuecomment-1238401662_
> [...], in my opinion we can just delete this file from `datasets`, the validation is happening hub-side anyways now?
_Originally posted by @julien-c in https://github.com/huggingface/datasets/issues/4930#issuecomment-1238390659_ | 4,994 |
https://github.com/huggingface/datasets/issues/4990 | "no-token" is passed to `huggingface_hub` when token is `None` | [
"Hi @Wauplin, thanks for raising this potential issue.\r\n\r\nThe choice of passing `\"no-token\"` instead of `None` was made in this PR:\r\n- #4536 \r\n\r\nAccording to the PR description, the reason why it is passed is to avoid that `HfApi.dataset_info` uses the local token when no token should be used.",
"Hi @... | ## Describe the bug
In the 2 lines listed below, a token is passed to `huggingface_hub` to get information from a dataset. If no token is provided, a "no-token" string is passed instead. What is the purpose of this? If there is no real purpose, I would prefer the `None` value to be sent directly and handled by `huggingface_hub`. I feel that this currently works only because we assume the token will never be validated.
https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L753
https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L1121
## Expected results
Pass `token=None` to `huggingface_hub`.
## Actual results
`token="no-token"` is passed.
## Environment info
`huggingface_hub v0.10.0dev` | 4,990 |
https://github.com/huggingface/datasets/issues/4989 | Running add_column() seems to corrupt existing sequence-type column info | [
"Nevermind, I was incorrect."
I have a dataset that contains a column ("foo") that is a sequence type of length 4. When I run `.to_pandas()` on it, the resulting dataframe correctly contains 4 columns - foo_0, foo_1, foo_2, foo_3. For example, the 1st row of the dataframe might look like:
```
ds = load_dataset(...)
df = ds.to_pandas()

df:
foo_0 | foo_1 | foo_2 | foo_3
0.0   | 1.0   | 2.0   | 3.0
```
If I run .add_column("new_col", data) on the dataset, and then .to_pandas() on the resulting new dataset, the resulting dataframe contains only 2 columns - foo, new_col. The values in column foo are lists of length 4, the 4 elements that should have been split into separate columns. Dataframe 1st row would be:
```
ds = load_dataset(...)
new_ds = ds.add_column("new_col", data)
df = new_ds.to_pandas()

df:
foo                  | new_col
[0.0, 1.0, 2.0, 3.0] | new_val
```
I've explored the 2 datasets in a debugger and haven't noticed any changes to any attributes related to the foo column, but I can't determine why the dataframes are so different. | 4,989 |
https://github.com/huggingface/datasets/issues/4988 | Add `IterableDataset.from_generator` to the API | [
"#take",
"Thanks @hamid-vakilzadeh ! Let us know if you have some questions or if we can help",
"Thank you! I certainly will reach out if I need any help."
] | We've just added `Dataset.from_generator` to the API. It would also be cool to add `IterableDataset.from_generator` to support creating an iterable dataset from a generator.
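A sketch of how such an API could look, mirroring `Dataset.from_generator`; the exact signature is an assumption about the proposed feature:
```python
from datasets import IterableDataset

def gen():
    for idx in range(1000):
        yield {"id": idx, "text": f"example {idx}"}

# Examples would be produced lazily instead of being written to disk first
ds = IterableDataset.from_generator(gen)
for example in ds.take(3):
    print(example)
```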
cc @lhoestq | 4,988 |
https://github.com/huggingface/datasets/issues/4983 | How to convert torch.utils.data.Dataset to huggingface dataset? | [
"Hi! I think you can use the newly-added `from_generator` method for that:\r\n```python\r\nfrom datasets import Dataset\r\n\r\ndef gen():\r\n for idx in len(torch_dataset):\r\n yield torch_dataset[idx] # this has to be a dictionary\r\n ## or if it's an IterableDataset\r\n # for ex in torch_dataset:... | I look through the huggingface dataset docs, and it seems that there is no offical support function to convert `torch.utils.data.Dataset` to huggingface dataset. However, there is a way to convert huggingface dataset to `torch.utils.data.Dataset`, like below:
```python
from datasets import Dataset
data = [[1, 2],[3, 4]]
ds = Dataset.from_dict({"data": data})
ds = ds.with_format("torch")
ds[0]
ds[:2]
```
So is there something I miss, or there IS no function to convert `torch.utils.data.Dataset` to huggingface dataset. If so, is there any way to do this convert?
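For reference, a sketch of the `from_generator` approach from the comment, assuming the torch dataset returns dictionaries:
```python
from datasets import Dataset
from torch.utils.data import Dataset as TorchDataset


class MyTorchDataset(TorchDataset):
    def __len__(self):
        return 4

    def __getitem__(self, idx):
        return {"data": [idx, idx + 1]}  # items must be dictionaries


torch_dataset = MyTorchDataset()

def gen():
    for idx in range(len(torch_dataset)):
        yield torch_dataset[idx]

hf_ds = Dataset.from_generator(gen)
```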
Thanks. | 4,983 |
https://github.com/huggingface/datasets/issues/4982 | Create dataset_infos.json with VALIDATION and TEST splits | [
"@mariosasko could you help me with this issue? we've started the discussion from [here](https://github.com/huggingface/datasets/issues/4895#issuecomment-1248227130)",
"Hi again! Can you please pass the directory name containing the dataset script instead of the script name to `datasets-cli test`?",
"Yes, it wo... | The problem is described in that [issue](https://github.com/huggingface/datasets/issues/4895#issuecomment-1247975569).
> When I try to create data_infos.json using datasets-cli test Peter.py --save_infos --all_configs I get an error:
> ValueError: Unknown split "test". Should be one of ['train'].
>
> The data_infos.json is created perfectly fine when I use only one split - datasets.Split.TRAIN
>
> You can find the code here: https://huggingface.co/datasets/sberbank-ai/Peter/tree/add_splits (add_splits branch)
I tried to clear the cache folder, then I got another error. I ran:
```
git clone https://huggingface.co/datasets/sberbank-ai/Peter
cd Peter
git checkout add_splits # switch to a add_splits branch
rm dataset_infos.json # remove local dataset_infos.json
rm -r ~/.cache/huggingface # remove cached dataset_infos.json
datasets-cli test Peter.py --save_infos --all_configs # trying to create new dataset_infos.json
```
The error message:
```
Using custom data configuration default
Testing builder 'default' (1/1)
Downloading and preparing dataset peter/default to /Users/kalinin/.cache/huggingface/datasets/peter/default/0.0.0/ef579519e140d6a40df2555996f26165f04c47557d7373709c8d7e7b4fd7465d...
Downloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 5160.63it/s]
Extracting data files: 0%| | 0/4 [00:00<?, ?it/s]Traceback (most recent call last):
File "/usr/local/bin/datasets-cli", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.9/site-packages/datasets/commands/datasets_cli.py", line 39, in main
service.run()
File "/usr/local/lib/python3.9/site-packages/datasets/commands/test.py", line 137, in run
builder.download_and_prepare(
File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 1227, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 771, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/Users/kalinin/.cache/huggingface/modules/datasets_modules/datasets/Peter/ef579519e140d6a40df2555996f26165f04c47557d7373709c8d7e7b4fd7465d/Peter.py", line 23, in _split_generators
data_files = dl_manager.download_and_extract(_URLS)
File "/usr/local/lib/python3.9/site-packages/datasets/download/download_manager.py", line 431, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/usr/local/lib/python3.9/site-packages/datasets/download/download_manager.py", line 403, in extract
extracted_paths = map_nested(
File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 393, in map_nested
mapped = [
File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 394, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 330, in _single_map_nested
return function(data_struct)
File "/usr/local/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 213, in cached_path
output_path = ExtractManager(cache_dir=download_config.cache_dir).extract(
File "/usr/local/lib/python3.9/site-packages/datasets/utils/extract.py", line 46, in extract
self.extractor.extract(input_path, output_path, extractor_format)
File "/usr/local/lib/python3.9/site-packages/datasets/utils/extract.py", line 263, in extract
with FileLock(lock_path):
File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 399, in __init__
max_filename_length = os.statvfs(os.path.dirname(lock_file)).f_namemax
FileNotFoundError: [Errno 2] No such file or directory: ''
Exception ignored in: <function BaseFileLock.__del__ at 0x11caeec10>
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 328, in __del__
self.release(force=True)
File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 303, in release
with self._thread_lock:
AttributeError: 'UnixFileLock' object has no attribute '_thread_lock'
Extracting data files: 0%| | 0/4 [00:00<?, ?it/s]
```
Can you help me please?
## Environment info
- `datasets` version: 2.4.0
- Platform: macOS-12.5.1-x86_64-i386-64bit
- Python version: 3.9.5
- PyArrow version: 9.0.0
- Pandas version: 1.2.4
| 4,982 |
https://github.com/huggingface/datasets/issues/4981 | Can't create a dataset with `float16` features | [
"Hi @dconathan, thanks for reporting.\r\n\r\nWe rely on Arrow as a backend, and as far as I know currently support for `float16` in Arrow is not fully implemented in Python (C++), hence the `ArrowNotImplementedError` you get.\r\n\r\nSee, e.g.: https://arrow.apache.org/docs/status.html?highlight=float16#data-types",... | ## Describe the bug
I can't create a dataset with `float16` features.
I understand from the traceback that this is a `pyarrow` error, but I don't see anything in the `datasets` documentation about how to do this successfully. Is it actually supported? I've tried older versions of `pyarrow` as well, with the same exact error.
The bug seems to arise from `datasets` casting the values to `double` and then `pyarrow` doesn't know how to convert those back to `float16`... does that sound right? Is there a way to bypass this since it's not necessary in the `numpy` and `torch` cases?
Thanks!
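A possible workaround sketch (my own assumption, not something confirmed in this thread): build the Arrow array directly from `float16` NumPy data, so that no `double` -> `halffloat` cast is ever attempted.
```python
import numpy as np
import pyarrow as pa
from datasets import Dataset

# pyarrow can construct halffloat arrays from float16 numpy data; it just can't cast to them
table = pa.table({"x": pa.array(np.arange(3, dtype=np.float16))})
ds = Dataset(table)
print(ds.features)
```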
## Steps to reproduce the bug
All of the following raise the following error with the same exact (as far as I can tell) traceback:
```
ArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float
```
```python
from datasets import Dataset, Features, Value
Dataset.from_dict({"x": [0.0, 1.0, 2.0]}, features=Features(x=Value("float16")))
import numpy as np
Dataset.from_dict({"x": np.arange(3, dtype=np.float16)}, features=Features(x=Value("float16")))
import torch
Dataset.from_dict({"x": torch.arange(3).to(torch.float16)}, features=Features(x=Value("float16")))
```
## Expected results
A dataset with `float16` features is successfully created.
## Actual results
```python
---------------------------------------------------------------------------
ArrowNotImplementedError Traceback (most recent call last)
Cell In [14], line 1
----> 1 Dataset.from_dict({"x": [1.0, 2.0, 3.0]}, features=Features(x=Value("float16")))
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py:870, in Dataset.from_dict(cls, mapping, features, info, split)
865 mapping = features.encode_batch(mapping)
866 mapping = {
867 col: OptimizedTypedSequence(data, type=features[col] if features is not None else None, col=col)
868 for col, data in mapping.items()
869 }
--> 870 pa_table = InMemoryTable.from_pydict(mapping=mapping)
871 if info.features is None:
872 info.features = Features({col: ts.get_inferred_type() for col, ts in mapping.items()})
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:750, in InMemoryTable.from_pydict(cls, *args, **kwargs)
734 @classmethod
735 def from_pydict(cls, *args, **kwargs):
736 """
737 Construct a Table from Arrow arrays or columns
738
(...)
748 :class:`datasets.table.Table`:
749 """
--> 750 return cls(pa.Table.from_pydict(*args, **kwargs))
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:3648, in pyarrow.lib.Table.from_pydict()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:5174, in pyarrow.lib._from_pydict()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:343, in pyarrow.lib.asarray()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:231, in pyarrow.lib.array()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py:197, in TypedSequence.__arrow_array__(self, type)
192 # otherwise we can finally use the user's type
193 elif type is not None:
194 # We use cast_array_to_feature to support casting to custom types like Audio and Image
195 # Also, when trying type "string", we don't want to convert integers or floats to "string".
196 # We only do it if trying_type is False - since this is what the user asks for.
--> 197 out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
198 return out
199 except (TypeError, pa.lib.ArrowInvalid) as e: # handle type errors and overflows
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs)
1681 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1682 else:
-> 1683 return func(array, *args, **kwargs)
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1853, in cast_array_to_feature(array, feature, allow_number_to_str)
1851 return array_cast(array, get_nested_type(feature), allow_number_to_str=allow_number_to_str)
1852 elif not isinstance(feature, (Sequence, dict, list, tuple)):
-> 1853 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
1854 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs)
1681 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1682 else:
-> 1683 return func(array, *args, **kwargs)
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1762, in array_cast(array, pa_type, allow_number_to_str)
1760 if pa.types.is_null(pa_type) and not pa.types.is_null(array.type):
1761 raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}")
-> 1762 return array.cast(pa_type)
1763 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}")
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:919, in pyarrow.lib.Array.cast()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/compute.py:389, in cast(arr, target_type, safe, options)
387 else:
388 options = CastOptions.safe(target_type)
--> 389 return call_function("cast", [arr], options)
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/_compute.pyx:560, in pyarrow._compute.call_function()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/_compute.pyx:355, in pyarrow._compute.Function.call()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/error.pxi:121, in pyarrow.lib.check_status()
ArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float
```
## Environment info
- `datasets` version: 2.4.0
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
| 4,981 |
https://github.com/huggingface/datasets/issues/4980 | Make `pyarrow` optional | [
"The whole datasets library is pretty much a wrapper to pyarrow (just take a look at some of the source for a Dataset) https://github.com/huggingface/datasets/blob/51aef08ad7053c0bfe8f9a961207b26df15850d3/src/datasets/arrow_dataset.py#L639 \r\n\r\nI think removing the pyarrow dependency would involve a complete rew... | **Is your feature request related to a problem? Please describe.**
Is `pyarrow` really needed for every dataset?
**Describe the solution you'd like**
It is made optional.
**Describe alternatives you've considered**
Likely, no.
| 4,980 |
https://github.com/huggingface/datasets/issues/4977 | Providing dataset size | [
"Hi @sashavor, thanks for your suggestion.\r\n\r\nUntil now we have the CLI command \r\n```\r\ndatasets-cli test datasets/<your-dataset-folder> --save_infos --all_configs\r\n```\r\nthat generates the `dataset_infos.json` with the size of the downloaded dataset, among other information.\r\n\r\nWe are currently in th... | **Is your feature request related to a problem? Please describe.**
Especially for big datasets like [LAION](https://huggingface.co/datasets/laion/laion2B-en/), it's hard to know exactly the downloaded size (because there are many files and you don't have their exact size when downloaded).
**Describe the solution you'd like**
Auto-populating the downloaded dataset size on the dataset page would be really useful, including that of each split (when there are some).
**Describe alternatives you've considered**
People should be adding this to dataset cards, but I don't think that is systematically the case :slightly_smiling_face:
**Additional context**
Mentioned to @lhoestq
| 4,977 |
https://github.com/huggingface/datasets/issues/4976 | Hope to adapt Python3.9 as soon as possible | [
"Hi! `datasets` should work in Python 3.9. What kind of issue have you encountered?",
"There is this related issue already: https://github.com/huggingface/datasets/issues/4113\r\nAnd I guess we need a CI job for 3.9 ^^",
"Perhaps we should report this issue in the `filelock` repo?"
] | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context about the feature request here.
| 4,976 |
https://github.com/huggingface/datasets/issues/4965 | [Apple M1] MemoryError: Cannot allocate write+execute memory for ffi.callback() | [
"Hi! This seems like a bug in `soundfile`. Could you please open an issue in their repo? `soundfile` works without any issues on my M1, so I'm not sure we can help.",
"Hi @mariosasko, can you share how you installed `soundfile` on your mac M1?",
"Hi @hoangtnm - I upgraded to python 3.10 and it fixed the proble... | ## Describe the bug
I'm trying to run `cast_column("audio", Audio())` on Apple M1 Pro, but it seems that it doesn't work.
## Steps to reproduce the bug
```python
from datasets import load_dataset, Audio

# DATA_DIR: pathlib.Path to the local data directory (defined elsewhere)
dataset = load_dataset("csv", data_files="./train.csv")["train"]
dataset = dataset.map(lambda x: {"audio": str(DATA_DIR / "audio" / x["audio"])})
dataset = dataset.cast_column("audio", Audio())
dataset[0]
```
## Expected results
```
{'audio': {'bytes': None,
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav'},
'english_transcription': 'I would like to set up a joint account with my partner',
'intent_class': 11,
'lang_id': 4,
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
'transcription': 'I would like to set up a joint account with my partner'}
```
## Actual results
````---------------------------------------------------------------------------
MemoryError Traceback (most recent call last)
Input In [6], in <cell line: 1>()
----> 1 dataset[0]
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/arrow_dataset.py:2165, in Dataset.__getitem__(self, key)
2163 def __getitem__(self, key): # noqa: F811
2164 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2165 return self._getitem(
2166 key,
2167 )
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/arrow_dataset.py:2150, in Dataset._getitem(self, key, decoded, **kwargs)
2148 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs)
2149 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2150 formatted_output = format_table(
2151 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2152 )
2153 return formatted_output
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
--> 532 return formatter(pa_table, query_type=query_type)
533 elif query_type == "column":
534 if key in format_columns:
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:312, in PythonFormatter.format_row(self, pa_table)
310 row = self.python_arrow_extractor().extract_row(pa_table)
311 if self.decoded:
--> 312 row = self.python_features_decoder.decode_row(row)
313 return row
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:221, in PythonFeaturesDecoder.decode_row(self, row)
220 def decode_row(self, row: dict) -> dict:
--> 221 return self.features.decode_example(row) if self.features else row
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1647, in Features.decode_example(self, example, token_per_repo_id)
1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1635 """Decode example with custom feature decoding.
1636
1637 Args:
(...)
1644 :obj:`dict[str, Any]`
1645 """
-> 1647 return {
1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1649 if self._column_requires_decoding[column_name]
1650 else value
1651 for column_name, (feature, value) in zip_dict(
1652 {key: value for key, value in self.items() if key in example}, example
1653 )
1654 }
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1648, in <dictcomp>(.0)
1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1635 """Decode example with custom feature decoding.
1636
1637 Args:
(...)
1644 :obj:`dict[str, Any]`
1645 """
1647 return {
-> 1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1649 if self._column_requires_decoding[column_name]
1650 else value
1651 for column_name, (feature, value) in zip_dict(
1652 {key: value for key, value in self.items() if key in example}, example
1653 )
1654 }
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1260, in decode_nested_example(schema, obj, token_per_repo_id)
1257 # Object with special decoding:
1258 elif isinstance(schema, (Audio, Image)):
1259 # we pass the token to read and decode files from private repositories in streaming mode
-> 1260 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
1261 return obj
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/audio.py:156, in Audio.decode_example(self, value, token_per_repo_id)
154 array, sampling_rate = self._decode_non_mp3_file_like(file)
155 else:
--> 156 array, sampling_rate = self._decode_non_mp3_path_like(path, token_per_repo_id=token_per_repo_id)
157 return {"path": path, "array": array, "sampling_rate": sampling_rate}
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/audio.py:257, in Audio._decode_non_mp3_path_like(self, path, format, token_per_repo_id)
254 use_auth_token = None
256 with xopen(path, "rb", use_auth_token=use_auth_token) as f:
--> 257 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
258 return array, sampling_rate
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/util/decorators.py:88, in deprecate_positional_args.<locals>._inner_deprecate_positional_args.<locals>.inner_f(*args, **kwargs)
86 extra_args = len(args) - len(all_args)
87 if extra_args <= 0:
---> 88 return f(*args, **kwargs)
90 # extra_args > 0
91 args_msg = [
92 "{}={}".format(name, arg)
93 for name, arg in zip(kwonly_args[:extra_args], args[-extra_args:])
94 ]
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/core/audio.py:164, in load(path, sr, mono, offset, duration, dtype, res_type)
161 else:
162 # Otherwise try soundfile first, and then fall back if necessary
163 try:
--> 164 y, sr_native = __soundfile_load(path, offset, duration, dtype)
166 except RuntimeError as exc:
167 # If soundfile failed, try audioread instead
168 if isinstance(path, (str, pathlib.PurePath)):
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/core/audio.py:195, in __soundfile_load(path, offset, duration, dtype)
192 context = path
193 else:
194 # Otherwise, create the soundfile object
--> 195 context = sf.SoundFile(path)
197 with context as sf_desc:
198 sr_native = sf_desc.samplerate
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:629, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd)
626 self._mode = mode
627 self._info = _create_info_struct(file, mode, samplerate, channels,
628 format, subtype, endian)
--> 629 self._file = self._open(file, mode_int, closefd)
630 if set(mode).issuperset('r+') and self.seekable():
631 # Move write position to 0 (like in Python file objects)
632 self.seek(0)
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:1179, in SoundFile._open(self, file, mode_int, closefd)
1177 file_ptr = _snd.sf_open_fd(file, mode_int, self._info, closefd)
1178 elif _has_virtual_io_attrs(file, mode_int):
-> 1179 file_ptr = _snd.sf_open_virtual(self._init_virtual_io(file),
1180 mode_int, self._info, _ffi.NULL)
1181 else:
1182 raise TypeError("Invalid file: {0!r}".format(self.name))
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:1197, in SoundFile._init_virtual_io(self, file)
1194 def _init_virtual_io(self, file):
1195 """Initialize callback functions for sf_open_virtual()."""
1196 @_ffi.callback("sf_vio_get_filelen")
-> 1197 def vio_get_filelen(user_data):
1198 curr = file.tell()
1199 file.seek(0, SEEK_END)
MemoryError: Cannot allocate write+execute memory for ffi.callback(). You might be running on a system that prevents this. For more information, see https://cffi.readthedocs.io/en/latest/using.html#callbacks
```
## Environment info
- `datasets` version: 2.4.0
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.4 | 4,965 |
https://github.com/huggingface/datasets/issues/4964 | Column of arrays (2D+) are using unreasonably high memory | [
"note i have tried the same code with `datasets` version 2.4.0, the outcome is the very same as described above.",
"Seems related to issues #4623 and #4802 so it would appear this issue has been around for a few months.",
"Hi ! `Dataset.from_dict` keeps the data in memory. You can write on disk and reload them ... | ## Describe the bug
When trying to store `Array2D`, `Array3D`, etc. as column values in a dataset, accessing that column (or creating the dataset, depending on how you create it - see the code below) will cause more than a 10-fold increase in memory usage.
## Steps to reproduce the bug
```python
from datasets import Dataset, Features, Array2D, Array3D
import numpy as np
column_name = "a"
array_shape = (64, 64, 3)
data = np.random.random((10000,) + array_shape)
dataset = Dataset.from_dict({column_name: data}, features=Features({column_name: Array3D(shape=array_shape, dtype="float64")}))
```
The code above uses about 10 GB of RAM while constructing the `dataset` object.
The code below uses roughly the same amount of memory (and time) when actually accessing the data of that column.
```python
from datasets import Dataset
import numpy as np
column_name = "a"
array_shape = (64, 64, 3)
data = np.random.random((10000,) + array_shape)
dataset = Dataset.from_dict({column_name: data})
dataset[column_name]
```
## Expected results
Some memory overhead, but not on the scale seen now, and certainly not the runtime overhead that currently occurs.
## Actual results
Enormous memory and runtime overhead.
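For what it's worth, a sketch of the "write to disk, then reload" idea from the truncated comment above (the exact API usage is my assumption): writing the examples with `ArrowWriter` and reopening the file memory-maps the data instead of keeping it in RAM.
```python
import numpy as np
from datasets import Dataset, Features, Array3D
from datasets.arrow_writer import ArrowWriter

features = Features({"a": Array3D(shape=(64, 64, 3), dtype="float64")})
with ArrowWriter(features=features, path="data.arrow") as writer:
    for _ in range(10_000):
        writer.write({"a": np.random.random((64, 64, 3))})
    writer.finalize()

dataset = Dataset.from_file("data.arrow")  # memory-mapped, not loaded into RAM
```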
## Environment info
- `datasets` version: 2.3.2
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.4 | 4,964 |
https://github.com/huggingface/datasets/issues/4963 | Dataset without script does not support regular JSON data file | [
"Hi @julien-c,\r\n\r\nOut of the box, we only support JSON lines (NDJSON) data files, but your data file is a regular JSON file. The reason is we use `pyarrow.json.read_json` and this only supports line-delimited JSON. "
] | ### Link
https://huggingface.co/datasets/julien-c/label-studio-my-dogs
### Description
<img width="1115" alt="image" src="https://user-images.githubusercontent.com/326577/189422048-7e9c390f-bea7-4521-a232-43f049ccbd1f.png">
### Owner
Yes | 4,963 |
https://github.com/huggingface/datasets/issues/4961 | fsspec 2022.8.2 breaks xopen in streaming mode | [
"loading `fsspec==2022.7.1` fixes this issue, setup.py would need to be changed to prevent users from using the latest version of fsspec.",
"Opened [PR](https://github.com/huggingface/datasets/pull/4962) to address this.",
"Hi @DCNemesis, thanks for reporting.\r\n\r\nThat was a temporary issue in `fsspec` relea... | ## Describe the bug
When fsspec 2022.8.2 is installed in your environment, xopen will prematurely close files, making streaming mode inoperable.
## Steps to reproduce the bug
```python
import datasets
data = datasets.load_dataset('MLCommons/ml_spoken_words', 'id_wav', split='train', streaming=True)
```
## Expected results
Dataset should load as iterator.
## Actual results
```
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1737 # Return iterable dataset in case of streaming
1738 if streaming:
-> 1739 return builder_instance.as_streaming_dataset(split=split)
1740
1741 # Some datasets are already processed on the HF google storage
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_streaming_dataset(self, split, base_path)
1023 )
1024 self._check_manual_download(dl_manager)
-> 1025 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
1026 # By default, return all splits
1027 if split is None:
[~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _split_generators(self, dl_manager)
182 name=datasets.Split.TRAIN,
183 gen_kwargs={
--> 184 "audio_archives": [download_audio(split="train", lang=lang) for lang in self.config.languages],
185 "local_audio_archives_paths": [download_extract_audio(split="train", lang=lang) for lang in
186 self.config.languages] if not dl_manager.is_streaming else None,
[~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in <listcomp>(.0)
182 name=datasets.Split.TRAIN,
183 gen_kwargs={
--> 184 "audio_archives": [download_audio(split="train", lang=lang) for lang in self.config.languages],
185 "local_audio_archives_paths": [download_extract_audio(split="train", lang=lang) for lang in
186 self.config.languages] if not dl_manager.is_streaming else None,
[~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _download_audio_archives(dl_manager, lang, format, split)
267 # for streaming case
268 def _download_audio_archives(dl_manager, lang, format, split):
--> 269 archives_paths = _download_audio_archives_paths(dl_manager, lang, format, split)
270 return [dl_manager.iter_archive(archive_path) for archive_path in archives_paths]
[~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _download_audio_archives_paths(dl_manager, lang, format, split)
251 n_files_path = dl_manager.download(n_files_url)
252
--> 253 with open(n_files_path, "r", encoding="utf-8") as file:
254 n_files = int(file.read().strip()) # the file contains a number of archives
255
ValueError: I/O operation on closed file.
```
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
| 4,961 |
https://github.com/huggingface/datasets/issues/4960 | BioASQ AttributeError: 'BuilderConfig' object has no attribute 'schema' | [
"Following worked:\r\n\r\n```\r\ndata_dir = \"/Users/dlituiev/repos/datasets/bioasq/\"\r\nbioasq_task_b = load_dataset(\"aps/bioasq_task_b\", data_dir=data_dir, name=\"bioasq_9b_source\")\r\n```\r\n\r\nWould maintainers be open to one of the following:\r\n- automating this with a latest default config (e.g. `bioas... | ## Describe the bug
I am trying to load a dataset from drive and running into an error.
## Steps to reproduce the bug
```python
data_dir = "/Users/dlituiev/repos/datasets/bioasq/BioASQ-training9b"
bioasq_task_b = load_dataset("aps/bioasq_task_b", data_dir=data_dir)
```
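Following the suggestion in the comments above, passing an explicit config name avoids falling back to a bare `BuilderConfig` that has no `schema` attribute:
```python
bioasq_task_b = load_dataset(
    "aps/bioasq_task_b",
    data_dir=data_dir,
    name="bioasq_9b_source",  # config name taken from the comment above
)
```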
## Actual results
`AttributeError: 'BuilderConfig' object has no attribute 'schema'`
<details>
```
Using custom data configuration default-a1ca3e05be5abf2f
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Input In [8], in <cell line: 2>()
1 data_dir = "/Users/dlituiev/repos/datasets/bioasq/BioASQ-training9b"
----> 2 bioasq_task_b = load_dataset("aps/bioasq_task_b", data_dir=data_dir)
File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/load.py:1723, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1720 ignore_verifications = ignore_verifications or save_infos
1722 # Create a dataset builder
-> 1723 builder_instance = load_dataset_builder(
1724 path=path,
1725 name=name,
1726 data_dir=data_dir,
1727 data_files=data_files,
1728 cache_dir=cache_dir,
1729 features=features,
1730 download_config=download_config,
1731 download_mode=download_mode,
1732 revision=revision,
1733 use_auth_token=use_auth_token,
1734 **config_kwargs,
1735 )
1737 # Return iterable dataset in case of streaming
1738 if streaming:
File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/load.py:1526, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1523 raise ValueError(error_msg)
1525 # Instantiate the dataset builder
-> 1526 builder_instance: DatasetBuilder = builder_cls(
1527 cache_dir=cache_dir,
1528 config_name=config_name,
1529 data_dir=data_dir,
1530 data_files=data_files,
1531 hash=hash,
1532 features=features,
1533 use_auth_token=use_auth_token,
1534 **builder_kwargs,
1535 **config_kwargs,
1536 )
1538 return builder_instance
File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/builder.py:1154, in GeneratorBasedBuilder.__init__(self, writer_batch_size, *args, **kwargs)
1153 def __init__(self, *args, writer_batch_size=None, **kwargs):
-> 1154 super().__init__(*args, **kwargs)
1155 # Batch size used by the ArrowWriter
1156 # It defines the number of samples that are kept in memory before writing them
1157 # and also the length of the arrow chunks
1158 # None means that the ArrowWriter will use its default value
1159 self._writer_batch_size = writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE
File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/builder.py:307, in DatasetBuilder.__init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs)
305 if info is None:
306 info = self.get_exported_dataset_info()
--> 307 info.update(self._info())
308 info.builder_name = self.name
309 info.config_name = self.config.name
File ~/.cache/huggingface/modules/datasets_modules/datasets/aps--bioasq_task_b/3d54b1213f7e8001eef755af92877f9efa44161ee83c2a70d5d649defa95759e/bioasq_task_b.py:477, in BioasqTaskBDataset._info(self)
474 def _info(self):
475
476 # BioASQ Task B source schema
--> 477 if self.config.schema == "source":
478 features = datasets.Features(
479 {
480 "id": datasets.Value("string"),
(...)
504 }
505 )
506 # simplified schema for QA tasks
AttributeError: 'BuilderConfig' object has no attribute 'schema'
```
</details>
## Environment info
- `datasets` version: 2.4.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.10.4
- PyArrow version: 9.0.0
- Pandas version: 1.4.3 | 4,960 |
https://github.com/huggingface/datasets/issues/4958 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.4.0/datasets/jsonl/jsonl.py | [
"I have solved this problem... The extension of the file should be `.json` not `.jsonl`"
] | Hi,
When I use `load_dataset` with local jsonl files, the error below happens, and typing the link into the browser gives `404: Not Found`. I can download the other `.py` loader files using the same method and that works, so it seems that either the server is missing the appropriate file or there is a problem with the code version.
```
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.3.0/datasets/jsonl/jsonl.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.3.0/datasets/jsonl/jsonl.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x2b08342004c0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))")))
```
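For reference, a sketch of loading local JSON Lines files without relying on a `jsonl` loader script (file names are placeholders): the generic `json` builder also reads line-delimited JSON.
```python
from datasets import load_dataset

# "jsonl" is not a packaged builder name; the "json" builder handles .jsonl files
ds = load_dataset("json", data_files={"train": "train.jsonl"})
```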
| 4,958 |
https://github.com/huggingface/datasets/issues/4955 | Raise a more precise error when the URL is unreachable in streaming mode | [] | See for example:
- https://github.com/huggingface/datasets/issues/3191
- https://github.com/huggingface/datasets/issues/3186
It would help provide clearer information on the Hub and help the dataset maintainer solve the issue by themselves quicker. Currently:
- https://huggingface.co/datasets/compguesswhat
<img width="1029" alt="Capture d’écran 2022-09-08 à 15 51 37" src="https://user-images.githubusercontent.com/1676121/189139946-6deffb91-f21b-4281-8825-a98026c69740.png">
- https://huggingface.co/datasets/nli_tr
<img width="1032" alt="Capture d’écran 2022-09-08 à 15 51 44" src="https://user-images.githubusercontent.com/1676121/189139963-d26490ed-ad23-48ea-9cfc-1ab9c4d08d0c.png">
cc @albertvillanova | 4,955 |
https://github.com/huggingface/datasets/issues/4953 | CI test of TensorFlow is failing | [] | ## Describe the bug
The following CI test fails: https://github.com/huggingface/datasets/runs/8246722693?check_suite_focus=true
```
FAILED tests/test_py_utils.py::TempSeedTest::test_tensorflow - AssertionError:
```
Details:
```
_________________________ TempSeedTest.test_tensorflow _________________________
[gw0] linux -- Python 3.7.13 /opt/hostedtoolcache/Python/3.7.13/x64/bin/python
self = <tests.test_py_utils.TempSeedTest testMethod=test_tensorflow>
@require_tf
def test_tensorflow(self):
import tensorflow as tf
from tensorflow.keras import layers
def gen_random_output():
model = layers.Dense(2)
x = tf.random.uniform((1, 3))
return model(x).numpy()
with temp_seed(42, set_tensorflow=True):
out1 = gen_random_output()
with temp_seed(42, set_tensorflow=True):
out2 = gen_random_output()
out3 = gen_random_output()
> np.testing.assert_equal(out1, out2)
E AssertionError:
E Arrays are not equal
E
E Mismatched elements: 2 / 2 (100%)
E Max absolute difference: 0.84619296
E Max relative difference: 16.083529
E x: array([[-0.793581, 0.333286]], dtype=float32)
E y: array([[0.052612, 0.539708]], dtype=float32)
tests/test_py_utils.py:149: AssertionError
```
| 4,953 |
https://github.com/huggingface/datasets/issues/4945 | Push to hub can push splits that do not respect the regex | [] | ## Describe the bug
The `push_to_hub` method can push splits that do not respect the regex check that is used for downloads. Therefore, splits may be pushed but never re-used, which can be painful if the split was done after runtime preprocessing.
## Steps to reproduce the bug
```python
>>> from datasets import Dataset, DatasetDict, load_dataset
>>> d = Dataset.from_dict({'x': [1,2,3], 'y': [1,2,3]})
>>> di = DatasetDict()
>>> di['identifier-with-column'] = d
>>> di.push_to_hub('open-source-metrics/test')
Pushing split identifier-with-column to the Hub.
Pushing dataset shards to the dataset hub: 100%|██████████| 1/1 [00:04<00:00, 4.40s/it]
```
Loading it afterwards:
```python
>>> load_dataset('open-source-metrics/test')
Downloading: 100%|██████████| 610/610 [00:00<00:00, 432kB/s]
Using custom data configuration open-source-metrics--test-28b63ec7cde80488
Downloading and preparing dataset None/None (download: 950 bytes, generated: 48 bytes, post-processed: Unknown size, total: 998 bytes) to /home/lysandre/.cache/huggingface/datasets/open-source-metrics___parquet/open-source-metrics--test-28b63ec7cde80488/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]
Downloading data: 100%|██████████| 950/950 [00:00<00:00, 1.01MB/s]
Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.48s/it]
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 2291.97it/s]
Traceback (most recent call last):
File "/home/lysandre/.pyenv/versions/3.10.6/lib/python3.10/code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/load.py", line 1746, in load_dataset
builder_instance.download_and_prepare(
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/builder.py", line 771, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 48, in _split_generators
splits.append(datasets.SplitGenerator(name=split_name, gen_kwargs={"files": files}))
File "<string>", line 5, in __init__
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/splits.py", line 599, in __post_init__
NamedSplit(self.name) # check that it's a valid split name
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/splits.py", line 346, in __init__
raise ValueError(f"Split name should match '{_split_re}' but got '{split_name}'.")
ValueError: Split name should match '^\w+(\.\w+)*$' but got 'identifier-with-column'.
```
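For reference, a quick illustration of the split-name pattern from the traceback (not part of the original report):
```python
import re

_split_re = r"^\w+(\.\w+)*$"  # pattern quoted in the ValueError above
assert re.match(_split_re, "identifier_with_column")           # underscores are allowed
assert re.match(_split_re, "identifier-with-column") is None   # hyphens are not
```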
## Expected results
I would expect `push_to_hub` to stop me in my tracks when trying to upload a split that will not work afterwards.
## Actual results
See above
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.15.64-1-lts-x86_64-with-glibc2.36
- Python version: 3.10.6
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
| 4,945 |
https://github.com/huggingface/datasets/issues/4944 | larger dataset, larger GPU memory in the training phase? Is that correct? | [
"does the trainer save it in GPU? sooo curious... how to fix it",
"It's my bad. didn't limit the input length"
] | ```python
from datasets import set_caching_enabled, load_from_disk, concatenate_datasets

set_caching_enabled(False)

for ds_name in ["squad", "newsqa", "nqopen", "narrativeqa"]:
    train_ds = load_from_disk("../../../dall/downstream/processedproqa/{}-train.hf".format(ds_name))
    break

train_ds = concatenate_datasets([train_ds, train_ds, train_ds, train_ds])  # operation 1

# QuestionAnsweringTrainer, model, training_args, etc. are defined elsewhere in the project
trainer = QuestionAnsweringTrainer(  # huggingface trainer
    model=model,
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=None,
    eval_examples=None,
    answer_column_name=answer_column,
    dataset_name="squad",
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics if training_args.predict_with_generate else None,
)
```
With operation 1, the GPU memory usage increases from 16 GB to 23 GB. | 4,944 |
https://github.com/huggingface/datasets/issues/4942 | Trec Dataset has incorrect labels | [
"Thanks for reporting, @wmpauli. \r\n\r\nIndeed we recently fixed this issue:\r\n- #4801 \r\n\r\nThe fix will be accessible after our next library release. In the meantime, you can have it by passing `revision=\"main\"` to `load_dataset`."
] | ## Describe the bug
Both the coarse and fine labels appear to be misaligned.
## Steps to reproduce the bug
```python
import pandas as pd
from datasets import load_dataset
dataset = "trec"
raw_datasets = load_dataset(dataset)
df = pd.DataFrame(raw_datasets["test"])
df.head()
```
## Expected results
text (string) | coarse_label (class label) | fine_label (class label)
-- | -- | --
How far is it from Denver to Aspen ? | 5 (NUM) | 40 (NUM:dist)
What county is Modesto , California in ? | 4 (LOC) | 32 (LOC:city)
Who was Galileo ? | 3 (HUM) | 31 (HUM:desc)
What is an atom ? | 2 (DESC) | 24 (DESC:def)
When did Hawaii become a state ? | 5 (NUM) | 39 (NUM:date)
## Actual results
index | label-coarse | label-fine | text
-- | -- | -- | --
0 | 4 | 40 | How far is it from Denver to Aspen ?
1 | 5 | 21 | What county is Modesto , California in ?
2 | 3 | 12 | Who was Galileo ?
3 | 0 | 7 | What is an atom ?
4 | 4 | 8 | When did Hawaii become a state ?
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.4.0-1086-azure-x86_64-with-glibc2.27
- Python version: 3.9.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| 4,942 |
https://github.com/huggingface/datasets/issues/4936 | vivos (Vietnamese speech corpus) dataset not accessible | [
"If you need an example of a small audio datasets, I just created few hours ago a speech dataset with only 300MB of compressed audio files https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia. It works also with streaming (@albertvillanova helped me adding this functionality) :-)",
"@cahya-wirawan om... | ## Describe the bug
VIVOS data is not accessible anymore; neither of these links works (at least from France):
* https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (data)
* https://ailab.hcmus.edu.vn/vivos (dataset page)
Therefore `load_dataset` doesn't work.
## Steps to reproduce the bug
```python
ds = load_dataset("vivos")
```
## Expected results
dataset loaded
## Actual results
```
ConnectionError: Couldn't reach https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='ailab.hcmus.edu.vn', port=443): Max retries exceeded with url: /assets/vivos.tar.gz (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f9d8a27d190>: Failed to establish a new connection: [Errno -5] No address associated with hostname'))")))
```
Will try to contact the authors, as we wanted to use Vivos as an example in documentation on how to create scripts for audio datasets (https://github.com/huggingface/datasets/pull/4872), because it's small and straightforward and uses tar archives. | 4,936 |
https://github.com/huggingface/datasets/issues/4935 | Dataset Viewer issue for ubuntu_dialogs_corpus | [
"The dataset maintainers (https://huggingface.co/datasets/ubuntu_dialogs_corpus) decided to forbid the dataset from being downloaded automatically (https://huggingface.co/docs/datasets/v2.4.0/en/loading#manual-download), and the dataset viewer respects this.\r\nWe will try to improve the error display though. Thank... | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | 4,935 |
https://github.com/huggingface/datasets/issues/4934 | Dataset Viewer issue for indonesian-nlp/librivox-indonesia | [
"The error is not related to the dataset viewer. I'm having a look...",
"Thanks @albertvillanova for checking the issue. Actually, I can use the dataset like following:\r\n```\r\n>>> from datasets import load_dataset\r\n>>> ds=load_dataset(\"indonesian-nlp/librivox-indonesia\")\r\nNo config specified, defaulting ... | ### Link
https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia
### Description
I created a new speech dataset https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia, but the dataset preview doesn't work with following error message:
```
Server error
Status code: 400
Exception: TypeError
Message: unsupported operand type(s) for +: 'NoneType' and 'str'
```
Please help, I am not sure what the problem here is. Thanks a lot.
### Owner
Yes | 4,934 |
https://github.com/huggingface/datasets/issues/4933 | Dataset/DatasetDict.filter() cannot have `batched=True` due to `mask` (numpy array?) being non-iterable. | [
"Hi ! When `batched=True`, you filter function must take a batch as input, and return a list of booleans.\r\n\r\nIn your case, something like\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n\r\nds_mc4_ja = load_dataset(\"mc4\", \"ja\") # This will take 6+ hours... perhaps test it with a toy dataset instea... | ## Describe the bug
`Dataset/DatasetDict.filter()` cannot have `batched=True` due to `mask` (numpy array?) being non-iterable.
## Steps to reproduce the bug
(In a python 3.7.12 env, I've tried 2.4.0 and 2.3.2 with both `pyarraw==9.0.0` and `pyarrow==8.0.0`.)
```python
from datasets import load_dataset
ds_mc4_ja = load_dataset("mc4", "ja") # This will take 6+ hours... perhaps test it with a toy dataset instead?
ds_mc4_ja_2020 = ds_mc4_ja.filter(
lambda example: example["timestamp"][:4] == "2020",
batched=True,
)
```
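Following the (truncated) maintainer comment above, a sketch of the same filter written for `batched=True`, where the function receives a batch and must return one boolean per example:
```python
ds_mc4_ja_2020 = ds_mc4_ja.filter(
    lambda batch: [timestamp[:4] == "2020" for timestamp in batch["timestamp"]],
    batched=True,
)
```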
## Expected results
No error
## Actual results
```python
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/multiprocess/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 524, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py", line 480, in wrapper
out = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2779, in _map_single
offset=offset,
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2655, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2347, in decorated
result = f(decorated_item, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 4946, in get_indices_from_mask_function
indices_array = [i for i, to_keep in zip(indices, mask) if to_keep]
TypeError: zip argument #2 must support iteration
"""
The above exception was the direct cause of the following exception:
TypeError Traceback (most recent call last)
/tmp/ipykernel_51348/2345782281.py in <module>
7 batched=True,
8 # batch_size=10_000,
----> 9 num_proc=111,
10 )
11 # ds_mc4_ja_clean_2020 = ds_mc4_ja.filter(
/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py in filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, fn_kwargs, num_proc, desc)
878 desc=desc,
879 )
--> 880 for k, dataset in self.items()
881 }
882 )
/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0)
878 desc=desc,
879 )
--> 880 for k, dataset in self.items()
881 }
882 )
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
522 }
523 # apply actual function
--> 524 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
525 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
526 # re-apply format to the output
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
478 # Call actual function
479
--> 480 out = func(self, *args, **kwargs)
481
482 # Update fingerprint of in-place transforms + update in-place history of transforms
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2920 new_fingerprint=new_fingerprint,
2921 input_columns=input_columns,
-> 2922 desc=desc,
2923 )
2924 new_dataset = copy.deepcopy(self)
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2498
2499 for index, async_result in results.items():
-> 2500 transformed_shards[index] = async_result.get()
2501
2502 assert (
/opt/conda/lib/python3.7/site-packages/multiprocess/pool.py in get(self, timeout)
655 return self._value
656 else:
--> 657 raise self._value
658
659 def _set(self, i, obj):
TypeError: zip argument #2 must support iteration
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-debian-10.12
- Python version: 3.7.12
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
(I've tried 2.4.0 and 2.3.2 with both `pyarraw==9.0.0` and `pyarrow==8.0.0`.) | 4,933 |
https://github.com/huggingface/datasets/issues/4932 | Dataset Viewer issue for bigscience-biomedical/biosses | [
"Possibly not related to the dataset viewer in itself. cc @huggingface/datasets.\r\n\r\nIn particular, I think that the import of bigbiohub is not working here: https://huggingface.co/datasets/bigscience-biomedical/biosses/blob/main/biosses.py#L29 (requires a relative path?)\r\n\r\n```python\r\n>>> from datasets im... | ### Link
https://huggingface.co/datasets/bigscience-biomedical/biosses
### Description
I've just been working on adding the dataset loader script to this dataset and working with the relative imports. I'm not sure how to interpret the error below (shown where the dataset preview used to be).
```
Status code: 400
Exception: ModuleNotFoundError
Message: No module named 'datasets_modules.datasets.bigscience-biomedical--biosses.ddbd5893bf6c2f4db06f407665eaeac619520ba41f69d94ead28f7cc5b674056.bigbiohub'
```
### Owner
Yes | 4,932 |
https://github.com/huggingface/datasets/issues/4924 | Concatenate_datasets loads everything into RAM | [] | ## Describe the bug
When loading the datasets separately and saving them on disk, I want to concatenate them. But `concatenate_datasets` fills up my RAM and the process gets killed. Is there a way to prevent this from happening, or is this intended behaviour? Thanks in advance.
## Steps to reproduce the bug
```python
import gcsfs
from datasets import load_from_disk, concatenate_datasets

gcs = gcsfs.GCSFileSystem(project='project')
datasets = [load_from_disk(f'path/to/slice/of/data/{i}', fs=gcs, keep_in_memory=False) for i in range(10)]
dataset = concatenate_datasets(datasets)
```
## Expected results
A concatenated dataset which is stored on my disk.
## Actual results
Concatenated dataset gets loaded into RAM and overflows it which gets the process killed.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.13
- PyArrow version: 8.0.1
- Pandas version: 1.4.3 | 4,924 |
https://github.com/huggingface/datasets/issues/4922 | I/O error on Google Colab in streaming mode | [] | ## Describe the bug
When trying to load a streaming dataset in Google Colab the loading fails with an I/O error
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
hf_ds = load_dataset(path='wmt19', name='cs-en', streaming=True, split=datasets.Split.VALIDATION)
list(hf_ds.take(5))
```
## Expected results
It should load five data points
## Actual results
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-13-7b5b8b1e7e58>](https://localhost:8080/#) in <module>
2 from datasets import load_dataset
3 hf_ds = load_dataset(path='wmt19', name='cs-en', streaming=True, split=datasets.Split.VALIDATION)
----> 4 list(hf_ds.take(5))
6 frames
[/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in __iter__(self)
716
717 def __iter__(self):
--> 718 for key, example in self._iter():
719 if self.features:
720 # `IterableDataset` automatically fills missing columns with None.
[/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in _iter(self)
706 else:
707 ex_iterable = self._ex_iterable
--> 708 yield from ex_iterable
709
710 def _iter_shard(self, shard_idx: int):
[/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in __iter__(self)
582
583 def __iter__(self):
--> 584 yield from islice(self.ex_iterable, self.n)
585
586 def shuffle_data_sources(self, generator: np.random.Generator) -> "TakeExamplesIterable":
[/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in __iter__(self)
110
111 def __iter__(self):
--> 112 yield from self.generate_examples_fn(**self.kwargs)
113
114 def shuffle_data_sources(self, generator: np.random.Generator) -> "ExamplesIterable":
[~/.cache/huggingface/modules/datasets_modules/datasets/wmt19/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0/wmt_utils.py](https://localhost:8080/#) in _generate_examples(self, split_subsets, extraction_map, with_translation)
845 raise ValueError("Invalid number of files: %d" % len(files))
846
--> 847 for sub_key, ex in sub_generator(*sub_generator_args):
848 if not all(ex.values()):
849 continue
[~/.cache/huggingface/modules/datasets_modules/datasets/wmt19/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0/wmt_utils.py](https://localhost:8080/#) in _parse_parallel_sentences(f1, f2, filename1, filename2)
923 l2_sentences, l2 = parse_file(f2_i, filename2)
924
--> 925 for line_id, (s1, s2) in enumerate(zip(l1_sentences, l2_sentences)):
926 key = f"{f_id}/{line_id}"
927 yield key, {l1: s1, l2: s2}
[~/.cache/huggingface/modules/datasets_modules/datasets/wmt19/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0/wmt_utils.py](https://localhost:8080/#) in gen()
895
896 def gen():
--> 897 with open(path, encoding="utf-8") as f:
898 for line in f:
899 seg_match = re.match(seg_re, line)
ValueError: I/O operation on closed file.
```
## Environment info
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.4.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 9.0.0. (the same error happened with PyArrow version 6.0.0)
- Pandas version: 1.3.5
| 4,922 |
https://github.com/huggingface/datasets/issues/4920 | Unable to load local tsv files through load_dataset method | [
"Hi @DataNoob0723,\r\n\r\nUnder the hood, we use `pandas` to load CSV/TSV files. Therefore, you should use \"csv\" and pass `sep=\"\\t\"`, as explained in our docs: https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/loading_methods#from-files\r\n```python\r\nds = load_dataset('csv', sep=\"\\t\", data_... | ## Describe the bug
Unable to load local tsv files through load_dataset method.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
data_files = {
'train': 'train.tsv',
'test': 'test.tsv'
}
raw_datasets = load_dataset('tsv', data_files=data_files)
```
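Following the comment above, a sketch of the working call: there is no `tsv` builder, but the generic `csv` builder accepts a tab separator.
```python
from datasets import load_dataset

raw_datasets = load_dataset("csv", data_files=data_files, sep="\t")
```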
## Expected results
I am pretty sure the data files exist in the current directory. The above code should load them as Datasets, but it threw exceptions instead.
## Actual results
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
[<ipython-input-9-24207899c1af>](https://localhost:8080/#) in <module>
----> 1 raw_datasets = load_dataset('tsv', data_files='train.tsv')
2 frames
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1244 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1245 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
-> 1246 ) from None
1247 raise e1 from None
1248 else:
FileNotFoundError: Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory. Couldn't find 'tsv' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/main/datasets/tsv/tsv.py
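As pointed out in the comments, there is no dedicated `tsv` builder; the workaround is to use the `csv` builder and pass `sep="\t"`. A minimal sketch (assuming the TSV files sit in the working directory):
```python
from datasets import load_dataset

# The "csv" builder forwards pandas-style keyword arguments such as `sep`,
# so it can read tab-separated files directly.
data_files = {
    "train": "train.tsv",
    "test": "test.tsv",
}
raw_datasets = load_dataset("csv", data_files=data_files, sep="\t")
```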
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
| 4,920 |
https://github.com/huggingface/datasets/issues/4918 | Dataset Viewer issue for pysentimiento/spanish-targeted-sentiment-headlines | [
"Thanks for reporting, it's fixed now (I refreshed it manually). It's a known issue; we hope it will be fixed permanently in a few days.\r\n\r\n<img width=\"1508\" alt=\"Capture d’écran 2022-09-05 à 18 31 22\" src=\"https://user-images.githubusercontent.com/1676121/188489762-0ed86a7e-dfb3-46e8-a125-43b815a2c6f4.p... | ### Link
https://huggingface.co/datasets/pysentimiento/spanish-targeted-sentiment-headlines
### Description
After moving the dataset from my user (`finiteautomata`) to the `pysentimiento` organization, the dataset viewer says that it doesn't exist.
### Owner
_No response_ | 4,918 |
https://github.com/huggingface/datasets/issues/4917 | Keys mismatch: make error message more informative | [
"Good idea ! I think this can be improved in `Features.reorder_fields_as()` indeed at\r\n\r\nhttps://github.com/huggingface/datasets/blob/7feeb5648a63b6135a8259dedc3b1e19185ee4c7/src/datasets/features/features.py#L1739-L1740\r\n\r\nIs it something you would be interested in contributing ?",
"Is this open to work ... | **Is your feature request related to a problem? Please describe.**
When loading a dataset from disk with a defect in its `dataset_info.json` describing its features (I don’t know when/why/how this happens but it deserves its own issue), you will get an error message like:
`ValueError: Keys mismatch: between {'bar': Value(dtype='int64', id=None)} and {'foo': Value(dtype='int64', id=None)}`
Which is fine when you have only a few features, as in the example, but it gets very hard to read when your dataset has a lot of features.
**Describe the solution you'd like**
The error message should give the difference between the features (what keys are in A but missing in B and vice-versa). It should also tell which keys are inferred from `dataset.arrow` and which come from `dataset_info.json`.
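For illustration, a minimal sketch of how such a message could be built (a hypothetical helper, not the actual `datasets` internals):
```python
def describe_keys_mismatch(arrow_features: dict, info_features: dict) -> str:
    """Return a readable message listing only the differing feature keys."""
    arrow_keys, info_keys = set(arrow_features), set(info_features)
    only_in_arrow = sorted(arrow_keys - info_keys)
    only_in_info = sorted(info_keys - arrow_keys)
    return (
        "Keys mismatch between the Arrow data and dataset_info.json:\n"
        f"  only inferred from dataset.arrow: {only_in_arrow}\n"
        f"  only declared in dataset_info.json: {only_in_info}"
    )

print(describe_keys_mismatch({"bar": "int64"}, {"foo": "int64"}))
```
Listing only the symmetric difference keeps the message readable even with hundreds of features.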
Willing to help :)
| 4,917 |
https://github.com/huggingface/datasets/issues/4916 | Apache Beam unable to write the downloaded wikipedia dataset | [
"See:\r\n- #4915"
] | ## Describe the bug
Hi, I am currently trying to download the wikipedia dataset using
`load_dataset("wikipedia", language="aa", date="20220401", split="train", beam_runner='DirectRunner')`. However, I end up getting a `FileNotFoundError`. I get this error for any language I try to download. The file is downloaded, but it fails to write while saving to the Hugging Face cache. This happens for any available date of any language in the wikipedia dump. I had raised an earlier issue, #4915, but it was probably not clear enough and the solution provider misunderstood my problem, hence this new issue. Any help is appreciated.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("wikipedia", language="aa", date="20220401", split="train",beam_runner='DirectRunner')
```
## Expected results
to load the dataset
## Actual results
I am pasting the error trace here:
Downloading builder script: 35.9kB [00:00, ?B/s]
Downloading metadata: 30.4kB [00:00, 1.94MB/s]
Using custom data configuration 20220401.aa-date=20220401,language=aa
Downloading and preparing dataset wikipedia/20220401.aa to C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...
Downloading data: 100%|████████████████████████████████████████████████████████████| 11.1k/11.1k [00:00<00:00, 712kB/s]
Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.82s/it]
Extracting data files: 100%|█████████████████████████████████████████████████████████████████████| 1/1 [00:00<?, ?it/s]
Downloading data: 100%|███████████████████████████████████████████████████████████| 35.6k/35.6k [00:00<00:00, 84.3kB/s]
Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.93s/it]
Traceback (most recent call last):
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process
self.writer = self.sink.open_writer(init_result, str(uuid.uuid4()))
File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f
return fnc(self, *args, **kwargs)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer
return FileBasedSinkWriter(self, writer_path)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in init
self.temp_handle = self.sink.open(temp_shard_path)
File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open
self._file_handle = super().open(temp_path)
File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f
return fnc(self, *args, **kwargs)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open
temp_path, self.mime_type, self.compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create
return filesystem.create(path, mime_type, compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in create
return self._path_open(path, 'wb', mime_type, compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open
raw_file = io.open(path, mode)
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "G:/abc/temp.py", line 32, in
beam_runner='DirectRunner')
File "G:\Python3.7\lib\site-packages\datasets\load.py", line 1751, in load_dataset
use_auth_token=use_auth_token,
File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 705, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 1394, in _download_and_prepare
pipeline_results = pipeline.run()
File "G:\Python3.7\lib\site-packages\apache_beam\pipeline.py", line 574, in run
return self.runner.run_pipeline(self, self._options)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\direct\direct_runner.py", line 131, in run_pipeline
return runner.run_pipeline(pipeline, options)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 201, in run_pipeline
options)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 212, in run_via_runner_api
return self.run_stages(stage_context, stages)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 443, in run_stages
runner_execution_context, bundle_context_manager, bundle_input)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 776, in _execute_bundle
bundle_manager))
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1000, in _run_bundle
data_input, data_output, input_timers, expected_timer_output)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1309, in process_bundle
result_future = self._worker_handler.control_conn.push(process_bundle_req)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\worker_handlers.py", line 380, in push
response = self.worker.do_instruction(request)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 598, in do_instruction
getattr(request, request_type), request.instruction_id)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 635, in process_bundle
bundle_processor.process_bundle(instruction_id))
File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 1004, in process_bundle
element.data)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 227, in process_encoded
self.output(decoded_value)
File "apache_beam\runners\worker\operations.py", line 526, in apache_beam.runners.worker.operations.Operation.output
File "apache_beam\runners\worker\operations.py", line 528, in apache_beam.runners.worker.operations.Operation.output
File "apache_beam\runners\worker\operations.py", line 237, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 324, in apache_beam.runners.worker.operations.GeneralPurposeConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 905, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1507, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process
self.writer = self.sink.open_writer(init_result, str(uuid.uuid4()))
File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f
return fnc(self, *args, **kwargs)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer
return FileBasedSinkWriter(self, writer_path)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in init
self.temp_handle = self.sink.open(temp_shard_path)
File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open
self._file_handle = super().open(temp_path)
File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f
return fnc(self, *args, **kwargs)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open
temp_path, self.mime_type, self.compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create
return filesystem.create(path, mime_type, compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in create
return self._path_open(path, 'wb', mime_type, compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open
raw_file = io.open(path, mode)
RuntimeError: FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles']
## Environment info
Python: 3.7.6
Windows 10 Pro
datasets :2.4.0
apache_beam: 2.41.0
mwparserfromhell: 0.6.4 | 4,916 |
https://github.com/huggingface/datasets/issues/4915 | FileNotFoundError while downloading wikipedia dataset for any language | [
"Hi @Shilpac20,\r\n\r\nAs explained in the Wikipedia dataset card: https://huggingface.co/datasets/wikipedia\r\n> You can find the full list of languages and dates [here](https://dumps.wikimedia.org/backup-index.html).\r\n\r\nThis means that, before passing a specific date, you should first make sure it is availabl... | ## Describe the bug
Hi, I am currently trying to download the wikipedia dataset using
`load_dataset("wikipedia", language="aa", date="20220401", split="train", beam_runner='DirectRunner')`. However, I end up getting a `FileNotFoundError`. I get this error for any language I try to download.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("wikipedia", language="aa", date="20220401", split="train",beam_runner='DirectRunner')
```
## Expected results
to load the dataset
## Actual results
I am pasting the error trace here:
Downloading builder script: 35.9kB [00:00, ?B/s]
Downloading metadata: 30.4kB [00:00, 1.94MB/s]
Using custom data configuration 20220401.aa-date=20220401,language=aa
Downloading and preparing dataset wikipedia/20220401.aa to C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...
Downloading data: 100%|████████████████████████████████████████████████████████████| 11.1k/11.1k [00:00<00:00, 712kB/s]
Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.82s/it]
Extracting data files: 100%|█████████████████████████████████████████████████████████████████████| 1/1 [00:00<?, ?it/s]
Downloading data: 100%|███████████████████████████████████████████████████████████| 35.6k/35.6k [00:00<00:00, 84.3kB/s]
Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.93s/it]
Traceback (most recent call last):
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process
self.writer = self.sink.open_writer(init_result, str(uuid.uuid4()))
File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f
return fnc(self, *args, **kwargs)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer
return FileBasedSinkWriter(self, writer_path)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in __init__
self.temp_handle = self.sink.open(temp_shard_path)
File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open
self._file_handle = super().open(temp_path)
File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f
return fnc(self, *args, **kwargs)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open
temp_path, self.mime_type, self.compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create
return filesystem.create(path, mime_type, compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in create
return self._path_open(path, 'wb', mime_type, compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open
raw_file = io.open(path, mode)
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\Shilpa\\.cache\\huggingface\\datasets\\wikipedia\\20220401.aa-date=20220401,language=aa\\2.0.0\\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "G:/abc/temp.py", line 32, in <module>
beam_runner='DirectRunner')
File "G:\Python3.7\lib\site-packages\datasets\load.py", line 1751, in load_dataset
use_auth_token=use_auth_token,
File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 705, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 1394, in _download_and_prepare
pipeline_results = pipeline.run()
File "G:\Python3.7\lib\site-packages\apache_beam\pipeline.py", line 574, in run
return self.runner.run_pipeline(self, self._options)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\direct\direct_runner.py", line 131, in run_pipeline
return runner.run_pipeline(pipeline, options)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 201, in run_pipeline
options)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 212, in run_via_runner_api
return self.run_stages(stage_context, stages)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 443, in run_stages
runner_execution_context, bundle_context_manager, bundle_input)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 776, in _execute_bundle
bundle_manager))
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1000, in _run_bundle
data_input, data_output, input_timers, expected_timer_output)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1309, in process_bundle
result_future = self._worker_handler.control_conn.push(process_bundle_req)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\worker_handlers.py", line 380, in push
response = self.worker.do_instruction(request)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 598, in do_instruction
getattr(request, request_type), request.instruction_id)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 635, in process_bundle
bundle_processor.process_bundle(instruction_id))
File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 1004, in process_bundle
element.data)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 227, in process_encoded
self.output(decoded_value)
File "apache_beam\runners\worker\operations.py", line 526, in apache_beam.runners.worker.operations.Operation.output
File "apache_beam\runners\worker\operations.py", line 528, in apache_beam.runners.worker.operations.Operation.output
File "apache_beam\runners\worker\operations.py", line 237, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 324, in apache_beam.runners.worker.operations.GeneralPurposeConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 905, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1507, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process
self.writer = self.sink.open_writer(init_result, str(uuid.uuid4()))
File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f
return fnc(self, *args, **kwargs)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer
return FileBasedSinkWriter(self, writer_path)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in __init__
self.temp_handle = self.sink.open(temp_shard_path)
File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open
self._file_handle = super().open(temp_path)
File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f
return fnc(self, *args, **kwargs)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open
temp_path, self.mime_type, self.compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create
return filesystem.create(path, mime_type, compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in create
return self._path_open(path, 'wb', mime_type, compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open
raw_file = io.open(path, mode)
RuntimeError: FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\Shilpa\\.cache\\huggingface\\datasets\\wikipedia\\20220401.aa-date=20220401,language=aa\\2.0.0\\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles']
## Environment info
Python: 3.7.6
Windows 10 Pro
datasets :2.4.0
apache_beam: 2.41.0
mwparserfromhell: 0.6.4
| 4,915 |
https://github.com/huggingface/datasets/issues/4912 | datasets map() handles all data at a stroke and takes long time | [
"Hi ! Interesting question ;)\r\n\r\n> Which is better? Process in map() or in data-collator\r\n\r\nAs you said, both can be used in practice: map() if you want to preprocess before training, or a data-collator (or the equivalent `dataset.set_transform`) if you want to preprocess on-the-fly during training. Both op... | **1. Background**
The Hugging Face `datasets` package advises using `map()` to process data in batches. In the example code for pretraining a masked language model, `map()` is used to tokenize all the data in one go before the training loop.
The corresponding code:
```python
with accelerator.main_process_first():
    tokenized_datasets = raw_datasets.map(
        tokenize_function,
        batched=True,
        num_proc=args.preprocessing_num_workers,
        remove_columns=column_names,
        load_from_cache_file=not args.overwrite_cache,
        desc="Running tokenizer on every text in dataset",
    )
```
**2. The problem**
Thus, when I try the same pretraining code with a much larger corpus, it takes quite a long time to tokenize.
Alternatively, we can tokenize the data in the data collator. That way, the program only tokenizes one batch at each training step and avoids getting stuck in a long upfront tokenization pass.
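For illustration only, a rough sketch of on-the-fly tokenization with `set_transform` (the model name, the `text` column and `corpus.txt` are assumptions, not taken from the original setup):
```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
raw_datasets = load_dataset("text", data_files={"train": "corpus.txt"})

def tokenize_on_the_fly(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

# The transform is applied lazily, whenever rows are accessed,
# so no tokenization happens before the training loop starts.
raw_datasets["train"].set_transform(tokenize_on_the_fly)
```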
**3. My question**
As described above, my questions are:
* **Which is better: processing in `map()` or in the data collator?**
* **Why does Hugging Face advise using the `map()` function?** There should be some advantages to using `map()`.
Thanks for your answers! | 4,912 |
https://github.com/huggingface/datasets/issues/4911 | [Tests] Ensure `datasets` supports renamed repositories | [
"You could also switch to using `huggingface_hub` more directly, where such a guarantee is already tested =)\r\n\r\ncc @Wauplin "
] | On https://hf.co/datasets you can rename a dataset (or sometimes move it to another user/org). The website handles redirections correctly and AFAIK `datasets` does as well.
However it would be nice to have an integration test to make sure we don't break support for renamed datasets.
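A rough sketch of what such a test could look like (hedged: the `hf_api` and `hf_token` fixtures and the dummy repo ids are assumptions; `HfApi.move_repo` is the client-side wrapper for the endpoint mentioned below):
```python
from datasets import load_dataset
from huggingface_hub import HfApi

def test_load_dataset_after_rename(hf_api: HfApi, hf_token: str):
    old_id = "__DUMMY_USER__/repo_old_name"
    new_id = "__DUMMY_USER__/repo_new_name"
    hf_api.move_repo(from_id=old_id, to_id=new_id, repo_type="dataset", token=hf_token)
    # Loading with the old name should follow the redirection transparently.
    ds = load_dataset(old_id, split="train")
    assert ds.num_rows > 0
```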
To implement this we can use the /api/repos/move endpoint on hub-ci to rename/move a repo (it is documented at https://huggingface.co/docs/hub/api) | 4,911 |
https://github.com/huggingface/datasets/issues/4910 | Identical keywords in build_kwargs and config_kwargs lead to TypeError in load_dataset_builder() | [
"I am getting similar error - `TypeError: type object got multiple values for keyword argument 'name'` while following this [tutorial](https://huggingface.co/docs/datasets/dataset_script#create-a-dataset-loading-script). I am getting this error with the `dataset-cli test` command.\r\n\r\n`datasets` version: 2.4.0",... | ## Describe the bug
In `load_dataset_builder()`, `build_kwargs` and `config_kwargs` can contain the same keywords, leading to a `TypeError("type object got multiple values for keyword argument 'xyz'")`.
I ran into this problem with the keyword: `base_path`. It might happen with other kwargs as well. I think a quickfix would be
```python
builder_cls = import_main_class(dataset_module.module_path)
builder_kwargs = dataset_module.builder_kwargs
data_files = builder_kwargs.pop("data_files", data_files)
config_name = builder_kwargs.pop("config_name", name)
hash = builder_kwargs.pop("hash")
base_path = builder_kwargs.pop("base_path")
```
and then pass base_path into `builder_cls`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("rotten_tomatoes", base_path="./sample_data")
```
## Expected results
The docs state: `**config_kwargs` — Keyword arguments to be passed to the [BuilderConfig](https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/builder_classes#datasets.BuilderConfig) and used in the [DatasetBuilder](https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/builder_classes#datasets.DatasetBuilder).
So I would expect to be able to pass the base_path into `load_dataset()`.
## Actual results
TypeError("type object got multiple values for keyword argument "base_path").
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: macOS-12.5-arm64-arm-64bit
- Python version: 3.8.9
- PyArrow version: 9.0.0
| 4,910 |
https://github.com/huggingface/datasets/issues/4907 | None Type error for swda datasets | [
"Thanks for reporting @hannan72 ! I couldn't reproduce the error on my side, can you share the full stack trace please ?",
"Thanks a lot for your response @lhoestq \r\nThe problem is solved accidentally today and I don't know exactly why it was happened yesterday.\r\nThe issue can be closed.",
"Ok, let us know ... | ## Describe the bug
I got a `'NoneType' object is not callable` error while loading the swda dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("swda")
```
## Expected results
Run without error
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Python version: 3.8.10
| 4,907 |
https://github.com/huggingface/datasets/issues/4906 | Can't import datasets AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import) | [
"Thanks for reporting, @OPterminator.\r\n\r\nHowever, we are not able to reproduce this issue.\r\n\r\nThere might be 2 reasons why you get this exception:\r\n- Either the name of your local Python file: if it is called `datasets.py` this could generate a circular import when trying to import the Hugging Face `data... | ## Describe the bug
A clear and concise description of what the bug is.
Not able to import datasets
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import os
os.environ["WANDB_API_KEY"] = "0" ## to silence warning
import numpy as np
import random
import sklearn
import matplotlib.pyplot as plt
import pandas as pd
import sys
import tensorflow as tf
import plotly.express as px
import transformers
import tokenizers
import nlp as nlp
import utils
import datasets
```
## Expected results
The import should work normally.
## Actual results
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-21-b3b5b0b62103> in <module>
13 import nlp as nlp
14 import utils
---> 15 import datasets
~\anaconda3\lib\site-packages\datasets\__init__.py in <module>
44 from .fingerprint import disable_caching, enable_caching, is_caching_enabled, set_caching_enabled
45 from .info import DatasetInfo, MetricInfo
---> 46 from .inspect import (
47 get_dataset_config_info,
48 get_dataset_config_names,
~\anaconda3\lib\site-packages\datasets\inspect.py in <module>
28 from .download.streaming_download_manager import StreamingDownloadManager
29 from .info import DatasetInfo
---> 30 from .load import dataset_module_factory, import_main_class, load_dataset_builder, metric_module_factory
31 from .utils.file_utils import relative_to_absolute_path
32 from .utils.logging import get_logger
~\anaconda3\lib\site-packages\datasets\load.py in <module>
53 from .iterable_dataset import IterableDataset
54 from .metric import Metric
---> 55 from .packaged_modules import (
56 _EXTENSION_TO_MODULE,
57 _MODULE_SUPPORTS_METADATA,
~\anaconda3\lib\site-packages\datasets\packaged_modules\__init__.py in <module>
4 from typing import List
5
----> 6 from .csv import csv
7 from .imagefolder import imagefolder
8 from .json import json
~\anaconda3\lib\site-packages\datasets\packaged_modules\csv\csv.py in <module>
13
14
---> 15 logger = datasets.utils.logging.get_logger(__name__)
16
17 _PANDAS_READ_CSV_NO_DEFAULT_PARAMETERS = ["names", "prefix"]
AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import)
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Windows-10-10.0.22000-SP0
- Python version: 3.8.8
- PyArrow version: 9.0.0
- Pandas version: 1.2.4
| 4,906 |
https://github.com/huggingface/datasets/issues/4902 | Name the default config `default` | [
"Addressed in #5331."
] | Currently, if a dataset has no configuration, a default configuration is created from the dataset name.
For example, for a dataset loaded from the hub repository, such as https://huggingface.co/datasets/user/dataset (repo id is `user/dataset`), the default configuration will be `user--dataset`.
It might be easier to handle if it were set to `default`, or another reserved word.
https://github.com/huggingface/datasets/issues/4900 | Dataset Viewer issue for asaxena1990/Dummy_dataset | [
"Seems to be linked to the use of the undocumented `_resolve_features` method in the dataset viewer backend:\r\n\r\n```\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\"asaxena1990/Dummy_dataset\", name=\"asaxena1990--Dummy_dataset\", split=\"train\", streaming=True)\r\nUsing custom data con... | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | 4,900 |
https://github.com/huggingface/datasets/issues/4898 | Dataset Viewer issue for timit_asr | [
"Yes, the dataset viewer is based on `datasets`, and the following does not work:\r\n\r\n```\r\n>>> from datasets import get_dataset_split_names\r\n>>> get_dataset_split_names('timit_asr')\r\nDownloading builder script: 7.48kB [00:00, 6.69MB/s]\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/data... | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | 4,898 |
https://github.com/huggingface/datasets/issues/4897 | datasets generate large arrow file | [
"Hi ! The cache files are the results of all the transforms you applied to the dataset using `map` for example.\r\nDid you run a transform that could potentially blow up the size of the dataset ?",
"@lhoestq,\r\nI don't remember, but I can't imagine what kind of transform may generate data that grow over 200 time... | Checking the large file in disk, and found the large cache file in the cifar10 data directory:

As we know, the cifar10 dataset is only ~130 MB, but the cache file is almost 30 GB, so there may be a problem here.
https://github.com/huggingface/datasets/issues/4895 | load_dataset method returns Unknown split "validation" even if this dir exists | [
"I don't know the main problem but it looks like, it is ignoring the last directory in your case. So, create a directory called 'zzz' in the same folder as train, validation and test. if it doesn't work, create a directory called \"aaa\". It worked for me.\r\n",
"@SamSamhuns could you please try to load it with t... | ## Describe the bug
`datasets.load_dataset` returns a `ValueError: Unknown split "validation". Should be one of ['train', 'test'].` when running `load_dataset(local_data_dir_path, split="validation")`, even though the `validation` sub-directory exists in the local data path.
The data directories are as follows and attached to this issue:
```
test_data1
|_ train
   |_ 1012.png
   |_ metadata.jsonl
   ...
|_ test
   ...
|_ validation
   |_ 234.png
   |_ metadata.jsonl
   ...
test_data2
|_ train
   |_ train_1012.png
   |_ metadata.jsonl
   ...
|_ test
   ...
|_ validation
   |_ val_234.png
   |_ metadata.jsonl
   ...
```
They contain the same image files and `metadata.jsonl`, but the images in `test_data2` have the split names prepended (i.e. `train_1012.png`, `val_234.png`), while the images in `test_data1` do not (i.e. `1012.png`, `234.png`).
I saw in another issue that `val` was not recognized as a split name, but here I would expect the files to take their split from the parent directory name, i.e. `val_*` files should become part of the validation split?
## Steps to reproduce the bug
```python
import datasets
datasets.logging.set_verbosity_error()
from datasets import load_dataset, get_dataset_split_names
# the following only finds train, validation and test splits correctly
path = "./test_data1"
print("######################", get_dataset_split_names(path), "######################")
dataset_list = []
for spt in ["train", "test", "validation"]:
dataset = load_dataset(path, split=spt)
dataset_list.append(dataset)
# the following only finds train and test splits
path = "./test_data2"
print("######################", get_dataset_split_names(path), "######################")
dataset_list = []
for spt in ["train", "test", "validation"]:
dataset = load_dataset(path, split=spt)
dataset_list.append(dataset)
```
## Expected results
```
###################### ['train', 'test', 'validation'] ######################
###################### ['train', 'test', 'validation'] ######################
```
## Actual results
```
Traceback (most recent call last):
File "test_data_loader.py", line 11, in <module>
dataset = load_dataset(path, split=spt)
File "/home/venv/lib/python3.8/site-packages/datasets/load.py", line 1758, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 893, in as_dataset
datasets = map_nested(
File "/home/venv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 385, in map_nested
return function(data_struct)
File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 924, in _build_single_dataset
ds = self._as_dataset(
File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 993, in _as_dataset
dataset_kwargs = ArrowReader(self._cache_dir, self.info).read(
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 211, in read
files = self.get_file_instructions(name, instructions, split_infos)
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 184, in get_file_instructions
file_instructions = make_file_instructions(
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 107, in make_file_instructions
absolute_instructions = instruction.to_absolute(name2len)
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 616, in to_absolute
return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 616, in <listcomp>
return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 433, in _rel_to_abs_instr
raise ValueError(f'Unknown split "{split}". Should be one of {list(name2len)}.')
ValueError: Unknown split "validation". Should be one of ['train', 'test'].
```
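A possible workaround, not from the original report: pass explicit per-split `data_files` patterns to the `imagefolder` builder so split detection does not depend on how file or directory names are parsed (whether the `metadata.jsonl` files are picked up this way should be double-checked):
```python
from datasets import load_dataset

data_files = {
    "train": "test_data2/train/**",
    "test": "test_data2/test/**",
    "validation": "test_data2/validation/**",
}
dataset = load_dataset("imagefolder", data_files=data_files)
```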
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Linux Ubuntu 18.04
- Python version: 3.8.12
- PyArrow version: 9.0.0
Data files
[test_data1.zip](https://github.com/huggingface/datasets/files/9424463/test_data1.zip)
[test_data2.zip](https://github.com/huggingface/datasets/files/9424468/test_data2.zip)
| 4,895 |
https://github.com/huggingface/datasets/issues/4893 | Oversampling strategy for iterable datasets in `interleave_datasets` | [
"Hi @lhoestq,\r\nI plunged into the code and it should be manageable for me to work on it!\r\n#take\r\n\r\nAlso, setting `d1`, `d2` and `d3` as you did raised a `SyntaxError: 'yield' inside list comprehension` for me, on Python 3.8.10.\r\nThe following snippet works for me though:\r\n```\r\nd1 = IterableDataset(Exa... | In https://github.com/huggingface/datasets/pull/4831 @ylacombe added an oversampling strategy for `interleave_datasets`. However right now it doesn't work for datasets loaded using `load_dataset(..., streaming=True)`, which are `IterableDataset` objects.
It would be nice to expand `interleave_datasets` for iterable datasets as well to support this oversampling strategy
```python
>>> from datasets.iterable_dataset import IterableDataset, ExamplesIterable
>>> def gen(samples):
...     for i in samples:
...         yield i, {"a": i}
>>> d1 = IterableDataset(ExamplesIterable(gen, {"samples": [0, 1, 2]}))
>>> d2 = IterableDataset(ExamplesIterable(gen, {"samples": [10, 11, 12, 13]}))
>>> d3 = IterableDataset(ExamplesIterable(gen, {"samples": [20, 21, 22, 23, 24]}))
>>> dataset = interleave_datasets([d1, d2, d3]) # is supported
>>> [x["a"] for x in dataset]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted") # is not supported yet
>>> [x["a"] for x in dataset]
[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24]
```
This can be implemented by adding the strategy to both `CyclingMultiSourcesExamplesIterable` and `RandomlyCyclingMultiSourcesExamplesIterable` used in `_interleave_iterable_datasets` in `iterable_dataset.py`
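For reference, a standalone sketch of the "all_exhausted" cycling logic over plain Python iterables (not the actual `datasets` classes; exactly when to stop after the last source is exhausted is a design detail the real implementation would pin down):
```python
from itertools import cycle

def interleave_all_exhausted(iterables):
    """Alternate between sources, restarting exhausted ones (oversampling)
    until every source has been fully consumed at least once."""
    iterators = [iter(it) for it in iterables]
    exhausted = [False] * len(iterables)
    for i in cycle(range(len(iterables))):
        try:
            yield next(iterators[i])
        except StopIteration:
            exhausted[i] = True
            if all(exhausted):
                return
            iterators[i] = iter(iterables[i])  # restart this source and keep going
            yield next(iterators[i])

print(list(interleave_all_exhausted([[0, 1, 2], [10, 11, 12, 13], [20, 21, 22, 23, 24]])))
```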
I would be happy to share some guidance if anyone would like to give it a shot :) | 4,893 |
https://github.com/huggingface/datasets/issues/4889 | torchaudio 11.0 yields different results than torchaudio 12.1 when loading MP3 | [
"Maybe we can just pass this along to torchaudio @lhoestq @albertvillanova ? It be great if you could investigate if the errors lies in datasets or in torchaudio.",
"torchaudio did a change in [0.12](https://github.com/pytorch/audio/releases/tag/v0.12.0) on MP3 decoding (which affects common voice):\r\n> MP3 deco... | ## Describe the bug
When loading Common Voice with torchaudio 0.11.0 the results are different to 0.12.1 which leads to problems in transformers see: https://github.com/huggingface/transformers/pull/18749
## Steps to reproduce the bug
If you run the following code once with `torchaudio==0.11.0+cu102` and `torchaudio==0.12.1+cu102` you can see that the tensors differ. This is a pretty big breaking change and makes some integration tests fail in Transformers.
```python
#!/usr/bin/env python3
from datasets import load_dataset
import datasets
import numpy as np
import torch
import torchaudio
print("torch vesion", torch.__version__)
print("torchaudio vesion", torchaudio.__version__)
save_audio = True
load_audios = False
if save_audio:
ds = load_dataset("common_voice", "en", split="train", streaming=True)
ds = ds.cast_column("audio", datasets.Audio(sampling_rate=16_000))
ds_iter = iter(ds)
sample = next(ds_iter)
np.save(f"audio_sample_{torch.__version__}", sample["audio"]["array"])
print(sample["audio"]["array"])
if load_audios:
array_torch_11 = np.load("/home/patrick/audio_sample_1.11.0+cu102.npy")
print("Array 11 Shape", array_torch_11.shape)
print("Array 11 abs sum", np.sum(np.abs(array_torch_11)))
array_torch_12 = np.load("/home/patrick/audio_sample_1.12.1+cu102.npy")
print("Array 12 Shape", array_torch_12.shape)
print("Array 12 abs sum", np.sum(np.abs(array_torch_12)))
```
Having saved the tensors the print output yields:
```
torch vesion 1.12.1+cu102
torchaudio vesion 0.12.1+cu102
Array 11 Shape (122880,)
Array 11 abs sum 1396.4988
Array 12 Shape (123264,)
Array 12 abs sum 1396.5193
```
## Expected results
torchaudio 0.11.0 and 0.12.1 should yield the same results.
## Actual results
See above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.1.1.dev0
- Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyArrow version: 6.0.1
- Pandas version: 1.4.2
| 4,889 |
https://github.com/huggingface/datasets/issues/4888 | Dataset Viewer issue for subjqa | [
"It's a bug in the viewer, thanks for reporting it. We're hoping to update to a new version in the next few days which should fix it.",
"Fixed \r\n\r\nhttps://huggingface.co/datasets/subjqa\r\n\r\n<img width=\"1040\" alt=\"Capture d’écran 2022-09-08 à 10 23 26\" src=\"https://user-images.githubusercontent.com/1... | ### Link
https://huggingface.co/datasets/subjqa
### Description
Getting the following error for this dataset:
```
Status code: 500
Exception: Status500Error
Message: 2 or more items returned, instead of 1
```
Not sure what's causing it though 🤔
### Owner
Yes | 4,888 |
https://github.com/huggingface/datasets/issues/4886 | Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid | [
"Hi! IIRC one of the files in this dataset is corrupted due to https://github.com/huggingface/datasets/pull/4081 (fixed now).\r\n\r\n@NielsRogge Could you please re-generate and re-push this dataset (or I can do it if you share the generation script)?",
"Could you put something in place to catch these problems? ... | ## Describe the bug
Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('huggan/CelebA-HQ')
```
## Expected results
See https://colab.research.google.com/drive/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing#scrollTo=N3ml_7f8kzDd
## Actual results
```
File "/home/jean/projects/cold_diffusion/celebA.py", line 4, in <module>
dataset = load_dataset('huggan/CelebA-HQ')
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/load.py", line 1793, in load_dataset
builder_instance.download_and_prepare(
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 1274, in _prepare_split
for key, table in logging.tqdm(
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 67, in _generate_tables
parquet_file = pq.ParquetFile(f)
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/pyarrow/parquet/__init__.py", line 286, in __init__
self.reader.open(
File "pyarrow/_parquet.pyx", line 1227, in pyarrow._parquet.ParquetReader.open
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: datasets-2.4.1.dev0
- Platform: Ubuntu 18.04
- Python version: 3.10
- PyArrow version: pyarrow 9.0.0
| 4,886 |
https://github.com/huggingface/datasets/issues/4885 | Create dataset from list of dicts | [
"Hi @sanderland, thanks for your enhancement proposal.\r\n\r\nI agree with you that this would be useful.\r\n\r\nPlease note that under the hood, we use PyArrow tables as backend:\r\n- The implementation of `Dataset.from_dict` uses the PyArrow `Table.from_pydict`\r\n\r\nTherefore, I would suggest:\r\n- Implementin... | I often find myself with data from a variety of sources, and a list of dicts is very common among these.
However, converting this to a Dataset is a little awkward, requiring either
```Dataset.from_pandas(pd.DataFrame(formatted_training_data))```
which can error out on some more exotic values, such as 2-d arrays, for reasons that are not entirely clear:
> ArrowInvalid: ('Can only convert 1-dimensional array values', 'Conversion failed for column labels with type object')
Alternatively:
```Dataset.from_dict({k: [s[k] for s in formatted_training_data] for k in formatted_training_data[0].keys()})```
Which works, but is a little ugly.
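In the meantime, a small helper can hide that conversion; a minimal sketch (`dataset_from_records` is just an illustrative name, not an existing `datasets` API):
```python
from typing import Any, Dict, List

from datasets import Dataset


def dataset_from_records(records: List[Dict[str, Any]]) -> Dataset:
    # Use the union of keys so records with missing fields still work (filled with None).
    keys = {key for record in records for key in record}
    columns = {key: [record.get(key) for record in records] for key in keys}
    return Dataset.from_dict(columns)


ds = dataset_from_records([{"text": "a", "label": 0}, {"text": "b", "label": 1}])
print(ds)  # Dataset with features ['text', 'label'] and 2 rows (column order may vary)
```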
**Describe the solution you'd like**
Either `.from_dict` accepting a list of dicts, or a `.from_records` function accepting such.
I am happy to PR this; I just wanted to check that you are happy to accept it, that I haven't missed something obvious, and which of the solutions would be preferred.
| 4,885 |
https://github.com/huggingface/datasets/issues/4883 | With dataloader RSS memory consumed by HF datasets monotonically increases | [
"Are you sure there is a leak? How can I see it? You shared the script but not the output which you believe should indicate a leak.\r\n\r\nI modified your reproduction script to print only once per try as your original was printing too much info and you absolutely must add `gc.collect()` when doing any memory measu... | ## Describe the bug
When HF datasets is used in conjunction with the PyTorch DataLoader, the RSS memory of the process keeps increasing when it should stay constant.
## Steps to reproduce the bug
Run and observe the output of this snippet which logs RSS memory.
```python
import psutil
import os
from transformers import BertTokenizer
from datasets import load_dataset
from torch.utils.data import DataLoader
BATCH_SIZE = 32
NUM_TRIES = 10
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
def transform(x):
    x.update(tokenizer(x["text"], return_tensors="pt", max_length=64, padding="max_length", truncation=True))
    x.pop("text")
    x.pop("label")
    return x
dataset = load_dataset("imdb", split="train")
dataset.set_transform(transform)
train_loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=4)
mem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
count = 0
while count < NUM_TRIES:
    for idx, batch in enumerate(train_loader):
        mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
        print(count, idx, mem_after - mem_before)
    count += 1
```
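As noted in the comments, RSS readings are only meaningful after forcing a garbage-collection pass; a small tweak to the measurement above (a sketch, not the full reproduction):
```python
import gc
import os

import psutil


def rss_mb() -> float:
    gc.collect()  # drop unreachable Python objects before reading RSS
    return psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
```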
## Expected results
Memory should not increase after initial setup and loading of the dataset
## Actual results
Memory continuously increases as can be seen in the log.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.13
- PyArrow version: 7.0.0
| 4,883 |
https://github.com/huggingface/datasets/issues/4881 | Language names and language codes: connecting to a big database (rather than slow enrichment of custom list) | [
"Thanks for opening this discussion, @alexis-michaud.\r\n\r\nAs the language validation procedure is shared with other Hugging Face projects, I'm tagging them as well.\r\n\r\nCC: @huggingface/moon-landing ",
"on the Hub side, there is not fine grained validation we just check that `language:` contains an array of... | **The problem:**
Language diversity is an important dimension of the diversity of datasets. To find one's way around datasets, being able to search by language name and by standardized codes appears crucial.
Currently the list of language codes is [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/resources/languages.json), right? At about 1,500 entries, it is roughly at 1/4th of the world's diversity of extant languages. (Probably less, as the list of 1,418 contains variants that are linguistically very close: 108 varieties of English, for instance.)
Looking forward to ever increasing coverage, how will the list of language names and language codes improve over time?
Enrichment of the custom list by HFT contributors (like [here](https://github.com/huggingface/datasets/pull/4880)) has several issues:
* progress is likely to be slow:

(input required from reviewers, etc.)
* the more contributors, the less consistency can be expected among contributions. No need to elaborate on how much confusion is likely to ensue as datasets accumulate.
* there is no information on which language relates with which: no encoding of the special closeness between the languages of the Northwestern Germanic branch (English+Dutch+German etc.), for instance. Information on phylogenetic closeness can be relevant to run experiments on transfer of technology from one language to its close relatives.
**A solution that seems desirable:**
Connecting to an established database that (i) aims at full coverage of the world's languages and (ii) has information on higher-level groupings, alternative names, etc.
It takes a lot of hard work to do such databases. Two important initiatives are [Ethnologue](https://www.ethnologue.com/) (ISO standard) and [Glottolog](https://glottolog.org/). Both have pros and cons. Glottolog contains references to Ethnologue identifiers, so adopting Glottolog entails getting the advantages of both sets of language codes.
Both seem technically accessible & 'developer-friendly'. Glottolog has a [GitHub repo](https://github.com/glottolog/glottolog). For Ethnologue, harvesting tools have been devised (see [here](https://github.com/lyy1994/ethnologue); I did not try it out).
In case a conversation with linguists seemed in order here, I'd be happy to participate ('pro bono', of course), & to rustle up more colleagues as useful, to help this useful development happen.
With appreciation of HFT, | 4,881 |
https://github.com/huggingface/datasets/issues/4878 | [not really a bug] `identical_ok` is deprecated in huggingface-hub's `upload_file` | [
"Resolved via https://github.com/huggingface/datasets/pull/4937."
] | In the huggingface-hub dependency, the `identical_ok` argument has no effect in `upload_file` (and it will be removed soon)
See
https://github.com/huggingface/huggingface_hub/blob/43499582b19df1ed081a5b2bd7a364e9cacdc91d/src/huggingface_hub/hf_api.py#L2164-L2169
It's used here:
https://github.com/huggingface/datasets/blob/fcfcc951a73efbc677f9def9a8707d0af93d5890/src/datasets/dataset_dict.py#L1373-L1381
https://github.com/huggingface/datasets/blob/fdcb8b144ce3ef241410281e125bd03e87b8caa1/src/datasets/arrow_dataset.py#L4354-L4362
https://github.com/huggingface/datasets/blob/fdcb8b144ce3ef241410281e125bd03e87b8caa1/src/datasets/arrow_dataset.py#L4197-L4213
We should remove it.
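The change itself is mechanical; a sketch of what a call site looks like once the argument is dropped (the repo name and paths here are placeholders):
```python
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="data/train.parquet",  # placeholder local file
    path_in_repo="data/train.parquet",
    repo_id="username/my-dataset",         # placeholder repo
    repo_type="dataset",
    # `identical_ok` is simply no longer passed: it has no effect anymore.
)
```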
Maybe the third code sample has an unexpected behavior since it uses the non-default value `identical_ok = False`, but the argument is ignored. | 4,878 |
https://github.com/huggingface/datasets/issues/4876 | Move DatasetInfo from `datasets_infos.json` to the YAML tags in `README.md` | [
"also @osanseviero @Pierrci @SBrandeis potentially",
"Love this in principle 🚀 \r\n\r\nLet's keep in mind users might rely on `dataset_infos.json` already.\r\n\r\nI'm not convinced by the two-syntax solution, wouldn't it be simpler to have only one syntax with a `default` config for datasets with only one config... | Currently there are two places to find metadata for datasets:
- datasets_infos.json, which contains **per dataset config**
- description
- citation
- license
- splits and sizes
- checksums of the data files
- feature types
- and more
- YAML tags, which contain
- license
- language
- train-eval-index
- and more
It would be nice to have a single place instead. We can rely on the YAML tags more than the JSON file for consistency with models. And it would all be indexed by our back-end directly, which is nice to have.
One way would be to move everything to the YAML tags except the checksums (there can be tens of thousands of them). The description/citation is already in the dataset card so we probably don't need to have them in the YAML card, it would be redundant.
Here is an example for SQuAD
```yaml
download_size: 35142551
dataset_size: 89789763
version: 1.0.0
splits:
- name: train
  num_examples: 87599
  num_bytes: 79317110
- name: validation
  num_examples: 10570
  num_bytes: 10472653
features:
- name: id
  dtype: string
- name: title
  dtype: string
- name: context
  dtype: string
- name: question
  dtype: string
- name: answers
  struct:
  - name: text
    list:
      dtype: string
  - name: answer_start
    list:
      dtype: int32
```
Since there is only one configuration for SQuAD, this structure is ok. For datasets with several configs we can see in a second step, but IMO it would be ok to have these fields per config using another syntax
```yaml
configs:
- config: unlabeled
  splits:
  - name: train
    num_examples: 10000
  features:
  - name: text
    dtype: string
- config: labeled
  splits:
  - name: train
    num_examples: 100
  features:
  - name: text
    dtype: string
  - name: label
    dtype: ClassLabel
    names:
    - negative
    - positive
```
So in the end you could specify a YAML tag either at the top level (for all configs) or per config in the `configs` field
Alternatively, we could keep config-specific stuff in the `dataset_infos.json` as it is today.
Not sure yet what's the best approach here but cc @julien-c @mariosasko @albertvillanova @polinaeterna for feedback :) | 4,876 |
https://github.com/huggingface/datasets/issues/4875 | `_resolve_features` ignores the token | [
"Hi ! Your HF_ENDPOINT seems wrong because of the extra \"/\"\r\n```diff\r\n- os.environ[\"HF_ENDPOINT\"] = \"https://hub-ci.huggingface.co/\"\r\n+ os.environ[\"HF_ENDPOINT\"] = \"https://hub-ci.huggingface.co\"\r\n```\r\n\r\ncan you try again without the extra \"/\" ?",
"Oh, yes, sorry, but it's not the issue.\r... | ## Describe the bug
When calling [`_resolve_features()`](https://github.com/huggingface/datasets/blob/54b532a8a2f5353fdb0207578162153f7b2da2ec/src/datasets/iterable_dataset.py#L1255) on a gated dataset, ie. a dataset which requires a token to be loaded, the token seems to be ignored even if it has been provided to `load_dataset` before.
## Steps to reproduce the bug
```python
import os
os.environ["HF_ENDPOINT"] = "https://hub-ci.huggingface.co/"
hf_token = "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"
from datasets import load_dataset
# public
dataset_name = "__DUMMY_DATASETS_SERVER_USER__/repo_csv_data-16612654226756"
config_name = "__DUMMY_DATASETS_SERVER_USER__--repo_csv_data-16612654226756"
split_name = "train"
iterable_dataset = load_dataset(
dataset_name,
name=config_name,
split=split_name,
streaming=True,
use_auth_token=hf_token,
)
iterable_dataset = iterable_dataset._resolve_features()
print(iterable_dataset.features)
# gated
dataset_name = "__DUMMY_DATASETS_SERVER_USER__/repo_csv_data-16612654317644"
config_name = "__DUMMY_DATASETS_SERVER_USER__--repo_csv_data-16612654317644"
split_name = "train"
iterable_dataset = load_dataset(
dataset_name,
name=config_name,
split=split_name,
streaming=True,
use_auth_token=hf_token,
)
try:
    iterable_dataset = iterable_dataset._resolve_features()
except FileNotFoundError as e:
    print("FAILS")
```
## Expected results
I expect to have the same result on a public dataset and on a gated (or private) dataset, if the token has been provided.
## Actual results
An exception is thrown on gated datasets.
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.15.0-1017-aws-x86_64-with-glibc2.35
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.4.2 | 4,875 |
https://github.com/huggingface/datasets/issues/4873 | Multiple dataloader memory error | [
"Hi!\r\n\r\n200+ data loaders is a lot. Have you tried to reduce the number of datasets by concatenating/interleaving the ones with the same structure/task (the API is `{concatenate_datasets/interleave_datasets}([dset1, ..., dset_N])`)?",
"Hi @mariosasko, thank you for your reply. I tried pre-concatenating differ... | For the use of multiple datasets and tasks, we use around more than 200+ dataloaders, then pass it into `dataloader1, dataloader2, ..., dataloader200=accelerate.prepare(dataloader1, dataloader2, ..., dataloader200)`
It causes the memory error when generating batches. Any solutions to it?
```bash
File "/home/xxx/my_code/src/utils/data_utils.py", line 54, in generate_batch
x = next(iterator)
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/accelerate/data_loader.py", line 301, in __iter__
for batch in super().__iter__():
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
data = self._next_data()
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 28, in fetch
data.append(next(self.dataset_iter))
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/accelerate/data_loader.py", line 249, in __iter__
for element in self.dataset:
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 503, in __iter__
for key, example in self._iter():
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 500, in _iter
yield from ex_iterable
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 231, in __iter__
new_key = "_".join(str(key) for key in keys)
MemoryError
``` | 4,873 |
https://github.com/huggingface/datasets/issues/4865 | Dataset Viewer issue for MoritzLaurer/multilingual_nli | [
"Thanks for reporting @MoritzLaurer.\r\n\r\nCurrently, the dataset preview is working properly: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli\r\n\r\nPlease note that when a dataset is modified, it might take some time until the preview is completely updated.\r\n\r\n@severo might it be worth adding ... | ### Link
_No response_
### Description
I've just uploaded a new dataset to the hub and the viewer does not work for some reason, see here: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli
It displays the error:
```
Status code: 400
Exception: Status400Error
Message: The dataset does not exist.
```
Weirdly enough the dataviewer works for an earlier version of the same dataset. The only difference is that it is smaller, but I'm not aware of other changes I have made: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli_test
Do you know why the dataviewer is not working?
### Owner
_No response_ | 4,865 |
https://github.com/huggingface/datasets/issues/4864 | Allow pathlib PosixPath in Dataset.read_json | [
"This same error will occur using `ds = datasets.load_dataset('json', data_files=['test.jsonl'])`",
"@cccntu I want to make a quick fix for this, but I am struggling to find where the json dataset builder is. Do you know?",
"@vvvm23 I think you mean think:\r\n```python\r\nds = datasets.load_dataset('json', data... | **Is your feature request related to a problem? Please describe.**
```
from pathlib import Path
from datasets import Dataset
ds = Dataset.read_json(Path('data.json'))
```
causes an error
```
AttributeError: 'PosixPath' object has no attribute 'decode'
```
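In the meantime, casting the path to `str` sidesteps the error; a minimal sketch using the public `Dataset.from_json` constructor (assuming a local `data.json`):
```python
from pathlib import Path

from datasets import Dataset

# Casting to str avoids the PosixPath being passed where a string path is expected.
ds = Dataset.from_json(str(Path("data.json")))
```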
**Describe the solution you'd like**
It should be able to accept PosixPath and read the json from inside. | 4,864 |
https://github.com/huggingface/datasets/issues/4863 | TFDS wiki_dialog dataset to Huggingface dataset | [
"@albertvillanova any help ? The linked dataset is in beam format which is similar to wikipedia dataset in huggingface that you scripted..",
"Nvm, I was able to port it to huggingface datasets, will upload to the hub soon",
"https://huggingface.co/datasets/djaym7/wiki_dialog",
"Thanks for the addition, @djay... | ## Adding a Dataset
- **Name:** *Wiki_dialog*
- **Description:** https://github.com/google-research/dialog-inpainting#:~:text=JSON%20object%2C%20for-,example,-%3A
- **Paper:** https://arxiv.org/abs/2205.09073
- **Data:** https://github.com/google-research/dialog-inpainting
- **Motivation:** *Research and Development on biggest corpus of dialog data*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md).
| 4,863 |
https://github.com/huggingface/datasets/issues/4862 | Got "AttributeError: 'xPath' object has no attribute 'read'" when loading an excel dataset with my own code | [
"What's more, the downloaded data is actually a folder instead of an excel file.",
"Hi hi, instead of using `download_and_extract` function, I only use `download` function: `base_dir = Path(dl_manager.download(urls))`. It turns out that the code works for `datasets==2.2.2`, however, it doesn't work with `datasets... | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
# The dataset function is as follows:
from pathlib import Path
from typing import Dict, List, Tuple
import datasets
import pandas as pd
_CITATION = """\
"""
_DATASETNAME = "jadi_ide"
_DESCRIPTION = """\
"""
_HOMEPAGE = ""
_LICENSE = "Unknown"
_URLS = {
_DATASETNAME: "https://github.com/fathanick/Javanese-Dialect-Identification-from-Twitter-Data/raw/main/Update 16K_Dataset.xlsx",
}
_SOURCE_VERSION = "1.0.0"
class JaDi_Ide(datasets.GeneratorBasedBuilder):
    SOURCE_VERSION = datasets.Version(_SOURCE_VERSION)

    BUILDER_CONFIGS = [
        NusantaraConfig(
            name="jadi_ide_source",
            version=SOURCE_VERSION,
            description="JaDi-Ide source schema",
            schema="source",
            subset_id="jadi_ide",
        ),
    ]

    DEFAULT_CONFIG_NAME = "source"

    def _info(self) -> datasets.DatasetInfo:
        if self.config.schema == "source":
            features = datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "text": datasets.Value("string"),
                    "label": datasets.Value("string")
                }
            )
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
        """Returns SplitGenerators."""
        # Dataset does not have predetermined split, putting all as TRAIN
        urls = _URLS[_DATASETNAME]
        base_dir = Path(dl_manager.download_and_extract(urls))
        data_files = {"train": base_dir}

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "filepath": data_files["train"],
                    "split": "train",
                },
            ),
        ]

    def _generate_examples(self, filepath: Path, split: str) -> Tuple[int, Dict]:
        """Yields examples as (key, example) tuples."""
        df = pd.read_excel(filepath, engine='openpyxl')
        df.columns = ["id", "text", "label"]

        if self.config.schema == "source":
            for row in df.itertuples():
                ex = {
                    "id": str(row.id),
                    "text": row.text,
                    "label": row.label,
                }
                yield row.id, ex
```
## Expected results
Expecting to load the dataset smoothly.
## Actual results
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 1751, in load_dataset
use_auth_token=use_auth_token,
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 705, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1227, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1216, in _prepare_split
desc=f"Generating {split_info.name} split",
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/home/xuyan/.cache/huggingface/modules/datasets_modules/datasets/jadi_ide/7a539f2b6f726defea8fbe36ceda17bae66c370f6d6c418e3a08d760ebef7519/jadi_ide.py", line 107, in _generate_examples
df = pd.read_excel(filepath, engine='openpyxl')
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/download/streaming_download_manager.py", line 701, in xpandas_read_excel
return pd.read_excel(BytesIO(filepath_or_buffer.read()), **kwargs)
AttributeError: 'xPath' object has no attribute 'read'
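As the comments point out, the `.xlsx` file is itself a ZIP archive, so `download_and_extract` unpacks it into a folder; downloading without extracting avoids this. A sketch of the changed `_split_generators` (everything else in the script above stays the same):
```python
    def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
        urls = _URLS[_DATASETNAME]
        # The URL points at a single Excel file, so only download it (no extraction).
        base_dir = Path(dl_manager.download(urls))
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepath": base_dir, "split": "train"},
            ),
        ]
```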
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.15.0-142-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.4
- PyArrow version: 9.0.0
- Pandas version: 0.25.1
| 4,862 |
https://github.com/huggingface/datasets/issues/4861 | Using disk for memory with the method `from_dict` | [
"This issue was also causing an OOM in @nateraw 's workflow and shows again that behavior is confusing - we should definitely switch to using the disk IMO"
] | **Is your feature request related to a problem? Please describe.**
I start with an empty dataset. In a loop, at each iteration, I create a new dataset with the method `from_dict` (based on some data I load) and I concatenate this new dataset with the one at the previous iteration. After some iterations, I have an OOM error.
**Describe the solution you'd like**
The method `from_dict` loads the data in RAM. It could be good to add an option to use the disk instead.
**Describe alternatives you've considered**
To solve the problem, I have to do an intermediate step where I save the new datasets at each iteration with `save_to_disk`. Once it's done, I open them all and concatenate them.
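A sketch of that workaround — `iter_chunks()` is a placeholder for whatever produces each batch of data, and the paths are arbitrary:
```python
from datasets import Dataset, concatenate_datasets, load_from_disk

chunk_dirs = []
for i, records in enumerate(iter_chunks()):  # placeholder: yields dicts of columns
    chunk = Dataset.from_dict(records)
    chunk_dir = f"chunks/chunk_{i}"
    chunk.save_to_disk(chunk_dir)            # write the chunk to disk
    chunk_dirs.append(chunk_dir)
    del chunk                                # free the in-memory table

# Re-open the chunks (memory-mapped from disk) and concatenate them.
dataset = concatenate_datasets([load_from_disk(d) for d in chunk_dirs])
```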
| 4,861 |
https://github.com/huggingface/datasets/issues/4859 | can't install using conda on Windows 10 | [] | ## Describe the bug
I wanted to install using conda or Anaconda navigator. That didn't work, so I had to install using pip.
## Steps to reproduce the bug
conda install -c huggingface -c conda-forge datasets
## Expected results
Should have indicated successful installation.
## Actual results
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
... took forever, so I cancelled it with ctrl-c
## Environment info
- `datasets` version: 2.4.0 # after installing with pip
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.9.12
- PyArrow version: 9.0.0
- Pandas version: 1.4.2
- conda version: 4.13.0
conda info
active environment : base
active env location : G:\anaconda2022
shell level : 1
user config file : C:\Users\michael\.condarc
populated config files : C:\Users\michael\.condarc
conda version : 4.13.0
conda-build version : 3.21.8
python version : 3.9.12.final.0
virtual packages : __cuda=11.1=0
__win=0=0
__archspec=1=x86_64
base environment : G:\anaconda2022 (writable)
conda av data dir : G:\anaconda2022\etc\conda
conda av metadata url : None
channel URLs : https://conda.anaconda.org/pytorch/win-64
https://conda.anaconda.org/pytorch/noarch
https://conda.anaconda.org/huggingface/win-64
https://conda.anaconda.org/huggingface/noarch
https://conda.anaconda.org/conda-forge/win-64
https://conda.anaconda.org/conda-forge/noarch
https://conda.anaconda.org/anaconda-fusion/win-64
https://conda.anaconda.org/anaconda-fusion/noarch
https://repo.anaconda.com/pkgs/main/win-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/win-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/msys2/win-64
https://repo.anaconda.com/pkgs/msys2/noarch
package cache : G:\anaconda2022\pkgs
C:\Users\michael\.conda\pkgs
C:\Users\michael\AppData\Local\conda\conda\pkgs
envs directories : G:\anaconda2022\envs
C:\Users\michael\.conda\envs
C:\Users\michael\AppData\Local\conda\conda\envs
platform : win-64
user-agent : conda/4.13.0 requests/2.27.1 CPython/3.9.12 Windows/10 Windows/10.0.19044
administrator : False
netrc file : None
offline mode : False
| 4,859 |
https://github.com/huggingface/datasets/issues/4858 | map() function removes columns when input_columns is not None | [
"Hi! Thanks for reporting! This looks like a bug. I've just opened a PR with the fix.",
"Awesome! Thank you. I'll close the issue once the PR gets merged. :-)",
"I guess we should reopen after the revert by:\r\n- #5006"
] | ## Describe the bug
The `map` function removes features from the dataset that are not present in the _input_columns_ list, even though those columns are not mentioned in the _remove_columns_ argument.
## Steps to reproduce the bug
```python
from datasets import Dataset
ds = Dataset.from_dict({"a" : [1,2,3],"b" : [0,1,0], "c" : [2,4,5]})
def double(x,y):
    x = x*2
    y = y*2
    return {"d" : x, "e" : y}
ds.map(double, input_columns=["a","c"])
```
## Expected results
```
Dataset({
features: ['a', 'b', 'c', 'd', 'e'],
num_rows: 3
})
```
## Actual results
```
Dataset({
features: ['a', 'c', 'd', 'e'],
num_rows: 3
})
```
In this specific example feature **b** should not be removed.
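Until this is fixed, a simple workaround (a sketch, reusing the `ds` defined above) is to drop `input_columns` and read the columns inside the mapped function:
```python
def double(example):
    return {"d": example["a"] * 2, "e": example["c"] * 2}

ds.map(double)  # keeps 'a', 'b' and 'c', and adds 'd' and 'e'
```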
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: linux (colab)
- Python version: 3.7.13
- PyArrow version: 6.0.1
| 4,858 |
https://github.com/huggingface/datasets/issues/4857 | No preprocessed wikipedia is working on huggingface/datasets | [
"Thanks for reporting @aninrusimha.\r\n\r\nPlease, note that the preprocessed datasets are still available, as described in the dataset card, e.g.: https://huggingface.co/datasets/wikipedia\r\n```python\r\nds = load_dataset(\"wikipedia\", \"20220301.en\")\r\n``` ",
"This is working now, but I was getting an error... | ## Describe the bug
20220301 wikipedia dump has been deprecated, so now there is no working wikipedia dump on huggingface
https://huggingface.co/datasets/wikipedia
https://dumps.wikimedia.org/enwiki/
| 4,857 |
https://github.com/huggingface/datasets/issues/4856 | file missing when load_dataset with openwebtext on windows | [
"I have tried to extract ```0015896-b1054262f7da52a0518521e29c8e352c.txt``` from ```17ecf461bfccd469a1fbc264ccb03731f8606eea7b3e2e8b86e13d18040bf5b3/urlsf_subset00-16_data.xz``` with 7-zip\r\nand put the file into cache_path ```F://huggingface/datasets/downloads/extracted/0901d27f43b7e9ac0577da0d0061c8c632ba0b70ecd... | ## Describe the bug
0015896-b1054262f7da52a0518521e29c8e352c.txt is missing when I run run_mlm.py with openwebtext. I checked the cache_path and cannot find 0015896-b1054262f7da52a0518521e29c8e352c.txt, but I can find this file inside 17ecf461bfccd469a1fbc264ccb03731f8606eea7b3e2e8b86e13d18040bf5b3/urlsf_subset00-16_data.xz with 7-Zip.
## Steps to reproduce the bug
```sh
python run_mlm.py --model_type roberta --tokenizer_name roberta-base --dataset_name openwebtext --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --do_train --do_eval --output_dir F:/model/roberta-base
```
or
```python
from datasets import load_dataset
load_dataset("openwebtext", None, cache_dir=None, use_auth_token=None)
```
## Expected results
Loading is successful
## Actual results
Traceback (most recent call last):
File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 1227, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 795, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 22] Invalid argument: 'F://huggingface/datasets/downloads/extracted/0901d27f43b7e9ac0577da0d0061c8c632ba0b70ecd1b4bfb21562d9b7486faa/0015896-b1054262f7da52a0518521e29c8e352c.txt'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: windows
- Python version: 3.8.5
- PyArrow version: 9.0.0
| 4,856 |
https://github.com/huggingface/datasets/issues/4855 | Dataset Viewer issue for super_glue | [
"Thanks for reporting @wzsxxa.\r\n\r\nHowever the \"super_glue\" dataset is rendered properly by the Dataset preview: https://huggingface.co/datasets/super_glue"
] | ### Link
https://huggingface.co/datasets/super_glue
### Description
can't view super_glue dataset on the web page
### Owner
_No response_ | 4,855 |
https://github.com/huggingface/datasets/issues/4852 | Bug in multilingual_with_para config of exams dataset and checksums error | [
"Hi @albertvillanova. Unfortunately I still get this error. Is this because the merge has yet to be released? Is there a way to track the release?",
"Hi @thesofakillers, yes you are right: the fix will be available after next release (it was planned for today; Monday at the latest).\r\n\r\nIn the meantime, you ca... | ## Describe the bug
There is a bug for "multilingual_with_para" config in exams dataset:
```python
ds = load_dataset("./datasets/exams", split="train")
```
raises:
```
KeyError: 'choices'
```
Moreover, there is a NonMatchingChecksumError:
```
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/mhardalov/exams-qa/raw/main/data/exams/multilingual/with_paragraphs/train_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/multilingual/with_paragraphs/dev_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/multilingual/with_paragraphs/test_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/test_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_bg_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_bg_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_hr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_hr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_hu_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_hu_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_it_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_it_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_mk_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_mk_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_pl_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_pl_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_pt_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_pt_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_sq_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_sq_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_sr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_sr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_tr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_tr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_vi_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_vi_with_para.jsonl.tar.gz']
```
CC: @thesofakillers | 4,852 |
https://github.com/huggingface/datasets/issues/4840 | Dataset Viewer issue for darragh/demo_data_raw3 | [
"do you have an idea of why it can occur @huggingface/datasets? The dataset consists of a single parquet file.",
"Thanks for reporting @severo.\r\n\r\nI'm not able to reproduce that error. I get instead:\r\n```\r\nFileNotFoundError: [Errno 2] No such file or directory: 'orix/data/ChiSig/唐合乐-9-3.jpg'\r\n```\r\n\r\... | ### Link
https://huggingface.co/datasets/darragh/demo_data_raw3
### Description
```
Exception: ValueError
Message: Arrow type extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>> does not have a datasets dtype equivalent.
```
reported by @NielsRogge
### Owner
No | 4,840 |
https://github.com/huggingface/datasets/issues/4839 | ImageFolder dataset builder does not read the validation data set if it is named as "val" | [
"#take"
] | **Is your feature request related to a problem? Please describe.**
Currently, the `'imagefolder'` dataset builder in [`load_dataset()`](https://github.com/huggingface/datasets/blob/2.4.0/src/datasets/load.py#L1541) only [supports](https://github.com/huggingface/datasets/blob/6c609a322da994de149b2c938f19439bca99408e/src/datasets/data_files.py#L31) the following names for the validation directory: `["validation", "valid", "dev"]`. When the validation directory is named `'val'`, the dataset will not have a validation split. I expected this to be a trivial task but ended up spending a lot of time before realizing that only the above names are supported.
Here's a minimal example of `val` not being recognized:
```python
import os
import numpy as np
import cv2
from datasets import load_dataset
# creating a dummy data set with the following structure:
# ROOT
# | -- train
# | ---- class_1
# | ---- class_2
# | -- val
# | ---- class_1
# | ---- class_2
ROOT = "data"
for which in ["train", "val"]:
    for class_name in ["class_1", "class_2"]:
        dir_name = os.path.join(ROOT, which, class_name)
        if not os.path.exists(dir_name):
            os.makedirs(dir_name)
        for i in range(10):
            cv2.imwrite(
                os.path.join(dir_name, f"{i}.png"),
                np.random.random((224, 224))
            )
# trying to create a data set
dataset = load_dataset(
"imagefolder",
data_dir=ROOT
)
>> dataset
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 20
})
})
# ^ note how the dataset only has a 'train' subset
```
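In the meantime, the split directories can be mapped explicitly through `data_files`; a sketch assuming the same directory layout as above:
```python
import os

from datasets import load_dataset

ROOT = "data"  # same layout as above: data/{train,val}/{class_1,class_2}/*.png
dataset = load_dataset(
    "imagefolder",
    data_files={
        "train": os.path.join(ROOT, "train", "**"),
        "validation": os.path.join(ROOT, "val", "**"),
    },
)
print(dataset)  # now has both 'train' and 'validation' splits
```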
**Describe the solution you'd like**
The suggestion is to add `"val"` to [that list](https://github.com/huggingface/datasets/blob/6c609a322da994de149b2c938f19439bca99408e/src/datasets/data_files.py#L31), as that's a commonly used name for the validation directory.
Also, in the documentation, explicitly mention that only these directory names are supported as train/val/test directories, to avoid confusion.
**Describe alternatives you've considered**
In the documentation, explicitly mention that only such directory names are supported as train/val/test directories without adding `val` to the above list.
**Additional context**
A question asked in the forum: [Loading an imagenet-style image dataset with train/val directories](https://discuss.huggingface.co/t/loading-an-imagenet-style-image-dataset-with-train-val-directories/21554)
https://github.com/huggingface/datasets/issues/4836 | Is it possible to pass multiple links to a split in load script? | [] | **Is your feature request related to a problem? Please describe.**
I wanted to use a Python loading script in Hugging Face datasets that uses different sources of text (it's essentially a compilation of multiple datasets plus my own dataset). Based on how `load_dataset` [works](https://huggingface.co/docs/datasets/loading), I assumed I could do something like below in my loading script:
```python
...
_URL = "MY_DATASET_URL/resolve/main/data/"
_URLS = {
"train": [
"FIRST_URL_TO.txt",
_URL + "train-00000-of-00001-676bfebbc8742592.parquet"
]
}
...
```
but when loading the dataset it raises the following error:
```python
File ~/.local/lib/python3.8/site-packages/datasets/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
702 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
703 if not downloaded_from_gcs:
--> 704 self._download_and_prepare(
705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
...
668 if isinstance(a, str):
669 # Force-cast str subclasses to str (issue #21127)
670 parts.append(str(a))
TypeError: expected str, bytes or os.PathLike object, not list
```
**Describe the solution you'd like**
I believe since it's possible for `load_dataset` to get list of URLs instead of just a URL for `train` split it can be possible here too.
**Describe alternatives you've considered**
An alternative solution would be to download all needed datasets locally and `push_to_hub` them all, but since the datasets I'm talking about are huge it's not among my options.
**Additional context**
I think loading `text` alongside the `parquet` is a completely different issue, but I believe I can figure it out by proposing a config for my dataset to load each entry of `_URLS['train']` separately, either with `load_dataset("text", ...)` or `load_dataset("parquet", ...)`.
| 4,836 |
https://github.com/huggingface/datasets/issues/4829 | Misalignment between card tag validation and docs | [
"(Note that the doc is aligned with the hub validation rules, and the \"ground truth\" is the hub validation rules given that they apply to all datasets, not just the canonical ones)",
"Instead of our own implementation, we now use `huggingface_hub`'s `DatasetCardData`, which has the correct type hint, so I think... | ## Describe the bug
As pointed out in other issue: https://github.com/huggingface/datasets/pull/4827#discussion_r943536284
the validation of the dataset card tags is not aligned with its documentation: e.g.
- implementation: `license: List[str]`
- docs: `license: Union[str, List[str]]`
They should be aligned.
CC: @julien-c
| 4,829 |
https://github.com/huggingface/datasets/issues/4820 | Terminating: fork() called from a process already using GNU OpenMP, this is unsafe. | [
"Fixed by installing either resampy<3 or resampy>=4"
Hi, when I try to run the prepare_dataset function in [fine tuning ASR tutorial 4](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb), I get this error:
Terminating: fork() called from a process already using GNU OpenMP, this is unsafe.
There are no other logs available, so I have no clue what is causing it.
```
def prepare_dataset(batch):
    audio = batch["path"]

    # batched output is "un-batched"
    batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
    batch["input_length"] = len(batch["input_values"])

    with processor.as_target_processor():
        batch["labels"] = processor(batch["text"]).input_ids
    return batch


data = data.map(prepare_dataset, remove_columns=data.column_names["train"],
                num_proc=4)
```
Specify the actual results or traceback.
There is no traceback except
`Terminating: fork() called from a process already using GNU OpenMP, this is unsafe.`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
| 4,820 |
https://github.com/huggingface/datasets/issues/4817 | Outdated Link for mkqa Dataset | [
"Thanks for reporting @liaeh, we are investigating this. "
] | ## Describe the bug
The URL used to download the mkqa dataset is outdated. It seems the URL to download the dataset is currently https://github.com/apple/ml-mkqa/blob/main/dataset/mkqa.jsonl.gz instead of https://github.com/apple/ml-mkqa/raw/master/dataset/mkqa.jsonl.gz (master branch has been renamed to main).
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("mkqa")
```
## Expected results
downloads the dataset
## Actual results
```python
Downloading builder script:
4.79k/? [00:00<00:00, 201kB/s]
Downloading metadata:
13.2k/? [00:00<00:00, 504kB/s]
Downloading and preparing dataset mkqa/mkqa (download: 11.35 MiB, generated: 34.29 MiB, post-processed: Unknown size, total: 45.65 MiB) to /home/lhr/.cache/huggingface/datasets/mkqa/mkqa/1.0.0/5401489c674c81257cf563417aaaa5de2c7e26a1090ce9b10eb0404f10003d4d...
Downloading data files: 0%
0/1 [00:00<?, ?it/s]
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Input In [3], in <cell line: 3>()
1 from datasets import load_dataset
----> 3 dataset = load_dataset("mkqa")
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/load.py:1746, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1743 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1745 # Download and prepare data
-> 1746 builder_instance.download_and_prepare(
1747 download_config=download_config,
1748 download_mode=download_mode,
1749 ignore_verifications=ignore_verifications,
1750 try_from_hf_gcs=try_from_hf_gcs,
1751 use_auth_token=use_auth_token,
1752 )
1754 # Build dataset for splits
1755 keep_in_memory = (
1756 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1757 )
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
702 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
703 if not downloaded_from_gcs:
--> 704 self._download_and_prepare(
705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
706 )
707 # Sync info
708 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/builder.py:1227, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos)
1226 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/builder.py:771, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
769 split_dict = SplitDict(dataset_name=self.name)
770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
773 # Checksums verification
774 if verify_infos and dl_manager.record_checksums:
File ~/.cache/huggingface/modules/datasets_modules/datasets/mkqa/5401489c674c81257cf563417aaaa5de2c7e26a1090ce9b10eb0404f10003d4d/mkqa.py:130, in Mkqa._split_generators(self, dl_manager)
128 # download and extract URLs
129 urls_to_download = _URLS
--> 130 downloaded_files = dl_manager.download_and_extract(urls_to_download)
132 return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]})]
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/download/download_manager.py:431, in DownloadManager.download_and_extract(self, url_or_urls)
415 def download_and_extract(self, url_or_urls):
416 """Download and extract given url_or_urls.
417
418 Is roughly equivalent to:
(...)
429 extracted_path(s): `str`, extracted paths of given URL(s).
430 """
--> 431 return self.extract(self.download(url_or_urls))
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/download/download_manager.py:309, in DownloadManager.download(self, url_or_urls)
306 download_func = partial(self._download, download_config=download_config)
308 start_time = datetime.now()
--> 309 downloaded_path_or_paths = map_nested(
310 download_func,
311 url_or_urls,
312 map_tuple=True,
313 num_proc=download_config.num_proc,
314 disable_tqdm=not is_progress_bar_enabled(),
315 desc="Downloading data files",
316 )
317 duration = datetime.now() - start_time
318 logger.info(f"Downloading took {duration.total_seconds() // 60} min")
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py:393, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)
391 num_proc = 1
392 if num_proc <= 1 or len(iterable) <= num_proc:
--> 393 mapped = [
394 _single_map_nested((function, obj, types, None, True, None))
395 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
396 ]
397 else:
398 split_kwds = [] # We organize the splits ourselve (contiguous splits)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py:394, in <listcomp>(.0)
391 num_proc = 1
392 if num_proc <= 1 or len(iterable) <= num_proc:
393 mapped = [
--> 394 _single_map_nested((function, obj, types, None, True, None))
395 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
396 ]
397 else:
398 split_kwds = [] # We organize the splits ourselve (contiguous splits)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py:330, in _single_map_nested(args)
328 # Singleton first to spare some computation
329 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 330 return function(data_struct)
332 # Reduce logging to keep things readable in multiprocessing with tqdm
333 if rank is not None and logging.get_verbosity() < logging.WARNING:
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/download/download_manager.py:335, in DownloadManager._download(self, url_or_filename, download_config)
332 if is_relative_path(url_or_filename):
333 # append the relative path to the base_path
334 url_or_filename = url_or_path_join(self._base_path, url_or_filename)
--> 335 return cached_path(url_or_filename, download_config=download_config)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py:185, in cached_path(url_or_filename, download_config, **download_kwargs)
181 url_or_filename = str(url_or_filename)
183 if is_remote_url(url_or_filename):
184 # URL, so get it from the cache (downloading if necessary)
--> 185 output_path = get_from_cache(
186 url_or_filename,
187 cache_dir=cache_dir,
188 force_download=download_config.force_download,
189 proxies=download_config.proxies,
190 resume_download=download_config.resume_download,
191 user_agent=download_config.user_agent,
192 local_files_only=download_config.local_files_only,
193 use_etag=download_config.use_etag,
194 max_retries=download_config.max_retries,
195 use_auth_token=download_config.use_auth_token,
196 ignore_url_params=download_config.ignore_url_params,
197 download_desc=download_config.download_desc,
198 )
199 elif os.path.exists(url_or_filename):
200 # File, and it exists.
201 output_path = url_or_filename
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py:530, in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc)
525 raise FileNotFoundError(
526 f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been"
527 " disabled. To enable file online look-ups, set 'local_files_only' to False."
528 )
529 elif response is not None and response.status_code == 404:
--> 530 raise FileNotFoundError(f"Couldn't find file at {url}")
531 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
532 if head_error is not None:
FileNotFoundError: Couldn't find file at https://github.com/apple/ml-mkqa/raw/master/dataset/mkqa.jsonl.gz
```
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 9.0.0
- Pandas version: 1.4.2
| 4,817 |
https://github.com/huggingface/datasets/issues/4815 | Outdated loading script for OPUS ParaCrawl dataset | [] | ## Describe the bug
Our loading script for OPUS ParaCrawl loads its 7.1 version. Current existing version is 9.
| 4,815 |
https://github.com/huggingface/datasets/issues/4814 | Support CSV as metadata file format in AudioFolder/ImageFolder | [] | Requested here: https://discuss.huggingface.co/t/how-to-structure-an-image-dataset-repo-using-the-image-folder-approach/21004. CSV is also used in AutoTrain for specifying metadata in image datasets. | 4,814 |