| id | number | title | body | state | html_url | created_at | updated_at | closed_at | user | labels | is_pull_request | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,352,539,075
| 4,903
|
Fix CI reporting
|
Fix CI so that it reports the default outcomes (failed and error) in addition to the custom ones (xfailed and xpassed) in the test summary.
This PR fixes a regression introduced by:
- #4845
That PR introduced the reporting of xfailed and xpassed, but wrongly removed the reporting of the default failed and error outcomes.
|
closed
|
https://github.com/huggingface/datasets/pull/4903
| 2022-08-26T17:16:30
| 2022-08-26T17:49:33
| 2022-08-26T17:46:59
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,352,469,196
| 4,902
|
Name the default config `default`
|
Currently, if a dataset has no configuration, a default configuration is created from the dataset name.
For example, for a dataset loaded from the hub repository, such as https://huggingface.co/datasets/user/dataset (repo id is `user/dataset`), the default configuration will be `user--dataset`.
It might be easier to handle if it were set to `default`, or another reserved word.
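As a minimal sketch of the naming scheme described above (the function name is hypothetical; the current behavior derives the config name from the repo id):

```python
def default_config_name(repo_id: str) -> str:
    # Hypothetical illustration: the current behavior turns "user/dataset"
    # into "user--dataset"; the proposal is to use a reserved word like
    # "default" instead.
    return repo_id.replace("/", "--")

print(default_config_name("user/dataset"))  # user--dataset
```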
|
closed
|
https://github.com/huggingface/datasets/issues/4902
| 2022-08-26T16:16:22
| 2023-07-24T21:15:31
| 2023-07-24T21:15:31
|
{
"login": "severo",
"id": 1676121,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "question",
"color": "d876e3"
}
] | false
|
[] |
1,352,438,915
| 4,901
|
Raise ManualDownloadError from get_dataset_config_info
|
This PR raises a specific `ManualDownloadError` when `get_dataset_config_info` is called for a dataset that requires manual download.
Related to:
- #4898
CC: @severo
|
closed
|
https://github.com/huggingface/datasets/pull/4901
| 2022-08-26T15:45:56
| 2022-08-30T10:42:21
| 2022-08-30T10:40:04
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,352,405,855
| 4,900
|
Dataset Viewer issue for asaxena1990/Dummy_dataset
|
### Link
_No response_
### Description
_No response_
### Owner
_No response_
|
closed
|
https://github.com/huggingface/datasets/issues/4900
| 2022-08-26T15:15:44
| 2023-07-24T15:42:09
| 2023-07-24T15:42:09
|
{
"login": "ankurcl",
"id": 56627657,
"type": "User"
}
|
[] | false
|
[] |
1,352,031,286
| 4,899
|
Re-add code and und language tags
|
This PR fixes the removal of 2 language tags done by:
- #4882
The tags are:
- "code": this is not a IANA tag but needed
- "und": this is one of the special scoped tags removed by 0d53202b9abce6fd0358cb00d06fcfd904b875af
- used in "mc4" and "udhr" datasets
|
closed
|
https://github.com/huggingface/datasets/pull/4899
| 2022-08-26T09:48:57
| 2022-08-26T10:27:18
| 2022-08-26T10:24:20
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,351,851,254
| 4,898
|
Dataset Viewer issue for timit_asr
|
### Link
_No response_
### Description
_No response_
### Owner
_No response_
|
closed
|
https://github.com/huggingface/datasets/issues/4898
| 2022-08-26T07:12:05
| 2022-10-03T12:40:28
| 2022-10-03T12:40:27
|
{
"login": "InayatUllah932",
"id": 91126978,
"type": "User"
}
|
[] | false
|
[] |
1,351,784,727
| 4,897
|
datasets generate large arrow file
|
Checking for large files on disk, I found this large cache file in the cifar10 data directory:

As we know, the size of the cifar10 dataset is ~130MB, but the cache file is almost 30GB, so there may be some problem here.
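For reference, a back-of-the-envelope size check (assuming the standard 60,000 32x32 RGB images of cifar10) shows that even fully uncompressed pixel data should be far below 30GB:

```python
num_images = 60_000            # cifar10 train + test images
bytes_per_image = 32 * 32 * 3  # uncompressed RGB pixels
total_mb = num_images * bytes_per_image / (1024 * 1024)
print(round(total_mb))  # 176 -> ~176 MB uncompressed, nowhere near 30GB
```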
|
closed
|
https://github.com/huggingface/datasets/issues/4897
| 2022-08-26T05:51:16
| 2022-09-18T05:07:52
| 2022-09-18T05:07:52
|
{
"login": "jax11235",
"id": 18533904,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,351,180,409
| 4,896
|
Fix missing tags in dataset cards
|
Fix missing tags in dataset cards:
- anli
- coarse_discourse
- commonsense_qa
- cos_e
- ilist
- lc_quad
- web_questions
- xsum
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
|
closed
|
https://github.com/huggingface/datasets/pull/4896
| 2022-08-25T16:41:43
| 2022-09-22T14:37:16
| 2022-08-26T04:41:48
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,350,798,527
| 4,895
|
load_dataset method returns Unknown split "validation" even if this dir exists
|
## Describe the bug
The `datasets.load_dataset` returns a `ValueError: Unknown split "validation". Should be one of ['train', 'test'].` when running `load_dataset(local_data_dir_path, split="validation")` even if the `validation` sub-directory exists in the local data path.
The data directories are as follows and attached to this issue:
```
test_data1
|_ train
   |_ 1012.png
   |_ metadata.jsonl
   ...
|_ test
   ...
|_ validation
   |_ 234.png
   |_ metadata.jsonl
   ...
test_data2
|_ train
   |_ train_1012.png
   |_ metadata.jsonl
   ...
|_ test
   ...
|_ validation
   |_ val_234.png
   |_ metadata.jsonl
   ...
```
They contain the same image files and `metadata.jsonl`, but the images in `test_data2` have the split names prepended, i.e. `train_1012.png, val_234.png`, while the images in `test_data1` do not, i.e. `1012.png, 234.png`.
I saw in another issue that `val` was not recognized as a split name, but here I would expect the files to take their split from the parent directory name, i.e. files under `validation` should become part of the validation split.
## Steps to reproduce the bug
```python
import datasets
datasets.logging.set_verbosity_error()
from datasets import load_dataset, get_dataset_split_names
# the following only finds train, validation and test splits correctly
path = "./test_data1"
print("######################", get_dataset_split_names(path), "######################")
dataset_list = []
for spt in ["train", "test", "validation"]:
    dataset = load_dataset(path, split=spt)
    dataset_list.append(dataset)
# the following only finds train and test splits
path = "./test_data2"
print("######################", get_dataset_split_names(path), "######################")
dataset_list = []
for spt in ["train", "test", "validation"]:
    dataset = load_dataset(path, split=spt)
    dataset_list.append(dataset)
```
## Expected results
```
###################### ['train', 'test', 'validation'] ######################
###################### ['train', 'test', 'validation'] ######################
```
## Actual results
```
Traceback (most recent call last):
File "test_data_loader.py", line 11, in <module>
dataset = load_dataset(path, split=spt)
File "/home/venv/lib/python3.8/site-packages/datasets/load.py", line 1758, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 893, in as_dataset
datasets = map_nested(
File "/home/venv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 385, in map_nested
return function(data_struct)
File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 924, in _build_single_dataset
ds = self._as_dataset(
File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 993, in _as_dataset
dataset_kwargs = ArrowReader(self._cache_dir, self.info).read(
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 211, in read
files = self.get_file_instructions(name, instructions, split_infos)
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 184, in get_file_instructions
file_instructions = make_file_instructions(
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 107, in make_file_instructions
absolute_instructions = instruction.to_absolute(name2len)
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 616, in to_absolute
return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 616, in <listcomp>
return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 433, in _rel_to_abs_instr
raise ValueError(f'Unknown split "{split}". Should be one of {list(name2len)}.')
ValueError: Unknown split "validation". Should be one of ['train', 'test'].
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Linux Ubuntu 18.04
- Python version: 3.8.12
- PyArrow version: 9.0.0
Data files
[test_data1.zip](https://github.com/huggingface/datasets/files/9424463/test_data1.zip)
[test_data2.zip](https://github.com/huggingface/datasets/files/9424468/test_data2.zip)
|
closed
|
https://github.com/huggingface/datasets/issues/4895
| 2022-08-25T12:11:00
| 2024-03-26T16:47:48
| 2022-09-29T08:07:50
|
{
"login": "SamSamhuns",
"id": 13418507,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,350,667,270
| 4,894
|
Add citation information to makhzan dataset
|
This PR adds the citation information to `makhzan` dataset, once they have replied to our request for that information:
- https://github.com/zeerakahmed/makhzan/issues/43
|
closed
|
https://github.com/huggingface/datasets/pull/4894
| 2022-08-25T10:16:40
| 2022-08-30T06:21:54
| 2022-08-25T13:19:41
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,350,655,674
| 4,893
|
Oversampling strategy for iterable datasets in `interleave_datasets`
|
In https://github.com/huggingface/datasets/pull/4831 @ylacombe added an oversampling strategy for `interleave_datasets`. However, right now it doesn't work for datasets loaded using `load_dataset(..., streaming=True)`, which are `IterableDataset` objects.
It would be nice to expand `interleave_datasets` to iterable datasets as well, to support this oversampling strategy:
```python
>>> from datasets.iterable_dataset import IterableDataset, ExamplesIterable
>>> d1 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [0, 1, 2]], {}))
>>> d2 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [10, 11, 12, 13]], {}))
>>> d3 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [20, 21, 22, 23, 24]], {}))
>>> dataset = interleave_datasets([d1, d2, d3]) # is supported
>>> [x["a"] for x in dataset]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted") # is not supported yet
>>> [x["a"] for x in dataset]
[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24]
```
This can be implemented by adding the strategy to both `CyclingMultiSourcesExamplesIterable` and `RandomlyCyclingMultiSourcesExamplesIterable` used in `_interleave_iterable_datasets` in `iterable_dataset.py`
I would be happy to share some guidance if anyone would like to give it a shot :)
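Not the library implementation, but a rough sketch of the round-robin restart logic behind the `all_exhausted` strategy, using plain Python iterables (the exact stopping point may differ slightly from `interleave_datasets`):

```python
def interleave_all_exhausted(sources):
    # Round-robin over sources; when one runs out, mark it exhausted and
    # restart it, stopping once every source has been exhausted at least
    # once. Assumes every source is non-empty and re-iterable (e.g. a list).
    iterators = [iter(source) for source in sources]
    exhausted = [False] * len(sources)
    while not all(exhausted):
        for i in range(len(iterators)):
            try:
                yield next(iterators[i])
            except StopIteration:
                exhausted[i] = True
                iterators[i] = iter(sources[i])
                yield next(iterators[i])

d1, d2, d3 = [0, 1, 2], [10, 11, 12, 13], [20, 21, 22, 23, 24]
print(list(interleave_all_exhausted([d1, d2, d3])))
```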
|
closed
|
https://github.com/huggingface/datasets/issues/4893
| 2022-08-25T10:06:55
| 2022-10-03T12:37:46
| 2022-10-03T12:37:46
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[
{
"name": "good second issue",
"color": "BDE59C"
}
] | false
|
[] |
1,350,636,499
| 4,892
|
Add citation to ro_sts and ro_sts_parallel datasets
|
This PR adds the citation information to the `ro_sts` and `ro_sts_parallel` datasets, once they have replied to our request for that information:
- https://github.com/dumitrescustefan/RO-STS/issues/4
|
closed
|
https://github.com/huggingface/datasets/pull/4892
| 2022-08-25T09:51:06
| 2022-08-25T10:49:56
| 2022-08-25T10:49:56
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,350,589,813
| 4,891
|
Fix missing tags in dataset cards
|
Fix missing tags in dataset cards:
- aslg_pc12
- librispeech_lm
- mwsc
- opus100
- qasc
- quail
- squadshifts
- winograd_wsc
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
|
closed
|
https://github.com/huggingface/datasets/pull/4891
| 2022-08-25T09:14:17
| 2022-09-22T14:39:02
| 2022-08-25T13:43:34
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,350,578,029
| 4,890
|
add Dataset.from_list
|
As discussed in #4885
I initially added this bit at the end, thinking that filling this field was necessary, as is done in `from_dict`.
However, it seems the constructor takes care of filling `info` when it is empty:
```
if info.features is None:
    info.features = Features(
        {
            col: generate_from_arrow_type(coldata.type)
            for col, coldata in zip(pa_table.column_names, pa_table.columns)
        }
    )
```
|
closed
|
https://github.com/huggingface/datasets/pull/4890
| 2022-08-25T09:05:58
| 2022-09-02T10:22:59
| 2022-09-02T10:20:33
|
{
"login": "sanderland",
"id": 48946947,
"type": "User"
}
|
[] | true
|
[] |
1,349,758,525
| 4,889
|
torchaudio 11.0 yields different results than torchaudio 12.1 when loading MP3
|
## Describe the bug
When loading Common Voice with torchaudio 0.11.0, the results are different from 0.12.1, which leads to problems in transformers; see: https://github.com/huggingface/transformers/pull/18749
## Steps to reproduce the bug
If you run the following code once with `torchaudio==0.11.0+cu102` and `torchaudio==0.12.1+cu102` you can see that the tensors differ. This is a pretty big breaking change and makes some integration tests fail in Transformers.
```python
#!/usr/bin/env python3
from datasets import load_dataset
import datasets
import numpy as np
import torch
import torchaudio
print("torch vesion", torch.__version__)
print("torchaudio vesion", torchaudio.__version__)
save_audio = True
load_audios = False
if save_audio:
ds = load_dataset("common_voice", "en", split="train", streaming=True)
ds = ds.cast_column("audio", datasets.Audio(sampling_rate=16_000))
ds_iter = iter(ds)
sample = next(ds_iter)
np.save(f"audio_sample_{torch.__version__}", sample["audio"]["array"])
print(sample["audio"]["array"])
if load_audios:
array_torch_11 = np.load("/home/patrick/audio_sample_1.11.0+cu102.npy")
print("Array 11 Shape", array_torch_11.shape)
print("Array 11 abs sum", np.sum(np.abs(array_torch_11)))
array_torch_12 = np.load("/home/patrick/audio_sample_1.12.1+cu102.npy")
print("Array 12 Shape", array_torch_12.shape)
print("Array 12 abs sum", np.sum(np.abs(array_torch_12)))
```
Having saved the tensors the print output yields:
```
torch vesion 1.12.1+cu102
torchaudio vesion 0.12.1+cu102
Array 11 Shape (122880,)
Array 11 abs sum 1396.4988
Array 12 Shape (123264,)
Array 12 abs sum 1396.5193
```
## Expected results
torchaudio 0.11.0 and 0.12.1 should yield the same results.
## Actual results
See above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.1.1.dev0
- Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyArrow version: 6.0.1
- Pandas version: 1.4.2
|
closed
|
https://github.com/huggingface/datasets/issues/4889
| 2022-08-24T16:54:43
| 2023-03-02T15:33:05
| 2023-03-02T15:33:04
|
{
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,349,447,521
| 4,888
|
Dataset Viewer issue for subjqa
|
### Link
https://huggingface.co/datasets/subjqa
### Description
Getting the following error for this dataset:
```
Status code: 500
Exception: Status500Error
Message: 2 or more items returned, instead of 1
```
Not sure what's causing it though 🤔
### Owner
Yes
|
closed
|
https://github.com/huggingface/datasets/issues/4888
| 2022-08-24T13:26:20
| 2022-09-08T08:23:42
| 2022-09-08T08:23:42
|
{
"login": "lewtun",
"id": 26859204,
"type": "User"
}
|
[
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false
|
[] |
1,349,426,693
| 4,887
|
Add "cc-by-nc-sa-2.0" to list of licenses
|
Datasets side of https://github.com/huggingface/hub-docs/pull/285
|
closed
|
https://github.com/huggingface/datasets/pull/4887
| 2022-08-24T13:11:49
| 2022-08-26T10:31:32
| 2022-08-26T10:29:20
|
{
"login": "osanseviero",
"id": 7246357,
"type": "User"
}
|
[] | true
|
[] |
1,349,285,569
| 4,886
|
Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid
|
## Describe the bug
Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('huggan/CelebA-HQ')
```
## Expected results
See https://colab.research.google.com/drive/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing#scrollTo=N3ml_7f8kzDd
## Actual results
```
File "/home/jean/projects/cold_diffusion/celebA.py", line 4, in <module>
dataset = load_dataset('huggan/CelebA-HQ')
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/load.py", line 1793, in load_dataset
builder_instance.download_and_prepare(
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 1274, in _prepare_split
for key, table in logging.tqdm(
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 67, in _generate_tables
parquet_file = pq.ParquetFile(f)
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/pyarrow/parquet/__init__.py", line 286, in __init__
self.reader.open(
File "pyarrow/_parquet.pyx", line 1227, in pyarrow._parquet.ParquetReader.open
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: datasets-2.4.1.dev0
- Platform: Ubuntu 18.04
- Python version: 3.10
- PyArrow version: pyarrow 9.0.0
|
open
|
https://github.com/huggingface/datasets/issues/4886
| 2022-08-24T11:24:21
| 2023-02-02T02:40:53
| null |
{
"login": "JeanKaddour",
"id": 11850255,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,349,181,448
| 4,885
|
Create dataset from list of dicts
|
I often find myself with data from a variety of sources, and a list of dicts is very common among these.
However, converting this to a Dataset is a little awkward, requiring either
```Dataset.from_pandas(pd.DataFrame(formatted_training_data))```
which can error out on more exotic values, such as 2-d arrays, for reasons that are not entirely clear:
> ArrowInvalid: ('Can only convert 1-dimensional array values', 'Conversion failed for column labels with type object')
Alternatively:
```Dataset.from_dict({k: [s[k] for s in formatted_training_data] for k in formatted_training_data[0].keys()})```
which works, but is a little ugly.
**Describe the solution you'd like**
Either `.from_dict` accepting a list of dicts, or a `.from_records` function accepting such a list.
I am happy to PR this; I just wanted to check that you are happy to accept it, that I haven't missed something obvious, and which of the solutions would be preferred.
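As a sketch of the `.from_records`-style conversion suggested above (the helper name is made up; only `Dataset.from_dict` is an existing API), a list of dicts can be pivoted into the dict-of-lists layout that `from_dict` expects:

```python
def records_to_columns(records):
    # Pivot a list of dicts into a dict of lists, assuming all records
    # share the same keys (as in the example above).
    return {key: [record[key] for record in records] for key in records[0]}

formatted_training_data = [{"text": "a", "label": 0}, {"text": "b", "label": 1}]
print(records_to_columns(formatted_training_data))
# Dataset.from_dict(records_to_columns(formatted_training_data)) would then build the Dataset.
```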
|
closed
|
https://github.com/huggingface/datasets/issues/4885
| 2022-08-24T10:01:24
| 2022-09-08T16:02:52
| 2022-09-08T16:02:52
|
{
"login": "sanderland",
"id": 48946947,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
1,349,105,946
| 4,884
|
Fix documentation card of math_qa dataset
|
Fix documentation card of math_qa dataset.
|
closed
|
https://github.com/huggingface/datasets/pull/4884
| 2022-08-24T09:00:56
| 2022-08-24T11:33:17
| 2022-08-24T11:33:16
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,349,083,235
| 4,883
|
With dataloader RSS memory consumed by HF datasets monotonically increases
|
## Describe the bug
When the HF datasets is used in conjunction with PyTorch Dataloader, the RSS memory of the process keeps on increasing when it should stay constant.
## Steps to reproduce the bug
Run and observe the output of this snippet which logs RSS memory.
```python
import psutil
import os
from transformers import BertTokenizer
from datasets import load_dataset
from torch.utils.data import DataLoader
BATCH_SIZE = 32
NUM_TRIES = 10
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
def transform(x):
    x.update(tokenizer(x["text"], return_tensors="pt", max_length=64, padding="max_length", truncation=True))
    x.pop("text")
    x.pop("label")
    return x
dataset = load_dataset("imdb", split="train")
dataset.set_transform(transform)
train_loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=4)
mem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
count = 0
while count < NUM_TRIES:
    for idx, batch in enumerate(train_loader):
        mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
        print(count, idx, mem_after - mem_before)
    count += 1
```
## Expected results
Memory should not increase after initial setup and loading of the dataset
## Actual results
Memory continuously increases as can be seen in the log.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.13
- PyArrow version: 7.0.0
|
open
|
https://github.com/huggingface/datasets/issues/4883
| 2022-08-24T08:42:54
| 2024-01-23T12:42:40
| null |
{
"login": "apsdehal",
"id": 3616806,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,348,913,665
| 4,882
|
Fix language tags resource file
|
This PR fixes/updates/adds ALL language tags from IANA (as of 2022-08-08).
This PR also removes all BCP47 suffixes (the languages file only contains language subtags, i.e. ISO 639-1 or 639-2 codes; no script/region/variant suffixes). See:
- #4753
|
closed
|
https://github.com/huggingface/datasets/pull/4882
| 2022-08-24T06:06:01
| 2022-08-24T13:58:33
| 2022-08-24T13:58:30
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,348,495,777
| 4,881
|
Language names and language codes: connecting to a big database (rather than slow enrichment of custom list)
|
**The problem:**
Language diversity is an important dimension of the diversity of datasets. To find one's way around datasets, being able to search by language name and by standardized codes appears crucial.
Currently the list of language codes is [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/resources/languages.json), right? At about 1,500 entries, it is roughly at 1/4th of the world's diversity of extant languages. (Probably less, as the list of 1,418 contains variants that are linguistically very close: 108 varieties of English, for instance.)
Looking forward to ever increasing coverage, how will the list of language names and language codes improve over time?
Enrichment of the custom list by HFT contributors (like [here](https://github.com/huggingface/datasets/pull/4880)) has several issues:
* progress is likely to be slow:

(input required from reviewers, etc.)
* the more contributors, the less consistency can be expected among contributions. No need to elaborate on how much confusion is likely to ensue as datasets accumulate.
* there is no information on which language relates to which: no encoding of the special closeness between the languages of the Northwestern Germanic branch (English+Dutch+German etc.), for instance. Information on phylogenetic closeness can be relevant to run experiments on transfer of technology from one language to its close relatives.
**A solution that seems desirable:**
Connecting to an established database that (i) aims at full coverage of the world's languages and (ii) has information on higher-level groupings, alternative names, etc.
It takes a lot of hard work to do such databases. Two important initiatives are [Ethnologue](https://www.ethnologue.com/) (ISO standard) and [Glottolog](https://glottolog.org/). Both have pros and cons. Glottolog contains references to Ethnologue identifiers, so adopting Glottolog entails getting the advantages of both sets of language codes.
Both seem technically accessible & 'developer-friendly'. Glottolog has a [GitHub repo](https://github.com/glottolog/glottolog). For Ethnologue, harvesting tools have been devised (see [here](https://github.com/lyy1994/ethnologue); I did not try it out).
In case a conversation with linguists seemed in order here, I'd be happy to participate ('pro bono', of course), & to rustle up more colleagues as useful, to help this useful development happen.
With appreciation of HFT,
|
open
|
https://github.com/huggingface/datasets/issues/4881
| 2022-08-23T20:14:24
| 2024-04-22T15:57:28
| null |
{
"login": "alexis-michaud",
"id": 6072524,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
1,348,452,776
| 4,880
|
Added names of less-studied languages
|
Added names of less-studied languages (nru – Narua and jya – Japhug) for existing datasets.
|
closed
|
https://github.com/huggingface/datasets/pull/4880
| 2022-08-23T19:32:38
| 2022-08-24T12:52:46
| 2022-08-24T12:52:46
|
{
"login": "BenjaminGalliot",
"id": 23100612,
"type": "User"
}
|
[] | true
|
[] |
1,348,346,407
| 4,879
|
Fix Citation Information section in dataset cards
|
Fix Citation Information section in dataset cards:
- cc_news
- conllpp
- datacommons_factcheck
- gnad10
- id_panl_bppt
- jigsaw_toxicity_pred
- kinnews_kirnews
- kor_sarcasm
- makhzan
- reasoning_bg
- ro_sts
- ro_sts_parallel
- sanskrit_classic
- telugu_news
- thaiqa_squad
- wiki_movies
This PR partially fixes the Citation Information section in dataset cards. Subsequent PRs will follow to complete this task.
|
closed
|
https://github.com/huggingface/datasets/pull/4879
| 2022-08-23T18:06:43
| 2022-09-27T14:04:45
| 2022-08-24T04:09:07
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,348,270,141
| 4,878
|
[not really a bug] `identical_ok` is deprecated in huggingface-hub's `upload_file`
|
In the huggingface-hub dependency, the `identical_ok` argument has no effect in `upload_file` (and it will be removed soon)
See
https://github.com/huggingface/huggingface_hub/blob/43499582b19df1ed081a5b2bd7a364e9cacdc91d/src/huggingface_hub/hf_api.py#L2164-L2169
It's used here:
https://github.com/huggingface/datasets/blob/fcfcc951a73efbc677f9def9a8707d0af93d5890/src/datasets/dataset_dict.py#L1373-L1381
https://github.com/huggingface/datasets/blob/fdcb8b144ce3ef241410281e125bd03e87b8caa1/src/datasets/arrow_dataset.py#L4354-L4362
https://github.com/huggingface/datasets/blob/fdcb8b144ce3ef241410281e125bd03e87b8caa1/src/datasets/arrow_dataset.py#L4197-L4213
We should remove it.
Maybe the third code sample has an unexpected behavior since it uses the non-default value `identical_ok = False`, but the argument is ignored.
|
closed
|
https://github.com/huggingface/datasets/issues/4878
| 2022-08-23T17:09:55
| 2022-09-13T14:00:06
| 2022-09-13T14:00:05
|
{
"login": "severo",
"id": 1676121,
"type": "User"
}
|
[
{
"name": "help wanted",
"color": "008672"
},
{
"name": "question",
"color": "d876e3"
}
] | false
|
[] |
1,348,246,755
| 4,877
|
Fix documentation card of covid_qa_castorini dataset
|
Fix documentation card of covid_qa_castorini dataset.
|
closed
|
https://github.com/huggingface/datasets/pull/4877
| 2022-08-23T16:52:33
| 2022-08-23T18:05:01
| 2022-08-23T18:05:00
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,348,202,678
| 4,876
|
Move DatasetInfo from `datasets_infos.json` to the YAML tags in `README.md`
|
Currently there are two places to find metadata for datasets:
- `dataset_infos.json`, which contains **per dataset config**
- description
- citation
- license
- splits and sizes
- checksums of the data files
- feature types
- and more
- YAML tags, which contain
- license
- language
- train-eval-index
- and more
It would be nice to have a single place instead. We can rely on the YAML tags more than the JSON file for consistency with models. And it would all be indexed by our back-end directly, which is nice to have.
One way would be to move everything to the YAML tags except the checksums (there can be tens of thousands of them). The description/citation is already in the dataset card, so we probably don't need to have them in the YAML; it would be redundant.
Here is an example for SQuAD
```yaml
download_size: 35142551
dataset_size: 89789763
version: 1.0.0
splits:
- name: train
  num_examples: 87599
  num_bytes: 79317110
- name: validation
  num_examples: 10570
  num_bytes: 10472653
features:
- name: id
  dtype: string
- name: title
  dtype: string
- name: context
  dtype: string
- name: question
  dtype: string
- name: answers
  struct:
  - name: text
    list:
      dtype: string
  - name: answer_start
    list:
      dtype: int32
```
Since there is only one configuration for SQuAD, this structure is ok. For datasets with several configs, we can look into it in a second step, but IMO it would be ok to have these fields per config using another syntax:
```yaml
configs:
- config: unlabeled
  splits:
  - name: train
    num_examples: 10000
  features:
  - name: text
    dtype: string
- config: labeled
  splits:
  - name: train
    num_examples: 100
  features:
  - name: text
    dtype: string
  - name: label
    dtype: ClassLabel
    names:
    - negative
    - positive
```
So in the end you could specify a YAML tag either at the top level (for all configs) or per config in the `configs` field
Alternatively, we could keep config-specific stuff in the `dataset_infos.json` as it is today.
Not sure yet what's the best approach here but cc @julien-c @mariosasko @albertvillanova @polinaeterna for feedback :)
|
closed
|
https://github.com/huggingface/datasets/issues/4876
| 2022-08-23T16:16:41
| 2022-10-03T09:11:13
| 2022-10-03T09:11:13
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | false
|
[] |
1,348,095,686
| 4,875
|
`_resolve_features` ignores the token
|
## Describe the bug
When calling [`_resolve_features()`](https://github.com/huggingface/datasets/blob/54b532a8a2f5353fdb0207578162153f7b2da2ec/src/datasets/iterable_dataset.py#L1255) on a gated dataset, ie. a dataset which requires a token to be loaded, the token seems to be ignored even if it has been provided to `load_dataset` before.
## Steps to reproduce the bug
```python
import os
os.environ["HF_ENDPOINT"] = "https://hub-ci.huggingface.co/"
hf_token = "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"
from datasets import load_dataset
# public
dataset_name = "__DUMMY_DATASETS_SERVER_USER__/repo_csv_data-16612654226756"
config_name = "__DUMMY_DATASETS_SERVER_USER__--repo_csv_data-16612654226756"
split_name = "train"
iterable_dataset = load_dataset(
    dataset_name,
    name=config_name,
    split=split_name,
    streaming=True,
    use_auth_token=hf_token,
)
iterable_dataset = iterable_dataset._resolve_features()
print(iterable_dataset.features)
# gated
dataset_name = "__DUMMY_DATASETS_SERVER_USER__/repo_csv_data-16612654317644"
config_name = "__DUMMY_DATASETS_SERVER_USER__--repo_csv_data-16612654317644"
split_name = "train"
iterable_dataset = load_dataset(
    dataset_name,
    name=config_name,
    split=split_name,
    streaming=True,
    use_auth_token=hf_token,
)
try:
    iterable_dataset = iterable_dataset._resolve_features()
except FileNotFoundError as e:
    print("FAILS")
```
## Expected results
I expect to have the same result on a public dataset and on a gated (or private) dataset, if the token has been provided.
## Actual results
An exception is thrown on gated datasets.
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.15.0-1017-aws-x86_64-with-glibc2.35
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.4.2
|
open
|
https://github.com/huggingface/datasets/issues/4875
| 2022-08-23T14:57:36
| 2022-10-17T13:45:47
| null |
{
"login": "severo",
"id": 1676121,
"type": "User"
}
|
[] | false
|
[] |
1,347,618,197
| 4,874
|
[docs] Some tiny doc tweaks
| null |
closed
|
https://github.com/huggingface/datasets/pull/4874
| 2022-08-23T09:19:40
| 2022-08-24T17:27:57
| 2022-08-24T17:27:56
|
{
"login": "julien-c",
"id": 326577,
"type": "User"
}
|
[] | true
|
[] |
1,347,592,022
| 4,873
|
Multiple dataloader memory error
|
For multiple datasets and tasks, we use more than 200 dataloaders, which we pass into `dataloader1, dataloader2, ..., dataloader200 = accelerate.prepare(dataloader1, dataloader2, ..., dataloader200)`.
This causes a memory error when generating batches. Any solutions?
```bash
File "/home/xxx/my_code/src/utils/data_utils.py", line 54, in generate_batch
x = next(iterator)
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/accelerate/data_loader.py", line 301, in __iter__
for batch in super().__iter__():
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
data = self._next_data()
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 28, in fetch
data.append(next(self.dataset_iter))
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/accelerate/data_loader.py", line 249, in __iter__
for element in self.dataset:
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 503, in __iter__
for key, example in self._iter():
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 500, in _iter
yield from ex_iterable
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 231, in __iter__
new_key = "_".join(str(key) for key in keys)
MemoryError
```
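A lower-memory alternative to holding 200+ prepared dataloaders at once is to interleave the underlying iterators lazily; `datasets.interleave_datasets` covers the streaming case, and the core idea can be sketched with plain iterators (a simplified round-robin illustration, not the library implementation):

```python
def round_robin(iterators):
    """Yield one item from each live iterator in turn, dropping exhausted ones."""
    iters = list(iterators)
    while iters:
        alive = []
        for it in iters:
            try:
                yield next(it)
            except StopIteration:
                continue  # this source is exhausted; do not keep it
            alive.append(it)
        iters = alive
```

Interleaving this way keeps only one item per source in flight instead of accumulating batches for every dataloader simultaneously.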
|
open
|
https://github.com/huggingface/datasets/issues/4873
| 2022-08-23T08:59:50
| 2023-01-26T02:01:11
| null |
{
"login": "cyk1337",
"id": 13767887,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,347,180,765
| 4,872
|
Docs for creating an audio dataset
|
This PR is a first draft of how to create audio datasets (`AudioFolder` and loading script). Feel free to let me know if there are any specificities I'm missing for this. 🙂
|
closed
|
https://github.com/huggingface/datasets/pull/4872
| 2022-08-23T01:07:09
| 2022-09-22T17:19:13
| 2022-09-21T10:27:04
|
{
"login": "stevhliu",
"id": 59462357,
"type": "User"
}
|
[
{
"name": "documentation",
"color": "0075ca"
}
] | true
|
[] |
1,346,703,568
| 4,871
|
Fix: wmt datasets - fix CWMT zh subsets
|
Fix https://github.com/huggingface/datasets/issues/4575
TODO: run `datasets-cli test`:
- [x] wmt17
- [x] wmt18
- [x] wmt19
|
closed
|
https://github.com/huggingface/datasets/pull/4871
| 2022-08-22T16:42:09
| 2022-08-23T10:00:20
| 2022-08-23T10:00:19
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
1,346,160,498
| 4,870
|
audio folder check CI
| null |
closed
|
https://github.com/huggingface/datasets/pull/4870
| 2022-08-22T10:15:53
| 2022-11-02T11:54:35
| 2022-08-22T12:19:40
|
{
"login": "polinaeterna",
"id": 16348744,
"type": "User"
}
|
[] | true
|
[] |
1,345,513,758
| 4,869
|
Fix typos in documentation
| null |
closed
|
https://github.com/huggingface/datasets/pull/4869
| 2022-08-21T15:10:03
| 2022-08-22T09:25:39
| 2022-08-22T09:09:58
|
{
"login": "fl-lo",
"id": 85993954,
"type": "User"
}
|
[] | true
|
[] |
1,345,191,322
| 4,868
|
adding mafand to datasets
|
I'm adding the MAFAND dataset by Masakhane, based on the paper/repository below:
Paper: https://aclanthology.org/2022.naacl-main.223/
Code: https://github.com/masakhane-io/lafand-mt
Please help merge this.
Everything works except for creating the dummy data file.
|
closed
|
https://github.com/huggingface/datasets/pull/4868
| 2022-08-20T15:26:14
| 2022-08-22T11:00:50
| 2022-08-22T08:52:23
|
{
"login": "dadelani",
"id": 23586676,
"type": "User"
}
|
[
{
"name": "wontfix",
"color": "ffffff"
}
] | true
|
[] |
1,344,982,646
| 4,867
|
Complete tags of superglue dataset card
|
Related to #4479 .
|
closed
|
https://github.com/huggingface/datasets/pull/4867
| 2022-08-19T23:44:39
| 2022-08-22T09:14:03
| 2022-08-22T08:58:31
|
{
"login": "richarddwang",
"id": 17963619,
"type": "User"
}
|
[] | true
|
[] |
1,344,809,132
| 4,866
|
amend docstring for dunder
|
Display dunder methods in docstrings with underscores and not bold markdown.
|
open
|
https://github.com/huggingface/datasets/pull/4866
| 2022-08-19T19:09:15
| 2022-09-09T16:33:11
| null |
{
"login": "schafsam",
"id": 37704298,
"type": "User"
}
|
[] | true
|
[] |
1,344,552,626
| 4,865
|
Dataset Viewer issue for MoritzLaurer/multilingual_nli
|
### Link
_No response_
### Description
I've just uploaded a new dataset to the hub and the viewer does not work for some reason, see here: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli
It displays the error:
```
Status code: 400
Exception: Status400Error
Message: The dataset does not exist.
```
Weirdly enough the dataviewer works for an earlier version of the same dataset. The only difference is that it is smaller, but I'm not aware of other changes I have made: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli_test
Do you know why the dataviewer is not working?
### Owner
_No response_
|
closed
|
https://github.com/huggingface/datasets/issues/4865
| 2022-08-19T14:55:20
| 2022-08-22T14:47:14
| 2022-08-22T06:13:20
|
{
"login": "MoritzLaurer",
"id": 41862082,
"type": "User"
}
|
[
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false
|
[] |
1,344,410,043
| 4,864
|
Allow pathlib PoxisPath in Dataset.read_json
|
**Is your feature request related to a problem? Please describe.**
```
from pathlib import Path
from datasets import Dataset
ds = Dataset.read_json(Path('data.json'))
```
causes an error
```
AttributeError: 'PosixPath' object has no attribute 'decode'
```
**Describe the solution you'd like**
It should accept a `PosixPath` and read the JSON from it.
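Until the library accepts `Path` objects directly, a user-side workaround is to coerce to `str` first. A minimal sketch of the coercion (the `Dataset` call itself is left out, so this is not the library's code):

```python
import os
from pathlib import Path


def as_str_path(path):
    """Coerce Path-like objects to plain strings before handing them to readers."""
    if isinstance(path, os.PathLike):  # pathlib.Path implements os.PathLike
        return os.fspath(path)
    return path
```

For example, `Dataset.from_json(as_str_path(Path("data.json")))` in place of passing the `Path` directly.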
|
open
|
https://github.com/huggingface/datasets/issues/4864
| 2022-08-19T12:59:17
| 2025-04-11T17:22:48
| null |
{
"login": "changjonathanc",
"id": 31893406,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
1,343,737,668
| 4,863
|
TFDS wiki_dialog dataset to Huggingface dataset
|
## Adding a Dataset
- **Name:** *Wiki_dialog*
- **Description:** https://github.com/google-research/dialog-inpainting#:~:text=JSON%20object%2C%20for-,example,-%3A
- **Paper:** https://arxiv.org/abs/2205.09073
- **Data:** https://github.com/google-research/dialog-inpainting
- **Motivation:** *Research and Development on biggest corpus of dialog data*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md).
|
closed
|
https://github.com/huggingface/datasets/issues/4863
| 2022-08-18T23:06:30
| 2022-08-22T09:41:45
| 2022-08-22T05:18:53
|
{
"login": "djaym7",
"id": 12378820,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
}
] | false
|
[] |
1,343,464,699
| 4,862
|
Got "AttributeError: 'xPath' object has no attribute 'read'" when loading an excel dataset with my own code
|
## Describe the bug
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
# The dataset function is as follows:
from pathlib import Path
from typing import Dict, List, Tuple
import datasets
import pandas as pd
_CITATION = """\
"""
_DATASETNAME = "jadi_ide"
_DESCRIPTION = """\
"""
_HOMEPAGE = ""
_LICENSE = "Unknown"
_URLS = {
_DATASETNAME: "https://github.com/fathanick/Javanese-Dialect-Identification-from-Twitter-Data/raw/main/Update 16K_Dataset.xlsx",
}
_SOURCE_VERSION = "1.0.0"
class JaDi_Ide(datasets.GeneratorBasedBuilder):
SOURCE_VERSION = datasets.Version(_SOURCE_VERSION)
BUILDER_CONFIGS = [
NusantaraConfig(
name="jadi_ide_source",
version=SOURCE_VERSION,
description="JaDi-Ide source schema",
schema="source",
subset_id="jadi_ide",
),
]
DEFAULT_CONFIG_NAME = "source"
def _info(self) -> datasets.DatasetInfo:
if self.config.schema == "source":
features = datasets.Features(
{
"id": datasets.Value("string"),
"text": datasets.Value("string"),
"label": datasets.Value("string")
}
)
return datasets.DatasetInfo(
description=_DESCRIPTION,
features=features,
homepage=_HOMEPAGE,
license=_LICENSE,
citation=_CITATION,
)
def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
"""Returns SplitGenerators."""
# Dataset does not have predetermined split, putting all as TRAIN
urls = _URLS[_DATASETNAME]
base_dir = Path(dl_manager.download_and_extract(urls))
data_files = {"train": base_dir}
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
gen_kwargs={
"filepath": data_files["train"],
"split": "train",
},
),
]
def _generate_examples(self, filepath: Path, split: str) -> Tuple[int, Dict]:
"""Yields examples as (key, example) tuples."""
df = pd.read_excel(filepath, engine='openpyxl')
df.columns = ["id", "text", "label"]
if self.config.schema == "source":
for row in df.itertuples():
ex = {
"id": str(row.id),
"text": row.text,
"label": row.label,
}
yield row.id, ex
```
## Expected results
Expecting to load the dataset smoothly.
## Actual results
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 1751, in load_dataset
use_auth_token=use_auth_token,
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 705, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1227, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1216, in _prepare_split
desc=f"Generating {split_info.name} split",
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/home/xuyan/.cache/huggingface/modules/datasets_modules/datasets/jadi_ide/7a539f2b6f726defea8fbe36ceda17bae66c370f6d6c418e3a08d760ebef7519/jadi_ide.py", line 107, in _generate_examples
df = pd.read_excel(filepath, engine='openpyxl')
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/download/streaming_download_manager.py", line 701, in xpandas_read_excel
return pd.read_excel(BytesIO(filepath_or_buffer.read()), **kwargs)
AttributeError: 'xPath' object has no attribute 'read'
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-4.15.0-142-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.4
- PyArrow version: 9.0.0
- Pandas version: 0.25.1
|
closed
|
https://github.com/huggingface/datasets/issues/4862
| 2022-08-18T18:36:14
| 2022-08-31T09:25:08
| 2022-08-31T09:25:08
|
{
"login": "yana-xuyan",
"id": 38536635,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,343,260,220
| 4,861
|
Using disk for memory with the method `from_dict`
|
**Is your feature request related to a problem? Please describe.**
I start with an empty dataset. In a loop, at each iteration, I create a new dataset with the method `from_dict` (based on some data I load) and I concatenate this new dataset with the one at the previous iteration. After some iterations, I have an OOM error.
**Describe the solution you'd like**
The method `from_dict` loads the data in RAM. It could be good to add an option to use the disk instead.
**Describe alternatives you've considered**
To solve the problem, I have to do an intermediate step where I save the new datasets at each iteration with `save_to_disk`. Once it's done, I open them all and concatenate them.
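The alternative described above can be sketched without the library: write each chunk to disk as it is produced and keep only file paths in memory (with 🤗 Datasets this would be `Dataset.from_dict(...).save_to_disk(...)` per chunk, then `concatenate_datasets` at the end — names here are illustrative):

```python
import json
import os


def spill_chunks_to_disk(chunks, out_dir):
    """Persist each chunk as it arrives; only the file paths stay in RAM."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for i, chunk in enumerate(chunks):
        path = os.path.join(out_dir, f"chunk_{i}.json")
        with open(path, "w") as f:
            json.dump(chunk, f)
        paths.append(path)
    return paths
```

The concatenation step then reopens the files one at a time instead of holding every intermediate dataset in memory.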
|
open
|
https://github.com/huggingface/datasets/issues/4861
| 2022-08-18T15:18:18
| 2023-01-26T18:36:28
| null |
{
"login": "HugoLaurencon",
"id": 44556846,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
1,342,311,540
| 4,860
|
Add collection3 dataset
| null |
closed
|
https://github.com/huggingface/datasets/pull/4860
| 2022-08-17T21:31:42
| 2022-08-23T20:02:45
| 2022-08-22T09:08:59
|
{
"login": "pefimov",
"id": 16446994,
"type": "User"
}
|
[
{
"name": "wontfix",
"color": "ffffff"
}
] | true
|
[] |
1,342,231,016
| 4,859
|
can't install using conda on Windows 10
|
## Describe the bug
I wanted to install using conda or Anaconda navigator. That didn't work, so I had to install using pip.
## Steps to reproduce the bug
conda install -c huggingface -c conda-forge datasets
## Expected results
Should have indicated successful installation.
## Actual results
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
... took forever, so I cancelled it with ctrl-c
## Environment info
- `datasets` version: 2.4.0 # after installing with pip
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.9.12
- PyArrow version: 9.0.0
- Pandas version: 1.4.2
- conda version: 4.13.0
conda info
active environment : base
active env location : G:\anaconda2022
shell level : 1
user config file : C:\Users\michael\.condarc
populated config files : C:\Users\michael\.condarc
conda version : 4.13.0
conda-build version : 3.21.8
python version : 3.9.12.final.0
virtual packages : __cuda=11.1=0
__win=0=0
__archspec=1=x86_64
base environment : G:\anaconda2022 (writable)
conda av data dir : G:\anaconda2022\etc\conda
conda av metadata url : None
channel URLs : https://conda.anaconda.org/pytorch/win-64
https://conda.anaconda.org/pytorch/noarch
https://conda.anaconda.org/huggingface/win-64
https://conda.anaconda.org/huggingface/noarch
https://conda.anaconda.org/conda-forge/win-64
https://conda.anaconda.org/conda-forge/noarch
https://conda.anaconda.org/anaconda-fusion/win-64
https://conda.anaconda.org/anaconda-fusion/noarch
https://repo.anaconda.com/pkgs/main/win-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/win-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/msys2/win-64
https://repo.anaconda.com/pkgs/msys2/noarch
package cache : G:\anaconda2022\pkgs
C:\Users\michael\.conda\pkgs
C:\Users\michael\AppData\Local\conda\conda\pkgs
envs directories : G:\anaconda2022\envs
C:\Users\michael\.conda\envs
C:\Users\michael\AppData\Local\conda\conda\envs
platform : win-64
user-agent : conda/4.13.0 requests/2.27.1 CPython/3.9.12 Windows/10 Windows/10.0.19044
administrator : False
netrc file : None
offline mode : False
|
open
|
https://github.com/huggingface/datasets/issues/4859
| 2022-08-17T19:57:37
| 2022-08-17T19:57:37
| null |
{
"login": "xoffey",
"id": 22627691,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,340,859,853
| 4,858
|
map() function removes columns when input_columns is not None
|
## Describe the bug
The map function removes features from the dataset that are not present in the _input_columns_ list of columns, despite the removed columns not being mentioned in the _remove_columns_ argument.
## Steps to reproduce the bug
```python
from datasets import Dataset
ds = Dataset.from_dict({"a" : [1,2,3],"b" : [0,1,0], "c" : [2,4,5]})
def double(x,y):
x = x*2
y = y*2
return {"d" : x, "e" : y}
ds.map(double, input_columns=["a","c"])
```
## Expected results
```
Dataset({
features: ['a', 'b', 'c', 'd', 'e'],
num_rows: 3
})
```
## Actual results
```
Dataset({
features: ['a', 'c', 'd', 'e'],
num_rows: 3
})
```
In this specific example feature **b** should not be removed.
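The expected behaviour can be pinned down with a toy dict-of-lists version of `map` (an illustration of the desired semantics only, not the library's implementation): the function sees only `input_columns`, but untouched columns such as `b` survive.

```python
def toy_map(data, function, input_columns):
    """Apply `function` row-wise to `input_columns`; keep every other column."""
    num_rows = len(next(iter(data.values())))
    result = {name: list(values) for name, values in data.items()}
    for i in range(num_rows):
        # the mapped function only receives the requested columns...
        update = function(*(data[name][i] for name in input_columns))
        # ...but its outputs are merged on top of ALL existing columns
        for name, value in update.items():
            result.setdefault(name, []).append(value)
    return result
```

With the example above, `toy_map({'a': [1, 2, 3], 'b': [0, 1, 0], 'c': [2, 4, 5]}, double, ['a', 'c'])` yields all of `a`, `b`, `c`, `d`, `e`.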
## Environment info
- `datasets` version: 2.4.0
- Platform: linux (colab)
- Python version: 3.7.13
- PyArrow version: 6.0.1
|
closed
|
https://github.com/huggingface/datasets/issues/4858
| 2022-08-16T20:42:30
| 2022-09-22T13:55:24
| 2022-09-22T13:55:24
|
{
"login": "pramodith",
"id": 16939722,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,340,397,153
| 4,857
|
No preprocessed wikipedia is working on huggingface/datasets
|
## Describe the bug
The 20220301 Wikipedia dump has been deprecated, so there is now no working preprocessed Wikipedia dump on Hugging Face.
https://huggingface.co/datasets/wikipedia
https://dumps.wikimedia.org/enwiki/
|
closed
|
https://github.com/huggingface/datasets/issues/4857
| 2022-08-16T13:55:33
| 2022-08-17T13:35:08
| 2022-08-17T13:35:08
|
{
"login": "aninrusimha",
"id": 30733039,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,339,779,957
| 4,856
|
file missing when load_dataset with openwebtext on windows
|
## Describe the bug
0015896-b1054262f7da52a0518521e29c8e352c.txt is missing when I run run_mlm.py with openwebtext. I checked the cache path and cannot find 0015896-b1054262f7da52a0518521e29c8e352c.txt, but I can find this file inside 17ecf461bfccd469a1fbc264ccb03731f8606eea7b3e2e8b86e13d18040bf5b3/urlsf_subset00-16_data.xz with 7-Zip.
## Steps to reproduce the bug
```sh
python run_mlm.py --model_type roberta --tokenizer_name roberta-base --dataset_name openwebtext --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --do_train --do_eval --output_dir F:/model/roberta-base
```
or
```python
from datasets import load_dataset
load_dataset("openwebtext", None, cache_dir=None, use_auth_token=None)
```
## Expected results
Loading is successful
## Actual results
Traceback (most recent call last):
File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 1227, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 795, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 22] Invalid argument: 'F://huggingface/datasets/downloads/extracted/0901d27f43b7e9ac0577da0d0061c8c632ba0b70ecd1b4bfb21562d9b7486faa/0015896-b1054262f7da52a0518521e29c8e352c.txt'
## Environment info
- `datasets` version: 2.4.0
- Platform: windows
- Python version: 3.8.5
- PyArrow version: 9.0.0
|
closed
|
https://github.com/huggingface/datasets/issues/4856
| 2022-08-16T04:04:22
| 2023-01-04T03:39:12
| 2023-01-04T03:39:12
|
{
"login": "xi-loong",
"id": 10361976,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,339,699,975
| 4,855
|
Dataset Viewer issue for super_glue
|
### Link
https://huggingface.co/datasets/super_glue
### Description
Can't view the super_glue dataset on the web page.
### Owner
_No response_
|
closed
|
https://github.com/huggingface/datasets/issues/4855
| 2022-08-16T01:34:56
| 2022-08-22T10:08:01
| 2022-08-22T10:07:45
|
{
"login": "wzsxxa",
"id": 54366859,
"type": "User"
}
|
[
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false
|
[] |
1,339,456,490
| 4,853
|
Fix bug and checksums in exams dataset
|
Fix #4852.
|
closed
|
https://github.com/huggingface/datasets/pull/4853
| 2022-08-15T20:17:57
| 2022-08-16T06:43:57
| 2022-08-16T06:29:06
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,339,450,991
| 4,852
|
Bug in multilingual_with_para config of exams dataset and checksums error
|
## Describe the bug
There is a bug for "multilingual_with_para" config in exams dataset:
```python
ds = load_dataset("./datasets/exams", split="train")
```
raises:
```
KeyError: 'choices'
```
Moreover, there is a NonMatchingChecksumError:
```
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/mhardalov/exams-qa/raw/main/data/exams/multilingual/with_paragraphs/train_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/multilingual/with_paragraphs/dev_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/multilingual/with_paragraphs/test_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/test_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_bg_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_bg_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_hr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_hr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_hu_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_hu_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_it_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_it_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_mk_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_mk_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_pl_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_pl_with_para.jsonl.tar.gz', 
'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_pt_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_pt_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_sq_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_sq_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_sr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_sr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_tr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_tr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_vi_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_vi_with_para.jsonl.tar.gz']
```
CC: @thesofakillers
|
closed
|
https://github.com/huggingface/datasets/issues/4852
| 2022-08-15T20:14:52
| 2022-09-16T09:50:55
| 2022-08-16T06:29:07
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,339,085,917
| 4,851
|
Fix license tag and Source Data section in billsum dataset card
|
Fixed the data source and license fields
|
closed
|
https://github.com/huggingface/datasets/pull/4851
| 2022-08-15T14:37:00
| 2022-08-22T13:56:24
| 2022-08-22T13:40:59
|
{
"login": "kashif",
"id": 8100,
"type": "User"
}
|
[] | true
|
[] |
1,338,702,306
| 4,850
|
Fix test of _get_extraction_protocol for TAR files
|
While working in another PR, I discovered an xpass test (a test that is supposed to xfail but nevertheless passes) when testing `_get_extraction_protocol`: https://github.com/huggingface/datasets/runs/7818845285?check_suite_focus=true
```
XPASS tests/test_streaming_download_manager.py::test_streaming_dl_manager_get_extraction_protocol_throws[https://foo.bar/train.tar]
```
This PR:
- refactors the test so that it tests the raise of the exceptions instead of xfailing
- fixes the test for TAR files: it does not raise an exception, but returns "tar"
- fixes some tests wrongly named: exchange `test_streaming_dl_manager_get_extraction_protocol` with `test_streaming_dl_manager_get_extraction_protocol_gg_drive`
|
closed
|
https://github.com/huggingface/datasets/pull/4850
| 2022-08-15T08:37:58
| 2022-08-15T09:42:56
| 2022-08-15T09:28:46
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,338,273,900
| 4,849
|
1.18.x
| null |
closed
|
https://github.com/huggingface/datasets/pull/4849
| 2022-08-14T15:09:19
| 2022-08-14T15:10:02
| 2022-08-14T15:10:02
|
{
"login": "Mr-Robot-001",
"id": 49282718,
"type": "User"
}
|
[] | true
|
[] |
1,338,271,833
| 4,848
|
a
| null |
closed
|
https://github.com/huggingface/datasets/pull/4848
| 2022-08-14T15:01:16
| 2022-08-14T15:09:59
| 2022-08-14T15:09:59
|
{
"login": "Mr-Robot-001",
"id": 49282718,
"type": "User"
}
|
[] | true
|
[] |
1,338,270,636
| 4,847
|
Test win ci
|
aa
|
closed
|
https://github.com/huggingface/datasets/pull/4847
| 2022-08-14T14:57:00
| 2023-09-24T10:04:13
| 2022-08-14T14:57:45
|
{
"login": "Mr-Robot-001",
"id": 49282718,
"type": "User"
}
|
[] | true
|
[] |
1,337,979,897
| 4,846
|
Update documentation card of miam dataset
|
Hi!
The paper has been published at EMNLP.
|
closed
|
https://github.com/huggingface/datasets/pull/4846
| 2022-08-13T14:38:55
| 2022-08-17T00:50:04
| 2022-08-14T10:26:08
|
{
"login": "PierreColombo",
"id": 22492839,
"type": "User"
}
|
[] | true
|
[] |
1,337,928,283
| 4,845
|
Mark CI tests as xfail if Hub HTTP error
|
In order to make testing more robust (and avoid merges to master with red tests), we could mark tests as xfailed (instead of failed) when the Hub raises some temporary HTTP errors.
This PR:
- marks tests as xfailed only if the Hub raises a 500 error for:
- test_upstream_hub
- makes pytest report the xfailed/xpassed tests.
More tests could also be marked if needed.
Examples of CI failures due to temporary Hub HTTP errors:
- FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files
- https://github.com/huggingface/datasets/runs/7806855399?check_suite_focus=true
`requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-16603108028233/commit/main (Request ID: aZeAQ5yLktoGHQYBcJ3zo)`
- FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_no_token
- https://github.com/huggingface/datasets/runs/7840022996?check_suite_focus=true
`requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://s3.us-east-1.amazonaws.com/lfs-staging.huggingface.co/repos/81/e3/81e3b831fa9bf23190ec041f26ef7ff6d6b71c1a937b8ec1ef1f1f05b508c089/caae596caa179cf45e7c9ac0c6d9a9cb0fe2d305291bfbb2d8b648ae26ed38b6?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20220815%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220815T144713Z&X-Amz-Expires=900&X-Amz-Signature=5ddddfe8ef2b0601e80ab41c78a4d77d921942b0d8160bcab40ff894095e6823&X-Amz-SignedHeaders=host&x-id=PutObject`
- FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_private
- https://github.com/huggingface/datasets/runs/7835921082?check_suite_focus=true
`requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://hub-ci.huggingface.co/api/repos/create (Request ID: gL_1I7i2dii9leBhlZen-) - Internal Error - We're working hard to fix that as soon as possible!`
- FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_custom_features_image_list
- https://github.com/huggingface/datasets/runs/7835920900?check_suite_focus=true
- This is not 500, but 404:
`requests.exceptions.HTTPError: 404 Client Error: Not Found for url: [https://hub-ci.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/test-16605586458339.git/info/lfs/objects](https://hub-ci.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/test-16605586458339.git/info/lfs/objects/batch)`
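The mechanism can be sketched generically: wrap a test body and, when the raised error matches a transient-failure predicate, report it through an `xfail` callback instead of letting it fail. In the real suite the callback would be `pytest.xfail` (which itself raises to end the test); everything below is an illustrative stand-in:

```python
def xfail_on_transient(test_fn, is_transient, xfail):
    """Run `test_fn`; reroute transient errors to `xfail` instead of failing."""
    def wrapper(*args, **kwargs):
        try:
            return test_fn(*args, **kwargs)
        except Exception as err:
            if is_transient(err):
                # e.g. HTTP 500 from the Hub: report as expected failure
                xfail(f"transient error: {err}")
                return None
            raise  # genuine failures still propagate
    return wrapper
```

In the PR, the predicate checks for a 500 status on `requests.exceptions.HTTPError`, so red CI is reserved for reproducible breakage.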
|
closed
|
https://github.com/huggingface/datasets/pull/4845
| 2022-08-13T10:45:11
| 2022-08-23T04:57:12
| 2022-08-23T04:42:26
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,337,878,249
| 4,844
|
Add 'val' to VALIDATION_KEYWORDS.
|
This PR fixes #4839 by adding the word `"val"` to the `VALIDATION_KEYWORDS` so that the `load_dataset()` method with `imagefolder` (and probably some other packaged builders as well) also reads folders named `"val"`.
I think the supported keywords have to be mentioned in the documentation as well, but I couldn't think of a proper place to add that.
|
closed
|
https://github.com/huggingface/datasets/pull/4844
| 2022-08-13T06:49:41
| 2022-08-30T10:17:35
| 2022-08-30T10:14:54
|
{
"login": "akt42",
"id": 98386959,
"type": "User"
}
|
[] | true
|
[] |
1,337,668,699
| 4,843
|
Fix typo in streaming docs
| null |
closed
|
https://github.com/huggingface/datasets/pull/4843
| 2022-08-12T20:18:21
| 2022-08-14T11:43:30
| 2022-08-14T11:02:09
|
{
"login": "flozi00",
"id": 47894090,
"type": "User"
}
|
[] | true
|
[] |
1,337,527,764
| 4,842
|
Update stackexchange license
|
The correct license of the stackexchange subset of the Pile is `cc-by-sa-4.0`, as can for example be seen here: https://stackoverflow.com/help/licensing
|
closed
|
https://github.com/huggingface/datasets/pull/4842
| 2022-08-12T17:39:06
| 2022-08-14T10:43:18
| 2022-08-14T10:28:49
|
{
"login": "cakiki",
"id": 3664563,
"type": "User"
}
|
[] | true
|
[] |
1,337,401,243
| 4,841
|
Update ted_talks_iwslt license to include ND
|
Excerpt from the paper's abstract: "Aside from its cultural and social relevance, this content, which is published under the Creative Commons BY-NC-ND license, also represents a precious language resource for the machine translation research community"
|
closed
|
https://github.com/huggingface/datasets/pull/4841
| 2022-08-12T16:14:52
| 2022-08-14T11:15:22
| 2022-08-14T11:00:22
|
{
"login": "cakiki",
"id": 3664563,
"type": "User"
}
|
[] | true
|
[] |
1,337,342,672
| 4,840
|
Dataset Viewer issue for darragh/demo_data_raw3
|
### Link
https://huggingface.co/datasets/darragh/demo_data_raw3
### Description
```
Exception: ValueError
Message: Arrow type extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>> does not have a datasets dtype equivalent.
```
reported by @NielsRogge
### Owner
No
|
open
|
https://github.com/huggingface/datasets/issues/4840
| 2022-08-12T15:22:58
| 2022-09-08T07:55:44
| null |
{
"login": "severo",
"id": 1676121,
"type": "User"
}
|
[] | false
|
[] |
1,337,206,377
| 4,839
|
ImageFolder dataset builder does not read the validation data set if it is named as "val"
|
**Is your feature request related to a problem? Please describe.**
Currently, the `'imagefolder'` data set builder in [`load_dataset()`](https://github.com/huggingface/datasets/blob/2.4.0/src/datasets/load.py#L1541) only [supports](https://github.com/huggingface/datasets/blob/6c609a322da994de149b2c938f19439bca99408e/src/datasets/data_files.py#L31) the following names for the validation data set directory: `["validation", "valid", "dev"]`. When the validation directory is named `'val'`, the dataset will not have a validation split. I expected this to be a trivial task but ended up spending a lot of time before discovering that only the above names are supported.
Here's a minimal example of `val` not being recognized:
```python
import os
import numpy as np
import cv2
from datasets import load_dataset
# creating a dummy data set with the following structure:
# ROOT
# | -- train
# | ---- class_1
# | ---- class_2
# | -- val
# | ---- class_1
# | ---- class_2
ROOT = "data"
for which in ["train", "val"]:
for class_name in ["class_1", "class_2"]:
dir_name = os.path.join(ROOT, which, class_name)
if not os.path.exists(dir_name):
os.makedirs(dir_name)
for i in range(10):
cv2.imwrite(
os.path.join(dir_name, f"{i}.png"),
np.random.random((224, 224))
)
# trying to create a data set
dataset = load_dataset(
"imagefolder",
data_dir=ROOT
)
>> dataset
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 20
})
})
# ^ note how the dataset only has a 'train' subset
```
**Describe the solution you'd like**
The suggestion is to add `"val"` to [that list](https://github.com/huggingface/datasets/blob/6c609a322da994de149b2c938f19439bca99408e/src/datasets/data_files.py#L31), as that's a commonly used name for the validation directory.
Also, in the documentation, explicitly mention that only such directory names are supported as train/val/test directories, to avoid confusion.
**Describe alternatives you've considered**
In the documentation, explicitly mention that only such directory names are supported as train/val/test directories without adding `val` to the above list.
**Additional context**
A question asked in the forum: [Loading an imagenet-style image dataset with train/val directories](https://discuss.huggingface.co/t/loading-an-imagenet-style-image-dataset-with-train-val-directories/21554)
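The directory-name matching described above can be sketched in pure Python (an illustrative sketch; `infer_split` and the list constant are stand-ins, not `datasets`' internal API):

```python
# Illustrative sketch of the split-name matching described above.
# DEFAULT_VALIDATION_NAMES mirrors the supported names; "val" is absent.
DEFAULT_VALIDATION_NAMES = ["validation", "valid", "dev"]

def infer_split(dir_name: str):
    """Map a directory name to a split name, or None if unrecognized."""
    if dir_name == "train":
        return "train"
    if dir_name in DEFAULT_VALIDATION_NAMES:
        return "validation"
    return None

print(infer_split("valid"))  # validation
print(infer_split("val"))    # None -- hence the missing split
```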
|
closed
|
https://github.com/huggingface/datasets/issues/4839
| 2022-08-12T13:26:00
| 2022-08-30T10:14:55
| 2022-08-30T10:14:55
|
{
"login": "akt42",
"id": 98386959,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
1,337,194,918
| 4,838
|
Fix documentation card of adv_glue dataset
|
Fix documentation card of adv_glue dataset.
|
closed
|
https://github.com/huggingface/datasets/pull/4838
| 2022-08-12T13:15:26
| 2022-08-15T10:17:14
| 2022-08-15T10:02:11
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,337,079,723
| 4,837
|
Add support for CSV metadata files to ImageFolder
|
Fix #4814
|
closed
|
https://github.com/huggingface/datasets/pull/4837
| 2022-08-12T11:19:18
| 2022-08-31T12:01:27
| 2022-08-31T11:59:07
|
{
"login": "mariosasko",
"id": 47462742,
"type": "User"
}
|
[] | true
|
[] |
1,337,067,632
| 4,836
|
Is it possible to pass multiple links to a split in load script?
|
**Is your feature request related to a problem? Please describe.**
I wanted to use a Python loading script in Hugging Face datasets that uses different sources of text (it's somehow a compilation of multiple datasets + my own dataset). Based on how `load_dataset` [works](https://huggingface.co/docs/datasets/loading), I assumed I could do something like below in my loading script:
```python
...
_URL = "MY_DATASET_URL/resolve/main/data/"
_URLS = {
"train": [
"FIRST_URL_TO.txt",
_URL + "train-00000-of-00001-676bfebbc8742592.parquet"
]
}
...
```
but when loading the dataset it raises the following error:
```python
File ~/.local/lib/python3.8/site-packages/datasets/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
702 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
703 if not downloaded_from_gcs:
--> 704 self._download_and_prepare(
705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
...
668 if isinstance(a, str):
669 # Force-cast str subclasses to str (issue #21127)
670 parts.append(str(a))
TypeError: expected str, bytes or os.PathLike object, not list
```
**Describe the solution you'd like**
I believe that since it's possible for `load_dataset` to get a list of URLs instead of just a single URL for the `train` split, it should be possible here too.
**Describe alternatives you've considered**
An alternative solution would be to download all needed datasets locally and `push_to_hub` them all, but since the datasets I'm talking about are huge it's not among my options.
**Additional context**
I think loading the `text` beside the `parquet` is a completely different issue, but I believe I can figure it out by proposing a config for my dataset to load each entry of `_URLS['train']` separately, either by `load_dataset("text", ...)` or `load_dataset("parquet", ...)`.
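That "load each entry separately" idea can be sketched as follows (a hedged sketch: plain lists stand in for the datasets that `load_dataset("text", ...)` / `load_dataset("parquet", ...)` would return, and `+` stands in for `datasets.concatenate_datasets`):

```python
# Stand-ins for the separately loaded sources mentioned above.
text_rows = [{"text": "line from the .txt source"}]
parquet_rows = [{"text": "row from the parquet shard"}]

# In practice this would be datasets.concatenate_datasets([...]).
train = text_rows + parquet_rows
print(len(train))  # 2
```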
|
open
|
https://github.com/huggingface/datasets/issues/4836
| 2022-08-12T11:06:11
| 2022-08-12T11:06:11
| null |
{
"login": "sadrasabouri",
"id": 43045767,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
1,336,994,835
| 4,835
|
Fix documentation card of ethos dataset
|
Fix documentation card of ethos dataset.
|
closed
|
https://github.com/huggingface/datasets/pull/4835
| 2022-08-12T09:51:06
| 2022-08-12T13:13:55
| 2022-08-12T12:59:39
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,336,993,511
| 4,834
|
Fix documentation card of recipe_nlg dataset
|
Fix documentation card of recipe_nlg dataset
|
closed
|
https://github.com/huggingface/datasets/pull/4834
| 2022-08-12T09:49:39
| 2022-08-12T11:28:18
| 2022-08-12T11:13:40
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,336,946,965
| 4,833
|
Fix missing tags in dataset cards
|
Fix missing tags in dataset cards:
- boolq
- break_data
- definite_pronoun_resolution
- emo
- kor_nli
- pg19
- quartz
- sciq
- squad_es
- wmt14
- wmt15
- wmt16
- wmt17
- wmt18
- wmt19
- wmt_t2t
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
|
closed
|
https://github.com/huggingface/datasets/pull/4833
| 2022-08-12T09:04:52
| 2022-09-22T14:41:23
| 2022-08-12T09:45:55
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,336,727,389
| 4,832
|
Fix tags in dataset cards
|
Fix wrong tags in dataset cards.
|
closed
|
https://github.com/huggingface/datasets/pull/4832
| 2022-08-12T04:11:23
| 2022-08-12T04:41:55
| 2022-08-12T04:27:24
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,336,199,643
| 4,831
|
Add oversampling strategies to interleave datasets
|
Hello everyone,
Here is a proposal to improve `interleave_datasets` function.
Following Issue #3064 and @lhoestq's [comment](https://github.com/huggingface/datasets/issues/3064#issuecomment-1022333385), I propose here code that performs oversampling when interleaving a `Dataset` list.
I have myself encountered this problem while trying to implement training on a multilingual dataset following a training strategy similar to that of the [XLSUM paper](https://arxiv.org/pdf/2106.13822.pdf), a multilingual abstractive summarization dataset where the multilingual training set is created by sampling from the languages following a smoothing strategy. The main idea is to sample languages that have a low number of samples more frequently than other languages.
As in Issue #3064, the current default strategy is an undersampling strategy, which stops as soon as a dataset runs out of samples. The new `all_exhausted` strategy stops building the new dataset as soon as all samples in each dataset have been added at least once.
How does it work in practice:
- if `probabilities` is `None` and the strategy is `all_exhausted`, it simply performs a round-robin interleaving that stops when the longest dataset is out of samples. Here the new dataset length will be $maxLengthDataset*nbDataset$.
- if `probabilities` is not `None` and the strategy is `all_exhausted`, it keeps track of the datasets which run out of samples but continues to add them to the new dataset, and stops as soon as every dataset has run out of samples at least once.
- In the other cases, it is supposed to keep the same behaviour as before, except that this time, when probabilities are specified, it really stops AS SOON AS a dataset is out of samples.
More on the last sentence:
The previous example of `interleave_datasets` was:
>>> from datasets import Dataset, interleave_datasets
>>> d1 = Dataset.from_dict({"a": [0, 1, 2]})
>>> d2 = Dataset.from_dict({"a": [10, 11, 12]})
>>> d3 = Dataset.from_dict({"a": [20, 21, 22]})
>>> dataset = interleave_datasets([d1, d2, d3])
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)
>>> dataset["a"]
[10, 0, 11, 1, 2, 20, 12]
With my implementation, `dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)` gives:
>>> dataset["a"]
[10, 0, 11, 1, 2]
because `d1` is already out of samples just after `2` is added.
Example of the results of applying the different strategies:
>>> from datasets import Dataset, interleave_datasets
>>> d1 = Dataset.from_dict({"a": [0, 1, 2]})
>>> d2 = Dataset.from_dict({"a": [10, 11, 12]})
>>> d3 = Dataset.from_dict({"a": [20, 21, 22]})
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42, stopping_strategy="all_exhausted")
>>> dataset["a"]
[10, 0, 11, 1, 2, 20, 12, 10, 0, 1, 2, 21, 0, 11, 1, 2, 0, 1, 12, 2, 10, 0, 22]
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)
>>> dataset["a"]
[10, 0, 11, 1, 2]
>>> dataset = interleave_datasets([d1, d2, d3])
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted")
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> d1 = Dataset.from_dict({"a": [0, 1, 2]})
>>> d2 = Dataset.from_dict({"a": [10, 11, 12, 13]})
>>> d3 = Dataset.from_dict({"a": [20, 21, 22, 23, 24]})
>>> dataset = interleave_datasets([d1, d2, d3])
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted")
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24]
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)
>>> dataset["a"]
[10, 0, 11, 1, 2]
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42, stopping_strategy="all_exhausted")
>>> dataset["a"]
[10, 0, 11, 1, 2, 20, 12, 13, ..., 0, 1, 2, 0, 24]
**Final note:** I've been using that code for a research project involving a large-scale multilingual dataset. One should be careful when using oversampling to avoid exploding the size of the dataset. For example, if a very large dataset has a low probability of being sampled, the final dataset may be several times the size of that large dataset.
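For context, the XLSUM-style smoothing mentioned above can be sketched as follows (a hedged example; `alpha` is the smoothing exponent from that sampling strategy, not a `datasets` parameter):

```python
def smoothed_probabilities(sizes, alpha=0.5):
    # p_i is proportional to n_i ** alpha: as alpha moves from 1 toward 0,
    # low-resource datasets are upsampled relative to their raw share.
    weights = [n ** alpha for n in sizes]
    total = sum(weights)
    return [w / total for w in weights]

probs = smoothed_probabilities([1000, 10], alpha=0.5)
# The small dataset's share grows well beyond its raw 10/1010 fraction.
print(probs)
```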
|
closed
|
https://github.com/huggingface/datasets/pull/4831
| 2022-08-11T16:24:51
| 2023-07-11T15:57:48
| 2022-08-24T16:46:07
|
{
"login": "ylacombe",
"id": 52246514,
"type": "User"
}
|
[] | true
|
[] |
1,336,177,937
| 4,830
|
Fix task tags in dataset cards
| null |
closed
|
https://github.com/huggingface/datasets/pull/4830
| 2022-08-11T16:06:06
| 2022-08-11T16:37:27
| 2022-08-11T16:23:00
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,336,068,068
| 4,829
|
Misalignment between card tag validation and docs
|
## Describe the bug
As pointed out in other issue: https://github.com/huggingface/datasets/pull/4827#discussion_r943536284
the validation of the dataset card tags is not aligned with its documentation: e.g.
- implementation: `license: List[str]`
- docs: `license: Union[str, List[str]]`
They should be aligned.
CC: @julien-c
|
open
|
https://github.com/huggingface/datasets/issues/4829
| 2022-08-11T14:44:45
| 2023-07-21T15:38:02
| null |
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,336,040,168
| 4,828
|
Support PIL Image objects in `add_item`/`add_column`
|
Fix #4796
PS: We should also improve the type inference in `OptimizedTypeSequence` to make it possible to also infer the complex types (only `Image` currently) in nested arrays (e.g. `[[pil_image], [pil_image, pil_image]]` or `[{"img": pil_image}`]), but I plan to address this in a separate PR.
|
open
|
https://github.com/huggingface/datasets/pull/4828
| 2022-08-11T14:25:45
| 2023-09-24T10:15:33
| null |
{
"login": "mariosasko",
"id": 47462742,
"type": "User"
}
|
[] | true
|
[] |
1,335,994,312
| 4,827
|
Add license metadata to pg19
|
As reported over email by Roy Rijkers
|
closed
|
https://github.com/huggingface/datasets/pull/4827
| 2022-08-11T13:52:20
| 2022-08-11T15:01:03
| 2022-08-11T14:46:38
|
{
"login": "julien-c",
"id": 326577,
"type": "User"
}
|
[] | true
|
[] |
1,335,987,583
| 4,826
|
Fix language tags in dataset cards
|
Fix language tags in all dataset cards, so that they are validated (aligned with our `languages.json` resource).
|
closed
|
https://github.com/huggingface/datasets/pull/4826
| 2022-08-11T13:47:14
| 2022-08-11T14:17:48
| 2022-08-11T14:03:12
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,335,856,882
| 4,825
|
[Windows] Fix Access Denied when using os.rename()
|
In this PR, we are including an additional step when `os.rename()` raises a PermissionError.
Basically, we will use `shutil.move()` on the temp files.
Fix #2937
|
closed
|
https://github.com/huggingface/datasets/pull/4825
| 2022-08-11T11:57:15
| 2022-08-24T13:09:07
| 2022-08-24T13:09:07
|
{
"login": "DougTrajano",
"id": 8703022,
"type": "User"
}
|
[] | true
|
[] |
1,335,826,639
| 4,824
|
Fix titles in dataset cards
|
Fix all the titles in the dataset cards, so that they conform to the required format.
|
closed
|
https://github.com/huggingface/datasets/pull/4824
| 2022-08-11T11:27:48
| 2022-08-11T13:46:11
| 2022-08-11T12:56:49
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,335,687,033
| 4,823
|
Update data URL in mkqa dataset
|
Update data URL in mkqa dataset.
Fix #4817.
|
closed
|
https://github.com/huggingface/datasets/pull/4823
| 2022-08-11T09:16:13
| 2022-08-11T09:51:50
| 2022-08-11T09:37:52
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,335,664,588
| 4,821
|
Fix train_test_split docs
|
I saw that `stratify` is added to the `train_test_split` method as per #4322, hence the docs can be updated.
|
closed
|
https://github.com/huggingface/datasets/pull/4821
| 2022-08-11T08:55:45
| 2022-08-11T09:59:29
| 2022-08-11T09:45:40
|
{
"login": "NielsRogge",
"id": 48327001,
"type": "User"
}
|
[] | true
|
[] |
1,335,117,132
| 4,820
|
Terminating: fork() called from a process already using GNU OpenMP, this is unsafe.
|
Hi, when I try to run the `prepare_dataset` function in [fine-tuning ASR tutorial 4](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb), I get this error:
`Terminating: fork() called from a process already using GNU OpenMP, this is unsafe.`
There are no other logs available, so I have no clue what is causing it.
```
def prepare_dataset(batch):
audio = batch["path"]
# batched output is "un-batched"
batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
batch["input_length"] = len(batch["input_values"])
with processor.as_target_processor():
batch["labels"] = processor(batch["text"]).input_ids
return batch
data = data.map(prepare_dataset, remove_columns=data.column_names["train"],
num_proc=4)
```
## Actual results
There is no traceback except
`Terminating: fork() called from a process already using GNU OpenMP, this is unsafe.`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
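A workaround that often helps with this class of error (hedged; not confirmed for this exact setup) is to avoid calling `fork()` after OpenMP has initialized, e.g. by dropping `num_proc` from the `map` call or by switching the multiprocessing start method:

```python
import multiprocessing as mp

# "spawn" starts fresh interpreters instead of forking the current
# process, sidestepping the GNU OpenMP fork() abort.
mp.set_start_method("spawn", force=True)
print(mp.get_start_method())  # spawn
```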
|
closed
|
https://github.com/huggingface/datasets/issues/4820
| 2022-08-10T19:42:33
| 2022-08-10T19:53:10
| 2022-08-10T19:53:10
|
{
"login": "talhaanwarch",
"id": 37379131,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,335,064,449
| 4,819
|
Add missing language tags to resources
|
Add missing language tags to resources, required by existing datasets on GitHub.
|
closed
|
https://github.com/huggingface/datasets/pull/4819
| 2022-08-10T19:06:42
| 2022-08-10T19:45:49
| 2022-08-10T19:32:15
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,334,941,810
| 4,818
|
Add add cc-by-sa-2.5 license tag
|
- [ ] add it to moon-landing
- [ ] add it to hub-docs
|
closed
|
https://github.com/huggingface/datasets/pull/4818
| 2022-08-10T17:18:39
| 2022-10-04T13:47:24
| 2022-10-04T13:47:24
|
{
"login": "polinaeterna",
"id": 16348744,
"type": "User"
}
|
[] | true
|
[] |
1,334,572,163
| 4,817
|
Outdated Link for mkqa Dataset
|
## Describe the bug
The URL used to download the mkqa dataset is outdated. It seems the URL to download the dataset is currently https://github.com/apple/ml-mkqa/blob/main/dataset/mkqa.jsonl.gz instead of https://github.com/apple/ml-mkqa/raw/master/dataset/mkqa.jsonl.gz (master branch has been renamed to main).
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("mkqa")
```
## Expected results
downloads the dataset
## Actual results
```python
Downloading builder script:
4.79k/? [00:00<00:00, 201kB/s]
Downloading metadata:
13.2k/? [00:00<00:00, 504kB/s]
Downloading and preparing dataset mkqa/mkqa (download: 11.35 MiB, generated: 34.29 MiB, post-processed: Unknown size, total: 45.65 MiB) to /home/lhr/.cache/huggingface/datasets/mkqa/mkqa/1.0.0/5401489c674c81257cf563417aaaa5de2c7e26a1090ce9b10eb0404f10003d4d...
Downloading data files: 0%
0/1 [00:00<?, ?it/s]
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Input In [3], in <cell line: 3>()
1 from datasets import load_dataset
----> 3 dataset = load_dataset("mkqa")
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/load.py:1746, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1743 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1745 # Download and prepare data
-> 1746 builder_instance.download_and_prepare(
1747 download_config=download_config,
1748 download_mode=download_mode,
1749 ignore_verifications=ignore_verifications,
1750 try_from_hf_gcs=try_from_hf_gcs,
1751 use_auth_token=use_auth_token,
1752 )
1754 # Build dataset for splits
1755 keep_in_memory = (
1756 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1757 )
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
702 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
703 if not downloaded_from_gcs:
--> 704 self._download_and_prepare(
705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
706 )
707 # Sync info
708 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/builder.py:1227, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos)
1226 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/builder.py:771, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
769 split_dict = SplitDict(dataset_name=self.name)
770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
773 # Checksums verification
774 if verify_infos and dl_manager.record_checksums:
File ~/.cache/huggingface/modules/datasets_modules/datasets/mkqa/5401489c674c81257cf563417aaaa5de2c7e26a1090ce9b10eb0404f10003d4d/mkqa.py:130, in Mkqa._split_generators(self, dl_manager)
128 # download and extract URLs
129 urls_to_download = _URLS
--> 130 downloaded_files = dl_manager.download_and_extract(urls_to_download)
132 return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]})]
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/download/download_manager.py:431, in DownloadManager.download_and_extract(self, url_or_urls)
415 def download_and_extract(self, url_or_urls):
416 """Download and extract given url_or_urls.
417
418 Is roughly equivalent to:
(...)
429 extracted_path(s): `str`, extracted paths of given URL(s).
430 """
--> 431 return self.extract(self.download(url_or_urls))
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/download/download_manager.py:309, in DownloadManager.download(self, url_or_urls)
306 download_func = partial(self._download, download_config=download_config)
308 start_time = datetime.now()
--> 309 downloaded_path_or_paths = map_nested(
310 download_func,
311 url_or_urls,
312 map_tuple=True,
313 num_proc=download_config.num_proc,
314 disable_tqdm=not is_progress_bar_enabled(),
315 desc="Downloading data files",
316 )
317 duration = datetime.now() - start_time
318 logger.info(f"Downloading took {duration.total_seconds() // 60} min")
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py:393, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)
391 num_proc = 1
392 if num_proc <= 1 or len(iterable) <= num_proc:
--> 393 mapped = [
394 _single_map_nested((function, obj, types, None, True, None))
395 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
396 ]
397 else:
398 split_kwds = [] # We organize the splits ourselve (contiguous splits)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py:394, in <listcomp>(.0)
391 num_proc = 1
392 if num_proc <= 1 or len(iterable) <= num_proc:
393 mapped = [
--> 394 _single_map_nested((function, obj, types, None, True, None))
395 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
396 ]
397 else:
398 split_kwds = [] # We organize the splits ourselve (contiguous splits)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py:330, in _single_map_nested(args)
328 # Singleton first to spare some computation
329 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 330 return function(data_struct)
332 # Reduce logging to keep things readable in multiprocessing with tqdm
333 if rank is not None and logging.get_verbosity() < logging.WARNING:
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/download/download_manager.py:335, in DownloadManager._download(self, url_or_filename, download_config)
332 if is_relative_path(url_or_filename):
333 # append the relative path to the base_path
334 url_or_filename = url_or_path_join(self._base_path, url_or_filename)
--> 335 return cached_path(url_or_filename, download_config=download_config)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py:185, in cached_path(url_or_filename, download_config, **download_kwargs)
181 url_or_filename = str(url_or_filename)
183 if is_remote_url(url_or_filename):
184 # URL, so get it from the cache (downloading if necessary)
--> 185 output_path = get_from_cache(
186 url_or_filename,
187 cache_dir=cache_dir,
188 force_download=download_config.force_download,
189 proxies=download_config.proxies,
190 resume_download=download_config.resume_download,
191 user_agent=download_config.user_agent,
192 local_files_only=download_config.local_files_only,
193 use_etag=download_config.use_etag,
194 max_retries=download_config.max_retries,
195 use_auth_token=download_config.use_auth_token,
196 ignore_url_params=download_config.ignore_url_params,
197 download_desc=download_config.download_desc,
198 )
199 elif os.path.exists(url_or_filename):
200 # File, and it exists.
201 output_path = url_or_filename
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py:530, in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc)
525 raise FileNotFoundError(
526 f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been"
527 " disabled. To enable file online look-ups, set 'local_files_only' to False."
528 )
529 elif response is not None and response.status_code == 404:
--> 530 raise FileNotFoundError(f"Couldn't find file at {url}")
531 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
532 if head_error is not None:
FileNotFoundError: Couldn't find file at https://github.com/apple/ml-mkqa/raw/master/dataset/mkqa.jsonl.gz
```
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 9.0.0
- Pandas version: 1.4.2
|
closed
|
https://github.com/huggingface/datasets/issues/4817
| 2022-08-10T12:45:45
| 2022-08-11T09:37:52
| 2022-08-11T09:37:52
|
{
"login": "liaeh",
"id": 52380283,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,334,099,454
| 4,816
|
Update version of opus_paracrawl dataset
|
This PR updates OPUS ParaCrawl from 7.1 to 9 version.
Fix #4815.
|
closed
|
https://github.com/huggingface/datasets/pull/4816
| 2022-08-10T05:39:44
| 2022-08-12T14:32:29
| 2022-08-12T14:17:56
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,334,078,303
| 4,815
|
Outdated loading script for OPUS ParaCrawl dataset
|
## Describe the bug
Our loading script for OPUS ParaCrawl loads its 7.1 version. Current existing version is 9.
|
closed
|
https://github.com/huggingface/datasets/issues/4815
| 2022-08-10T05:12:34
| 2022-08-12T14:17:57
| 2022-08-12T14:17:57
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[
{
"name": "dataset bug",
"color": "2edb81"
}
] | false
|
[] |
1,333,356,230
| 4,814
|
Support CSV as metadata file format in AudioFolder/ImageFolder
|
Requested here: https://discuss.huggingface.co/t/how-to-structure-an-image-dataset-repo-using-the-image-folder-approach/21004. CSV is also used in AutoTrain for specifying metadata in image datasets.
|
closed
|
https://github.com/huggingface/datasets/issues/4814
| 2022-08-09T14:36:49
| 2022-08-31T11:59:08
| 2022-08-31T11:59:08
|
{
"login": "mariosasko",
"id": 47462742,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
1,333,287,756
| 4,813
|
Fix loading example in opus dataset cards
|
This PR:
- fixes the examples to load the datasets, with the corrected dataset name, in their dataset cards for:
- opus_dgt
- opus_paracrawl
- opus_wikipedia
- fixes their dataset cards with the missing required information: title, data instances/fields/splits
- enumerates the supported languages
- adds a missing citation reference for opus_wikipedia
Related to:
- #4806
|
closed
|
https://github.com/huggingface/datasets/pull/4813
| 2022-08-09T13:47:38
| 2022-08-09T17:52:15
| 2022-08-09T17:38:18
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,333,051,730
| 4,812
|
Fix bug in function validate_type for Python >= 3.9
|
Fix `validate_type` function, so that it uses `get_origin` instead. This makes the function forward compatible.
This fixes #4811 because:
```python
In [4]: typing.Optional[str]
Out[4]: typing.Optional[str]
In [5]: get_origin(typing.Optional[str])
Out[5]: typing.Union
```
Fix #4811.
|
closed
|
https://github.com/huggingface/datasets/pull/4812
| 2022-08-09T10:32:42
| 2022-08-12T13:41:23
| 2022-08-12T13:27:04
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,333,043,421
| 4,811
|
Bug in function validate_type for Python >= 3.9
|
## Describe the bug
The function `validate_type` assumes that the type `typing.Optional[str]` is automatically transformed to `typing.Union[str, NoneType]`.
```python
In [4]: typing.Optional[str]
Out[4]: typing.Union[str, NoneType]
```
However, this is not the case for Python 3.9:
```python
In [3]: typing.Optional[str]
Out[3]: typing.Optional[str]
```
|
closed
|
https://github.com/huggingface/datasets/issues/4811
| 2022-08-09T10:25:21
| 2022-08-12T13:27:05
| 2022-08-12T13:27:05
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,333,038,702
| 4,810
|
Add description to hellaswag dataset
| null |
closed
|
https://github.com/huggingface/datasets/pull/4810
| 2022-08-09T10:21:14
| 2022-09-23T11:35:38
| 2022-09-23T11:33:44
|
{
"login": "julien-c",
"id": 326577,
"type": "User"
}
|
[
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true
|
[] |
1,332,842,747
| 4,809
|
Complete the mlqa dataset card
|
I fixed the issue #4808
Details of PR:
- Added languages included in the dataset.
- Added task id and task category.
- Updated the citation information.
Fix #4808.
|
closed
|
https://github.com/huggingface/datasets/pull/4809
| 2022-08-09T07:38:06
| 2022-08-09T16:26:21
| 2022-08-09T13:26:43
|
{
"login": "el2e10",
"id": 7940237,
"type": "User"
}
|
[] | true
|
[] |
1,332,840,217
| 4,808
|
Add more information to the dataset card of mlqa dataset
| null |
closed
|
https://github.com/huggingface/datasets/issues/4808
| 2022-08-09T07:35:42
| 2022-08-09T13:33:23
| 2022-08-09T13:33:23
|
{
"login": "el2e10",
"id": 7940237,
"type": "User"
}
|
[] | false
|
[] |
1,332,784,110
| 4,807
|
document fix in opus_gnome dataset
|
I fixed issue #4805.
I changed `"gnome"` to `"opus_gnome"` in [README.md](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary).
|
closed
|
https://github.com/huggingface/datasets/pull/4807
| 2022-08-09T06:38:13
| 2022-08-09T07:28:03
| 2022-08-09T07:28:03
|
{
"login": "gojiteji",
"id": 38291975,
"type": "User"
}
|
[] | true
|
[] |
1,332,664,038
| 4,806
|
Fix opus_gnome dataset card
|
I fixed issue #4805.
I changed `"gnome"` to `"opus_gnome"` in [README.md](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary).
Fix #4805
|
closed
|
https://github.com/huggingface/datasets/pull/4806
| 2022-08-09T03:40:15
| 2022-08-09T12:06:46
| 2022-08-09T11:52:04
|
{
"login": "gojiteji",
"id": 38291975,
"type": "User"
}
|
[] | true
|
[] |
1,332,653,531
| 4,805
|
Wrong example in opus_gnome dataset card
|
## Describe the bug
I found that [the example on the opus_gnome dataset](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary) doesn't work.
## Steps to reproduce the bug
```python
load_dataset("gnome", lang1="it", lang2="pl")
```
`"gnome"` should be `"opus_gnome"`
## Expected results
```bash
100%
1/1 [00:00<00:00, 42.09it/s]
DatasetDict({
train: Dataset({
features: ['id', 'translation'],
num_rows: 8368
})
})
```
## Actual results
```bash
Couldn't find 'gnome' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/main/datasets/gnome/gnome.py
```
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.27
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
|
closed
|
https://github.com/huggingface/datasets/issues/4805
| 2022-08-09T03:21:27
| 2022-08-09T11:52:05
| 2022-08-09T11:52:05
|
{
"login": "gojiteji",
"id": 38291975,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,332,630,358
| 4,804
|
Streaming a dataset with concatenated splits raises an error
|
## Describe the bug
Streaming a dataset with concatenated splits raises an error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# no error
repo = "nateraw/ade20k-tiny"
dataset = load_dataset(repo, split="train+validation")
```
```python
from datasets import load_dataset
# error
repo = "nateraw/ade20k-tiny"
dataset = load_dataset(repo, split="train+validation", streaming=True)
```
```sh
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-4-a6ae02d63899>](https://localhost:8080/#) in <module>()
3 # error
4 repo = "nateraw/ade20k-tiny"
----> 5 dataset = load_dataset(repo, split="train+validation", streaming=True)
1 frames
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_streaming_dataset(self, split, base_path)
1030 splits_generator = splits_generators[split]
1031 else:
-> 1032 raise ValueError(f"Bad split: {split}. Available splits: {list(splits_generators)}")
1033
1034 # Create a dataset for each of the given splits
ValueError: Bad split: train+validation. Available splits: ['validation', 'train']
```
[Colab](https://colab.research.google.com/drive/1wMj08_0bym9jnGgByib4lsBPu8NCZBG9?usp=sharing)
## Expected results
The dataset either loads successfully, or an informative error is thrown saying that concatenated splits are not supported in streaming mode.
## Actual results
See the traceback above.
## Environment info
- `datasets` version: 2.4.0
- Platform: Windows-10-10.0.22000-SP0 (windows11 x64)
- Python version: 3.9.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
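Until concatenated splits are supported in streaming mode, one manual workaround is to stream each split separately and chain them with `itertools.chain`. A minimal sketch, with toy generators standing in for the streaming splits (with real data, these would be the `IterableDataset` objects returned by `load_dataset(repo, split="train", streaming=True)` and `load_dataset(repo, split="validation", streaming=True)`):

```python
from itertools import chain

# Toy generators standing in for the two streaming splits; with real data
# these would be the IterableDataset objects returned by, e.g.,
#   load_dataset("nateraw/ade20k-tiny", split="train", streaming=True)
train = ({"id": i, "split": "train"} for i in range(3))
validation = ({"id": i, "split": "validation"} for i in range(2))

# chain() yields all train examples first, then all validation examples,
# matching the order of "train+validation"
combined = list(chain(train, validation))
print(len(combined))  # 5
```

Newer versions of `datasets` may also support concatenating iterable datasets directly via `concatenate_datasets`.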
|
open
|
https://github.com/huggingface/datasets/issues/4804
| 2022-08-09T02:41:56
| 2023-11-25T14:52:09
| null |
{
"login": "Bing-su",
"id": 37621276,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,332,079,562
| 4,803
|
Support `pipeline` argument in inspect.py functions
|
**Is your feature request related to a problem? Please describe.**
The `wikipedia` dataset requires a `pipeline` argument to build the list of splits:
https://huggingface.co/datasets/wikipedia/blob/main/wikipedia.py#L937
But this is currently not supported in `get_dataset_config_info`:
https://github.com/huggingface/datasets/blob/main/src/datasets/inspect.py#L373-L375
which is called by other functions, e.g. `get_dataset_split_names`.
**Additional context**
The dataset viewer is not working out-of-the-box on `wikipedia` for this reason:
https://huggingface.co/datasets/wikipedia/viewer
<img width="637" alt="Capture d’écran 2022-08-08 à 12 01 16" src="https://user-images.githubusercontent.com/1676121/183461838-5330783b-0269-4ba7-a999-314cde2023d8.png">
|
open
|
https://github.com/huggingface/datasets/issues/4803
| 2022-08-08T16:01:24
| 2023-09-25T12:21:35
| null |
{
"login": "severo",
"id": 1676121,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
1,331,676,691
| 4,802
|
`with_format` behavior is inconsistent on different datasets
|
## Describe the bug
I found a case where `with_format` does not transform the dataset to the requested format.
## Steps to reproduce the bug
Run:
```python
from transformers import AutoTokenizer, AutoFeatureExtractor
from datasets import load_dataset
raw = load_dataset("glue", "sst2", split="train")
raw = raw.select(range(100))
tokenizer = AutoTokenizer.from_pretrained("philschmid/tiny-bert-sst2-distilled")
def preprocess_func(examples):
return tokenizer(examples["sentence"], padding=True, max_length=256, truncation=True)
data = raw.map(preprocess_func, batched=True)
print(type(data[0]["input_ids"]))
data = data.with_format("torch", columns=["input_ids"])
print(type(data[0]["input_ids"]))
```
printing as expected:
```python
<class 'list'>
<class 'torch.Tensor'>
```
Then run:
```python
raw = load_dataset("beans", split="train")
raw = raw.select(range(100))
preprocessor = AutoFeatureExtractor.from_pretrained("nateraw/vit-base-beans")
def preprocess_func(examples):
imgs = [img.convert("RGB") for img in examples["image"]]
return preprocessor(imgs)
data = raw.map(preprocess_func, batched=True)
print(type(data[0]["pixel_values"]))
data = data.with_format("torch", columns=["pixel_values"])
print(type(data[0]["pixel_values"]))
```
printing, unexpectedly:
```python
<class 'list'>
<class 'list'>
```
## Expected results
`with_format` should convert the selected columns to the requested format, but it does not.
## Actual results
`type(data[0]["pixel_values"])` should be `torch.Tensor` in the example above
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: dev version, commit 44af3fafb527302282f6b6507b952de7435f0979
- Platform: Linux
- Python version: 3.9.12
- PyArrow version: 7.0.0
|
open
|
https://github.com/huggingface/datasets/issues/4802
| 2022-08-08T10:41:34
| 2022-08-09T16:49:09
| null |
{
"login": "fxmarty",
"id": 9808326,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |