Columns: id (int64, 599M to 3.26B) | number (int64, 1 to 7.7k) | title (string, 1 to 290 chars) | body (string, 0 to 228k chars, nullable) | state (string, 2 classes) | html_url (string, 46 to 51 chars) | created_at (timestamp[s], 2020-04-14 10:18:02 to 2025-07-23 08:04:53) | updated_at (timestamp[s], 2020-04-27 16:04:17 to 2025-07-23 18:53:44) | closed_at (timestamp[s], 2020-04-14 12:01:40 to 2025-07-23 16:44:42, nullable) | user (dict) | labels (list, 0 to 4 items) | is_pull_request (bool, 2 classes) | comments (list, 0 items)
1,209,429,743
| 4,185
|
Librispeech documentation, clarification on format
|
https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53
> Note that in order to limit the required storage for preparing this dataset, the audio
> is stored in the .flac format and is not converted to a float32 array. To convert the audio
> file to a float32 array, please make use of the `.map()` function as follows:
>
> ```python
> import soundfile as sf
>
> def map_to_array(batch):
>     speech_array, _ = sf.read(batch["file"])
>     batch["speech"] = speech_array
>     return batch
>
> dataset = dataset.map(map_to_array, remove_columns=["file"])
> ```
Is this still true?
In my case, `ds["train.100"]` returns:
```
Dataset({
features: ['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'],
num_rows: 28539
})
```
and taking the first instance yields:
```
{'file': '374-180298-0000.flac',
'audio': {'path': '374-180298-0000.flac',
'array': array([ 7.01904297e-04, 7.32421875e-04, 7.32421875e-04, ...,
-2.74658203e-04, -1.83105469e-04, -3.05175781e-05]),
'sampling_rate': 16000},
'text': 'CHAPTER SIXTEEN I MIGHT HAVE TOLD YOU OF THE BEGINNING OF THIS LIAISON IN A FEW LINES BUT I WANTED YOU TO SEE EVERY STEP BY WHICH WE CAME I TO AGREE TO WHATEVER MARGUERITE WISHED',
'speaker_id': 374,
'chapter_id': 180298,
'id': '374-180298-0000'}
```
The `audio` `array` seems to be already decoded. So the convert/decode code mentioned in the doc is outdated?
But I wonder, is it actually stored as flac on disk, and the decoding is done on-the-fly? Or was it decoded already during the preparation and is stored as raw samples on disk?
Note that I also used `datasets.load_dataset("librispeech_asr", "clean").save_to_disk(...)` and then `datasets.load_from_disk(...)` in this example. Does this change anything on how it is stored on disk?
A small related question: Actually I would prefer to even store it as mp3 or ogg on disk. Is this easy to convert?
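One way to answer the "stored as flac or raw samples?" part locally is to peek at the leading bytes of the cached file: FLAC files begin with the magic `fLaC`. A small diagnostic sketch (the container mapping is illustrative, not exhaustive):

```python
def audio_container(path):
    """Guess the on-disk container from magic bytes:
    b'fLaC' -> flac, b'OggS' -> ogg, b'RIFF' -> wav, anything else -> unknown."""
    with open(path, "rb") as f:
        head = f.read(4)
    return {b"fLaC": "flac", b"OggS": "ogg", b"RIFF": "wav"}.get(head, "unknown")
```

Running this on the file under `~/.cache/huggingface/` (or the `save_to_disk` output) tells you whether decoding happens on the fly.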
|
open
|
https://github.com/huggingface/datasets/issues/4185
| 2022-04-20T09:35:55
| 2022-04-21T11:00:53
| null |
{
"login": "albertz",
"id": 59132,
"type": "User"
}
|
[] | false
|
[] |
1,208,592,669
| 4,184
|
[Librispeech] Add 'all' config
|
Add `"all"` config to Librispeech
Closes #4179
|
closed
|
https://github.com/huggingface/datasets/pull/4184
| 2022-04-19T16:27:56
| 2024-08-02T05:03:04
| 2022-04-22T09:45:17
|
{
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
}
|
[] | true
|
[] |
1,208,449,335
| 4,183
|
Document librispeech configs
|
Added an example of how to load one config or the other
|
closed
|
https://github.com/huggingface/datasets/pull/4183
| 2022-04-19T14:26:59
| 2023-09-24T10:02:24
| 2022-04-19T15:15:20
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
1,208,285,235
| 4,182
|
Zenodo.org download is not responding
|
## Describe the bug
The source `_DOWNLOAD_URL` pointing at zenodo.org does not respond.
`_DOWNLOAD_URL = "https://zenodo.org/record/2787612/files/SICK.zip?download=1"`
Other datasets also use zenodo.org to store data and they cannot be downloaded either.
It would be better to use a more reliable way to store the original data, like an S3 bucket.
## Steps to reproduce the bug
```python
load_dataset("sick")
```
## Expected results
Dataset should be downloaded.
## Actual results
ConnectionError: Couldn't reach https://zenodo.org/record/2787612/files/SICK.zip?download=1 (ReadTimeout(ReadTimeoutError("HTTPSConnectionPool(host='zenodo.org', port=443): Read timed out. (read timeout=100)")))
## Environment info
- `datasets` version: 2.1.0
- Platform: Darwin-21.4.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
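Until the host recovers, a client-side workaround is to retry the download with backoff. A generic sketch (not part of the `datasets` API; the function names here are illustrative):

```python
import time

def with_retries(fn, retries=3, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on any exception.
    Useful for flaky hosts like a timing-out zenodo.org endpoint."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: surface the original error
            time.sleep(base_delay * 2 ** attempt)
```

Wrapping `load_dataset("sick")` in such a helper only helps for transient timeouts, not a fully down host.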
|
closed
|
https://github.com/huggingface/datasets/issues/4182
| 2022-04-19T12:26:57
| 2022-04-20T07:11:05
| 2022-04-20T07:11:05
|
{
"login": "dkajtoch",
"id": 32985207,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,208,194,805
| 4,181
|
Support streaming FLEURS dataset
|
## Dataset viewer issue for 'google/fleurs'
https://huggingface.co/datasets/google/fleurs
```
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.
```
Am I the one who added this dataset? Yes
Can I fix this somehow in the script? @lhoestq @severo
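The pattern the error message asks for, reading archive members sequentially instead of extracting the whole TAR, can be sketched with the stdlib as a stand-in for the real `dl_manager.iter_archive`:

```python
import tarfile

def iter_archive(tar_path):
    """Yield (member_name, file_object) pairs from a tar archive without
    extracting it to disk, mimicking dl_manager.iter_archive's interface."""
    with tarfile.open(tar_path, "r:*") as tar:
        for member in tar:
            if member.isfile():
                yield member.name, tar.extractfile(member)
```

A `_generate_examples` can then consume the pairs in order, which is what makes streaming mode possible.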
|
closed
|
https://github.com/huggingface/datasets/issues/4181
| 2022-04-19T11:09:56
| 2022-07-25T11:44:02
| 2022-07-25T11:44:02
|
{
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
}
|
[
{
"name": "dataset bug",
"color": "2edb81"
}
] | false
|
[] |
1,208,042,320
| 4,180
|
Add some iteration method on a dataset column (specific for inference)
|
**Is your feature request related to a problem? Please describe.**
Currently, `dataset["audio"]` will load EVERY element in the dataset in RAM, which can be quite big for an audio dataset.
Having an iterator (or sequence) type of object, would make inference with `transformers` 's `pipeline` easier to use and not so memory hungry.
**Describe the solution you'd like**
For a non breaking change:
```python
for audio in dataset.iterate("audio"):
    # audio == {"array": np.array(...), "sampling_rate": ...}
    ...
```
For a breaking change solution (not necessary), changing the type of `dataset["audio"]` to a sequence type so that
```python
pipe = pipeline(model="...")
for out in pipe(dataset["audio"]):
    # out == {"text": ...}
    ...
```
could work
**Describe alternatives you've considered**
```python
def iterate(dataset, key):
    for item in dataset:
        yield item[key]

for out in pipeline(iterate(dataset, "audio")):
    # out == {"array": ...}
    ...
```
This works but requires the helper function which feels slightly clunky.
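For the breaking-change variant, a sequence-like lazy view over one column is one possible shape. A minimal sketch, assuming only `len()` and integer indexing on the dataset; the class name is hypothetical, not a `datasets` API:

```python
class ColumnView:
    """Lazy, sequence-like view over one column: items are fetched one at a
    time on access instead of materializing the whole column in RAM."""

    def __init__(self, dataset, key):
        self.dataset = dataset
        self.key = key

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, i):
        # Fetch a single row, then project out the column of interest.
        return self.dataset[i][self.key]

    def __iter__(self):
        for i in range(len(self)):
            yield self[i]
```

Because it supports `len()` and indexing, such an object could be fed directly to `pipeline(...)` like any sequence.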
**Additional context**
The context is actually to showcase better integration between `pipeline` and `datasets` in the Quicktour demo: https://github.com/huggingface/transformers/pull/16723/files
@lhoestq
|
closed
|
https://github.com/huggingface/datasets/issues/4180
| 2022-04-19T09:15:45
| 2025-06-17T13:08:50
| 2025-06-17T13:08:50
|
{
"login": "Narsil",
"id": 204321,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
1,208,001,118
| 4,179
|
Dataset librispeech_asr fails to load
|
## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc](https://huggingface.co/datasets/librispeech_asr), it says it has two configurations (clean and other).
However, the `datasets` documentation says that not specifying `split` should just load the whole dataset, which is what I want.
Also, in the case of this specific dataset, that is the standard usage in the community: any publication reporting results on Librispeech uses the whole train set for training.
## Actual results
```
...
File "/home/az/.cache/huggingface/modules/datasets_modules/datasets/librispeech_asr/1f4602f6b5fed8d3ab3e3382783173f2e12d9877e98775e34d7780881175096c/librispeech_asr.py", line 119, in LibrispeechASR._split_generators
line: archive_path = dl_manager.download(_DL_URLS[self.config.name])
locals:
archive_path = <not found>
dl_manager = <local> <datasets.utils.download_manager.DownloadManager object at 0x7fc07b426160>
dl_manager.download = <local> <bound method DownloadManager.download of <datasets.utils.download_manager.DownloadManager object at 0x7fc07b426160>>
_DL_URLS = <global> {'clean': {'dev': 'http://www.openslr.org/resources/12/dev-clean.tar.gz', 'test': 'http://www.openslr.org/resources/12/test-clean.tar.gz', 'train.100': 'http://www.openslr.org/resources/12/train-clean-100.tar.gz', 'train.360': 'http://www.openslr.org/resources/12/train-clean-360.tar.gz'}, 'other'...
self = <local> <datasets_modules.datasets.librispeech_asr.1f4602f6b5fed8d3ab3e3382783173f2e12d9877e98775e34d7780881175096c.librispeech_asr.LibrispeechASR object at 0x7fc12a633310>
self.config = <local> BuilderConfig(name='default', version=0.0.0, data_dir='/home/az/i6/setups/2022-03-20--sis/work/i6_core/datasets/huggingface/DownloadAndPrepareHuggingFaceDatasetJob.TV6Nwm6dFReF/output/data_dir', data_files=None, description=None)
self.config.name = <local> 'default', len = 7
KeyError: 'default'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.1.0
- Platform: Linux-5.4.0-107-generic-x86_64-with-glibc2.31
- Python version: 3.9.9
- PyArrow version: 6.0.1
- Pandas version: 1.4.2
|
closed
|
https://github.com/huggingface/datasets/issues/4179
| 2022-04-19T08:45:48
| 2022-07-27T16:10:00
| 2022-07-27T16:10:00
|
{
"login": "albertz",
"id": 59132,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,207,787,073
| 4,178
|
[feat] Add ImageNet dataset
|
To use the dataset, download the tar file
[imagenet_object_localization_patched2019.tar.gz](https://www.kaggle.com/competitions/imagenet-object-localization-challenge/data?select=imagenet_object_localization_patched2019.tar.gz) from Kaggle and then point the datasets library at it:
```py
from datasets import load_dataset

dataset = load_dataset(
    "imagenet",
    data_dir="/path/to/imagenet_object_localization_patched2019.tar.gz",
)
```
Currently train and validation splits are supported.
|
closed
|
https://github.com/huggingface/datasets/pull/4178
| 2022-04-19T06:01:35
| 2022-04-29T21:43:59
| 2022-04-29T21:37:08
|
{
"login": "apsdehal",
"id": 3616806,
"type": "User"
}
|
[] | true
|
[] |
1,207,535,920
| 4,177
|
Adding missing subsets to the `SemEval-2018 Task 1` dataset
|
This dataset for the [1st task of SemEval-2018](https://competitions.codalab.org/competitions/17751) competition was missing all subtasks except for subtask 5. I added another two subtasks (subtask 1 and 2), which are each comprised of 12 additional data subsets: for each language in En, Es, Ar, there are 4 datasets, broken down by emotions (anger, fear, joy, sadness).
## Remaining questions
I wasn't able to find any documentation about how one should make PRs to modify datasets. Because of that, I just did my best to integrate the new data into the code, and tested locally that this worked. I'm sorry if I'm not respecting your contributing guidelines – if they are documented somewhere, I'd appreciate if you could send a pointer!
Not sure how `dataset_infos.json` and `dummy` should be updated. My understanding is that they were automatically generated at the time of the original dataset creation?
|
open
|
https://github.com/huggingface/datasets/pull/4177
| 2022-04-18T22:59:30
| 2022-10-05T10:38:16
| null |
{
"login": "micahcarroll",
"id": 11460267,
"type": "User"
}
|
[
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true
|
[] |
1,206,515,563
| 4,176
|
Very slow between two operations
|
Hello, in the processing stage I use two operations. The first one, map + filter, is very fast and uses all cores, while the second step is very slow and does not use all cores.
Also, there is a significant lag between them. Am I missing something?
```
raw_datasets = raw_datasets.map(
    split_func,
    batched=False,
    num_proc=args.preprocessing_num_workers,
    load_from_cache_file=not args.overwrite_cache,
    desc="running split para ==>",
).filter(
    lambda example: example["text1"] != "" and example["text2"] != "",
    num_proc=args.preprocessing_num_workers,
    desc="filtering ==>",
)

processed_datasets = raw_datasets.map(
    preprocess_function,
    batched=True,
    num_proc=args.preprocessing_num_workers,
    remove_columns=column_names,
    load_from_cache_file=not args.overwrite_cache,
    desc="Running tokenizer on dataset===>",
)
```
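When diagnosing such a lag it can help to time each stage separately; between steps, time is often spent flushing the previous step's cache file to disk and fingerprinting the next function. A generic timing helper (not a `datasets` API) might look like:

```python
import time

def timed(label, fn, *args, **kwargs):
    """Run fn(*args, **kwargs), print its wall-clock time, return its result.
    Wrapping each .map()/.filter() call shows where the gap actually goes."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    print(f"{label}: {time.perf_counter() - start:.2f}s")
    return result
```

For example, `timed("tokenize", raw_datasets.map, preprocess_function, batched=True)` would isolate the second step's cost.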
|
closed
|
https://github.com/huggingface/datasets/issues/4176
| 2022-04-17T23:52:29
| 2022-04-18T00:03:00
| 2022-04-18T00:03:00
|
{
"login": "yanan1116",
"id": 26405281,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,205,589,842
| 4,175
|
Add WIT Dataset
|
closes #2981 #2810
@nateraw @hassiahk I've listed you as co-authors since you've contributed to this dataset previously
|
closed
|
https://github.com/huggingface/datasets/pull/4175
| 2022-04-15T13:42:32
| 2023-09-24T10:02:38
| 2022-05-02T14:26:41
|
{
"login": "thomasw21",
"id": 24695242,
"type": "User"
}
|
[] | true
|
[] |
1,205,575,941
| 4,174
|
Fix when map function modifies input in-place
|
When `function` modifies its input in-place, the guarantee that the columns in `remove_columns` are contained in the input doesn't hold true anymore. Therefore we need to relax the way we pop elements, by checking whether the column exists first.
|
closed
|
https://github.com/huggingface/datasets/pull/4174
| 2022-04-15T13:23:15
| 2022-04-15T14:52:07
| 2022-04-15T14:45:58
|
{
"login": "thomasw21",
"id": 24695242,
"type": "User"
}
|
[] | true
|
[] |
1,204,657,114
| 4,173
|
Stream private zipped images
|
As mentioned in https://github.com/huggingface/datasets/issues/4139 it's currently not possible to stream private/gated zipped images from the Hub.
This is because `Image.decode_example` does not handle authentication. Indeed, decoding requires accessing and downloading the file from the private repository.
In this PR I added authentication to `Image.decode_example` via an optional `token_per_repo_id` argument. I first wanted to just pass `use_auth_token`, but a single `Image` instance can be responsible for decoding images from several datasets combined together (from `interleave_datasets`, for example). Therefore I used a dictionary `repo_id` -> `token` instead.
I'm getting the `repo_id` from the dataset builder (I replaced the `namespace` attribute with `repo_id`).
I did the same for `Audio.decode_example`.
cc @SBrandeis @severo
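The design choice can be sketched as a plain dictionary lookup (illustrative only; `resolve_token` is a hypothetical name, not the PR's code):

```python
def resolve_token(repo_id, token_per_repo_id=None):
    """Per-repo token lookup: one Image/Audio feature may decode files from
    several repos at once (e.g. after interleave_datasets), so tokens are
    keyed by repo_id instead of being a single use_auth_token value."""
    return (token_per_repo_id or {}).get(repo_id)
```

Repos without an entry simply get no token, which matches the public-dataset case.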
|
closed
|
https://github.com/huggingface/datasets/pull/4173
| 2022-04-14T15:15:07
| 2022-05-05T14:05:54
| 2022-05-05T13:58:35
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
1,204,433,160
| 4,172
|
Update assin2 dataset_infos.json
|
Following comments in https://github.com/huggingface/datasets/issues/4003, we found that it was outdated and causing an error when loading the dataset
|
closed
|
https://github.com/huggingface/datasets/pull/4172
| 2022-04-14T11:53:06
| 2022-04-15T14:47:42
| 2022-04-15T14:41:22
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
1,204,413,620
| 4,170
|
to_tf_dataset rewrite
|
This PR rewrites almost all of `to_tf_dataset()`, which makes it kind of hard to list all the changes, but the most critical ones are:
- Much better stability and no more dropping unexpected column names (Sorry @NielsRogge)
- Doesn't clobber custom transforms on the data (Sorry @NielsRogge again)
- Much better handling of the situation when the `collate_fn` adds columns that aren't in the dataset.
- Better inference of shapes and data types
- Lots of hacky special-casing code removed
- Can return string columns (as `tf.String`)
- Most arguments have default values, calling the method should be much simpler
- ~~Can accept a `model` argument and only return columns that are valid inputs to that model~~
- Drops the `dummy_labels` argument - this was a workaround for Keras issues that have been resolved by changes in `transformers`. Also remove it from tests and the Overview notebook.
I still have a couple of TODOs remaining and some testing to do, so don't merge yet, but it should be mostly ready for review at this point!
|
closed
|
https://github.com/huggingface/datasets/pull/4170
| 2022-04-14T11:30:58
| 2022-06-06T14:31:12
| 2022-06-06T14:22:09
|
{
"login": "Rocketknight1",
"id": 12866554,
"type": "User"
}
|
[] | true
|
[] |
1,203,995,869
| 4,169
|
Timit_asr dataset cannot be previewed recently
|
## Dataset viewer issue for '*timit_asr*'
**Link:** *https://huggingface.co/datasets/timit_asr*
Issue: The timit_asr dataset cannot be previewed recently.
Am I the one who added this dataset? No
|
closed
|
https://github.com/huggingface/datasets/issues/4169
| 2022-04-14T03:28:31
| 2023-02-03T04:54:57
| 2022-05-06T16:06:51
|
{
"login": "YingLi001",
"id": 75192317,
"type": "User"
}
|
[] | false
|
[] |
1,203,867,540
| 4,168
|
Add code examples to API docs
|
This PR adds code examples for functions related to the base Datasets class to highlight usage. Most of the examples use the `rotten_tomatoes` dataset since it is nice and small. Several things I would appreciate feedback on:
- Do you think it is clearer to make every code example fully reproducible so when users copy the code they can actually run it and get an output? This seems quite repetitive - maybe even unnecessary - but it is definitely clearer. Personally, I think we might be able to get away with not including this since users probably want to try the function on their own dataset. For example:
```py
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> code example goes here
```
- Should we showcase a function with more than one parameter to highlight different use-cases (it's pretty basic right now, but I'd be happy to add more)?
- For the `class_encode_column` function, let me know if there is a simpler dataset with fewer columns (currently using `winograd_wsc`) so it is easier for users to see what changed.
- Where possible, I try to show the input before and the output after using a function like `flatten` for example. Do you think this is too much and just showing the usage (i.e., `>>> ds.flatten()`) will be sufficient?
Thanks :)
|
closed
|
https://github.com/huggingface/datasets/pull/4168
| 2022-04-13T23:03:38
| 2022-04-27T18:53:37
| 2022-04-27T18:48:34
|
{
"login": "stevhliu",
"id": 59462357,
"type": "User"
}
|
[
{
"name": "documentation",
"color": "0075ca"
}
] | true
|
[] |
1,203,761,614
| 4,167
|
Avoid rate limit in update hub repositories
|
Use `http.extraHeader` to avoid the rate limit
|
closed
|
https://github.com/huggingface/datasets/pull/4167
| 2022-04-13T20:32:17
| 2022-04-13T20:56:41
| 2022-04-13T20:50:32
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
1,203,758,004
| 4,166
|
Fix exact match
|
Clarify the docs and add a clarifying example to the `exact_match` metric
|
closed
|
https://github.com/huggingface/datasets/pull/4166
| 2022-04-13T20:28:06
| 2022-05-03T12:23:31
| 2022-05-03T12:16:27
|
{
"login": "emibaylor",
"id": 27527747,
"type": "User"
}
|
[] | true
|
[] |
1,203,730,187
| 4,165
|
Fix google bleu typos, examples
| null |
closed
|
https://github.com/huggingface/datasets/pull/4165
| 2022-04-13T19:59:54
| 2022-05-03T12:23:52
| 2022-05-03T12:16:44
|
{
"login": "emibaylor",
"id": 27527747,
"type": "User"
}
|
[] | true
|
[] |
1,203,661,346
| 4,164
|
Fix duplicate key in multi_news
|
To merge after this job succeeded: https://github.com/huggingface/datasets/runs/6012207928
|
closed
|
https://github.com/huggingface/datasets/pull/4164
| 2022-04-13T18:48:24
| 2022-04-13T21:04:16
| 2022-04-13T20:58:02
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
1,203,539,268
| 4,163
|
Optional Content Warning for Datasets
|
**Is your feature request related to a problem? Please describe.**
We now have hate speech datasets on the hub, like this one: https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild
I'm wondering if there is an option to select a content warning message that appears before the dataset preview? Otherwise, people immediately see hate speech when clicking on this dataset.
**Describe the solution you'd like**
Implementation of a content warning message that separates users from the dataset preview until they click out of the warning.
**Describe alternatives you've considered**
Possibly just a way to remove the dataset preview completely? I think I like the content warning option better, though.
|
open
|
https://github.com/huggingface/datasets/issues/4163
| 2022-04-13T16:38:01
| 2022-06-09T20:39:02
| null |
{
"login": "TristanThrush",
"id": 20826878,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
1,203,421,909
| 4,162
|
Add Conceptual 12M
| null |
closed
|
https://github.com/huggingface/datasets/pull/4162
| 2022-04-13T14:57:23
| 2022-04-15T08:13:01
| 2022-04-15T08:06:25
|
{
"login": "thomasw21",
"id": 24695242,
"type": "User"
}
|
[] | true
|
[] |
1,203,230,485
| 4,161
|
Add Visual Genome
| null |
closed
|
https://github.com/huggingface/datasets/pull/4161
| 2022-04-13T12:25:24
| 2022-04-21T15:42:49
| 2022-04-21T13:08:52
|
{
"login": "thomasw21",
"id": 24695242,
"type": "User"
}
|
[] | true
|
[] |
1,202,845,874
| 4,160
|
RGBA images not showing
|
## Dataset viewer issue for ceyda/smithsonian_butterflies_transparent
[**Link:** *link to the dataset viewer page*](https://huggingface.co/datasets/ceyda/smithsonian_butterflies_transparent)

Am I the one who added this dataset ? Yes
👉 More of a general issue of 'RGBA' png images not being supported
(the dataset itself is just for the huggan sprint and not that important, consider it just an example)
|
closed
|
https://github.com/huggingface/datasets/issues/4160
| 2022-04-13T06:59:23
| 2022-06-21T16:43:11
| 2022-06-21T16:43:11
|
{
"login": "cceyda",
"id": 15624271,
"type": "User"
}
|
[
{
"name": "dataset-viewer",
"color": "E5583E"
},
{
"name": "dataset-viewer-rgba-images",
"color": "6C5FC0"
}
] | false
|
[] |
1,202,522,153
| 4,159
|
Add `TruthfulQA` dataset
| null |
closed
|
https://github.com/huggingface/datasets/pull/4159
| 2022-04-12T23:19:04
| 2022-06-08T15:51:33
| 2022-06-08T14:43:34
|
{
"login": "jon-tow",
"id": 41410219,
"type": "User"
}
|
[] | true
|
[] |
1,202,376,843
| 4,158
|
Add AUC ROC Metric
| null |
closed
|
https://github.com/huggingface/datasets/pull/4158
| 2022-04-12T20:53:28
| 2022-04-26T19:41:50
| 2022-04-26T19:35:22
|
{
"login": "emibaylor",
"id": 27527747,
"type": "User"
}
|
[] | true
|
[] |
1,202,239,622
| 4,157
|
Fix formatting in BLEU metric card
|
Fix #4148
|
closed
|
https://github.com/huggingface/datasets/pull/4157
| 2022-04-12T18:29:51
| 2022-04-13T14:30:25
| 2022-04-13T14:16:34
|
{
"login": "mariosasko",
"id": 47462742,
"type": "User"
}
|
[] | true
|
[] |
1,202,220,531
| 4,156
|
Adding STSb-TR dataset
|
Semantic Textual Similarity benchmark Turkish (STSb-TR) dataset introduced in our paper [Semantic Similarity Based Evaluation for Abstractive News Summarization](https://aclanthology.org/2021.gem-1.3.pdf) added.
|
closed
|
https://github.com/huggingface/datasets/pull/4156
| 2022-04-12T18:10:05
| 2022-10-03T09:36:25
| 2022-10-03T09:36:25
|
{
"login": "figenfikri",
"id": 12762065,
"type": "User"
}
|
[
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true
|
[] |
1,202,183,608
| 4,155
|
Make HANS dataset streamable
|
Fix #4133
|
closed
|
https://github.com/huggingface/datasets/pull/4155
| 2022-04-12T17:34:13
| 2022-04-13T12:03:46
| 2022-04-13T11:57:35
|
{
"login": "mariosasko",
"id": 47462742,
"type": "User"
}
|
[] | true
|
[] |
1,202,145,721
| 4,154
|
Generate tasks.json taxonomy from `huggingface_hub`
| null |
closed
|
https://github.com/huggingface/datasets/pull/4154
| 2022-04-12T17:12:46
| 2022-04-14T10:32:32
| 2022-04-14T10:26:13
|
{
"login": "julien-c",
"id": 326577,
"type": "User"
}
|
[] | true
|
[] |
1,202,040,506
| 4,153
|
Adding Text-based NP Enrichment (TNE) dataset
|
Added the [TNE](https://github.com/yanaiela/TNE) dataset to the library
|
closed
|
https://github.com/huggingface/datasets/pull/4153
| 2022-04-12T15:47:03
| 2022-05-03T14:05:48
| 2022-05-03T14:05:48
|
{
"login": "yanaiela",
"id": 8031035,
"type": "User"
}
|
[] | true
|
[] |
1,202,034,115
| 4,152
|
ArrayND error in pyarrow 5
|
As found in https://github.com/huggingface/datasets/pull/3903, The ArrayND features fail on pyarrow 5:
```python
import pyarrow as pa
from datasets import Array2D
from datasets.table import cast_array_to_feature
arr = pa.array([[[0]]])
feature_type = Array2D(shape=(1, 1), dtype="int64")
cast_array_to_feature(arr, feature_type)
```
raises
```python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-8-04610f9fa78c> in <module>
----> 1 cast_array_to_feature(pa.array([[[0]]]), Array2D(shape=(1, 1), dtype="int32"))
~/Desktop/hf/datasets/src/datasets/table.py in wrapper(array, *args, **kwargs)
1672 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1673 else:
-> 1674 return func(array, *args, **kwargs)
1675
1676 return wrapper
~/Desktop/hf/datasets/src/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str)
1806 return array_cast(array, get_nested_type(feature), allow_number_to_str=allow_number_to_str)
1807 elif not isinstance(feature, (Sequence, dict, list, tuple)):
-> 1808 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
1809 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
1810
~/Desktop/hf/datasets/src/datasets/table.py in wrapper(array, *args, **kwargs)
1672 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1673 else:
-> 1674 return func(array, *args, **kwargs)
1675
1676 return wrapper
~/Desktop/hf/datasets/src/datasets/table.py in array_cast(array, pa_type, allow_number_to_str)
1705 array = array.storage
1706 if isinstance(pa_type, pa.ExtensionType):
-> 1707 return pa_type.wrap_array(array)
1708 elif pa.types.is_struct(array.type):
1709 if pa.types.is_struct(pa_type) and (
AttributeError: 'Array2DExtensionType' object has no attribute 'wrap_array'
```
The thing is that `cast_array_to_feature` is called when writing an Arrow file, so creating an Arrow dataset using any ArrayND type currently fails.
`wrap_array` has been added in pyarrow 6, so we can either bump the required pyarrow version or fix this for pyarrow 5
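A compatibility shim along the second line could dispatch on whether the attribute exists (a sketch of the idea, not the fix that shipped):

```python
def wrap_extension_array(pa_type, storage):
    """Wrap a storage array in an extension type across pyarrow versions:
    ExtensionType.wrap_array exists only in pyarrow >= 6; on pyarrow 5 the
    equivalent construction goes through ExtensionArray.from_storage."""
    if hasattr(pa_type, "wrap_array"):
        return pa_type.wrap_array(storage)
    import pyarrow as pa  # only needed on the pyarrow 5 fallback path
    return pa.ExtensionArray.from_storage(pa_type, storage)
```

`array_cast` could then call this helper instead of `pa_type.wrap_array(array)` directly.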
|
closed
|
https://github.com/huggingface/datasets/issues/4152
| 2022-04-12T15:41:40
| 2022-05-04T09:29:46
| 2022-05-04T09:29:46
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | false
|
[] |
1,201,837,999
| 4,151
|
Add missing label for emotion description
| null |
closed
|
https://github.com/huggingface/datasets/pull/4151
| 2022-04-12T13:17:37
| 2022-04-12T13:58:50
| 2022-04-12T13:58:50
|
{
"login": "lijiazheng99",
"id": 44396506,
"type": "User"
}
|
[] | true
|
[] |
1,201,689,730
| 4,150
|
Inconsistent splits generation for datasets without loading script (packaged dataset puts everything into a single split)
|
## Describe the bug
Splits for dataset loaders without scripts are prepared inconsistently. I think it might be confusing for users.
## Steps to reproduce the bug
* If you load a packaged datasets from Hub, it infers splits from directory structure / filenames (check out the data [here](https://huggingface.co/datasets/nateraw/test-imagefolder-dataset)):
```python
ds = load_dataset("nateraw/test-imagefolder-dataset")
print(ds)
### Output:
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 6
})
test: Dataset({
features: ['image', 'label'],
num_rows: 4
})
})
```
* If you do the same from locally stored data specifying only directory path you'll get the same:
```python
ds = load_dataset("/path/to/local/data/test-imagefolder-dataset")
print(ds)
### Output:
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 6
})
test: Dataset({
features: ['image', 'label'],
num_rows: 4
})
})
```
* However, if you explicitly specify the package name (like `imagefolder`, `csv`, `json`), all the data is put into a single split:
```python
ds = load_dataset("imagefolder", data_dir="/path/to/local/data/test-imagefolder-dataset")
print(ds)
### Output:
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 10
})
})
```
## Expected results
For `load_dataset("imagefolder", data_dir="/path/to/local/data/test-imagefolder-dataset")` I expect the same output as of the two first options.
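The first two behaviors rely on inferring splits from the directory structure; a toy sketch of the idea (not the actual packaged-module code, which also matches split names inside filenames):

```python
import os

def infer_splits(root):
    """Treat each known top-level directory under root as one split,
    mimicking directory-based split inference for no-script datasets."""
    known = {"train", "test", "validation"}
    return sorted(
        d for d in os.listdir(root)
        if d in known and os.path.isdir(os.path.join(root, d))
    )
```

The inconsistency reported here is that the explicit `imagefolder` path skips this inference and dumps everything into `train`.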
|
closed
|
https://github.com/huggingface/datasets/issues/4150
| 2022-04-12T11:15:55
| 2022-04-28T21:02:44
| 2022-04-28T21:02:44
|
{
"login": "polinaeterna",
"id": 16348744,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,201,389,221
| 4,149
|
load_dataset for winoground returning decoding error
|
## Describe the bug
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/winoground', use_auth_token=token)
```
## Expected results
I downloaded images.zip and examples.jsonl manually. Expecting some trouble decoding the JSON, I skipped jsonlines and was able to read a complete set of 400 examples by doing
```python
import json

with open('examples.jsonl', 'r') as f:
    examples = f.read().split('\n')

# Thinking this would error if the JSON is not utf-8 encoded
json_data = [json.loads(x) for x in examples]
print(json_data[-1])
```
and I see
```python
{'caption_0': 'someone is overdoing it',
'caption_1': 'someone is doing it over',
'collapsed_tag': 'Relation',
'id': 399,
'image_0': 'ex_399_img_0',
'image_1': 'ex_399_img_1',
'num_main_preds': 1,
'secondary_tag': 'Morpheme-Level',
'tag': 'Scope, Preposition'}
```
so I'm not sure what's going on here honestly. The file `examples.jsonl` doesn't have non-UTF-8 encoded text.
## Actual results
During the split operation after downloading, datasets encounters an error in the JSON ([trace](https://gist.github.com/odellus/e55d390ca203386bf551f38e0c63a46b) abbreviated for brevity).
```
datasets/packaged_modules/json/json.py:144 in Json._generate_tables(self, files)
...
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
```
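A quick way to see which downloaded file the JSON loader is actually choking on is to inspect its leading bytes: a `0xff` first byte (as in the traceback) or a zip magic number means the loader was handed a binary file, not UTF-8 JSON lines. A hypothetical diagnostic sketch:

```python
def probably_binary_archive(path):
    """Return True if the file's leading bytes look like a binary archive.
    b'PK\\x03\\x04' is the zip magic; a 0xff lead byte cannot start UTF-8."""
    with open(path, "rb") as f:
        head = f.read(4)
    return head.startswith(b"PK\x03\x04") or head.startswith(b"\xff")
```

Run on each file in the download cache, this would show whether images.zip ended up where examples.jsonl was expected.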
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4
- Platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 7.0.0
|
closed
|
https://github.com/huggingface/datasets/issues/4149
| 2022-04-12T08:16:16
| 2022-05-04T23:40:38
| 2022-05-04T23:40:38
|
{
"login": "odellus",
"id": 4686956,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,201,169,242
| 4,148
|
fix confusing bleu metric example
|
**Is your feature request related to a problem? Please describe.**
I would like to see the example in "Metric Card for BLEU" changed.
The 0th element in the predictions list is missing a closing square bracket, and the 1st list is missing a comma.
The BLEU score is computed without error, but the example is difficult to understand, so it would be helpful if you could correct it.
```
>> predictions = [
... ["hello", "there", "general", "kenobi", # <- no closing square bracket.
... ["foo", "bar" "foobar"] # <- no comma between "bar" and "foobar"
... ]
>>> references = [
... [["hello", "there", "general", "kenobi"]],
... [["foo", "bar", "foobar"]]
... ]
>>> bleu = datasets.load_metric("bleu")
>>> results = bleu.compute(predictions=predictions, references=references)
>>> print(results)
{'bleu': 0.6370964381207871, ...
```
**Describe the solution you'd like**
```
>>> predictions = [
...     ["hello", "there", "general", "kenobi"],  # <- closing square bracket added
...     ["foo", "bar", "foobar"]  # <- comma added between "bar" and "foobar"
... ]
# and
>>> print(results)
{'bleu':1.0, ...
```
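For reference, a runnable sanity check of the corrected lists (pure Python; the metric call itself is not invoked here):

```python
# Corrected inputs: closing bracket restored on the first prediction,
# and a comma inserted between "bar" and "foobar" in the second.
predictions = [
    ["hello", "there", "general", "kenobi"],
    ["foo", "bar", "foobar"],
]
references = [
    [["hello", "there", "general", "kenobi"]],
    [["foo", "bar", "foobar"]],
]

# Each prediction now matches its reference token-for-token, which is
# why (per the issue) the metric reports a BLEU score of 1.0.
assert predictions[0] == references[0][0]
assert predictions[1] == references[1][0]
```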
|
closed
|
https://github.com/huggingface/datasets/issues/4148
| 2022-04-12T06:18:26
| 2022-04-13T14:16:34
| 2022-04-13T14:16:34
|
{
"login": "aizawa-naoki",
"id": 6253193,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
1,200,756,008
| 4,147
|
Adjust path to datasets tutorial in How-To
|
The link in the How-To overview page to the Datasets tutorials is currently broken. This is just a small adjustment to make it match the format used in https://github.com/huggingface/datasets/blob/master/docs/source/tutorial.md.
(Edit to add: The link in the PR deployment (https://moon-ci-docs.huggingface.co/docs/datasets/pr_4147/en/how_to) is also broken since it's actually hardcoded to `master` and not dynamic to the branch name, but other links seem to behave similarly.)
|
closed
|
https://github.com/huggingface/datasets/pull/4147
| 2022-04-12T01:20:34
| 2022-04-12T08:32:24
| 2022-04-12T08:26:02
|
{
"login": "NimaBoscarino",
"id": 6765188,
"type": "User"
}
|
[] | true
|
[] |
1,200,215,789
| 4,146
|
SAMSum dataset viewer not working
|
## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
|
closed
|
https://github.com/huggingface/datasets/issues/4146
| 2022-04-11T16:22:57
| 2022-04-29T16:26:09
| 2022-04-29T16:26:09
|
{
"login": "aakashnegi10",
"id": 39906333,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,200,209,781
| 4,145
|
Redirect TIMIT download from LDC
|
LDC data is protected under US copyright laws and under various legal agreements between the Linguistic Data Consortium/the University of Pennsylvania and data providers which prohibit redistribution of that data by anyone other than LDC. Similarly, LDC's membership agreements, non-member user agreement and various corpus-specific license agreements specifically state that users cannot publish, retransmit, disclose, copy, reproduce or redistribute LDC databases to others outside their organizations.
LDC explicitly asked us to remove the download script for the TIMIT dataset. In this PR I remove all means to download the dataset, and redirect users to download the data from https://catalog.ldc.upenn.edu/LDC93S1
|
closed
|
https://github.com/huggingface/datasets/pull/4145
| 2022-04-11T16:17:55
| 2022-04-13T15:39:31
| 2022-04-13T15:33:04
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
1,200,016,983
| 4,144
|
Fix splits in local packaged modules, local datasets without script and hub datasets without script
|
fixes #4150
I suggest inferring the splits structure from files when `data_dir` is passed, with `get_patterns_locally`, analogous to what's done in `LocalDatasetModuleFactoryWithoutScript` with `self.path`, instead of generating files with `data_dir/**` patterns and putting them all into a single default (train) split.
I would also suggest aligning `HubDatasetModuleFactoryWithoutScript` and `LocalDatasetModuleFactoryWithoutScript` with this logic (remove `data_files = os.path.join(data_dir, "**")`). This is not reflected in the current code yet, as I'd like to discuss it first because I might be unaware of some use cases. @lhoestq @mariosasko @albertvillanova WDYT?
|
closed
|
https://github.com/huggingface/datasets/pull/4144
| 2022-04-11T13:57:33
| 2022-04-29T09:12:14
| 2022-04-28T21:02:45
|
{
"login": "polinaeterna",
"id": 16348744,
"type": "User"
}
|
[] | true
|
[] |
1,199,937,961
| 4,143
|
Unable to download `Wikipedia` 20220301.en version
|
## Describe the bug
Unable to download the `Wikipedia` dataset, 20220301.en version
## Steps to reproduce the bug
```python
!pip install apache_beam mwparserfromhell
dataset_wikipedia = load_dataset("wikipedia", "20220301.en")
```
## Actual results
```
ValueError: BuilderConfig 20220301.en not found.
Available: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', 
'20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', '20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', '20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', 
'20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu']
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Ubuntu
- Python version: 3.6
- PyArrow version: 6.0.1
|
closed
|
https://github.com/huggingface/datasets/issues/4143
| 2022-04-11T13:00:14
| 2022-08-17T00:37:55
| 2022-04-21T17:04:14
|
{
"login": "beyondguo",
"id": 37113676,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,199,794,750
| 4,142
|
Add ObjectFolder 2.0 dataset
|
## Adding a Dataset
- **Name:** ObjectFolder 2.0
- **Description:** ObjectFolder 2.0 is a dataset of 1,000 objects in the form of implicit representations. It contains 1,000 Object Files each containing the complete multisensory profile for an object instance.
- **Paper:** [*link to the dataset paper if available*](https://arxiv.org/abs/2204.02389)
- **Data:** https://github.com/rhgao/ObjectFolder
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
open
|
https://github.com/huggingface/datasets/issues/4142
| 2022-04-11T10:57:51
| 2022-10-05T10:30:49
| null |
{
"login": "osanseviero",
"id": 7246357,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
}
] | false
|
[] |
1,199,610,885
| 4,141
|
Why is the dataset not visible under the dataset preview section?
|
## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
|
closed
|
https://github.com/huggingface/datasets/issues/4141
| 2022-04-11T08:36:42
| 2022-04-11T18:55:32
| 2022-04-11T17:09:49
|
{
"login": "Nid989",
"id": 75028682,
"type": "User"
}
|
[
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false
|
[] |
1,199,492,356
| 4,140
|
Error loading arxiv data set
|
## Describe the bug
A clear and concise description of what the bug is.
I encountered the error below when loading the arxiv dataset via `nlp.load_dataset('scientific_papers', 'arxiv',)`.
```
Traceback (most recent call last):
File "scripts/summarization.py", line 354, in <module>
main(args)
File "scripts/summarization.py", line 306, in main
model.hf_datasets = nlp.load_dataset('scientific_papers', 'arxiv')
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/load.py", line 549, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 463, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 522, in _download_and_prepare
self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 38, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
nlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?id=1b3rmCSIoh6VhD4HKWjI4HOW-cSwcwbeC&export=download', 'https://drive.google.com/uc?id=1lvsqvsFi3W-pE1SqNZI0s8NR9rC1tsja&export=download']
```
I then tried to ignore verification steps by `ignore_verifications=True` and there is another error.
```
Traceback (most recent call last):
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 537, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 810, in _prepare_split
for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/datasets/scientific_papers/9e4f2cfe3d8494e9f34a84ce49c3214605b4b52a3d8eb199104430d04c52cc12/scientific_papers.py", line 108, in _generate_examples
with open(path, encoding="utf-8") as f:
NotADirectoryError: [Errno 20] Not a directory: '/home/username/.cache/huggingface/datasets/downloads/c0deae7af7d9c87f25dfadf621f7126f708d7dcac6d353c7564883084a000076/arxiv-dataset/train.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "scripts/summarization.py", line 354, in <module>
main(args)
File "scripts/summarization.py", line 306, in main
model.hf_datasets = nlp.load_dataset('scientific_papers', 'arxiv', ignore_verifications=True)
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/load.py", line 549, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 463, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 539, in _download_and_prepare
raise OSError("Cannot find data file. " + (self.manual_download_instructions or ""))
OSError: Cannot find data file.
```
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
|
closed
|
https://github.com/huggingface/datasets/issues/4140
| 2022-04-11T07:06:34
| 2022-04-12T16:24:08
| 2022-04-12T16:24:08
|
{
"login": "yjqiu",
"id": 5383918,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,199,443,822
| 4,139
|
Dataset viewer issue for Winoground
|
## Dataset viewer issue for 'Winoground'
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to authorization, but I can access the files from the interface, so I assume I'm granted to access it. I'd assume the permission somehow doesn't propagate to the dataset viewer tool.
Am I the one who added this dataset ? No
|
closed
|
https://github.com/huggingface/datasets/issues/4139
| 2022-04-11T06:11:41
| 2022-06-21T16:43:58
| 2022-06-21T16:43:58
|
{
"login": "alcinos",
"id": 7438704,
"type": "User"
}
|
[
{
"name": "dataset-viewer",
"color": "E5583E"
},
{
"name": "dataset-viewer-gated",
"color": "51F745"
}
] | false
|
[] |
1,199,291,730
| 4,138
|
Incorrect Russian filenames encoding after extraction by datasets.DownloadManager.download_and_extract()
|
## Dataset viewer issue for 'MalakhovIlya/RuREBus'
**Link:** https://huggingface.co/datasets/MalakhovIlya/RuREBus
**Description**
Using os.walk(topdown=False) in DatasetBuilder causes the following error:
Status code: 400
Exception: TypeError
Message: xwalk() got an unexpected keyword argument 'topdown'
Couldn't find where "xwalk" comes from. How can I fix this?
Am I the one who added this dataset ? Yes
|
closed
|
https://github.com/huggingface/datasets/issues/4138
| 2022-04-11T02:07:13
| 2022-04-19T03:15:46
| 2022-04-16T15:46:29
|
{
"login": "iluvvatar",
"id": 55381086,
"type": "User"
}
|
[] | false
|
[] |
1,199,000,453
| 4,137
|
Add single dataset citations for TweetEval
|
This PR adds single data citations as per request of the original creators of the TweetEval dataset.
This is a recent email from the creator:
> Could I ask you a favor? Would you be able to add at the end of the README the citations of the single datasets as well? You can just copy our readme maybe? https://github.com/cardiffnlp/tweeteval#citing-tweeteval
(just to be sure that the creator of the single datasets also get credits when tweeteval is used)
Please let me know if this looks okay or if any changes are needed.
Thanks,
Gunjan
|
closed
|
https://github.com/huggingface/datasets/pull/4137
| 2022-04-10T11:51:54
| 2022-04-12T07:57:22
| 2022-04-12T07:51:15
|
{
"login": "gchhablani",
"id": 29076344,
"type": "User"
}
|
[] | true
|
[] |
1,198,307,610
| 4,135
|
Support streaming xtreme dataset for PAN-X config
|
Support streaming xtreme dataset for PAN-X config.
|
closed
|
https://github.com/huggingface/datasets/pull/4135
| 2022-04-09T06:19:48
| 2022-05-06T08:39:40
| 2022-04-11T06:54:14
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,197,937,146
| 4,134
|
ELI5 supporting documents
|
If I am using dense search to create supporting documents for ELI5, how much time will it take? I read somewhere that it takes about 18 hours.
|
open
|
https://github.com/huggingface/datasets/issues/4134
| 2022-04-08T23:36:27
| 2022-04-13T13:52:46
| null |
{
"login": "saurabh-0077",
"id": 69015896,
"type": "User"
}
|
[
{
"name": "question",
"color": "d876e3"
}
] | false
|
[] |
1,197,830,623
| 4,133
|
HANS dataset preview broken
|
## Dataset viewer issue for '*hans*'
**Link:** [https://huggingface.co/datasets/hans](https://huggingface.co/datasets/hans)
HANS dataset preview is broken with error 400
Am I the one who added this dataset ? No
|
closed
|
https://github.com/huggingface/datasets/issues/4133
| 2022-04-08T21:06:15
| 2022-04-13T11:57:34
| 2022-04-13T11:57:34
|
{
"login": "pietrolesci",
"id": 61748653,
"type": "User"
}
|
[
{
"name": "streaming",
"color": "fef2c0"
}
] | false
|
[] |
1,197,661,720
| 4,132
|
Support streaming xtreme dataset for PAWS-X config
|
Support streaming xtreme dataset for PAWS-X config.
|
closed
|
https://github.com/huggingface/datasets/pull/4132
| 2022-04-08T18:25:32
| 2022-05-06T08:39:42
| 2022-04-08T21:02:44
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,197,472,249
| 4,131
|
Support streaming xtreme dataset for udpos config
|
Support streaming xtreme dataset for udpos config.
|
closed
|
https://github.com/huggingface/datasets/pull/4131
| 2022-04-08T15:30:49
| 2022-05-06T08:39:46
| 2022-04-08T16:28:07
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,197,456,857
| 4,130
|
Add SBU Captions Photo Dataset
| null |
closed
|
https://github.com/huggingface/datasets/pull/4130
| 2022-04-08T15:17:39
| 2022-04-12T10:47:31
| 2022-04-12T10:41:29
|
{
"login": "thomasw21",
"id": 24695242,
"type": "User"
}
|
[] | true
|
[] |
1,197,376,796
| 4,129
|
dataset metadata for reproducibility
|
When pulling a dataset from the hub, it would be useful to have some metadata about the specific dataset and version that is used. The metadata could then be passed to the `Trainer` which could then be saved to a model card. This is useful for people who run many experiments on different versions (commits/branches) of the same dataset.
The dataset could have a list of “source datasets” metadata and ignore what happens to them before arriving in the Trainer (i.e. ignore mapping, filtering, etc.).
Here is a basic representation (made by @lhoestq )
```python
>>> from datasets import load_dataset
>>>
>>> my_dataset = load_dataset(...)["train"]
>>> my_dataset = my_dataset.map(...)
>>>
>>> my_dataset.sources
[HFHubDataset(repo_id=..., revision=..., arguments={...})]
```
|
open
|
https://github.com/huggingface/datasets/issues/4129
| 2022-04-08T14:17:28
| 2023-09-29T09:23:56
| null |
{
"login": "nbroad1881",
"id": 24982805,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
1,197,326,311
| 4,128
|
More robust `cast_to_python_objects` in `TypedSequence`
|
Adds a fallback to run an expensive version of `cast_to_python_objects` which exhaustively checks entire lists to avoid the `ArrowInvalid: Could not convert` error in `TypedSequence`. Currently, this error can happen in situations where only some images are decoded in `map`, in which case `cast_to_python_objects` fails to recognize that it needs to cast `PIL.Image` objects if they are not at the beginning of the sequence and stops after the first image dictionary (e.g., if `data` is `[{"bytes": None, "path": "some path"}, PIL.Image(), ...]`)
Fix #4124
|
closed
|
https://github.com/huggingface/datasets/pull/4128
| 2022-04-08T13:33:35
| 2022-04-13T14:07:41
| 2022-04-13T14:01:16
|
{
"login": "mariosasko",
"id": 47462742,
"type": "User"
}
|
[] | true
|
[] |
1,197,297,756
| 4,127
|
Add configs with processed data in medical_dialog dataset
|
There exist processed data files that do not require parsing the raw data files (which can take a long time).
Fix #4122.
|
closed
|
https://github.com/huggingface/datasets/pull/4127
| 2022-04-08T13:08:16
| 2022-05-06T08:39:50
| 2022-04-08T16:20:51
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,196,665,194
| 4,126
|
dataset viewer issue for common_voice
|
## Dataset viewer issue for 'common_voice'
**Link:** https://huggingface.co/datasets/common_voice
Server Error
Status code: 400
Exception: TypeError
Message: __init__() got an unexpected keyword argument 'audio_column'
Am I the one who added this dataset ? No
|
closed
|
https://github.com/huggingface/datasets/issues/4126
| 2022-04-07T23:34:28
| 2022-04-25T13:42:17
| 2022-04-25T13:42:16
|
{
"login": "laphang",
"id": 24724502,
"type": "User"
}
|
[
{
"name": "dataset-viewer",
"color": "E5583E"
},
{
"name": "audio_column",
"color": "F83ACF"
}
] | false
|
[] |
1,196,633,936
| 4,125
|
BIG-bench
|
This PR adds all BIG-bench json tasks to huggingface/datasets.
|
closed
|
https://github.com/huggingface/datasets/pull/4125
| 2022-04-07T22:33:30
| 2022-06-08T17:57:48
| 2022-06-08T17:32:32
|
{
"login": "andersjohanandreassen",
"id": 43357549,
"type": "User"
}
|
[] | true
|
[] |
1,196,469,842
| 4,124
|
Image decoding often fails when transforming Image datasets
|
## Describe the bug
When transforming/modifying images in an image dataset using the `map` function the PIL images often fail to decode in time for the image transforms, causing errors.
Using a debugger it is easy to see what the problem is, the Image decode invocation does not take place and the resulting image passed around is still raw bytes:
```
[{'bytes': b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00 \x00\x00\x00 \x08\x02\x00\x00\x00\xfc\x18\xed\xa3\x00\x00\x08\x02IDATx\x9cEVIs[\xc7\x11\xeemf\xde\x82\x8d\x80\x08\x89"\xb5V\\\xb6\x94(\xe5\x9f\x90\xca5\x7f$\xa7T\xe5\x9f&9\xd9\x8a\\.\xdb\xa4$J\xa4\x00\x02x\xc0{\xb3t\xe7\x00\xca\x99\xd3\\f\xba\xba\xbf\xa5?|\xfa\xf4\xa2\xeb\xba\xedv\xa3f^\xf8\xd5\x0bY\xb6\x10\xb3\xaaDq\xcd\x83\x87\xdf5\xf3gZ\x1a\x04\x0f\xa0fp\xfa\xe0\xd4\x07?\x9dN\xc4\xb1\x99\xfd\xf2\xcb/\x97\x97\x97H\xa2\xaaf\x16\x82\xaf\xeb\xca{\xbf\xd9l.\xdf\x7f\xfa\xcb_\xff&\x88\x08\x00\x80H\xc0\x80@.;\x0f\x8c@#v\xe3\xe5\xfc\xd1\x9f\xee6q\xbf\xdf\xa6\x14\'\x93\xf1\xc3\xe5\xe3\xd1x\x14c\x8c1\xa5\x1c\x9dsM\xd3\xb4\xed\x08\x89SJ)\xa5\xedv\xbb^\xafNO\x97D\x84Hf ....
```
## Steps to reproduce the bug
```python
from datasets import load_dataset, Dataset
import numpy as np
# seeded NumPy random number generator for reproducible results.
rng = np.random.default_rng(seed=0)
test_dataset = load_dataset('cifar100', split="test")
def preprocess_data(dataset):
"""
Helper function to pre-process HuggingFace Cifar-100 Dataset to remove fine_label and coarse_label columns and
add is_flipped column
Args:
dataset: HuggingFace CIFAR-100 Dataset Object
Returns:
new_dataset: A Dataset object with "img" and "is_flipped" columns only
"""
# remove fine_label and coarse_label columns
new_dataset = dataset.remove_columns(['fine_label', 'coarse_label'])
# add the column for is_flipped
new_dataset = new_dataset.add_column(name="is_flipped", column=np.zeros((len(new_dataset)), dtype=np.uint8))
return new_dataset
def generate_flipped_data(example, p=0.5):
"""
A Dataset mapping function that transforms some of the images up-side-down.
If the probability value (p) is 0.5 approximately half the images will be flipped upside-down
Args:
example: An example from the dataset containing a Python dictionary with "img" and "is_flipped" key-value pair
p: the probability of flipping the image up-side-down, Default 0.5
Returns:
example: A Dataset object
"""
# example['img'] = example['img']
if rng.random() > p:  # then flip the image and set the is_flipped column to 1
example['img'] = example['img'].transpose(
1) # ImageOps.flip(example['img']) #example['img'].transpose(Image.FLIP_TOP_BOTTOM)
example['is_flipped'] = 1
return example
my_test = preprocess_data(test_dataset)
my_test = my_test.map(generate_flipped_data)
```
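As an aside on the intended transform itself: flipping an image upside-down just reverses the row order, which on a PIL image is `img.transpose(Image.FLIP_TOP_BOTTOM)`. A pure-Python sketch of the same operation on nested lists (`flip_top_bottom` is a hypothetical helper for illustration):

```python
def flip_top_bottom(pixels):
    """Flip a 2D list of pixel rows upside-down (reverse the row order)."""
    return pixels[::-1]

# Rows are reversed; values within each row are untouched.
print(flip_top_bottom([[1, 2], [3, 4]]))  # [[3, 4], [1, 2]]
```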
## Expected results
The dataset should be transformed without problems.
## Actual results
```
/home/rafay/anaconda3/envs/pytorch_new/bin/python /home/rafay/Documents/you_only_live_once/upside_down_detector/create_dataset.py
Reusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)
Reusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)
20%|█▉ | 1999/10000 [00:00<00:01, 5560.44ex/s]
Traceback (most recent call last):
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2326, in _map_single
writer.write(example)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 441, in write
self.write_examples_on_file()
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 399, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 492, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 230, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 185, in __arrow_array__
out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))
File "pyarrow/array.pxi", line 316, in pyarrow.lib.array
File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB size=32x32 at 0x7F56AEE61DE0> with type Image: did not recognize Python value type when inferring an Arrow data type
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/rafay/Documents/you_only_live_once/upside_down_detector/create_dataset.py", line 55, in <module>
my_test = my_test.map(generate_flipped_data)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1953, in map
return self._map_single(
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 519, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 486, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/fingerprint.py", line 458, in wrapper
out = func(self, *args, **kwargs)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2360, in _map_single
writer.finalize()
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 522, in finalize
self.write_examples_on_file()
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 399, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 492, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 230, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 185, in __arrow_array__
out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))
File "pyarrow/array.pxi", line 316, in pyarrow.lib.array
File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB size=32x32 at 0x7F56AEE61DE0> with type Image: did not recognize Python value type when inferring an Arrow data type
Process finished with exit code 1
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Linux(Fedora 35)
- Python version: 3.10
- PyArrow version: 7.0.0
|
closed
|
https://github.com/huggingface/datasets/issues/4124
| 2022-04-07T19:17:25
| 2022-04-13T14:01:16
| 2022-04-13T14:01:16
|
{
"login": "RafayAK",
"id": 17025191,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,196,367,512
| 4,123
|
Building C4 takes forever
|
## Describe the bug
C4-en is a 300 GB dataset. However, when I try to download it through the hub it takes over _six hours_ to generate the train/test split from the downloaded files. This is an absurd amount of time and an unnecessary waste of resources.
## Steps to reproduce the bug
```python
c4 = datasets.load_dataset("c4", "en")
```
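One possible way to sidestep the split-generation step (assuming a `datasets` version with streaming support) is `load_dataset("c4", "en", streaming=True)`, which iterates over examples lazily instead of materializing the full split first. The difference, sketched with a plain generator:

```python
# Eager: every example is built before the first one can be used.
def build_split(n):
    return [{"text": f"doc {i}"} for i in range(n)]  # hypothetical records

# Lazy / streaming: examples are produced one at a time, on demand.
def stream_split(n):
    for i in range(n):
        yield {"text": f"doc {i}"}

stream = stream_split(10**12)  # nothing is materialized yet
print(next(stream)["text"])  # doc 0
```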
## Expected results
I would like to be able to download pre-split data.
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-5.13.0-35-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
|
closed
|
https://github.com/huggingface/datasets/issues/4123
| 2022-04-07T17:41:30
| 2023-06-26T22:01:29
| 2023-06-26T22:01:29
|
{
"login": "StellaAthena",
"id": 15899312,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,196,095,072
| 4,122
|
medical_dialog zh has very slow _generate_examples
|
## Describe the bug
After downloading the files from Google Drive, `load_dataset("medical_dialog", "zh", data_dir="./")` takes an unreasonable amount of time. Generating the train/test split for 33% of the dataset takes over 4.5 hours.
## Steps to reproduce the bug
The easiest way I've found to download files from Google Drive is to use `gdown` from Google Colab, because download speeds are very high when both endpoints are in Google Cloud.
```python
file_ids = [
    "1AnKxGEuzjeQsDHHqL3NqI_aplq2hVL_E",
    "1tt7weAT1SZknzRFyLXOT2fizceUUVRXX",
    "1A64VBbsQ_z8wZ2LDox586JIyyO6mIwWc",
    "1AKntx-ECnrxjB07B6BlVZcFRS4YPTB-J",
    "1xUk8AAua_x27bHUr-vNoAuhEAjTxOvsu",
    "1ezKTfe7BgqVN5o-8Vdtr9iAF0IueCSjP",
    "1tA7bSOxR1RRNqZst8cShzhuNHnayUf7c",
    "1pA3bCFA5nZDhsQutqsJcH3d712giFb0S",
    "1pTLFMdN1A3ro-KYghk4w4sMz6aGaMOdU",
    "1dUSnG0nUPq9TEQyHd6ZWvaxO0OpxVjXD",
    "1UfCH05nuWiIPbDZxQzHHGAHyMh8dmPQH",
]

for i in file_ids:
    url = f"https://drive.google.com/uc?id={i}"
    !gdown $url
from datasets import load_dataset
ds = load_dataset("medical_dialog", "zh", data_dir="./")
```
## Expected results
Faster load time
## Actual results
`Generating train split: 33%: 625519/1921127 [4:31:03<31:39:20, 11.37 examples/s]`
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
@vrindaprabhu, could you take a look at this since you implemented it? I think the `_generate_examples` function might need to be rewritten.
|
closed
|
https://github.com/huggingface/datasets/issues/4122
| 2022-04-07T14:00:51
| 2022-04-08T16:20:51
| 2022-04-08T16:20:51
|
{
"login": "nbroad1881",
"id": 24982805,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,196,000,018
| 4,121
|
datasets.load_metric cannot load a local metric
|
## Describe the bug
No matter how hard I try to tell `load_metric` that I want to load a local metric file, it still fetches things from the Internet and fails with `ConnectionError: Couldn't reach`. I can download that file manually without any connection error and point `load_metric` at its local directory, but then it fails the same way all over again.
## Steps to reproduce the bug
```python
metric = load_metric(path=r'C:\Users\Gare\PycharmProjects\Gare\blue\bleu.py')
ConnectionError: Couldn't reach https://github.com/tensorflow/nmt/raw/master/nmt/scripts/bleu.py
metric = load_metric(path='bleu')
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.12.1/metrics/bleu/bleu.py
metric = load_metric(path='./blue/bleu.py')
ConnectionError: Couldn't reach https://github.com/tensorflow/nmt/raw/master/nmt/scripts/bleu.py
```
## Expected results
I did read the docs [here](https://huggingface.co/docs/datasets/package_reference/loading_methods#datasets.load_metric). There is no parameter other than `path` that lets the function distinguish between a local and a remote file. Given the code above, it should load from the local file.
## Actual results
> metric = load_metric(path=r'C:\Users\Gare\PycharmProjects\Gare\blue\bleu.py')
> ~\AppData\Local\Temp\ipykernel_19636\1855752034.py in <module>
----> 1 metric = load_metric(path=r'C:\Users\Gare\PycharmProjects\Gare\blue\bleu.py')
D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs)
817 if data_files is None and data_dir is not None:
818 data_files = os.path.join(data_dir, "**")
--> 819
820 self.name = name
821 self.revision = revision
D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, return_associated_base_path, data_files, **download_kwargs)
639 self,
640 path: str,
--> 641 download_config: Optional[DownloadConfig] = None,
642 download_mode: Optional[DownloadMode] = None,
643 dynamic_modules_path: Optional[str] = None,
D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\utils\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
297 token = hf_api.HfFolder.get_token()
298 if token:
--> 299 headers["authorization"] = f"Bearer {token}"
300 return headers
301
D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\utils\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
604 def _resumable_file_manager():
605 with open(incomplete_path, "a+b") as f:
--> 606 yield f
607
608 temp_file_manager = _resumable_file_manager
ConnectionError: Couldn't reach https://github.com/tensorflow/nmt/raw/master/nmt/scripts/bleu.py
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Windows-10-10.0.22000-SP0
- Python version: 3.7.13
- PyArrow version: 7.0.0
- Pandas version: 1.3.4
Any advice would be appreciated.
|
closed
|
https://github.com/huggingface/datasets/issues/4121
| 2022-04-07T12:48:56
| 2023-01-18T14:30:46
| 2022-04-07T13:53:27
|
{
"login": "SadGare",
"id": 51749469,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,195,887,430
| 4,120
|
Representing dictionaries (json) objects as features
|
In the process of adding a new dataset to the Hub, I stumbled upon the inability to represent dictionaries whose key names are unknown in advance (and may differ between samples), as originally asked in the [forum](https://discuss.huggingface.co/t/representing-nested-dictionary-with-different-keys/16442).
For instance:
```
sample1 = {"nps": {
    "a": {"id": 0, "text": "text1"},
    "b": {"id": 1, "text": "text2"},
}}
sample2 = {"nps": {
    "a": {"id": 0, "text": "text1"},
    "b": {"id": 1, "text": "text2"},
    "c": {"id": 2, "text": "text3"},
}}
sample3 = {"nps": {
    "a": {"id": 0, "text": "text1"},
    "b": {"id": 1, "text": "text2"},
    "c": {"id": 2, "text": "text3"},
    "d": {"id": 3, "text": "text4"},
}}
```
the `nps` field cannot be represented as a Feature while maintaining its original structure.
@lhoestq suggested to add JSON as a new feature type, which will solve this problem.
An alternative solution would be to change the original data format, but that isn't optimal in my case. Moreover, JSON is a common structure that is likely to be useful for future datasets as well.
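Until a dedicated JSON feature type exists, one possible workaround (a sketch, not an endorsed solution) is to serialize the open-ended dictionary to a JSON string, store it under a plain string feature such as `Value("string")`, and decode it back on access:

```python
import json

sample1 = {"nps": {
    "a": {"id": 0, "text": "text1"},
    "b": {"id": 1, "text": "text2"},
}}

# Encode the open-ended dict as a JSON string; a string column can hold
# any key set, at the cost of losing per-field typing and indexing.
encoded = json.dumps(sample1["nps"], sort_keys=True)

# Decode on access to recover the original structure.
decoded = json.loads(encoded)
```

This keeps the original structure intact per sample, although filtering or mapping over individual nested fields then requires decoding first.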
|
open
|
https://github.com/huggingface/datasets/issues/4120
| 2022-04-07T11:07:41
| 2022-04-07T11:07:41
| null |
{
"login": "yanaiela",
"id": 8031035,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
1,195,641,298
| 4,119
|
Hotfix failing CI tests on Windows
|
This PR makes a hotfix for our CI Windows tests: https://app.circleci.com/pipelines/github/huggingface/datasets/11092/workflows/9cfdb1dd-0fec-4fe0-8122-5f533192ebdc/jobs/67414
Fix #4118
I guess this issue is related to this PR:
- huggingface/huggingface_hub#815
|
closed
|
https://github.com/huggingface/datasets/pull/4119
| 2022-04-07T07:38:46
| 2022-04-07T09:47:24
| 2022-04-07T07:57:13
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,195,638,944
| 4,118
|
Failing CI tests on Windows
|
## Describe the bug
Our CI Windows tests are failing from yesterday: https://app.circleci.com/pipelines/github/huggingface/datasets/11092/workflows/9cfdb1dd-0fec-4fe0-8122-5f533192ebdc/jobs/67414
|
closed
|
https://github.com/huggingface/datasets/issues/4118
| 2022-04-07T07:36:25
| 2022-04-07T07:57:13
| 2022-04-07T07:57:13
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,195,552,406
| 4,117
|
AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
|
## Describe the bug
Could you help me please. I got this following error.
AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
## Steps to reproduce the bug
when I imported the datasets
# Sample code to reproduce the bug
from datasets import list_datasets, load_dataset, list_metrics, load_metric
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: macOS-12.3-x86_64-i386-64bit
- Python version: 3.8.9
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
- Huggingface-hub: 0.5.0
- Transformers: 4.18.0
Thank you in advance.
|
closed
|
https://github.com/huggingface/datasets/issues/4117
| 2022-04-07T05:52:36
| 2024-05-07T09:24:35
| 2022-04-19T15:36:35
|
{
"login": "arymbe",
"id": 4567991,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,194,926,459
| 4,116
|
Pretty print dataset info files
|
Adds indentation to the `dataset_infos.json` file when saving for nicer diffs.
(suggested by @julien-c)
This PR also updates the info files of the GH datasets. Note that this change adds more than **10 MB** to the repo size (the total file size before the change: 29.672298 MB, after: 41.666475 MB), so I'm not sure this change is a good idea.
`src/datasets/info.py` is the only relevant file for reviewers.
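For reference, the size trade-off comes purely from the JSON serializer's `indent` option; a minimal stdlib illustration (the payload below is a toy stand-in, not a real `dataset_infos.json`):

```python
import json

# Toy stand-in for a dataset_infos.json payload.
info = {"example": {"description": "toy", "splits": {"train": {"num_examples": 100}}}}

compact = json.dumps(info)           # current one-line output
pretty = json.dumps(info, indent=4)  # indented output, nicer diffs

# Indentation adds whitespace on every nested line, hence the repo size growth.
size_increase = len(pretty) - len(compact)
```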
|
closed
|
https://github.com/huggingface/datasets/pull/4116
| 2022-04-06T17:40:48
| 2022-04-08T11:28:01
| 2022-04-08T11:21:53
|
{
"login": "mariosasko",
"id": 47462742,
"type": "User"
}
|
[] | true
|
[] |
1,194,907,555
| 4,115
|
ImageFolder add option to ignore some folders like '.ipynb_checkpoints'
|
**Is your feature request related to a problem? Please describe.**
I sometimes like to peek at the dataset images from JupyterLab, so an '.ipynb_checkpoints' folder appears where my dataset is and (I just realized) leads to accidental duplicate image additions. I think this is an easy enough thing to miss, especially if the dataset is very large.
**Describe the solution you'd like**
Maybe have an `ignore` option or something .gitignore-style:
`dataset = load_dataset("imagefolder", data_dir="./data/original", ignore="regex?")`
**Describe alternatives you've considered**
Could filter out manually
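A manual-filtering sketch while the option doesn't exist (the `IGNORE_PATTERNS` list and the helper below are hypothetical, not part of `datasets`):

```python
import fnmatch
from pathlib import Path

# Hypothetical .gitignore-style patterns to skip.
IGNORE_PATTERNS = ["*.ipynb_checkpoints*"]

def list_image_files(root, extensions=(".png", ".jpg", ".jpeg")):
    """Recursively list image files, skipping any path matching an ignore pattern."""
    files = []
    for path in Path(root).rglob("*"):
        if path.suffix.lower() not in extensions:
            continue
        if any(fnmatch.fnmatch(str(path), pattern) for pattern in IGNORE_PATTERNS):
            continue
        files.append(path)
    return sorted(files)
```

The filtered list could then be passed to `load_dataset` via explicit `data_files` instead of `data_dir`.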
|
closed
|
https://github.com/huggingface/datasets/issues/4115
| 2022-04-06T17:29:43
| 2022-06-01T13:04:16
| 2022-06-01T13:04:16
|
{
"login": "cceyda",
"id": 15624271,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
1,194,855,345
| 4,114
|
Allow downloading just some columns of a dataset
|
**Is your feature request related to a problem? Please describe.**
Some people are interested in doing label analysis of a CV dataset without downloading all the images. Downloading the whole dataset does not always make sense for this kind of use case.
**Describe the solution you'd like**
Be able to just download some columns of a dataset, such as doing
```python
load_dataset("huggan/wikiart",columns=["artist", "genre"])
```
Although this might make things a bit complicated in terms of local caching of datasets.
|
open
|
https://github.com/huggingface/datasets/issues/4114
| 2022-04-06T16:38:46
| 2025-02-17T15:10:56
| null |
{
"login": "osanseviero",
"id": 7246357,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
1,194,843,532
| 4,113
|
Multiprocessing with FileLock fails in python 3.9
|
On python 3.9, this code hangs:
```python
from multiprocessing import Pool
from filelock import FileLock
def run(i):
    print(f"got the lock in multi process [{i}]")

with FileLock("tmp.lock"):
    with Pool(2) as pool:
        pool.map(run, range(2))
```
This is because the subprocesses try to acquire the lock from the main process for some reason. This is not the case in older versions of python.
This can cause many issues in python 3.9. In particular, we use multiprocessing to fetch data files when you load a dataset (as long as there are >16 data files). Therefore `imagefolder` hangs, and I expect any dataset that needs to download >16 files to hang as well.
Let's see if we can fix this and have a CI that runs on 3.9.
cc @mariosasko @julien-c
|
closed
|
https://github.com/huggingface/datasets/issues/4113
| 2022-04-06T16:27:09
| 2022-11-28T11:49:14
| 2022-11-28T11:49:14
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,194,752,765
| 4,112
|
ImageFolder with Grayscale images dataset
|
Hi, I'm facing a problem with a grayscale images dataset I have uploaded [here](https://huggingface.co/datasets/ChainYo/rvl-cdip) (RVL-CDIP)
I'm getting an error while I want to use images for training a model with PyTorch DataLoader. Here is the full traceback:
```bash
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1765, in __getitem__
return self._getitem(
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1750, in _getitem
formatted_output = format_table(
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 532, in format_table
return formatter(pa_table, query_type=query_type)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 281, in __call__
return self.format_row(pa_table)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 58, in format_row
return self.recursive_tensorize(row)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 54, in recursive_tensorize
return map_nested(self._recursive_tensorize, data_struct, map_list=False)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 314, in map_nested
mapped = [
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 315, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 267, in _single_map_nested
return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 267, in <dictcomp>
return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 251, in _single_map_nested
return function(data_struct)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 51, in _recursive_tensorize
return self._tensorize(data_struct)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 38, in _tensorize
if np.issubdtype(value.dtype, np.integer):
AttributeError: 'bytes' object has no attribute 'dtype'
```
I don't really understand why the image is still a bytes object even though I applied transformations to it. Here is the code I used to upload the dataset (and it worked well):
```python
train_dataset = load_dataset("imagefolder", data_dir="data/train")
train_dataset = train_dataset["train"]
test_dataset = load_dataset("imagefolder", data_dir="data/test")
test_dataset = test_dataset["train"]
val_dataset = load_dataset("imagefolder", data_dir="data/val")
val_dataset = val_dataset["train"]
dataset = DatasetDict({
    "train": train_dataset,
    "val": val_dataset,
    "test": test_dataset
})
dataset.push_to_hub("ChainYo/rvl-cdip")
```
Now here is the code I am using to get the dataset and prepare it for training:
```python
img_size = 512
batch_size = 128
normalize = [(0.5), (0.5)]
data_dir = "ChainYo/rvl-cdip"
dataset = load_dataset(data_dir, split="train")
transforms = transforms.Compose([
    transforms.Resize(img_size),
    transforms.CenterCrop(img_size),
    transforms.ToTensor(),
    transforms.Normalize(*normalize)
])
transformed_dataset = dataset.with_transform(transforms)
transformed_dataset.set_format(type="torch", device="cuda")
train_dataloader = torch.utils.data.DataLoader(
    transformed_dataset, batch_size=batch_size, shuffle=True, num_workers=4, pin_memory=True
)
```
But this gets me the error above. I don't understand why it behaves this way.
Do I need to map something on the dataset? Something like this:
```python
labels = dataset.features["label"].names
num_labels = dataset.features["label"].num_classes
def preprocess_data(examples):
    images = [ex.convert("RGB") for ex in examples["image"]]
    labels = [ex for ex in examples["label"]]
    return {"images": images, "labels": labels}

features = Features({
    "images": Image(decode=True, id=None),
    "labels": ClassLabel(num_classes=num_labels, names=labels)
})

decoded_dataset = dataset.map(preprocess_data, remove_columns=dataset.column_names, features=features, batched=True, batch_size=100)
```
|
closed
|
https://github.com/huggingface/datasets/issues/4112
| 2022-04-06T15:10:00
| 2022-04-22T10:21:53
| 2022-04-22T10:21:52
|
{
"login": "chainyo",
"id": 50595514,
"type": "User"
}
|
[] | false
|
[] |
1,194,660,699
| 4,111
|
Update security policy
| null |
closed
|
https://github.com/huggingface/datasets/pull/4111
| 2022-04-06T13:59:51
| 2022-04-07T09:46:30
| 2022-04-07T09:40:27
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,194,581,375
| 4,110
|
Matthews Correlation Metric Card
| null |
closed
|
https://github.com/huggingface/datasets/pull/4110
| 2022-04-06T12:59:35
| 2022-05-03T13:43:17
| 2022-05-03T13:36:13
|
{
"login": "emibaylor",
"id": 27527747,
"type": "User"
}
|
[] | true
|
[] |
1,194,579,257
| 4,109
|
Add Spearmanr Metric Card
| null |
closed
|
https://github.com/huggingface/datasets/pull/4109
| 2022-04-06T12:57:53
| 2022-05-03T16:50:26
| 2022-05-03T16:43:37
|
{
"login": "emibaylor",
"id": 27527747,
"type": "User"
}
|
[] | true
|
[] |
1,194,578,584
| 4,108
|
Perplexity Speedup
|
This PR makes necessary changes to perplexity such that:
- it runs much faster (via batching)
- it throws an error when the input is empty, or when the input is one word without a `<BOS>` token
- it adds the option to add a `<BOS>` token
Issues:
- The values returned are extremely high, and I'm worried they aren't correct. Even if they are correct, they are sometimes returned as `inf`, which is not very useful (see [comment below](https://github.com/huggingface/datasets/pull/4108#discussion_r843931094) for some of the output values).
- If the values are not correct, can you help me find the error?
- If the values are correct, it might be worth it to measure something like perplexity per word, which would allow us to get actual values for the larger perplexities, instead of just `inf`
Future:
- `stride` is not currently implemented here. I have some thoughts on how to make it happen with batching, but I think it would be better to get another set of eyes to look at any possible errors causing such large values now rather than later.
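On the `inf` question: perplexity is the exponential of the mean per-token negative log-likelihood, so once the mean NLL exceeds roughly 709 the float64 exponentiation saturates (NumPy/torch return `inf`; `math.exp` raises `OverflowError` instead). A stdlib sketch of the relationship, with made-up log-probabilities:

```python
import math

# Made-up per-token log-probabilities for one sequence.
log_probs = [-2.3, -1.2, -0.7, -3.1]

# Mean negative log-likelihood (NLL) per token.
nll = -sum(log_probs) / len(log_probs)

# Perplexity is exp of the mean NLL. Reporting the mean NLL itself
# (i.e. log-perplexity) stays finite and informative even when the
# exponentiated value would saturate to inf.
perplexity = math.exp(nll)
```

This is one argument for a per-token log-space measure: it preserves the ordering of results that all collapse to `inf` after exponentiation.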
|
closed
|
https://github.com/huggingface/datasets/pull/4108
| 2022-04-06T12:57:21
| 2022-04-20T13:00:54
| 2022-04-20T12:54:42
|
{
"login": "emibaylor",
"id": 27527747,
"type": "User"
}
|
[] | true
|
[] |
1,194,484,885
| 4,107
|
Unable to view the dataset and loading the same dataset throws the error - ArrowInvalid: Exceeded maximum rows
|
## Dataset viewer issue - -ArrowInvalid: Exceeded maximum rows
**Link:** *https://huggingface.co/datasets/Pavithree/explainLikeImFive*
*This is a subset of the original eli5 dataset https://huggingface.co/datasets/vblagoje/lfqa. I just filtered the data samples that belong to one particular subreddit thread. However, the dataset preview for the train split returns the error below:
Status code: 400
Exception: ArrowInvalid
Message: Exceeded maximum rows
When I try to load the same dataset it returns ArrowInvalid: Exceeded maximum rows error*
Am I the one who added this dataset ? Yes
|
closed
|
https://github.com/huggingface/datasets/issues/4107
| 2022-04-06T11:37:15
| 2022-04-08T07:13:07
| 2022-04-06T14:39:55
|
{
"login": "Pavithree",
"id": 23344465,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,194,393,892
| 4,106
|
Support huggingface_hub 0.5
|
Following https://github.com/huggingface/datasets/issues/4105
`huggingface_hub` deprecated some parameters in `HfApi` in 0.5. This PR updates all the calls to HfApi to remove all the deprecations, <s>and I set the `hugginface_hub` requirement to `>=0.5.0`</s>
cc @adrinjalali @LysandreJik
|
closed
|
https://github.com/huggingface/datasets/pull/4106
| 2022-04-06T10:15:25
| 2022-04-08T10:28:43
| 2022-04-08T10:22:23
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
1,194,297,119
| 4,105
|
push to hub fails with huggingface-hub 0.5.0
|
## Describe the bug
`ds.push_to_hub` is failing when updating a dataset in the form "org_id/repo_id"
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("rubrix/news_test")
ds.push_to_hub("<your-user>/news_test", token="<your-token>")
```
## Expected results
The dataset is successfully uploaded
## Actual results
An error validation is raised:
```bash
    if repo_id and (name or organization):
>       raise ValueError(
            "Only pass `repo_id` and leave deprecated `name` and "
            "`organization` to be None."
        )
E       ValueError: Only pass `repo_id` and leave deprecated `name` and `organization` to be None.
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.1
- `huggingface-hub`: 0.5
- Platform: macOS
- Python version: 3.8.12
- PyArrow version: 6.0.0
cc @adrinjalali
|
closed
|
https://github.com/huggingface/datasets/issues/4105
| 2022-04-06T08:59:57
| 2022-04-13T14:30:47
| 2022-04-13T14:30:47
|
{
"login": "frascuchon",
"id": 2518789,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,194,072,966
| 4,104
|
Add time series data - stock market
|
## Adding a Time Series Dataset
- **Name:** 2min ticker data for stock market
- **Description:** Data for 8 stocks, collected for 1 month after the start of the Ukraine-Russia war: 4 NSE stocks and 4 NASDAQ stocks, along with technical indicators (additional features) as shown in the image below
- **Data:** Collected by myself from investing.com
- **Motivation:** Test applicability of transformer based model on stock market / time series problem

|
open
|
https://github.com/huggingface/datasets/issues/4104
| 2022-04-06T05:46:58
| 2024-07-21T16:54:30
| null |
{
"login": "rozeappletree",
"id": 45640029,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
}
] | false
|
[] |
1,193,987,104
| 4,103
|
Add the `GSM8K` dataset
| null |
closed
|
https://github.com/huggingface/datasets/pull/4103
| 2022-04-06T04:07:52
| 2022-04-12T15:38:28
| 2022-04-12T10:21:16
|
{
"login": "jon-tow",
"id": 41410219,
"type": "User"
}
|
[] | true
|
[] |
1,193,616,722
| 4,102
|
[hub] Fix `api.create_repo` call?
| null |
closed
|
https://github.com/huggingface/datasets/pull/4102
| 2022-04-05T19:21:52
| 2023-09-24T10:01:14
| 2022-04-12T08:41:46
|
{
"login": "julien-c",
"id": 326577,
"type": "User"
}
|
[] | true
|
[] |
1,193,399,204
| 4,101
|
How can I download only the train and test split for full numbers using load_dataset()?
|
How can I download only the train and test split for full numbers using load_dataset()?
I do not need the extra split, and it takes 40 minutes just to download it in Colab. I am very short on time. Please help.
|
open
|
https://github.com/huggingface/datasets/issues/4101
| 2022-04-05T16:00:15
| 2022-04-06T13:09:01
| null |
{
"login": "Nakkhatra",
"id": 64383902,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
1,193,393,959
| 4,100
|
Improve RedCaps dataset card
|
This PR modifies the RedCaps card to:
* fix the formatting of the Point of Contact fields on the Hub
* speed up the image fetching logic (aligns it with the [img2dataset](https://github.com/rom1504/img2dataset) tool) and make it more robust (return None if **any** exception is thrown)
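The "return None if any exception is thrown" pattern mentioned above, as a standalone sketch (the helper name and signature are illustrative, not the card's actual code):

```python
import urllib.request

def fetch_image_bytes(url, timeout=5):
    """Return the raw bytes at `url`, or None if anything at all goes wrong.

    Catching the broad Exception is deliberate here: when mapping over
    millions of URLs, one malformed entry or flaky host must not abort
    the whole fetching run.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.read()
    except Exception:
        return None
```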
|
closed
|
https://github.com/huggingface/datasets/pull/4100
| 2022-04-05T15:57:14
| 2022-04-13T14:08:54
| 2022-04-13T14:02:26
|
{
"login": "mariosasko",
"id": 47462742,
"type": "User"
}
|
[] | true
|
[] |
1,193,253,768
| 4,099
|
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)
|
## Describe the bug
Error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)" is thrown when downloading dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset("nielsr/XFUN", "xfun.ja")
```
## Expected results
Dataset should be downloaded without exceptions
## Actual results
Stack trace (from the second execution):
Downloading and preparing dataset xfun/xfun.ja to /root/.cache/huggingface/datasets/nielsr___xfun/xfun.ja/0.0.0/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477...
Downloading data files: 100%
2/2 [00:00<00:00, 88.48it/s]
Extracting data files: 100%
2/2 [00:00<00:00, 79.60it/s]
UnicodeDecodeErrorTraceback (most recent call last)
<ipython-input-31-79c26bd1109c> in <module>
1 from datasets import load_dataset
2
----> 3 datasets = load_dataset("nielsr/XFUN", "xfun.ja")
/usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
604 )
605
--> 606 # By default, return all splits
607 if split is None:
608 split = {s: s for s in self.info.splits}
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
692 Args:
693 split: `datasets.Split` which subset of the data to read.
--> 694
695 Returns:
696 `Dataset`
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys)
/usr/local/lib/python3.6/dist-packages/tqdm/notebook.py in __iter__(self)
252 if not self.disable:
253 self.display(check_delay=False)
--> 254
255 def __iter__(self):
256 try:
/usr/local/lib/python3.6/dist-packages/tqdm/std.py in __iter__(self)
1183 for obj in iterable:
1184 yield obj
-> 1185 return
1186
1187 mininterval = self.mininterval
~/.cache/huggingface/modules/datasets_modules/datasets/nielsr--XFUN/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477/XFUN.py in _generate_examples(self, filepaths)
140 logger.info("Generating examples from = %s", filepath)
141 with open(filepath[0], "r") as f:
--> 142 data = json.load(f)
143
144 for doc in data["documents"]:
/usr/lib/python3.6/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
294
295 """
--> 296 return loads(fp.read(),
297 cls=cls, object_hook=object_hook,
298 parse_float=parse_float, parse_int=parse_int,
/usr/lib/python3.6/encodings/ascii.py in decode(self, input, final)
24 class IncrementalDecoder(codecs.IncrementalDecoder):
25 def decode(self, input, final=False):
---> 26 return codecs.ascii_decode(input, self.errors)[0]
27
28 class StreamWriter(Codec,codecs.StreamWriter):
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0 (but reproduced with many previous versions)
- Platform: Docker: Linux da5b74136d6b 5.3.0-1031-azure #32~18.04.1-Ubuntu SMP Mon Jun 22 15:27:23 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux ; Base docker image is : huggingface/transformers-pytorch-cpu
- Python version: 3.6.9
- PyArrow version: 6.0.1
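The root cause appears to be the `open(filepath[0], "r")` call in the loading script: with no `encoding` argument, Python falls back to the locale's preferred encoding, which is ASCII inside that Docker image. Passing `encoding="utf-8"` explicitly makes the read independent of the locale; a minimal reproduction of the fix:

```python
import json
import os
import tempfile

# Write a UTF-8 JSON file containing non-ASCII text (as in the XFUN data).
fd, path = tempfile.mkstemp(suffix=".json")
os.close(fd)
with open(path, "w", encoding="utf-8") as f:
    json.dump({"text": "日本語のテキスト"}, f, ensure_ascii=False)

# Reading with an explicit encoding works regardless of the locale;
# plain open(path) decodes with the locale default and can raise
# UnicodeDecodeError under an ASCII locale.
with open(path, "r", encoding="utf-8") as f:
    data = json.load(f)
```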
|
closed
|
https://github.com/huggingface/datasets/issues/4099
| 2022-04-05T14:42:38
| 2022-04-06T06:37:44
| 2022-04-06T06:35:54
|
{
"login": "andreybond",
"id": 20210017,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,193,245,522
| 4,098
|
Proposing WikiSplit metric card
|
Pinging @lhoestq to ensure that my distinction between the dataset and the metric are clear :sweat_smile:
|
closed
|
https://github.com/huggingface/datasets/pull/4098
| 2022-04-05T14:36:34
| 2022-10-11T09:10:21
| 2022-04-05T15:42:28
|
{
"login": "sashavor",
"id": 14205986,
"type": "User"
}
|
[] | true
|
[] |
1,193,205,751
| 4,097
|
Updating FrugalScore metric card
|
removing duplicate paragraph
|
closed
|
https://github.com/huggingface/datasets/pull/4097
| 2022-04-05T14:09:24
| 2022-04-05T15:07:35
| 2022-04-05T15:01:46
|
{
"login": "sashavor",
"id": 14205986,
"type": "User"
}
|
[] | true
|
[] |
1,193,165,229
| 4,096
|
Add support for streaming Zarr stores for hosted datasets
|
**Is your feature request related to a problem? Please describe.**
Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support streaming data in the Zarr format as far as I can tell. Zarr stores are designed to be easily streamed in from cloud storage, especially with xarray and fsspec.

Since geospatial data tends to be very large, on the order of TBs or even tens of TBs for a single dataset, it can be difficult for users to store a dataset locally. Just adding Zarr stores with HF git doesn't work well (see https://github.com/huggingface/datasets/issues/3823), as Zarr splits the data into lots of small chunks for fast loading, and that doesn't work well with git.

I've somewhat gotten around that issue by tarring each Zarr store and uploading it as a single file, which seems to be working (see https://huggingface.co/datasets/openclimatefix/gfs-reforecast for example data files, although the script isn't written yet). This does mean that streaming doesn't quite work, though. On the other hand, in https://huggingface.co/datasets/openclimatefix/eumetsat_uk_hrv we stream in a Zarr store from a public GCP bucket quite easily.
**Describe the solution you'd like**
A way to upload Zarr stores for hosted datasets so that we can stream it with xarray and fsspec.
**Describe alternatives you've considered**
Tarring each Zarr store individually and just extracting it in the dataset script -> Downside: this is a lot of data that probably doesn't fit locally for many potential users.
Pre-preparing examples in a format like Parquet -> This would use a lot more storage and offer a lot less flexibility; in eumetsat_uk_hrv, we use the one Zarr store for multiple different configurations.
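The tarring workaround described above can be sketched with the stdlib (the directory layout below is a toy stand-in for a real Zarr store, not actual Zarr output):

```python
import tarfile
import tempfile
from pathlib import Path

# Toy stand-in for a chunked Zarr store: a directory of many small files.
root = Path(tempfile.mkdtemp())
store = root / "data.zarr"
(store / "variable").mkdir(parents=True)
(store / ".zgroup").write_text('{"zarr_format": 2}')
(store / "variable" / "0.0").write_bytes(b"\x00" * 16)

# Pack the whole store into one archive so git/LFS sees a single file
# instead of thousands of tiny chunk files.
archive = root / "data.zarr.tar"
with tarfile.open(archive, "w") as tar:
    tar.add(store, arcname=store.name)
```

The dataset script can then extract the archive on download, at the cost of losing chunk-level streaming.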
|
closed
|
https://github.com/huggingface/datasets/issues/4096
| 2022-04-05T13:38:32
| 2023-12-07T09:01:49
| 2022-04-21T08:12:58
|
{
"login": "jacobbieker",
"id": 7170359,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
1,192,573,353
| 4,095
|
fix typo in rename_column error message
|
I feel bad submitting such a tiny change as a PR but it confused me today 😄
|
closed
|
https://github.com/huggingface/datasets/pull/4095
| 2022-04-05T03:55:56
| 2022-04-05T08:54:46
| 2022-04-05T08:45:53
|
{
"login": "hunterlang",
"id": 680821,
"type": "User"
}
|
[] | true
|
[] |
1,192,534,414
| 4,094
|
Helo Mayfrends
|
## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
closed
|
https://github.com/huggingface/datasets/issues/4094
| 2022-04-05T02:42:57
| 2022-04-05T07:16:42
| 2022-04-05T07:16:42
|
{
"login": "Budigming",
"id": 102933353,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
}
] | false
|
[] |
1,192,523,161
| 4,093
|
elena-soare/crawled-ecommerce: missing dataset
|
elena-soare/crawled-ecommerce
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
|
closed
|
https://github.com/huggingface/datasets/issues/4093
| 2022-04-05T02:25:19
| 2022-04-12T09:34:53
| 2022-04-12T09:34:53
|
{
"login": "seevaratnam",
"id": 17519354,
"type": "User"
}
|
[
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false
|
[] |
1,192,499,903
| 4,092
|
Fix dataset `amazon_us_reviews` metadata - 4/4/2022
|
Fixes #4048 by running `datasets-cli test` to reprocess the data and regenerate the metadata. Additionally, I've updated the README to include up-to-date counts for the subsets.
|
closed
|
https://github.com/huggingface/datasets/pull/4092
| 2022-04-05T01:39:45
| 2022-04-08T12:35:41
| 2022-04-08T12:29:31
|
{
"login": "trentonstrong",
"id": 191985,
"type": "User"
}
|
[] | true
|
[] |
1,192,023,855
| 4,091
|
Build a Dataset One Example at a Time Without Loading All Data Into Memory
|
**Is your feature request related to a problem? Please describe.**
I have a very large dataset stored on disk in a custom format. I have some custom code that reads one data example at a time and yields it in the form of a dictionary. I want to construct a `Dataset` with all examples, and then save it to disk. I later want to load the saved `Dataset` and use it like any other HuggingFace dataset, get splits, wrap it in a PyTorch `DataLoader`, etc. **Crucially, I do not ever want to materialize all the data in memory while building the dataset.**
**Describe the solution you'd like**
I would like to be able to do something like the following. Notice how each example is read and then immediately added to the dataset. We do not store all the data in memory when constructing the `Dataset`. If it helps, I will know the schema of my dataset before hand.
```python
# Initialize an empty Dataset, possibly from a known schema.
dataset = Dataset()
# Read in examples one by one using a custom data streamer.
for example_dict in custom_example_dict_streamer("/path/to/raw/data"):
# Add this example to the dataset but do not store it in memory.
dataset.add_item(example_dict)
# Save the final dataset to disk as an Arrow-backed dataset.
dataset.save_to_disk("/path/to/dataset")
...
# I'd like to be able to later `load_from_disk` and use the loaded Dataset
# just like any other memory-mapped pyarrow-backed HuggingFace dataset...
loaded_dataset = Dataset.load_from_disk("/path/to/dataset")
loaded_dataset.set_format(type="torch", columns=["foo", "bar", "baz"])
dataloader = torch.utils.data.DataLoader(loaded_dataset, batch_size=16)
...
```
**Describe alternatives you've considered**
I initially tried to read all the data into memory, construct a Pandas DataFrame and then call `Dataset.from_pandas`. This would not work as it requires storing all the data in memory. It seems that there is an `add_item` method already -- I tried to implement something like the desired API written above, but I've not been able to initialize an empty `Dataset` (this seems to require several layers of constructing `datasets.table.Table` which requires constructing a `pyarrow.lib.Table`, etc). I also considered writing my data to multiple sharded CSV files or JSON files and then using `from_csv` or `from_json`. I'd prefer not to do this because (1) I'd prefer to avoid the intermediate step of creating these temp CSV/JSON files and (2) I'm not sure if `from_csv` and `from_json` use memory-mapping.
Do you have any suggestions on how I'd be able to achieve this use case? Does something already exist to support this? Thank you very much in advance!
|
closed
|
https://github.com/huggingface/datasets/issues/4091
| 2022-04-04T16:19:24
| 2022-04-20T14:31:00
| 2022-04-20T14:31:00
|
{
"login": "aravind-tonita",
"id": 99340348,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
1,191,956,734
| 4,090
|
Avoid writing empty license files
|
This PR avoids the creation of empty `LICENSE` files.
|
closed
|
https://github.com/huggingface/datasets/pull/4090
| 2022-04-04T15:23:37
| 2022-04-07T12:46:45
| 2022-04-07T12:40:43
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,191,915,196
| 4,089
|
Create metric card for Frugal Score
|
Proposing metric card for Frugal Score.
@albertvillanova or @lhoestq -- there are certain aspects that I'm not 100% sure on (such as how exactly the distillation between BertScore and FrugalScore is done) -- so if you find that something isn't clear, please let me know!
|
closed
|
https://github.com/huggingface/datasets/pull/4089
| 2022-04-04T14:53:49
| 2022-04-05T14:14:46
| 2022-04-05T14:06:50
|
{
"login": "sashavor",
"id": 14205986,
"type": "User"
}
|
[] | true
|
[] |
1,191,901,172
| 4,088
|
Remove unused legacy Beam utils
|
This PR removes unused legacy custom `WriteToParquet`, once official Apache Beam includes the patch since version 2.22.0:
- Patch PR: https://github.com/apache/beam/pull/11699
- Issue: https://issues.apache.org/jira/browse/BEAM-10022
In relation with:
- #204
|
closed
|
https://github.com/huggingface/datasets/pull/4088
| 2022-04-04T14:43:51
| 2022-04-05T15:23:27
| 2022-04-05T15:17:41
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,191,819,805
| 4,087
|
Fix BeamWriter output Parquet file
|
Until now, the `BeamWriter` saved a Parquet file with a simplified schema, where each field value was serialized to JSON. That resulted in Parquet files larger than Arrow files.
This PR:
- writes the Parquet file preserving the original schema and without serialization, thus avoiding serialization overhead and resulting in a smaller output file size.
- fixes the `parquet_to_arrow` function
|
closed
|
https://github.com/huggingface/datasets/pull/4087
| 2022-04-04T13:46:50
| 2022-04-05T15:00:40
| 2022-04-05T14:54:48
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,191,373,374
| 4,086
|
Dataset viewer issue for McGill-NLP/feedbackQA
|
## Dataset viewer issue for '*McGill-NLP/feedbackQA*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/McGill-NLP/feedbackQA)*
*short description of the issue*
The dataset can be loaded correctly with `load_dataset` but the preview doesn't work. Error message:
```
Status code: 400
Exception: Status400Error
Message: Not found. Maybe the cache is missing, or maybe the dataset does not exist.
```
Am I the one who added this dataset ? Yes
|
closed
|
https://github.com/huggingface/datasets/issues/4086
| 2022-04-04T07:27:20
| 2022-04-04T22:29:53
| 2022-04-04T08:01:45
|
{
"login": "cslizc",
"id": 54827718,
"type": "User"
}
|
[
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false
|
[] |
1,190,621,345
| 4,085
|
datasets.set_progress_bar_enabled(False) not working in datasets v2
|
## Describe the bug
datasets.set_progress_bar_enabled(False) not working in datasets v2
## Steps to reproduce the bug
```python
datasets.set_progress_bar_enabled(False)
```
## Expected results
datasets not using any progress bar
## Actual results
AttributeError: module 'datasets' has no attribute 'set_progress_bar_enabled'
## Environment info
datasets version 2
|
closed
|
https://github.com/huggingface/datasets/issues/4085
| 2022-04-02T12:40:10
| 2022-09-17T02:18:03
| 2022-04-04T06:44:34
|
{
"login": "virilo",
"id": 3381112,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,190,060,415
| 4,084
|
Errors in `Train with Datasets` Tensorflow code section on Huggingface.co
|
## Describe the bug
Hi
### Error 1
Running the Tensorflow code on [Huggingface](https://huggingface.co/docs/datasets/use_dataset) gives a TypeError: __init__() got an unexpected keyword argument 'return_tensors'
### Error 2
`DataCollatorWithPadding` isn't imported
## Steps to reproduce the bug
```python
import tensorflow as tf
from datasets import load_dataset
from transformers import AutoTokenizer
dataset = load_dataset('glue', 'mrpc', split='train')
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
dataset = dataset.map(lambda e: tokenizer(e['sentence1'], truncation=True, padding='max_length'), batched=True)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
train_dataset = dataset["train"].to_tf_dataset(
columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'],
shuffle=True,
batch_size=16,
collate_fn=data_collator,
)
```
This is the same code on Huggingface.co
## Actual results
TypeError: __init__() got an unexpected keyword argument 'return_tensors'
## Environment info
- `datasets` version: 2.0.0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.9.7
- PyArrow version: 6.0.0
- Pandas version: 1.4.1
|
closed
|
https://github.com/huggingface/datasets/issues/4084
| 2022-04-01T17:02:47
| 2022-04-04T07:24:37
| 2022-04-04T07:21:31
|
{
"login": "blackhat-coder",
"id": 57095771,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |