| id | number | title | body | state | html_url | created_at | updated_at | closed_at | user | labels | is_pull_request | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,244,839,185
| 4,391
|
Refactor column mappings for question answering datasets
|
This PR tweaks the keys in the metadata that are used to define the column mapping for question answering datasets. This is needed in order to faithfully reconstruct column names like `answers.text` and `answers.answer_start` from the keys in AutoTrain.
As observed in https://github.com/huggingface/datasets/pull/4367 we cannot use periods `.` in the keys of the YAML tags, so a decision was made to use a flat mapping with underscores. For QA datasets, however, it's handy to be able to reconstruct the nesting -- hence this PR.
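A minimal sketch of the reconstruction idea described above (the key names are illustrative, not the actual AutoTrain schema):

```python
# Hedged sketch: rebuilding nested QA column names like "answers.text" from
# flat, underscore-separated metadata keys. Key names are hypothetical.
def reconstruct_qa_columns(flat_mapping):
    """Map flat keys such as 'answers_text' back to 'answers.text'."""
    nested = {}
    for key, column in flat_mapping.items():
        if key.startswith("answers_"):
            # re-introduce the nesting that YAML keys cannot express
            nested["answers." + key[len("answers_"):]] = column
        else:
            nested[key] = column
    return nested

print(reconstruct_qa_columns({"question": "question",
                              "answers_text": "answers.text",
                              "answers_answer_start": "answers.answer_start"}))
```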
cc @sashavor
|
closed
|
https://github.com/huggingface/datasets/pull/4391
| 2022-05-23T09:13:14
| 2022-05-24T12:57:00
| 2022-05-24T12:48:48
|
{
"login": "lewtun",
"id": 26859204,
"type": "User"
}
|
[] | true
|
[] |
1,244,835,877
| 4,390
|
Fix metadata validation
|
Since Python 3.8, the typing module:
- raises an AttributeError when trying to access `__args__` on any type, e.g.: `List.__args__`
- provides the `get_args` function instead: `get_args(List)`
This PR implements a fix for Python >=3.8 while maintaining backward compatibility.
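A sketch of the compatibility shim described above (the exact guard used in the PR may differ):

```python
import sys
from typing import List

if sys.version_info >= (3, 8):
    # Python 3.8+ provides get_args instead of direct __args__ access
    from typing import get_args
else:
    # Older Pythons: fall back to reading the __args__ attribute directly
    def get_args(tp):
        return getattr(tp, "__args__", ())

print(get_args(List[int]))  # (<class 'int'>,)
```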
|
closed
|
https://github.com/huggingface/datasets/pull/4390
| 2022-05-23T09:11:20
| 2022-06-01T09:27:52
| 2022-06-01T09:19:25
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,244,693,690
| 4,389
|
Fix bug in gem dataset for wiki_auto_asset_turk config
|
This PR fixes some URLs.
Fix #4386.
|
closed
|
https://github.com/huggingface/datasets/pull/4389
| 2022-05-23T07:19:49
| 2022-05-23T10:38:26
| 2022-05-23T10:29:55
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,244,645,158
| 4,388
|
Set builder name from module instead of class
|
Currently, the builder name attribute is set from the builder class name.
This PR sets the builder name attribute from the module name instead. Some motivating reasons:
- The dataset ID is relevant and unique among all datasets, and it is directly related to the repository name, i.e., the name of the directory containing the dataset
- The name of the module (i.e. the file containing the loading script) is already relevant for loading: it must have the same name as its containing directory (related to the dataset ID), as we search for it using its directory name
- On the other hand, the name of the builder class is not relevant for loading: in our code, we just search for a class which is a subclass of `DatasetBuilder` (independently of its name). We do not put any constraint on the naming of the builder class, and indeed it can have a name completely different from its module/directory/dataset_id
IMO it makes more sense to align the caching directory name with the dataset_id/directory/module name instead of the builder class name.
Fix #4381.
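As a rough illustration of the class-name vs. module-name distinction (using a stdlib class, since nothing here depends on `datasets` itself):

```python
import json

# For any class, the name of its defining module is recoverable from the
# class object itself, and generally differs from the class name:
cls = json.JSONDecoder
module_name = cls.__module__.split(".")[-1]
print(cls.__name__, module_name)  # JSONDecoder decoder
```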
|
closed
|
https://github.com/huggingface/datasets/pull/4388
| 2022-05-23T06:26:35
| 2022-05-25T05:24:43
| 2022-05-25T05:16:15
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,244,147,817
| 4,387
|
device/google/accessory/adk2012 - Git at Google
|
"git clone https://android.googlesource.com/device/google/accessory/adk2012"
https://android.googlesource.com/device/google/accessory/adk2012/#:~:text=git%20clone%20https%3A//android.googlesource.com/device/google/accessory/adk2012
|
closed
|
https://github.com/huggingface/datasets/issues/4387
| 2022-05-22T04:57:19
| 2022-05-23T06:36:27
| 2022-05-23T06:36:27
|
{
"login": "Aeckard45",
"id": 87345839,
"type": "User"
}
|
[] | false
|
[] |
1,243,965,532
| 4,386
|
Bug for wiki_auto_asset_turk from GEM
|
## Describe the bug
The script of wiki_auto_asset_turk for GEM may be out of date.
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('gem', 'wiki_auto_asset_turk')
```
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/load.py", line 1731, in load_dataset
builder_instance.download_and_prepare(
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 640, in download_and_prepare
self._download_and_prepare(
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 1158, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 707, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/tangtianyi/.cache/huggingface/modules/datasets_modules/datasets/gem/982a54473b12c6a6e40d4356e025fb7172a5bb2065e655e2c1af51f2b3cf4ca1/gem.py", line 538, in _split_generators
dl_dir = dl_manager.download_and_extract(_URLs[self.config.name])
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 416, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 294, in download
downloaded_path_or_paths = map_nested(
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 351, in map_nested
mapped = [
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 352, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 288, in _single_map_nested
return function(data_struct)
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 320, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 234, in cached_path
output_path = get_from_cache(
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 579, in get_from_cache
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.orig
```
|
closed
|
https://github.com/huggingface/datasets/issues/4386
| 2022-05-21T12:31:30
| 2022-05-24T05:55:52
| 2022-05-23T10:29:55
|
{
"login": "StevenTang1998",
"id": 37647985,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,243,921,287
| 4,385
|
Test dill
|
Regression test for future releases of `dill`.
Related to #4379.
|
closed
|
https://github.com/huggingface/datasets/pull/4385
| 2022-05-21T08:57:43
| 2022-05-25T08:30:13
| 2022-05-25T08:21:48
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,243,919,748
| 4,384
|
Refactor download
|
This PR performs a refactoring of the download functionalities, by proposing a modular solution and moving them to their own package "download". Some motivating arguments:
- understandability: from a logical partitioning of the library, it makes sense to have all download functionalities grouped together instead of scattered in a much larger directory containing many more different functionalities
- abstraction: the level of abstraction of "download" (higher) is not the same as "utils" (lower); putting different levels of abstraction together, makes dependencies more intricate (potential circular dependencies) and the system more tightly coupled; when the levels of abstraction are clearly separated, the dependencies flow in a neat direction from higher to lower
- architectural: "download" is a domain-specific functionality of our library/application (a dataset builder performs several actions: download, generate dataset and cache it); these functionalities are at the core of our library; on the other hand, "utils" are always a low-level set of functionalities, not directly related to our domain/business core logic (all libraries have "utils"), thus at the periphery of our lib architecture
Also note that when a library is not architecturally designed following simple, neat, clean principles, this has a negative impact on extensibility, making it more and more difficult to implement enhancements.
As a concrete example in this case, please see: https://app.circleci.com/pipelines/github/huggingface/datasets/12185/workflows/ff25a790-8e3f-45a1-aadd-9d79dfb73c4d/jobs/72860
- After an extension, a circular import is found
- Diving into the cause of this circular import, see the dependency flow, which should be from higher to lower levels of abstraction:
```
ImportError while loading conftest '/home/circleci/datasets/tests/conftest.py'.
tests/conftest.py:12: in <module>
import datasets
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/__init__.py:37: in <module>
from .arrow_dataset import Dataset, concatenate_datasets
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/arrow_dataset.py:59: in <module>
from . import config
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/config.py:8: in <module>
from .utils.logging import get_logger
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/__init__.py:30: in <module>
from .download_manager import DownloadConfig, DownloadManager, DownloadMode
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/download_manager.py:39: in <module>
from .py_utils import NestedDataStructure, map_nested, size_str
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/py_utils.py:608: in <module>
if config.DILL_VERSION < version.parse("0.3.5"):
E AttributeError: module 'datasets.config' has no attribute 'DILL_VERSION'
```
Imports:
- datasets
  - Dataset: lower level than datasets
    - config: lower level than Dataset
      - logger: lower level than config
        - DownloadManager: HIGHER level of abstraction than logger!
Why does importing the logger require importing DownloadManager?
- Logically, it makes no sense
- It is due to an error in the design/architecture of our library:
  - To import the logger, we need to import it from `.utils.logging`
  - To import `.utils.logging` we need to import `.utils`
  - The import of `.utils` requires the import of all the submodules defined in `utils/__init__.py`, among them `.utils.download_manager`!
With `logging` and `download_manager` both inside `utils`, importing `logging` requires importing `download_manager` first: this is a strong coupling between modules, and moreover between modules at different levels of abstraction (importing a lower-level module requires importing a higher-level one). It clearly makes no sense that importing `logging` should require importing `download_manager` first.
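This coupling can be reproduced in miniature (package and module names here are hypothetical, built on the fly just for the demonstration):

```python
import os
import sys
import tempfile

# Build a tiny package whose __init__ re-exports a high-level submodule,
# mimicking the utils/__init__.py situation described above.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "demo_utils")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from .download_manager import DownloadManager\n")
with open(os.path.join(pkg, "logging.py"), "w") as f:
    f.write("LOGGER = 'logger'\n")
with open(os.path.join(pkg, "download_manager.py"), "w") as f:
    f.write("class DownloadManager:\n    pass\n")

sys.path.insert(0, root)
from demo_utils.logging import LOGGER  # importing only the low-level module...
# ...has nevertheless executed download_manager via demo_utils/__init__.py:
print("demo_utils.download_manager" in sys.modules)  # True
```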
|
closed
|
https://github.com/huggingface/datasets/pull/4384
| 2022-05-21T08:49:24
| 2022-05-25T10:52:02
| 2022-05-25T10:43:43
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,243,856,981
| 4,383
|
L
|
## Describe the L
L
## Expected L
A clear and concise lmll
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
|
closed
|
https://github.com/huggingface/datasets/issues/4383
| 2022-05-21T03:47:58
| 2022-05-21T19:20:13
| 2022-05-21T19:20:13
|
{
"login": "AronCodes21",
"id": 99847861,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,243,839,783
| 4,382
|
First time trying
|
## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
closed
|
https://github.com/huggingface/datasets/issues/4382
| 2022-05-21T02:15:18
| 2022-05-21T19:20:44
| 2022-05-21T19:20:44
|
{
"login": "Aeckard45",
"id": 87345839,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
}
] | false
|
[] |
1,243,478,863
| 4,381
|
Bug in caching 2 datasets both with the same builder class name
|
## Describe the bug
The two datasets `mteb/mtop_intent` and `mteb/mtop_domain` both use the same cache folder `.cache/huggingface/datasets/mteb___mtop`. So if you first load `mteb/mtop_intent`, then datasets will not load `mteb/mtop_domain`.
If you delete this cache folder and flip the order in which you load the two datasets, you will get the opposite dataset loaded (the difference here is in terms of `label` and `label_text`).
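The collision can be sketched as follows (the path template is an illustration of the pre-fix behavior, not the exact implementation):

```python
# Hypothetical sketch: the cache directory is derived from the builder
# *class* name, which is "mtop" for both repositories.
def cache_dir(namespace, builder_name, config):
    return f"~/.cache/huggingface/datasets/{namespace}___{builder_name}/{config}"

intent_dir = cache_dir("mteb", "mtop", "en")  # from mteb/mtop_intent
domain_dir = cache_dir("mteb", "mtop", "en")  # from mteb/mtop_domain
print(intent_dir == domain_dir)  # True -> the second load reuses the first cache
```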
## Steps to reproduce the bug
```python
import datasets
dataset = datasets.load_dataset("mteb/mtop_intent", "en")
print(dataset['train'][0])
dataset = datasets.load_dataset("mteb/mtop_domain", "en")
print(dataset['train'][0])
```
## Expected results
```
Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop_intent/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 920.14it/s]
{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'}
Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop_domain/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1307.59it/s]
{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 0, 'label_text': 'messaging'}
```
## Actual results
```
Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 920.14it/s]
{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'}
Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1307.59it/s]
{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'}
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.1
- Platform: macOS-12.1-arm64-arm-64bit
- Python version: 3.9.12
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
|
closed
|
https://github.com/huggingface/datasets/issues/4381
| 2022-05-20T18:18:03
| 2022-06-02T08:18:37
| 2022-05-25T05:16:15
|
{
"login": "NouamaneTazi",
"id": 29777165,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,243,183,054
| 4,380
|
Pin dill
|
Hotfix #4379.
CC: @sgugger
|
closed
|
https://github.com/huggingface/datasets/pull/4380
| 2022-05-20T13:54:19
| 2022-06-13T10:03:52
| 2022-05-20T16:33:04
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,243,175,854
| 4,379
|
Latest dill release raises exception
|
## Describe the bug
As reported by @sgugger, latest dill release is breaking things with Datasets.
```
______________ ExamplesTests.test_run_speech_recognition_seq2seq _______________
self = <multiprocess.pool.ApplyResult object at 0x7fa5981a1cd0>, timeout = None
def get(self, timeout=None):
self.wait(timeout)
if not self.ready():
raise TimeoutError
if self._success:
return self._value
else:
> raise self._value
E TypeError: '>' not supported between instances of 'NoneType' and 'float'
```
|
closed
|
https://github.com/huggingface/datasets/issues/4379
| 2022-05-20T13:48:36
| 2022-05-21T15:53:26
| 2022-05-20T17:06:27
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,242,935,373
| 4,378
|
Tidy up license metadata for google_wellformed_query, newspop, sick
|
Amend three licenses on datasets to fit naming convention (lower case, cc licenses include sub-version number). I think that's it - everything else on datasets looks great & super-searchable now!
|
closed
|
https://github.com/huggingface/datasets/pull/4378
| 2022-05-20T10:16:12
| 2022-05-24T13:50:23
| 2022-05-24T13:10:27
|
{
"login": "leondz",
"id": 121934,
"type": "User"
}
|
[] | true
|
[] |
1,242,746,186
| 4,377
|
Fix checksum and bug in irc_disentangle dataset
|
There was a bug in the filepath segment:
- wrong: `jkkummerfeld-irc-disentanglement-fd379e9`
- right: `jkkummerfeld-irc-disentanglement-35f0a40`
Also there was a bug in the checksum of the downloaded file.
This PR fixes these issues.
Fix partially #4376.
|
closed
|
https://github.com/huggingface/datasets/pull/4377
| 2022-05-20T07:29:28
| 2022-05-20T09:34:36
| 2022-05-20T09:26:32
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,242,218,144
| 4,376
|
irc_disentagle viewer error
|
The dataset viewer shows this message for the "ubuntu" "train", "test", and "validation" splits:
```
Server error
Status code: 400
Exception: ValueError
Message: Cannot seek streaming HTTP file
```
It appears to give the same message for the "channel_two" data as well.
I get a checksums error when using `load_dataset()` with this dataset, even with the `download_mode` and `ignore_verifications` options set. I referenced the issue here: https://github.com/huggingface/datasets/issues/3807
|
closed
|
https://github.com/huggingface/datasets/issues/4376
| 2022-05-19T19:15:16
| 2023-01-12T16:56:13
| 2022-06-02T08:20:00
|
{
"login": "labouz",
"id": 25671683,
"type": "User"
}
|
[] | false
|
[] |
1,241,921,147
| 4,375
|
Support DataLoader with num_workers > 0 in streaming mode
|
### Issue
It's currently not possible to properly stream a dataset using multiple `torch.utils.data.DataLoader` workers:
- the `TorchIterableDataset` can't be pickled and passed to the subprocesses: https://github.com/huggingface/datasets/issues/3950
- streaming extension is failing: https://github.com/huggingface/datasets/issues/3951
- `fsspec` doesn't work out of the box in subprocesses
### Solution in this PR
I fixed these to enable passing an `IterableDataset` to a `torch.utils.data.DataLoader` with `num_workers > 0`.
I also had to shard the `IterableDataset` to give each worker a shard, otherwise data would be duplicated. This is implemented in `TorchIterableDataset.__iter__` and uses the new `IterableDataset._iter_shard(shard_idx)` method
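The sharding idea can be sketched without any torch dependency (names are illustrative; the real logic lives in `TorchIterableDataset.__iter__`):

```python
# Simplified sketch: each DataLoader worker iterates only over its own
# shards, so no example is duplicated across workers.
class ShardedIterable:
    def __init__(self, shards, worker_id, num_workers):
        self.shards = shards
        self.worker_id = worker_id
        self.num_workers = num_workers

    def __iter__(self):
        # worker w reads shards w, w + num_workers, w + 2*num_workers, ...
        for idx in range(self.worker_id, len(self.shards), self.num_workers):
            yield from self.shards[idx]

shards = [[0, 1], [2, 3], [4, 5], [6, 7]]
seen = sorted(x for w in range(2) for x in ShardedIterable(shards, w, 2))
print(seen)  # every item appears exactly once across the two workers
```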
I also had to make a few changes to the patching that enables streaming in dataset scripts:
- the patches are now always applied - not just for streaming mode. They're applied when a builder is instantiated
- I improved it to also check for renamed modules or attributes (ex: pandas vs pd)
- I grouped all the patches of pathlib.Path into a class `xPath`, so that `Path` outside of dataset scripts stays unchanged - otherwise I didn't change the content of the extended Path methods for streaming
- I fixed a bug with the `pd.read_csv` patch, opening the file in "rb" mode was missing and causing some datasets to not work in streaming mode, and compression inference was missing
### A few details regarding `fsspec` in multiprocessing
From https://github.com/fsspec/filesystem_spec/pull/963#issuecomment-1131709948 :
> Non-async instances might be safe in the forked child, if they hold no open files/sockets etc.; I'm not sure any implementations pass this test!
> If any async instance has been created, the newly forked processes must:
> 1. discard references to locks, threads and event loops and make new ones
> 2. not use any async fsspec instances from the parent process
> 3. clear all class instance caches
Therefore in a DataLoader's worker, I clear the reference to the loop and thread (1). We should be fine for 2 and 3 already since we don't use fsspec class instances from the parent process.
Fix https://github.com/huggingface/datasets/issues/3950
Fix https://github.com/huggingface/datasets/issues/3951
TODO:
- [x] fix tests
|
closed
|
https://github.com/huggingface/datasets/pull/4375
| 2022-05-19T15:00:31
| 2022-07-04T16:05:14
| 2022-06-10T20:47:27
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
1,241,860,535
| 4,374
|
extremely slow processing when using a custom dataset
|
## Processing a custom dataset loaded as a .txt file is extremely slow, compared to a dataset of similar volume from the Hub
I have a large 22 GB .txt file which I load into an HF dataset:
`lang_dataset = datasets.load_dataset("text", data_files="hi.txt")`
Then I use a pre-processing function to clean the dataset:
`lang_dataset["train"] = lang_dataset["train"].map(
remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names, batch_size=64)`
This processing takes an astronomical amount of time, while hogging all the RAM.
A similar dataset of the same size available on the Hugging Face Hub, which runs through the same processing function and has the same amount of data, works completely fine:
`lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)`
The predicted preprocessing times are as follows:
- Hugging Face Hub dataset: 6.5 hrs
- custom loaded dataset: 7000 hrs
Note: both datasets are actually almost the same, just provided by different sources with +/- some samples; only one is hosted on the HF Hub while the other is downloaded in text format.
## Steps to reproduce the bug
```
import datasets
import psutil
import sys
import glob
from fastcore.utils import listify
import re
import gc
def remove_non_indic_sentences(example):
tmp_ls = []
eng_regex = r'[. a-zA-Z0-9ÖÄÅöäå _.,!"\'\/$]*'
for e in listify(example['text']):
matches = re.findall(eng_regex, e)
for match in (str(match).strip() for match in matches if match not in [""," ", " ", ",", " ,", ", ", " , "]):
if len(list(match.split(" "))) > 2:
e = re.sub(match," ",e,count=1)
tmp_ls.append(e)
gc.collect()
example['clean_text'] = tmp_ls
return example
lang_dataset = datasets.load_dataset("text", data_files="hi.txt")
lang_dataset["train"] = lang_dataset["train"].map(
    remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names, batch_size=64)
## the same thing works much faster when loading a similar dataset from the hub
lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)
lang_dataset["train"] = lang_dataset["train"].map(
    remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names, batch_size=64)
```
## Actual results
A similar dataset of the same size available on the Hugging Face Hub, which runs the same processing function on the same amount of data, works completely fine:
`lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)`
**The predicted preprocessing times are as follows:**
- Hugging Face Hub dataset: 6.5 hrs
- custom loaded dataset: 7000 hrs
**I even tried the following:**
- sharding the large 22gb text files into smaller files and loading
- saving the file to disk and then loading
- using lesser num_proc
- using smaller batch size
- processing without batches, i.e. without `batched=True`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2.dev0
- Platform: Ubuntu 20.04 LTS
- Python version: 3.9.7
- PyArrow version: 8.0.0
|
closed
|
https://github.com/huggingface/datasets/issues/4374
| 2022-05-19T14:18:05
| 2023-07-25T15:07:17
| 2023-07-25T15:07:16
|
{
"login": "StephennFernandes",
"id": 32235549,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
},
{
"name": "question",
"color": "d876e3"
}
] | false
|
[] |
1,241,769,310
| 4,373
|
Remove links in docs to old dataset viewer
|
Remove the links in the docs to the no longer maintained dataset viewer.
|
closed
|
https://github.com/huggingface/datasets/pull/4373
| 2022-05-19T13:24:39
| 2022-05-20T15:24:28
| 2022-05-20T15:16:05
|
{
"login": "mariosasko",
"id": 47462742,
"type": "User"
}
|
[] | true
|
[] |
1,241,703,826
| 4,372
|
Check if dataset features match before push in `DatasetDict.push_to_hub`
|
Fix #4211
|
closed
|
https://github.com/huggingface/datasets/pull/4372
| 2022-05-19T12:32:30
| 2022-05-20T15:23:36
| 2022-05-20T15:15:30
|
{
"login": "mariosasko",
"id": 47462742,
"type": "User"
}
|
[] | true
|
[] |
1,241,500,906
| 4,371
|
Add missing language tags for udhr dataset
|
Related to #4362.
|
closed
|
https://github.com/huggingface/datasets/pull/4371
| 2022-05-19T09:34:10
| 2022-06-08T12:03:24
| 2022-05-20T09:43:10
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,240,245,642
| 4,369
|
Add redirect to dataset script in the repo structure page
|
Following https://github.com/huggingface/hub-docs/pull/146 I added a redirection to the dataset scripts documentation in the repository structure page.
|
closed
|
https://github.com/huggingface/datasets/pull/4369
| 2022-05-18T17:05:33
| 2022-05-19T08:19:01
| 2022-05-19T08:10:51
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
1,240,064,860
| 4,368
|
Add long answer candidates to natural questions dataset
|
This is a modification of the Natural Questions dataset to include missing information specifically related to long answer candidates. (See here: https://github.com/google-research-datasets/natural-questions#long-answer-candidates). This information is important to ensure consistent comparison with prior work. It does not disturb the rest of the format. @lhoestq @albertvillanova
|
closed
|
https://github.com/huggingface/datasets/pull/4368
| 2022-05-18T14:35:42
| 2022-07-26T20:30:41
| 2022-07-26T20:18:42
|
{
"login": "seirasto",
"id": 4257308,
"type": "User"
}
|
[] | true
|
[] |
1,240,011,602
| 4,367
|
Remove config names as yaml keys
|
Many datasets have dots in their config names. However it causes issues with the YAML tags of the dataset cards since we can't have dots in YAML keys.
To fix this, I removed the tag separation per config name completely, and use a single flat YAML for all configurations. Dataset search doesn't use this info anyway. I removed all the config names used as YAML keys, and moved them under a new `config:` key.
This is related to https://github.com/huggingface/datasets/pull/2362 (internal https://github.com/huggingface/moon-landing/issues/946).
Also removing the dots in the YAML keys would allow us to do as in https://github.com/huggingface/datasets/pull/4302 which removes a hack that replaces all the dots by underscores in the YAML tags.
I also added a test in the CI that checks all the YAML tags to make sure that:
- they can be parsed using a YAML parser
- they contain only valid YAML tags like languages or task_ids
|
closed
|
https://github.com/huggingface/datasets/pull/4367
| 2022-05-18T13:59:24
| 2022-05-20T09:35:26
| 2022-05-20T09:27:19
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
1,239,534,165
| 4,366
|
TypeError: __init__() missing 1 required positional argument: 'scheme'
|
"name" : "node-1",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "",
"version" : {
"number" : "7.5.0",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "",
"build_date" : "2019-11-26T01:06:52.518245Z",
"build_snapshot" : false,
"lucene_version" : "8.3.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
when I run the command:
nohup python3 custom_service.pyc > service.log 2>&1&
the log:
nohup: ignoring input
Traceback (most recent call last):
File "/home/xfz/p3_custom_test/custom_service.py", line 55, in <module>
File "/home/xfz/p3_custom_test/custom_service.py", line 48, in doInitialize
File "custom_impl.py", line 286, in custom_setup
File "custom_impl.py", line 127, in create_es_index
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/__init__.py", line 345, in __init__
ssl_show_warn=ssl_show_warn,
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 105, in client_node_configs
node_configs = hosts_to_node_configs(hosts)
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 154, in hosts_to_node_configs
node_configs.append(host_mapping_to_node_config(host))
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 221, in host_mapping_to_node_config
return NodeConfig(**options) # type: ignore
TypeError: __init__() missing 1 required positional argument: 'scheme'
[1]+  Exit 1                  nohup python3 custom_service.pyc > service.log 2>&1
custom_service.pyc can't run
|
closed
|
https://github.com/huggingface/datasets/issues/4366
| 2022-05-18T07:17:29
| 2022-05-18T16:36:22
| 2022-05-18T16:36:21
|
{
"login": "jffgitt",
"id": 99231535,
"type": "User"
}
|
[
{
"name": "duplicate",
"color": "cfd3d7"
}
] | false
|
[] |
1,239,109,943
| 4,365
|
Remove dots in config names
|
20+ datasets have dots in their config names. However it causes issues with the YAML tags of the dataset cards since we can't have dots in YAML keys.
This is related to https://github.com/huggingface/datasets/pull/2362 (internal https://github.com/huggingface/moon-landing/issues/946).
Also removing the dots in the config names would allow us to merge https://github.com/huggingface/datasets/pull/4302 which removes a hack that replaces all the dots by underscores in the YAML tags.
I also added a test in the CI that checks all the YAML tags to make sure that:
- they can be parsed using a YAML parser
- they contain only valid YAML tags like `languages` or `task_ids`
- they contain valid config names (no invalid characters `<>:/\|?*.`)
|
closed
|
https://github.com/huggingface/datasets/pull/4365
| 2022-05-17T20:12:57
| 2023-09-24T10:02:53
| 2022-05-18T13:59:41
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
1,238,976,106
| 4,364
|
Support complex feature types as `features` in packaged loaders
|
This PR adds `table_cast` to the packaged loaders to fix casting to the `Image`/`Audio`, `ArrayND` and `ClassLabel` types. If these types are not present in the `builder.config.features` dictionary, the built-in `pa.Table.cast` is used for better performance. Additionally, this PR adds `cast_storage` to `ClassLabel` to support the string to int conversion in `table_cast` and ensure that integer labels are in a valid range.
Fix https://github.com/huggingface/datasets/issues/4210
This PR is also a solution for these (popular) discussions: https://discuss.huggingface.co/t/converting-string-label-to-int/2816 and https://discuss.huggingface.co/t/class-labels-for-custom-datasets/15130/2
TODO:
* [x] tests
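A simplified, pure-Python sketch of what a string-to-`ClassLabel` cast does (modeled on the idea only, not the actual Arrow-based implementation):

```python
# Hypothetical sketch: convert string labels to ints and validate the range,
# mirroring what cast_storage enables for ClassLabel.
class ClassLabelSketch:
    def __init__(self, names):
        self.names = names
        self._str2int = {n: i for i, n in enumerate(names)}

    def cast_storage(self, values):
        out = []
        for v in values:
            i = self._str2int[v] if isinstance(v, str) else v
            if not 0 <= i < len(self.names):
                raise ValueError(f"label {i} out of range")
            out.append(i)
        return out

label = ClassLabelSketch(names=["neg", "pos"])
print(label.cast_storage(["pos", "neg", 1]))  # [1, 0, 1]
```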
|
closed
|
https://github.com/huggingface/datasets/pull/4364
| 2022-05-17T17:53:23
| 2022-05-31T12:26:23
| 2022-05-31T12:16:32
|
{
"login": "mariosasko",
"id": 47462742,
"type": "User"
}
|
[] | true
|
[] |
1,238,897,652
| 4,363
|
The dataset preview is not available for this split.
|
I have uploaded the corpus developed by our lab in the speech domain to Hugging Face [datasets](https://huggingface.co/datasets/Roh/ryanspeech). You can read the companion paper, accepted at Interspeech 2021, [here](https://arxiv.org/abs/2106.08468). The dataset works fine, but I can't make the dataset preview work. It gives me the following error that I don't understand. Can you help me begin debugging it?
```
Status code: 400
Exception: AttributeError
Message: 'NoneType' object has no attribute 'split'
```
|
closed
|
https://github.com/huggingface/datasets/issues/4363
| 2022-05-17T16:34:43
| 2022-06-08T12:32:10
| 2022-06-08T09:26:56
|
{
"login": "roholazandie",
"id": 7584674,
"type": "User"
}
|
[
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false
|
[] |
1,238,680,112
| 4,362
|
Update dataset_infos for UDHN/udhr dataset
|
Checksum update to `udhr` for issue #4361
|
closed
|
https://github.com/huggingface/datasets/pull/4362
| 2022-05-17T13:52:59
| 2022-06-08T19:20:11
| 2022-06-08T19:11:21
|
{
"login": "leondz",
"id": 121934,
"type": "User"
}
|
[] | true
|
[] |
1,238,671,931
| 4,361
|
`udhr` doesn't load, dataset checksum mismatch
|
## Describe the bug
Loading `udhr` fails due to a checksum mismatch for some source files. Looks like both of the source files on unicode.org have changed:
size + checksum in datasets repo:
```
(hfdev) leon@blade:~/datasets/datasets/udhr$ jq .default.download_checksums < dataset_infos.json
{
"https://unicode.org/udhr/assemblies/udhr_xml.zip": {
"num_bytes": 2273633,
"checksum": "0565fa62c2ff155b84123198bcc967edd8c5eb9679eadc01e6fb44a5cf730fee"
},
"https://unicode.org/udhr/assemblies/udhr_txt.zip": {
"num_bytes": 2107471,
"checksum": "087b474a070dd4096ae3028f9ee0b30dcdcb030cc85a1ca02e143be46327e5e5"
}
}
```
size + checksum regenerated from current source files:
```
(hfdev) leon@blade:~/datasets/datasets/udhr$ rm dataset_infos.json
(hfdev) leon@blade:~/datasets/datasets/udhr$ datasets-cli test --save_infos udhr.py
Using custom data configuration default
Testing builder 'default' (1/1)
Downloading and preparing dataset udhn/default (download: 4.18 MiB, generated: 6.15 MiB, post-processed: Unknown size, total: 10.33 MiB) to /home/leon/.cache/huggingface/datasets/udhn/default/0.0.0/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66...
Dataset udhn downloaded and prepared to /home/leon/.cache/huggingface/datasets/udhn/default/0.0.0/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66. Subsequent calls will reuse this data.
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 686.69it/s]
Dataset Infos file saved at dataset_infos.json
Test successful.
(hfdev) leon@blade:~/datasets/datasets/udhr$ jq .default.download_checksums < dataset_infos.json
{
"https://unicode.org/udhr/assemblies/udhr_xml.zip": {
"num_bytes": 2389690,
"checksum": "a3350912790196c6e1b26bfd1c8a50e8575f5cf185922ecd9bd15713d7d21438"
},
"https://unicode.org/udhr/assemblies/udhr_txt.zip": {
"num_bytes": 2215441,
"checksum": "cb87ecb25b56f34e4fd6f22b323000524fd9c06ae2a29f122b048789cf17e9fe"
}
}
(hfdev) leon@blade:~/datasets/datasets/udhr$
```
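The size/checksum pairs in `dataset_infos.json` can be reproduced by hand with the standard library, which is useful to confirm which side changed (a hypothetical helper, not the `datasets` internals):

```python
import hashlib

def sha256_and_size(path):
    """Return (hex sha256 digest, byte size) of a file, streaming in 1 MiB chunks."""
    h, size = hashlib.sha256(), 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
            size += len(chunk)
    return h.hexdigest(), size
```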
--- is unicode.org a sustainable hosting solution for this dataset?
## Steps to reproduce the bug
```python
from datasets import load_dataset
udhr = load_dataset("udhr")
```
## Expected results
That a Dataset object containing the UDHR data will be returned.
## Actual results
```
>>> d = load_dataset('udhr')
Using custom data configuration default
Downloading and preparing dataset udhn/default (download: 4.18 MiB, generated: 6.15 MiB, post-processed: Unknown size, total: 10.33 MiB) to /home/leon/.cache/huggingface/datasets/udhn/default/0.0.0/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/leon/.local/lib/python3.9/site-packages/datasets/load.py", line 1731, in load_dataset
builder_instance.download_and_prepare(
File "/home/leon/.local/lib/python3.9/site-packages/datasets/builder.py", line 613, in download_and_prepare
self._download_and_prepare(
File "/home/leon/.local/lib/python3.9/site-packages/datasets/builder.py", line 1117, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/home/leon/.local/lib/python3.9/site-packages/datasets/builder.py", line 684, in _download_and_prepare
verify_checksums(
File "/home/leon/.local/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://unicode.org/udhr/assemblies/udhr_xml.zip', 'https://unicode.org/udhr/assemblies/udhr_txt.zip']
>>>
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.1 commit/4110fb6034f79c5fb470cf1043ff52180e9c63b7
- Platform: Linux Ubuntu 20.04
- Python version: 3.9.12
- PyArrow version: 8.0.0
|
closed
|
https://github.com/huggingface/datasets/issues/4361
| 2022-05-17T13:47:09
| 2022-06-08T19:11:21
| 2022-06-08T19:11:21
|
{
"login": "leondz",
"id": 121934,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,237,239,096
| 4,360
|
Fix example in opus_ubuntu, Add license info
|
This PR
* fixes a typo in the example for the`opus_ubuntu` dataset where it's mistakenly referred to as `ubuntu`
* adds the declared license info for this corpus' origin
* adds an example instance
* updates the data origin type
|
closed
|
https://github.com/huggingface/datasets/pull/4360
| 2022-05-16T14:22:28
| 2022-06-01T13:06:07
| 2022-06-01T12:57:09
|
{
"login": "leondz",
"id": 121934,
"type": "User"
}
|
[] | true
|
[] |
1,237,149,578
| 4,359
|
Fix Version equality
|
I think `Version` equality should align with other similar cases in Python, like:
```python
In [1]: "a" == 5, "a" == None
Out[1]: (False, False)
In [2]: "a" != 5, "a" != None
Out[2]: (True, True)
```
With this PR, we will get:
```python
In [3]: Version("1.0.0") == 5, Version("1.0.0") == None
Out[3]: (False, False)
In [4]: Version("1.0.0") != 5, Version("1.0.0") != None
Out[4]: (True, True)
```
Note I found this issue when `doc-builder` tried to compare:
```python
if param.default != inspect._empty
```
where `param.default` is an instance of `Version`.
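A minimal sketch of the intended behavior (a toy stand-in, not the actual `datasets.Version` implementation): `__eq__` returns `False` for foreign types instead of raising, mirroring how `str`/`int` comparisons behave:

```python
class Version:
    """Toy stand-in for datasets.Version, comparing dotted version strings."""

    def __init__(self, version_str):
        self.version_str = version_str
        self.tuple = tuple(int(p) for p in version_str.split("."))

    def __eq__(self, other):
        if not isinstance(other, Version):
            return False  # behave like "a" == 5 rather than raising
        return self.tuple == other.tuple

    def __ne__(self, other):
        return not self == other

print(Version("1.0.0") == 5, Version("1.0.0") == None)  # False False
print(Version("1.0.0") != 5, Version("1.0.0") != None)  # True True
```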
|
closed
|
https://github.com/huggingface/datasets/pull/4359
| 2022-05-16T13:19:26
| 2022-05-24T16:25:37
| 2022-05-24T16:17:14
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,237,147,692
| 4,358
|
Missing dataset tags and sections in some dataset cards
|
Summary of CircleCI errors for different dataset metadata:
- **BoolQ**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
- **Conllpp**: expected some content in section `Citation Information` but it is empty.
- **GLUE**: 'annotations_creators', 'language_creators', 'source_datasets': ['unknown'] are not registered tags
- **ConLL2003**: field 'task_ids': ['part-of-speech-tagging'] are not registered tags for 'task_ids'
- **Hate_speech18**: expected some content in section `Data Instances` but it is empty; expected some content in section `Data Splits` but it is empty
- **Jigsaw_toxicity_pred**: expected some content in section `Citation Information` but it is empty.
- **LIAR**: `Data Instances`, `Data Fields`, `Data Splits`, and `Citation Information` are empty.
- **MSRA NER**: `Dataset Summary`, `Data Instances`, `Data Fields`, `Data Splits`, and `Citation Information` are empty.
- **sem_eval_2010_task_8**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
- **sms_spam**: `Data Instances` and `Data Splits` are empty.
- **Quora**: expected some content in section `Citation Information` but it is empty; missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
- **sentiment140**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
|
open
|
https://github.com/huggingface/datasets/issues/4358
| 2022-05-16T13:18:16
| 2022-05-30T15:36:52
| null |
{
"login": "sashavor",
"id": 14205986,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,237,037,069
| 4,357
|
Fix warning in push_to_hub
|
Fix warning:
```
FutureWarning: 'shard_size' was renamed to 'max_shard_size' in version 2.1.1 and will be removed in 2.4.0.
```
|
closed
|
https://github.com/huggingface/datasets/pull/4357
| 2022-05-16T11:50:17
| 2022-05-16T15:18:49
| 2022-05-16T15:10:41
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,236,846,308
| 4,356
|
Fix dataset builder default version
|
Currently, when using a custom config (subclass of `BuilderConfig`), the default version set at the builder level is ignored: we must set the default version in the custom config class.
However, when loading a dataset with `config_kwargs` (for a configuration not present in `BUILDER_CONFIGS`), the default version set in the custom config is ignored and "0.0.0" is used instead:
```python
ds = load_dataset("wikipedia", language="co", date="20220501", beam_runner="DirectRunner")
```
generates the following config:
```python
WikipediaConfig(name='20220501.co', version=0.0.0, data_dir=None, data_files=None, description='Wikipedia dataset for co, parsed from 20220501 dump.')
```
with version "0.0.0" instead of "2.0.0".
See as a counter-example, when the config is present in `BUILDER_CONFIGS`:
```python
ds = load_dataset("wikipedia", "20220301.fr", beam_runner="DirectRunner")
```
generates the following config:
```python
WikipediaConfig(name='20220301.fr', version=2.0.0, data_dir=None, data_files=None, description='Wikipedia dataset for fr, parsed from 20220301 dump.')
```
with correct version "2.0.0", as set in the custom config class.
The reason for this is that `DatasetBuilder` has a default VERSION ("0.0.0") that overwrites the default version set at the custom config class.
This PR:
- Removes the default VERSION at `DatasetBuilder` (set to None, so that the class attribute exists but it does not override the custom config default version).
- Note that the `BuilderConfig` class already sets a default version = "0.0.0"; no need to pass this from the builder.
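The resolution order described above can be sketched as follows (hypothetical, heavily simplified class logic — the real builder does much more):

```python
class BuilderConfig:
    def __init__(self, name="default", version="0.0.0"):
        self.name = name
        self.version = version

class WikipediaConfig(BuilderConfig):
    # Custom config with its own default version, as in the example above
    def __init__(self, language=None, date=None, version="2.0.0", **kwargs):
        super().__init__(name=f"{date}.{language}", version=version, **kwargs)

class DatasetBuilder:
    BUILDER_CONFIG_CLASS = BuilderConfig
    VERSION = None  # after this PR: no "0.0.0" default overriding the config

    def __init__(self, **config_kwargs):
        self.config = self.BUILDER_CONFIG_CLASS(**config_kwargs)
        if self.VERSION is not None:  # only override when explicitly set
            self.config.version = self.VERSION

class Wikipedia(DatasetBuilder):
    BUILDER_CONFIG_CLASS = WikipediaConfig

builder = Wikipedia(language="co", date="20220501")
print(builder.config.version)  # "2.0.0", not "0.0.0"
```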
|
closed
|
https://github.com/huggingface/datasets/pull/4356
| 2022-05-16T09:05:10
| 2022-05-30T13:56:58
| 2022-05-30T13:47:54
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,236,797,490
| 4,355
|
Fix warning in upload_file
|
Fix warning:
```
FutureWarning: Pass path_or_fileobj='...' as keyword args. From version 0.7 passing these as positional arguments will result in an error
```
|
closed
|
https://github.com/huggingface/datasets/pull/4355
| 2022-05-16T08:21:31
| 2022-05-16T11:28:02
| 2022-05-16T11:19:57
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,236,404,383
| 4,354
|
Problems with WMT dataset
|
## Describe the bug
I am trying to load WMT15 dataset and to define which data-sources to use for train/validation/test splits, but unfortunately it seems that the official documentation at [https://huggingface.co/datasets/wmt15#:~:text=Versions%20exists%20for,wmt_translate%22%2C%20config%3Dconfig)](https://huggingface.co/datasets/wmt15#:~:text=Versions%20exists%20for,wmt_translate%22%2C%20config%3Dconfig)) doesn't work anymore.
## Steps to reproduce the bug
```shell
>>> import datasets
>>> a = datasets.translate.wmt.WmtConfig()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'datasets' has no attribute 'translate'
>>> a = datasets.wmt.WmtConfig()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'datasets' has no attribute 'wmt'
```
## Expected results
To load WMT15 with given data-sources.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Linux-5.10.0-10-amd64-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
|
closed
|
https://github.com/huggingface/datasets/issues/4354
| 2022-05-15T20:58:26
| 2022-07-11T14:54:02
| 2022-07-11T14:54:01
|
{
"login": "eldarkurtic",
"id": 8884008,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
},
{
"name": "dataset bug",
"color": "2edb81"
}
] | false
|
[] |
1,236,092,176
| 4,353
|
Don't strip proceeding hyphen
|
Closes #4320.
|
closed
|
https://github.com/huggingface/datasets/pull/4353
| 2022-05-14T18:25:29
| 2022-05-16T18:51:38
| 2022-05-16T13:52:11
|
{
"login": "JohnGiorgi",
"id": 8917831,
"type": "User"
}
|
[] | true
|
[] |
1,236,086,170
| 4,352
|
When using `dataset.map()` if passed `Features` types do not match what is returned from the mapped function, execution does not except in an obvious way
|
## Describe the bug
Recently I was trying to use `.map()` to preprocess a dataset. I defined the expected Features and passed them into `.map()` like `dataset.map(preprocess_data, features=features)`. My expected `Features` keys matched what came out of `preprocess_data`, but the types I had defined for them did not match the types that came back. Because of this, I ended up in tracebacks deep inside arrow_dataset.py and arrow_writer.py with exceptions that [did not make clear what the problem was](https://github.com/huggingface/datasets/issues/4349). In short, I ended up with overflows and the OS killing processes when Arrow was attempting to write. It wasn't until I dug into `def write_batch` and the loop over columns that I figured out what was going on.
It seems like `.map()` could check, for at least one instance from the dataset, that the returned data's types match the types provided by the `features` param, and error out with a clear exception if they don't. This would make the cause of the issue much more understandable and save people time. This could be construed as a feature request, but it feels more like a bug to me.
## Steps to reproduce the bug
I don't have explicit code to repro the bug, but I'll show an example
Code prior to the fix:
```python
def preprocess(examples):
# returns an encoded data dict with keys that match the features, but the types do not match
...
def get_encoded_data(data):
dataset = Dataset.from_pandas(data)
unique_labels = data['audit_type'].unique().tolist()
features = Features({
'image': Array3D(dtype="uint8", shape=(3, 224, 224)),
'input_ids': Sequence(feature=Value(dtype='int64')),
'attention_mask': Sequence(Value(dtype='int64')),
'token_type_ids': Sequence(Value(dtype='int64')),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels),
})
encoded_dataset = dataset.map(preprocess_data, features=features, remove_columns=dataset.column_names)
```
The Features set that fixed it:
```python
features = Features({
'image': Sequence(Array3D(dtype="uint8", shape=(3, 224, 224))),
'input_ids': Sequence(Sequence(feature=Value(dtype='int64'))),
'attention_mask': Sequence(Sequence(Value(dtype='int64'))),
'token_type_ids': Sequence(Sequence(Value(dtype='int64'))),
'bbox': Sequence(Array2D(dtype="int64", shape=(512, 4))),
'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels),
})
```
The difference between my original code (which was based on documentation) and the working code is the addition of the `Sequence(...)` wrapper to 4 of the 5 features, as I am working with paginated data and the doc examples are not.
## Expected results
Dataset.map() attempts to validate the data types for each Feature on the first iteration and errors out if they are not validated.
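A rough sketch of such a first-row check — `nesting_depth` and the comparison are hypothetical, just to illustrate catching the missing-`Sequence(...)` mismatch described above before Arrow sees the data:

```python
def nesting_depth(value):
    """Count list-nesting levels of a value (0 for scalars)."""
    depth = 0
    while isinstance(value, list) and value:
        depth += 1
        value = value[0]
    return depth

def check_first_example(example, expected_depths):
    """Raise early if a column's nesting does not match the declared feature."""
    for col, want in expected_depths.items():
        got = nesting_depth(example[col])
        if got != want:
            raise TypeError(
                f"Column {col!r}: feature declares nesting depth {want}, "
                f"but the mapped function returned depth {got}"
            )

# 'input_ids' declared as Sequence(Sequence(Value(...))) -> depth 2,
# but the mapped function returned a flat list -> depth 1
example = {"input_ids": [1, 2, 3]}
try:
    check_first_example(example, {"input_ids": 2})
except TypeError as e:
    print(e)
```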
## Actual results
Specify the actual results or traceback.
Based on the value of `writer_batch_size`, execution errors out when Arrow attempts to write because the types do not match, though its error messages dont make this obvious
Example errors:
```
OverflowError: There was an overflow with type <class 'list'>. Try to reduce writer_batch_size to have batches smaller than 2GB.
(offset overflow while concatenating arrays)
```
```
zsh: killed python doc_classification.py
UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
datasets version: 2.1.0
Platform: macOS-12.2.1-arm64-arm-64bit
Python version: 3.9.12
PyArrow version: 6.0.1
Pandas version: 1.4.2
|
open
|
https://github.com/huggingface/datasets/issues/4352
| 2022-05-14T17:55:15
| 2022-05-16T15:09:17
| null |
{
"login": "plamb-viso",
"id": 99206017,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,235,950,209
| 4,351
|
Add optional progress bar for .save_to_disk(..) and .load_from_disk(..) when working with remote filesystems
|
**Is your feature request related to a problem? Please describe.**
When working with large datasets stored on remote filesystems (such as S3), uploading a dataset can take a really long time. For instance, I was uploading a re-processed version of wmt17 en-ru to my S3 bucket and it took about 35 minutes (and that's given that I have a fiber-optic connection). The only output during that process was a progress bar for flattening indices, followed by ~35 minutes of complete silence.
**Describe the solution you'd like**
I want to be able to enable a progress bar when calling .save_to_disk(..) and .load_from_disk(..). It would track either the number of bytes sent/received or the number of records written/loaded, and give some ETA. Basically just tqdm.
**Describe alternatives you've considered**
- Save the dataset to a tmp folder on disk and then upload it using a custom wrapper over botocore that works with a progress bar, like [this](https://alexwlchan.net/2021/04/s3-progress-bars/).
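A byte-counting callback compatible with, e.g., boto3's `Callback=` hook sketches the idea; wiring it into the actual `datasets` save/load path is the part this issue asks for:

```python
import sys

class ProgressCallback:
    """Accumulate transferred bytes and print a crude progress line.

    Instances are callable with a byte count, matching the Callback
    signature of boto3's S3 upload_file/download_file.
    """

    def __init__(self, total_bytes):
        self.total = total_bytes
        self.seen = 0

    def __call__(self, nbytes):
        self.seen += nbytes
        pct = 100.0 * self.seen / self.total
        sys.stderr.write(f"\rUploading: {pct:5.1f}% ({self.seen}/{self.total} B)")

cb = ProgressCallback(total_bytes=1000)
for chunk in (400, 400, 200):  # simulate three transferred chunks
    cb(chunk)
```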
|
closed
|
https://github.com/huggingface/datasets/issues/4351
| 2022-05-14T11:30:42
| 2022-12-14T18:22:59
| 2022-12-14T18:22:59
|
{
"login": "Rexhaif",
"id": 5154447,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
1,235,505,104
| 4,350
|
Add a new metric: CTC_Consistency
|
Add CTC_Consistency metric
Do I also need to modify the `test_metric_common.py` file to make it run on test?
|
closed
|
https://github.com/huggingface/datasets/pull/4350
| 2022-05-13T17:31:19
| 2022-05-19T10:23:04
| 2022-05-19T10:23:03
|
{
"login": "YEdenZ",
"id": 92551194,
"type": "User"
}
|
[] | true
|
[] |
1,235,474,765
| 4,349
|
Dataset.map()'s fails at any value of parameter writer_batch_size
|
## Describe the bug
If the value of `writer_batch_size` is less than the total number of instances in the dataset, it will fail at that same number of instances. If it is greater than the total number of instances, it fails on the last instance.
Context:
I am attempting to fine-tune a pre-trained HuggingFace transformers model called LayoutLMv2. This model takes three inputs: document images, words and word bounding boxes. [The Processor for this model has two options](https://huggingface.co/docs/transformers/model_doc/layoutlmv2#usage-layoutlmv2processor): the default is to pass a document to the Processor and allow it to create images of the document and use PyTesseract to perform OCR and generate words/bounding boxes. The other option is to provide `revision="no_ocr"` to the pre-trained model, which allows you to use your own OCR results (in my case, Amazon Textract), so you have to provide the image, words and bounding boxes yourself. I am using this second option, which might be good context for the bug.
I am using the Dataset.map() paradigm to create these three inputs, encode them and save the dataset. Note that my documents (data instances) on average are fairly large and can range from 1 page up to 300 pages.
Code I am using is provided below
## Steps to reproduce the bug
I do not have explicit sample code, but I will paste the code I'm using in case reading it helps. When `.map()` is called, the dataset has 2933 rows, many of which represent large pdf documents.
```python
def get_encoded_data(data):
dataset = Dataset.from_pandas(data)
unique_labels = data['label'].unique()
features = Features({
'image': Array3D(dtype="int64", shape=(3, 224, 224)),
'input_ids': Sequence(feature=Value(dtype='int64')),
'attention_mask': Sequence(Value(dtype='int64')),
'token_type_ids': Sequence(Value(dtype='int64')),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels),
})
encoded_dataset = dataset.map(preprocess_data, features=features, remove_columns=dataset.column_names, writer_batch_size=dataset.num_rows+1)
encoded_dataset.save_to_disk(TRAINING_DATA_PATH + ENCODED_DATASET_NAME)
encoded_dataset.set_format(type="torch")
return encoded_dataset
```
```python
PROCESSOR = LayoutLMv2Processor.from_pretrained(MODEL_PATH, revision="no_ocr", use_fast=False)
def preprocess_data(examples):
directory = os.path.join(FILES_PATH, examples['file_location'])
images_dir = os.path.join(directory, PDF_IMAGE_DIR)
textract_response_path = os.path.join(directory, 'textract.json')
doc_meta_path = os.path.join(directory, 'doc_meta.json')
textract_document = get_textract_document(textract_response_path, doc_meta_path)
images, words, bboxes = get_doc_training_data(images_dir, textract_document)
encoded_inputs = PROCESSOR(images, words, boxes=bboxes, padding="max_length", truncation=True)
# https://github.com/NielsRogge/Transformers-Tutorials/issues/36
encoded_inputs["image"] = np.array(encoded_inputs["image"])
encoded_inputs["label"] = examples['label_id']
return encoded_inputs
```
## Expected results
My expectation is that `writer_batch_size` allows one to simply trade off performance and memory requirements, not that it must be a specific number for `.map()` to function correctly.
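Conceptually, `writer_batch_size` just controls how many mapped examples are buffered before each Arrow write — a minimal sketch of that batching loop (not the actual `arrow_writer` code):

```python
def iter_batches(rows, writer_batch_size):
    """Yield lists of at most writer_batch_size rows."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == writer_batch_size:
            yield batch
            batch = []
    if batch:  # flush the final, possibly smaller, batch
        yield batch

print(list(iter_batches(range(5), 2)))  # [[0, 1], [2, 3], [4]]
```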
## Actual results
If writer_batch_size is set to a value less than the number of rows, I get either:
```
OverflowError: There was an overflow with type <class 'list'>. Try to reduce writer_batch_size to have batches smaller than 2GB.
(offset overflow while concatenating arrays)
```
or simply
```
zsh: killed python doc_classification.py
UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
```
If it is greater than the number of rows, I get the `zsh: killed` error above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.1.0
- Platform: macOS-12.2.1-arm64-arm-64bit
- Python version: 3.9.12
- PyArrow version: 6.0.1
- Pandas version: 1.4.2
|
closed
|
https://github.com/huggingface/datasets/issues/4349
| 2022-05-13T16:55:12
| 2022-06-02T12:51:11
| 2022-05-14T15:08:08
|
{
"login": "plamb-viso",
"id": 99206017,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,235,432,976
| 4,348
|
`inspect` functions can't fetch dataset script from the Hub
|
The `inspect_dataset` and `inspect_metric` functions are unable to retrieve a dataset or metric script from the Hub and store it locally at the specified `local_path`:
```py
>>> from datasets import inspect_dataset
>>> inspect_dataset('rotten_tomatoes', local_path='path/to/my/local/folder')
FileNotFoundError: Couldn't find a dataset script at /content/rotten_tomatoes/rotten_tomatoes.py or any data file in the same directory.
```
|
closed
|
https://github.com/huggingface/datasets/issues/4348
| 2022-05-13T16:08:26
| 2022-06-09T10:26:06
| 2022-06-09T10:26:06
|
{
"login": "stevhliu",
"id": 59462357,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,235,318,064
| 4,347
|
Support remote cache_dir
|
This PR implements complete support for remote `cache_dir`. Before, the support was just partial.
This is useful to create datasets using Apache Beam (parallel data processing) builder with `cache_dir` in a remote bucket, e.g., for Wikipedia dataset.
|
closed
|
https://github.com/huggingface/datasets/pull/4347
| 2022-05-13T14:26:35
| 2022-05-25T16:35:23
| 2022-05-25T16:27:03
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,235,067,062
| 4,346
|
GH Action to build documentation never ends
|
## Describe the bug
See: https://github.com/huggingface/datasets/runs/6418035586?check_suite_focus=true
I finally forced the cancel of the workflow.
|
closed
|
https://github.com/huggingface/datasets/issues/4346
| 2022-05-13T10:44:44
| 2022-05-13T11:22:00
| 2022-05-13T11:22:00
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,235,062,787
| 4,345
|
Fix never ending GH Action to build documentation
|
There was an unclosed code block introduced by:
- #4313
https://github.com/huggingface/datasets/pull/4313/files#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R538
This causes the "Make documentation" step in the "Build documentation" workflow to never finish.
- I think this issue should also be addressed in the `doc-builder` lib.
Fix #4346.
|
closed
|
https://github.com/huggingface/datasets/pull/4345
| 2022-05-13T10:40:10
| 2022-05-13T11:29:43
| 2022-05-13T11:22:00
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,234,882,542
| 4,344
|
Fix docstring in DatasetDict::shuffle
|
I think due to #1626, the docstring contained this error ever since `seed` was added.
|
closed
|
https://github.com/huggingface/datasets/pull/4344
| 2022-05-13T08:06:00
| 2022-05-25T09:23:43
| 2022-05-24T15:35:21
|
{
"login": "felixdivo",
"id": 4403130,
"type": "User"
}
|
[] | true
|
[] |
1,234,864,168
| 4,343
|
Metrics documentation is not accessible in the datasets doc UI
|
**Is your feature request related to a problem? Please describe.**
Searching for a metric name like "seqeval" yields no results on https://huggingface.co/docs/datasets/master/en/index . One needs to go look in `datasets/metrics/README.md` to find the doc. Even in the `README.md`, it can be hard to understand what the metric expects as input; for example, for `squad` there is a [key `id`](https://github.com/huggingface/datasets/blob/1a4c185663a6958f48ec69624473fdc154a36a9d/metrics/squad/squad.py#L42) documented only in the function doc but not in the `README.md`, and one needs to go look into the code to understand what the metric expects.
**Describe the solution you'd like**
Have the documentation for metrics appear as well in the doc UI, e.g. this https://github.com/huggingface/datasets/blob/1a4c185663a6958f48ec69624473fdc154a36a9d/metrics/squad/squad.py#L21-L63
I know there are plans to migrate metrics to the evaluate library, but just pointing this out.
|
closed
|
https://github.com/huggingface/datasets/issues/4343
| 2022-05-13T07:46:30
| 2022-06-03T08:50:25
| 2022-06-03T08:50:25
|
{
"login": "fxmarty",
"id": 9808326,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "Metric discussion",
"color": "d722e8"
}
] | false
|
[] |
1,234,743,765
| 4,342
|
Fix failing CI on Windows for sari and wiki_split metrics
|
This PR adds `sacremoses` as explicit tests dependency (required by sari and wiki_split metrics).
Before, this library was installed as a third-party dependency, but this is no longer the case for Windows.
Fix #4341.
|
closed
|
https://github.com/huggingface/datasets/pull/4342
| 2022-05-13T05:03:38
| 2022-05-13T05:47:42
| 2022-05-13T05:47:42
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,234,739,703
| 4,341
|
Failing CI on Windows for sari and wiki_split metrics
|
## Describe the bug
Our CI is failing from yesterday on Windows for metrics: sari and wiki_split
```
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_sari - ...
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_wiki_split
```
See: https://app.circleci.com/pipelines/github/huggingface/datasets/11928/workflows/79daa5e7-65c9-4e85-829b-00d2bfbd076a/jobs/71594
|
closed
|
https://github.com/huggingface/datasets/issues/4341
| 2022-05-13T04:55:17
| 2022-05-13T05:47:41
| 2022-05-13T05:47:41
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,234,671,025
| 4,340
|
Fix irc_disentangle dataset script
|
Updated the extracted dataset repo's latest commit hash (included in the tarball's name), and updated the related dataset_infos.json.
|
closed
|
https://github.com/huggingface/datasets/pull/4340
| 2022-05-13T02:37:57
| 2022-05-24T15:37:30
| 2022-05-24T15:37:29
|
{
"login": "i-am-pad",
"id": 32005017,
"type": "User"
}
|
[] | true
|
[] |
1,234,496,289
| 4,339
|
Dataset loader for the MSLR2022 shared task
|
This PR adds a dataset loader for the [MSLR2022 Shared Task](https://github.com/allenai/mslr-shared-task). Both the MS^2 and Cochrane datasets can be loaded with this dataloader:
```python
from datasets import load_dataset
ms2 = load_dataset("mslr2022", "ms2")
cochrane = load_dataset("mslr2022", "cochrane")
```
Usage looks like:
```python
>>> ms2 = load_dataset("mslr2022", "ms2", split="validation")
>>> ms2.keys()
dict_keys(['review_id', 'pmid', 'title', 'abstract', 'target', 'background', 'reviews_info'])
>>> ms2[0].target
'Conclusions SC therapy is effective for PAH in pre clinical studies .\nThese results may help to st and ardise pre clinical animal studies and provide a theoretical basis for clinical trial design in the future .'
```
I have tested this works with the following command:
```bash
datasets-cli test datasets/mslr2022 --save_infos --all_configs
```
However, I am having a little trouble generating the dummy data
```bash
datasets-cli dummy_data datasets/mslr2022 --auto_generate
```
errors out with the following stack trace:
```
Couldn't generate dummy file 'datasets/mslr2022/dummy/ms2/1.0.0/dummy_data/mslr_data.tar.gz/mslr_data/ms2/convert_to_cochrane.py'. Ignore that if this file is not useful for dummy data.
Traceback (most recent call last):
File "/Users/johngiorgi/.pyenv/versions/datasets/bin/datasets-cli", line 11, in <module>
load_entry_point('datasets', 'console_scripts', 'datasets-cli')()
File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/commands/datasets_cli.py", line 39, in main
service.run()
File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/commands/dummy_data.py", line 319, in run
keep_uncompressed=self._keep_uncompressed,
File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/commands/dummy_data.py", line 361, in _autogenerate_dummy_data
dataset_builder._prepare_split(split_generator, check_duplicate_keys=False)
File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/builder.py", line 1146, in _prepare_split
desc=f"Generating {split_info.name} split",
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/Users/johngiorgi/.cache/huggingface/modules/datasets_modules/datasets/mslr2022/b4becd2f52cf18255d4934d7154c2a1127fb393371b87b3c1fc2c8b35a777cea/mslr2022.py", line 149, in _generate_examples
reviews_info_df = pd.read_csv(reviews_info_filepath, index_col=0)
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/util/_decorators.py", line 311, in wrapper
return func(*args, **kwargs)
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 586, in read_csv
return _read(filepath_or_buffer, kwds)
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 488, in _read
return parser.read(nrows)
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 1047, in read
index, columns, col_dict = self._engine.read(nrows)
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/c_parser_wrapper.py", line 224, in read
chunks = self._reader.read_low_memory(nrows)
File "pandas/_libs/parsers.pyx", line 801, in pandas._libs.parsers.TextReader.read_low_memory
File "pandas/_libs/parsers.pyx", line 857, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 843, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas/_libs/parsers.pyx", line 1925, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: EOF inside string starting at row 2
```
I think this may have to do with unusual line terminators in the original data. When I open it in VSCode, it complains:
```
The file 'dev-inputs.csv' contains one or more unusual line terminator characters, like Line Separator (LS) or Paragraph Separator (PS).
It is recommended to remove them from the file. This can be configured via `editor.unusualLineTerminators`.
```
Tagging the organizers of the shared task in case they want to sanity check this or add any info to the model card :) @lucylw @jayded
|
closed
|
https://github.com/huggingface/datasets/pull/4339
| 2022-05-12T21:23:41
| 2022-07-18T17:19:27
| 2022-07-18T16:58:34
|
{
"login": "JohnGiorgi",
"id": 8917831,
"type": "User"
}
|
[] | true
|
[] |
1,234,478,851
| 4,338
|
Eval metadata Batch 4: Tweet Eval, Tweets Hate Speech Detection, VCTK, Weibo NER, Wisesight Sentiment, XSum, Yahoo Answers Topics, Yelp Polarity, Yelp Review Full
|
Adding evaluation metadata for:
- Tweet Eval
- Tweets Hate Speech Detection
- VCTK
- Weibo NER
- Wisesight Sentiment
- XSum
- Yahoo Answers Topics
- Yelp Polarity
- Yelp Review Full
|
closed
|
https://github.com/huggingface/datasets/pull/4338
| 2022-05-12T21:02:08
| 2022-05-16T15:51:02
| 2022-05-16T15:42:59
|
{
"login": "sashavor",
"id": 14205986,
"type": "User"
}
|
[] | true
|
[] |
1,234,470,083
| 4,337
|
Eval metadata batch 3: Reddit, Rotten Tomatoes, SemEval 2010, Sentiment 140, SMS Spam, Snips, SQuAD, SQuAD v2, Timit ASR
|
Adding evaluation metadata for:
- Reddit
- Rotten Tomatoes
- SemEval 2010
- Sentiment 140
- SMS Spam
- Snips
- SQuAD
- SQuAD v2
- Timit ASR
|
closed
|
https://github.com/huggingface/datasets/pull/4337
| 2022-05-12T20:52:02
| 2022-05-16T16:26:19
| 2022-05-16T16:18:30
|
{
"login": "sashavor",
"id": 14205986,
"type": "User"
}
|
[] | true
|
[] |
1,234,446,174
| 4,336
|
Eval metadata batch 2 : Health Fact, Jigsaw Toxicity, LIAR, LJ Speech, MSRA NER, Multi News, NCBI Disease, Poem Sentiment
|
Adding evaluation metadata for:
- Health Fact
- Jigsaw Toxicity
- LIAR
- LJ Speech
- MSRA NER
- Multi News
- NCBI Disease
- Poem Sentiment
|
closed
|
https://github.com/huggingface/datasets/pull/4336
| 2022-05-12T20:24:45
| 2022-05-16T16:25:00
| 2022-05-16T16:24:59
|
{
"login": "sashavor",
"id": 14205986,
"type": "User"
}
|
[] | true
|
[] |
1,234,157,123
| 4,335
|
Eval metadata batch 1: BillSum, CoNLL2003, CoNLLPP, CUAD, Emotion, GigaWord, GLUE, Hate Speech 18, Hate Speech
|
Adding evaluation metadata for:
- BillSum
- CoNLL2003
- CoNLLPP
- CUAD
- Emotion
- GigaWord
- GLUE
- Hate Speech 18
- Hate Speech Offensive
|
closed
|
https://github.com/huggingface/datasets/pull/4335
| 2022-05-12T15:28:16
| 2022-05-16T16:31:10
| 2022-05-16T16:23:09
|
{
"login": "sashavor",
"id": 14205986,
"type": "User"
}
|
[] | true
|
[] |
1,234,103,477
| 4,334
|
Adding eval metadata for billsum
|
Adding eval metadata for billsum
|
closed
|
https://github.com/huggingface/datasets/pull/4334
| 2022-05-12T14:49:08
| 2023-09-24T10:02:46
| 2022-05-12T14:49:24
|
{
"login": "sashavor",
"id": 14205986,
"type": "User"
}
|
[] | true
|
[] |
1,234,038,705
| 4,333
|
Adding eval metadata for Banking 77
|
Adding eval metadata for Banking 77
|
closed
|
https://github.com/huggingface/datasets/pull/4333
| 2022-05-12T14:05:05
| 2022-05-12T21:03:32
| 2022-05-12T21:03:31
|
{
"login": "sashavor",
"id": 14205986,
"type": "User"
}
|
[] | true
|
[] |
1,234,021,188
| 4,332
|
Adding eval metadata for arabic speech corpus
|
Adding eval metadata for arabic speech corpus
|
closed
|
https://github.com/huggingface/datasets/pull/4332
| 2022-05-12T13:51:38
| 2022-05-12T21:03:21
| 2022-05-12T21:03:20
|
{
"login": "sashavor",
"id": 14205986,
"type": "User"
}
|
[] | true
|
[] |
1,234,016,110
| 4,331
|
Adding eval metadata to Amazon Polarity
|
Adding eval metadata to Amazon Polarity
|
closed
|
https://github.com/huggingface/datasets/pull/4331
| 2022-05-12T13:47:59
| 2022-05-12T21:03:14
| 2022-05-12T21:03:13
|
{
"login": "sashavor",
"id": 14205986,
"type": "User"
}
|
[] | true
|
[] |
1,233,992,681
| 4,330
|
Adding eval metadata to Allociné dataset
|
Adding eval metadata to Allociné dataset
|
closed
|
https://github.com/huggingface/datasets/pull/4330
| 2022-05-12T13:31:39
| 2022-05-12T21:03:05
| 2022-05-12T21:03:05
|
{
"login": "sashavor",
"id": 14205986,
"type": "User"
}
|
[] | true
|
[] |
1,233,991,207
| 4,329
|
Adding eval metadata for AG News
|
Adding eval metadata for AG News
|
closed
|
https://github.com/huggingface/datasets/pull/4329
| 2022-05-12T13:30:32
| 2022-05-12T21:02:41
| 2022-05-12T21:02:40
|
{
"login": "sashavor",
"id": 14205986,
"type": "User"
}
|
[] | true
|
[] |
1,233,856,690
| 4,328
|
Fix and clean Apache Beam functionality
| null |
closed
|
https://github.com/huggingface/datasets/pull/4328
| 2022-05-12T11:41:07
| 2022-05-24T13:43:11
| 2022-05-24T13:34:32
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,233,840,020
| 4,327
|
`wikipedia` pre-processed datasets
|
## Describe the bug
The [Wikipedia](https://huggingface.co/datasets/wikipedia) dataset README says that certain subsets are preprocessed. However, it seems like they are not available: when I try to load them it takes a really long time, and it seems like it's processing them from scratch.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("wikipedia", "20220301.en")
```
## Expected results
To load the dataset
## Actual results
Takes a very long time to load (after downloading)
After `Downloading data files: 100%`, it takes hours and then gets killed.
Tried `wikipedia.simple` and it got processed after ~30mins.
|
closed
|
https://github.com/huggingface/datasets/issues/4327
| 2022-05-12T11:25:42
| 2022-08-31T08:26:57
| 2022-08-31T08:26:57
|
{
"login": "vpj",
"id": 81152,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,233,818,489
| 4,326
|
Fix type hint and documentation for `new_fingerprint`
|
Currently, there are no type hints nor `Optional` for the argument `new_fingerprint` in several methods of `datasets.arrow_dataset.Dataset`.
There was some documentation missing as well.
Note that pylance is happy with the type hints, but pyright does not detect that `new_fingerprint` is set within the decorator.
The modifications in this PR are fine since here https://github.com/huggingface/datasets/blob/aa743886221d76afb409d263e1b136e7a71fe2b4/src/datasets/fingerprint.py#L446-L454
for the non-inplace case we make sure to auto-generate a new fingerprint (as indicated in the doc).
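As a hedged illustration of the documented behavior (a toy function, not the real `Dataset.map` signature): the argument is typed `Optional[str]` and a fingerprint is auto-generated when it is left as `None` in the non-inplace case.

```python
import hashlib
from typing import Optional

def apply_transform(data: list, transform, new_fingerprint: Optional[str] = None):
    """Toy stand-in for a non-inplace transform with fingerprint handling."""
    if new_fingerprint is None:
        # Auto-generate a fingerprint for the non-inplace case, as the
        # fingerprinting decorator does (the hashing scheme here is made up).
        new_fingerprint = hashlib.sha1(transform.__name__.encode()).hexdigest()[:8]
    return [transform(x) for x in data], new_fingerprint

def double(x):
    return 2 * x

result, fp = apply_transform([1, 2, 3], double)
assert result == [2, 4, 6] and len(fp) == 8
assert apply_transform([1], double, new_fingerprint="abc123")[1] == "abc123"
```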
|
closed
|
https://github.com/huggingface/datasets/pull/4326
| 2022-05-12T11:05:08
| 2022-06-01T13:04:45
| 2022-06-01T12:56:18
|
{
"login": "fxmarty",
"id": 9808326,
"type": "User"
}
|
[] | true
|
[] |
1,233,812,191
| 4,325
|
Dataset Viewer issue for strombergnlp/offenseval_2020, strombergnlp/polstance
|
### Link
https://huggingface.co/datasets/strombergnlp/offenseval_2020/viewer/ar/train
### Description
The viewer isn't running for these two datasets. I left it overnight because waiting sometimes helps things get loaded, and the error messages have all gone, but the datasets are still turning up blank in the viewer. Maybe it needs a bit more time.
* https://huggingface.co/datasets/strombergnlp/polstance/viewer/PolStance/train
* https://huggingface.co/datasets/strombergnlp/offenseval_2020/viewer/ar/train
While offenseval_2020 is gated with a prompt, the other gated previews I have run fine in the viewer, e.g. https://huggingface.co/datasets/strombergnlp/shaj , so I'm a bit stumped!
### Owner
Yes
|
closed
|
https://github.com/huggingface/datasets/issues/4325
| 2022-05-12T10:59:08
| 2022-05-13T10:57:15
| 2022-05-13T10:57:02
|
{
"login": "leondz",
"id": 121934,
"type": "User"
}
|
[
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false
|
[] |
1,233,780,870
| 4,324
|
Support >1 PWC dataset per dataset card
|
**Is your feature request related to a problem? Please describe.**
Some datasets cover more than one dataset on PapersWithCode. For example, the OffensEval 2020 challenge involved five languages, and there's one dataset to cover all five datasets, [`strombergnlp/offenseval_2020`](https://huggingface.co/datasets/strombergnlp/offenseval_2020). However, the yaml `paperswithcode_id:` dataset card entry only supports one value; when multiple are added, the PWC link disappears from the dataset page.
Because the link from a PapersWithCode dataset to a Hugging Face Hub entry can't be entered manually and seems to be scraped, end users don't have a way of getting a dataset reader link to appear on all the PWC datasets supported by one HF Hub dataset reader.
It's not super unusual for papers to introduce multiple parallel variants of a dataset, and it would be handy to reflect this, so that e.g. dataset maintainers can DRY, and dataset users can keep what they're doing simple.
**Describe the solution you'd like**
I'd like `paperswithcode_id:` to support lists and be able to connect with multiple PWC datasets.
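A hypothetical dataset-card sketch of what list support could look like (the PWC dataset IDs below are made up for illustration):

```yaml
paperswithcode_id:
- offenseval-2020-arabic
- offenseval-2020-danish
- offenseval-2020-greek
```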
**Describe alternatives you've considered**
De-normalising the datasets on HF Hub to create multiple readers for each variation on a task, i.e. instead of a single `offenseval_2020`, having `offenseval_2020_ar`, `offenseval_2020_da`, `offenseval_2020_gr`, ...
**Additional context**
Hope that's enough
**Priority**
Low
|
open
|
https://github.com/huggingface/datasets/issues/4324
| 2022-05-12T10:29:07
| 2022-05-13T11:25:29
| null |
{
"login": "leondz",
"id": 121934,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
1,233,634,928
| 4,323
|
Audio can not find value["bytes"]
|
## Describe the bug
I wrote down _generate_examples like:

but where is the bytes?

## Expected results
`value["bytes"]` is not `None`, so I can make datasets with bytes, not paths
## bytes looks like:
blah blah~~
\xfe\x03\x00\xfb\x06\x1c\x0bo\x074\x03\xaf\x01\x13\x04\xbc\x06\x8c\x05y\x05,\t7\x08\xaf\x03\xc0\xfe\xe8\xfc\x94\xfe\xb7\xfd\xea\xfa\xd5\xf9$\xf9>\xf9\x1f\xf8\r\xf5F\xf49\xf4\xda\xf5-\xf8\n\xf8k\xf8\x07\xfb\x18\xfd\xd9\xfdv\xfd"\xfe\xcc\x01\x1c\x04\x08\x04@\x04{\x06^\tf\t\x1e\x07\x8b\x06\x02\x08\x13\t\x07\x08 \x06g\x06"\x06\xa0\x03\xc6\x002\xff \xff\x1d\xff\x19\xfd?\xfb\xdb\xfa\xfc\xfa$\xfb}\xf9\xe5\xf7\xf9\xf7\xce\xf8.\xf9b\xf9\xc5\xf9\xc0\xfb\xfa\xfcP\xfc\xba\xfbQ\xfc1\xfe\x9f\xff\x12\x00\xa2\x00\x18\x02Z\x03\x02\x04\xb1\x03\xc5\x03W\x04\x82\x04\x8f\x04U\x04\xb6\x04\x10\x05{\x04\x83\x02\x17\x01\x1d\x00\xa0\xff\xec\xfe\x03\xfe#\xfe\xc2\xfe2\xff\xe6\xfe\x9a\xfe~\x01\x91\x08\xb3\tU\x05\x10\x024\x02\xe4\x05\xa8\x07\xa7\x053\x07I\n\x91\x07v\x02\x95\xfd\xbb\xfd\x96\xff\x01\xfe\x1e\xfb\xbb\xf9S\xf8!\xf8\xf4\xf5\xd6\xf3\xf7\xf3l\xf4d\xf6l\xf7d\xf6b\xf7\xc1\xfa(\xfd\xcf\xfd*\xfdq\xfe\xe9\x01\xa8\x03t\x03\x17\x04B\x07\xce\t\t\t\xeb\x06\x0c\x07\x95\x08\x92\t\xbc\x07O\x06\xfb\x06\xd2\x06U\x04\x00\x02\x92\x00\xdc\x00\x84\x00 \xfeT\xfc\xf1\xfb\x82\xfc\x97\xfb}\xf9\x00\xf8_\xf8\x0b\xf9\xe5\xf8\xe2\xf7\xaa\xf8\xb2\xfa\x10\xfbl\xfa\xf5\xf9Y\xfb\xc0\xfd\xe8\xfe\xec\xfe1\x00\xad\x01\xec\x02E\x03\x13\x03\x9b\x03o\x04\xce\x04\xa8\x04\xb2\x04\x1b\x05\xc0\x05\xd2\x04\xe8\x02z\x01\xbe\x00\xae\x00\x07\x00$\xff|\xff\x8e\x00\x13\x00\x10\xff\x98\xff0\x05{\x0b\x05\t\xaa\x03\x82\x01n\x03
blah blah~~
so that function should not return `None`
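A minimal, hypothetical sketch of a generator yielding raw bytes (the `{"path": ..., "bytes": ...}` dict is the shape the `Audio` feature accepts; the file name and bytes below are made up):

```python
import io

# Hypothetical stand-in for a real WAV file on disk
fake_wav = io.BytesIO(b"RIFF\x24\x00\x00\x00WAVEfmt ")

def generate_examples(files):
    """Yield (key, example) pairs carrying raw audio bytes, not just a path."""
    for idx, (name, fobj) in enumerate(files):
        yield idx, {"audio": {"path": name, "bytes": fobj.read()}}

key, example = next(generate_examples([("sample.wav", fake_wav)]))
assert example["audio"]["bytes"].startswith(b"RIFF")
assert example["audio"]["path"] == "sample.wav"
```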
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.1
- Platform: Ubuntu 18.04
- Python version: 3.6.9
- PyArrow version: 6.0.1
|
closed
|
https://github.com/huggingface/datasets/issues/4323
| 2022-05-12T08:31:58
| 2022-07-07T13:16:08
| 2022-07-07T13:16:08
|
{
"login": "YooSungHyun",
"id": 34292279,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,233,596,947
| 4,322
|
Added stratify option to train_test_split function.
|
This PR adds a `stratify` option to the `train_test_split` method. I took scikit-learn's `StratifiedShuffleSplit` class as a reference for implementing the stratified split and integrated the changes suggested by @lhoestq.
It fixes #3452.
@lhoestq Please review and let me know if any changes are required.
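As a rough sketch of the idea (not the actual `datasets` or scikit-learn implementation): group the example indices by label, then split each group with the same test fraction so label proportions are preserved in both splits.

```python
import random
from collections import defaultdict

def stratified_split(labels, test_size=0.25, seed=42):
    """Split indices into (train, test) while preserving label proportions."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for idx, label in enumerate(labels):
        by_label[label].append(idx)
    train_idx, test_idx = [], []
    for group in by_label.values():
        # Shuffle within each label group, then take the same fraction for test
        rng.shuffle(group)
        n_test = round(len(group) * test_size)
        test_idx.extend(group[:n_test])
        train_idx.extend(group[n_test:])
    return sorted(train_idx), sorted(test_idx)

labels = ["pos"] * 8 + ["neg"] * 4
train_idx, test_idx = stratified_split(labels, test_size=0.25)
# Each split keeps the 2:1 pos/neg ratio of the full data:
assert sum(labels[i] == "pos" for i in test_idx) == 2
assert sum(labels[i] == "neg" for i in test_idx) == 1
```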
|
closed
|
https://github.com/huggingface/datasets/pull/4322
| 2022-05-12T08:00:31
| 2022-11-22T14:53:55
| 2022-05-25T20:43:51
|
{
"login": "nandwalritik",
"id": 48522685,
"type": "User"
}
|
[] | true
|
[] |
1,233,273,351
| 4,321
|
Adding dataset enwik8
|
Because I regularly work with enwik8, I would like to contribute the dataset loader 🤗
|
closed
|
https://github.com/huggingface/datasets/pull/4321
| 2022-05-11T23:25:02
| 2022-06-01T14:27:30
| 2022-06-01T14:04:06
|
{
"login": "HallerPatrick",
"id": 22773355,
"type": "User"
}
|
[] | true
|
[] |
1,233,208,864
| 4,320
|
Multi-news dataset loader attempts to strip wrong character from beginning of summaries
|
## Describe the bug
The `multi_news.py` data loader has [a line which attempts to strip `"- "` from the beginning of summaries](https://github.com/huggingface/datasets/blob/aa743886221d76afb409d263e1b136e7a71fe2b4/datasets/multi_news/multi_news.py#L97). The actual character in the multi-news dataset, however, is `"– "`, which is different, e.g. `"– " != "- "`.
I would have just opened a PR to fix the mistake, but I am wondering what the motivation for stripping this character is? AFAICT most approaches just leave it in, e.g. the current SOTA on this dataset, [PRIMERA](https://huggingface.co/allenai/PRIMERA-multinews) (you can see it in the generated summaries of the model in their [example notebook](https://github.com/allenai/PRIMER/blob/main/Evaluation_Example.ipynb)).
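A minimal demonstration of the mismatch (the en dash U+2013 vs. the ASCII hyphen U+002D; the sample summary is made up):

```python
en_dash_prefix = "– "  # U+2013, the character actually used in Multi-News
hyphen_prefix = "- "   # U+002D, the character the loader tries to strip

assert en_dash_prefix != hyphen_prefix
assert ord(en_dash_prefix[0]) == 0x2013
assert ord(hyphen_prefix[0]) == 0x2D

summary = "– Officials announced the findings on Tuesday."
# Stripping the ASCII-hyphen prefix is a no-op on the real data:
assert summary.removeprefix(hyphen_prefix) == summary
# Stripping the actual en-dash prefix removes it:
assert summary.removeprefix(en_dash_prefix) == "Officials announced the findings on Tuesday."
```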
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
|
closed
|
https://github.com/huggingface/datasets/issues/4320
| 2022-05-11T21:36:41
| 2022-05-16T13:52:10
| 2022-05-16T13:52:10
|
{
"login": "JohnGiorgi",
"id": 8917831,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,232,982,023
| 4,319
|
Adding eval metadata for ade v2
|
Adding metadata to allow evaluation
|
closed
|
https://github.com/huggingface/datasets/pull/4319
| 2022-05-11T17:36:20
| 2022-05-12T13:29:51
| 2022-05-12T13:22:19
|
{
"login": "sashavor",
"id": 14205986,
"type": "User"
}
|
[] | true
|
[] |
1,232,905,488
| 4,318
|
Don't check f.loc in _get_extraction_protocol_with_magic_number
|
`f.loc` doesn't always exist for file-like objects in Python. I removed it since it was not necessary anyway (we always seek the file back to 0 after reading the magic number).
Fix https://github.com/huggingface/datasets/issues/4310
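A minimal sketch of the change (the constant value and helper name here are illustrative): read the magic number, then seek back to 0, so no `loc` attribute is ever needed on the file object.

```python
import io

MAGIC_NUMBER_MAX_LENGTH = 8  # illustrative length, enough for common formats

def read_magic_number(f):
    """Read the leading magic bytes and rewind, without touching f.loc."""
    magic_number = f.read(MAGIC_NUMBER_MAX_LENGTH)
    f.seek(0)  # we always rewind to the start afterwards anyway
    return magic_number

buf = io.BytesIO(b"\x1f\x8b\x08" + b"rest-of-a-gzip-stream")
assert read_magic_number(buf).startswith(b"\x1f\x8b")  # gzip magic bytes
assert buf.tell() == 0  # plain _io objects support seek/tell, unlike .loc
```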
|
closed
|
https://github.com/huggingface/datasets/pull/4318
| 2022-05-11T16:27:09
| 2022-05-11T16:57:02
| 2022-05-11T16:46:31
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
1,232,737,401
| 4,317
|
Fix cnn_dailymail (dm stories were ignored)
|
https://github.com/huggingface/datasets/pull/4188 introduced a bug in `datasets` 2.2.0: DailyMail stories are ignored when generating the dataset.
I fixed that, and removed the google drive link (it has annoying quota limitations issues)
We can do a patch release after this is merged
|
closed
|
https://github.com/huggingface/datasets/pull/4317
| 2022-05-11T14:25:25
| 2022-05-11T16:00:09
| 2022-05-11T15:52:37
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
1,232,681,207
| 4,316
|
Support passing config_kwargs to CLI run_beam
|
This PR supports passing `config_kwargs` to the `run_beam` CLI command, so that, for example, for the "wikipedia" dataset, we can pass:
```
--date 20220501 --language ca
```
|
closed
|
https://github.com/huggingface/datasets/pull/4316
| 2022-05-11T13:53:37
| 2022-05-11T14:36:49
| 2022-05-11T14:28:31
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,232,549,330
| 4,315
|
Fix CLI run_beam namespace
|
Currently, it raises TypeError:
```
TypeError: __init__() got an unexpected keyword argument 'namespace'
```
|
closed
|
https://github.com/huggingface/datasets/pull/4315
| 2022-05-11T12:21:00
| 2022-05-11T13:13:00
| 2022-05-11T13:05:08
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,232,326,726
| 4,314
|
Catch pull error when mirroring
|
Catch pull errors when mirroring so that the script continues to update the other datasets.
The error will still be printed at the end of the job. In this case the job also fails, and asks to manually update the datasets that failed.
|
closed
|
https://github.com/huggingface/datasets/pull/4314
| 2022-05-11T09:38:35
| 2022-05-11T12:54:07
| 2022-05-11T12:46:42
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
1,231,764,100
| 4,313
|
Add API code examples for Builder classes
|
This PR adds API code examples for the Builder classes.
|
closed
|
https://github.com/huggingface/datasets/pull/4313
| 2022-05-10T22:22:32
| 2022-05-12T17:02:43
| 2022-05-12T12:36:57
|
{
"login": "stevhliu",
"id": 59462357,
"type": "User"
}
|
[
{
"name": "documentation",
"color": "0075ca"
}
] | true
|
[] |
1,231,662,775
| 4,312
|
added TR-News dataset
| null |
closed
|
https://github.com/huggingface/datasets/pull/4312
| 2022-05-10T20:33:00
| 2022-10-03T09:36:45
| 2022-10-03T09:36:45
|
{
"login": "batubayk",
"id": 25901065,
"type": "User"
}
|
[
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true
|
[] |
1,231,369,438
| 4,311
|
[Imagefolder] Docs + Don't infer labels from file names when there are metadata + Error messages when metadata and images aren't linked correctly
|
I updated the `docs/source/image_process.mdx` documentation and added an example for image captioning and object detection using `ImageFolder`.
While doing so I also improved a few aspects:
- we don't need to infer labels from file names when there is metadata - labels can just be in the metadata if necessary
- raise informative error messages when metadata and images aren't linked correctly:
- when an image is missing a metadata file
- when a metadata file is missing an image
I added some tests for these changes as well
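A hedged sketch of the linkage being validated (folder layout and file names below are hypothetical; `metadata.jsonl` with a `file_name` column is the convention `ImageFolder` uses to link metadata to images):

```python
import json
import pathlib
import tempfile

# Hypothetical image folder: one image plus a metadata.jsonl linking to it.
root = pathlib.Path(tempfile.mkdtemp())
(root / "0001.png").write_bytes(b"\x89PNG\r\n\x1a\n")  # placeholder image bytes
rows = [{"file_name": "0001.png", "text": "a red square"}]
with open(root / "metadata.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

# Sanity-check the linkage: every metadata entry must name an existing image,
# and every image must have a metadata entry, or loading raises an error.
names = {json.loads(line)["file_name"]
         for line in (root / "metadata.jsonl").read_text().splitlines()}
images = {p.name for p in root.glob("*.png")}
assert names == images
```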
cc @mariosasko
|
closed
|
https://github.com/huggingface/datasets/pull/4311
| 2022-05-10T15:52:15
| 2022-05-10T17:19:42
| 2022-05-10T17:11:47
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
1,231,319,815
| 4,310
|
Loading dataset with streaming: '_io.BufferedReader' object has no attribute 'loc'
|
## Describe the bug
Loading a dataset with `load_dataset` and `streaming=True` returns `AttributeError: '_io.BufferedReader' object has no attribute 'loc'`. Notice that loading with `streaming=False` works fine.
In the following steps we load Parquet files, but the same happens with pickle files. The problem seems to come from the `fsspec` lib; I also included the `s3fs` and `fsspec` versions in the environment info since I'm loading from an S3 bucket.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# path is the path to parquet files
data_files = {"train": path + "meta_train.parquet.gzip", "test": path + "meta_test.parquet.gzip"}
dataset = load_dataset("parquet", data_files=data_files, streaming=True)
```
## Expected results
A dataset object `datasets.dataset_dict.DatasetDict`
## Actual results
```
AttributeError Traceback (most recent call last)
<command-562086> in <module>
11
12 data_files = {"train": path + "meta_train.parquet.gzip", "test": path + "meta_test.parquet.gzip"}
---> 13 dataset = load_dataset("parquet", data_files=data_files, streaming=True)
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1679 if streaming:
1680 extend_dataset_builder_for_streaming(builder_instance, use_auth_token=use_auth_token)
-> 1681 return builder_instance.as_streaming_dataset(
1682 split=split,
1683 use_auth_token=use_auth_token,
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/builder.py in as_streaming_dataset(self, split, base_path, use_auth_token)
904 )
905 self._check_manual_download(dl_manager)
--> 906 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
907 # By default, return all splits
908 if split is None:
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/packaged_modules/parquet/parquet.py in _split_generators(self, dl_manager)
30 if not self.config.data_files:
31 raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}")
---> 32 data_files = dl_manager.download_and_extract(self.config.data_files)
33 if isinstance(data_files, (str, list, tuple)):
34 files = data_files
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in download_and_extract(self, url_or_urls)
798
799 def download_and_extract(self, url_or_urls):
--> 800 return self.extract(self.download(url_or_urls))
801
802 def iter_archive(self, urlpath_or_buf: Union[str, io.BufferedReader]) -> Iterable[Tuple]:
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in extract(self, path_or_paths)
776
777 def extract(self, path_or_paths):
--> 778 urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)
779 return urlpaths
780
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)
312 num_proc = 1
313 if num_proc <= 1 or len(iterable) <= num_proc:
--> 314 mapped = [
315 _single_map_nested((function, obj, types, None, True, None))
316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)
313 if num_proc <= 1 or len(iterable) <= num_proc:
314 mapped = [
--> 315 _single_map_nested((function, obj, types, None, True, None))
316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
317 ]
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)
267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
268 else:
--> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
270 if isinstance(data_struct, list):
271 return mapped
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)
267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
268 else:
--> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
270 if isinstance(data_struct, list):
271 return mapped
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)
249 # Singleton first to spare some computation
250 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 251 return function(data_struct)
252
253 # Reduce logging to keep things readable in multiprocessing with tqdm
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _extract(self, urlpath)
781 def _extract(self, urlpath: str) -> str:
782 urlpath = str(urlpath)
--> 783 protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)
784 if protocol is None:
785 # no extraction
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol(urlpath, use_auth_token)
371 urlpath, kwargs = urlpath, {}
372 with fsspec.open(urlpath, **kwargs) as f:
--> 373 return _get_extraction_protocol_with_magic_number(f)
374
375
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol_with_magic_number(f)
335 def _get_extraction_protocol_with_magic_number(f) -> Optional[str]:
336 """read the magic number from a file-like object and return the compression protocol"""
--> 337 prev_loc = f.loc
338 magic_number = f.read(MAGIC_NUMBER_MAX_LENGTH)
339 f.seek(prev_loc)
/local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/fsspec/implementations/local.py in __getattr__(self, item)
337
338 def __getattr__(self, item):
--> 339 return getattr(self.f, item)
340
341 def __enter__(self):
AttributeError: '_io.BufferedReader' object has no attribute 'loc'
```
## Environment info
- `datasets` version: 2.1.0
- Platform: Linux-5.4.0-1071-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
- `fsspec` version: 2021.08.1
- `s3fs` version: 2021.08.1
|
closed
|
https://github.com/huggingface/datasets/issues/4310
| 2022-05-10T15:12:53
| 2022-05-11T16:46:31
| 2022-05-11T16:46:31
|
{
"login": "milmin",
"id": 72745467,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,231,232,935
| 4,309
|
[WIP] Add TEDLIUM dataset
|
Adds the TED-LIUM dataset https://www.tensorflow.org/datasets/catalog/tedlium#tedliumrelease3
TODO:
- [x] Port `tedlium.py` from TF datasets using the `convert_dataset.sh` script
- [x] Make `load_dataset` work
- [ ] ~~Run `datasets-cli` command to generate `dataset_infos.json`~~
- [ ] ~~Create dummy data for continuous testing~~
- [ ] ~~Dummy data tests~~
- [ ] ~~Real data tests~~
- [ ] Create the metadata JSON
- [ ] Close PR and add directly to the Hub under LIUM org
|
closed
|
https://github.com/huggingface/datasets/pull/4309
| 2022-05-10T14:12:47
| 2022-06-17T12:54:40
| 2022-06-17T11:44:01
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "speech",
"color": "d93f0b"
}
] | true
|
[] |
1,231,217,783
| 4,308
|
Remove unused multiprocessing args from test CLI
|
Multiprocessing is not used in the test CLI.
|
closed
|
https://github.com/huggingface/datasets/pull/4308
| 2022-05-10T14:02:15
| 2022-05-11T12:58:25
| 2022-05-11T12:50:43
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,231,175,639
| 4,307
|
Add packaged builder configs to the documentation
|
Adding the packaged builders' configurations to the docs reference is useful to show the list of all parameters one can use when loading data in many formats: CSV, JSON, etc.
|
closed
|
https://github.com/huggingface/datasets/pull/4307
| 2022-05-10T13:34:19
| 2022-05-10T14:03:50
| 2022-05-10T13:55:54
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
1,231,137,204
| 4,306
|
`load_dataset` does not work with certain filenames.
|
## Describe the bug
This is a weird bug that took me some time to track down.
I have a JSON dataset that I want to load with `load_dataset` like this:
```
data_files = dict(train="train.json.zip", val="val.json.zip")
dataset = load_dataset("json", data_files=data_files, field="data")
```
## Expected results
No error.
## Actual results
The val file is loaded as expected, but the train file throws JSON decoding error:
```
╭──────────────────────────── Traceback (most recent call last) ────────────────────────────╮
│ <ipython-input-74-97947e92c100>:5 in <module> │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/load.py:1687 in │
│ load_dataset │
│ │
│ 1684 │ try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES │
│ 1685 │ │
│ 1686 │ # Download and prepare data │
│ ❱ 1687 │ builder_instance.download_and_prepare( │
│ 1688 │ │ download_config=download_config, │
│ 1689 │ │ download_mode=download_mode, │
│ 1690 │ │ ignore_verifications=ignore_verifications, │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py:605 in │
│ download_and_prepare │
│ │
│ 602 │ │ │ │ │ │ except ConnectionError: │
│ 603 │ │ │ │ │ │ │ logger.warning("HF google storage unreachable. Downloa │
│ 604 │ │ │ │ │ if not downloaded_from_gcs: │
│ ❱ 605 │ │ │ │ │ │ self._download_and_prepare( │
│ 606 │ │ │ │ │ │ │ dl_manager=dl_manager, verify_infos=verify_infos, **do │
│ 607 │ │ │ │ │ │ ) │
│ 608 │ │ │ │ │ # Sync info │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py:694 in │
│ _download_and_prepare │
│ │
│ 691 │ │ │ │
│ 692 │ │ │ try: │
│ 693 │ │ │ │ # Prepare split will record examples associated to the split │
│ ❱ 694 │ │ │ │ self._prepare_split(split_generator, **prepare_split_kwargs) │
│ 695 │ │ │ except OSError as e: │
│ 696 │ │ │ │ raise OSError( │
│ 697 │ │ │ │ │ "Cannot find data file. " │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py:1151 in │
│ _prepare_split │
│ │
│ 1148 │ │ │
│ 1149 │ │ generator = self._generate_tables(**split_generator.gen_kwargs) │
│ 1150 │ │ with ArrowWriter(features=self.info.features, path=fpath) as writer: │
│ ❱ 1151 │ │ │ for key, table in logging.tqdm( │
│ 1152 │ │ │ │ generator, unit=" tables", leave=False, disable=True # not loggin │
│ 1153 │ │ │ ): │
│ 1154 │ │ │ │ writer.write_table(table) │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/tqdm/notebook.py:257 in │
│ __iter__ │
│ │
│ 254 │ │
│ 255 │ def __iter__(self): │
│ 256 │ │ try: │
│ ❱ 257 │ │ │ for obj in super(tqdm_notebook, self).__iter__(): │
│ 258 │ │ │ │ # return super(tqdm...) will not catch exception │
│ 259 │ │ │ │ yield obj │
│ 260 │ │ # NB: except ... [ as ...] breaks IPython async KeyboardInterrupt │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/tqdm/std.py:1183 in │
│ __iter__ │
│ │
│ 1180 │ │ # If the bar is disabled, then just walk the iterable │
│ 1181 │ │ # (note: keep this check outside the loop for performance) │
│ 1182 │ │ if self.disable: │
│ ❱ 1183 │ │ │ for obj in iterable: │
│ 1184 │ │ │ │ yield obj │
│ 1185 │ │ │ return │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/packaged_modules/j │
│ son/json.py:90 in _generate_tables │
│ │
│ 87 │ │ │ # If the file is one json object and if we need to look at the list of │
│ 88 │ │ │ if self.config.field is not None: │
│ 89 │ │ │ │ with open(file, encoding="utf-8") as f: │
│ ❱ 90 │ │ │ │ │ dataset = json.load(f) │
│ 91 │ │ │ │ │
│ 92 │ │ │ │ # We keep only the field we are interested in │
│ 93 │ │ │ │ dataset = dataset[self.config.field] │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/json/__init__.py:293 in load │
│ │
│ 290 │ To use a custom ``JSONDecoder`` subclass, specify it with the ``cls`` │
│ 291 │ kwarg; otherwise ``JSONDecoder`` is used. │
│ 292 │ """ │
│ ❱ 293 │ return loads(fp.read(), │
│ 294 │ │ cls=cls, object_hook=object_hook, │
│ 295 │ │ parse_float=parse_float, parse_int=parse_int, │
│ 296 │ │ parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw) │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/json/__init__.py:357 in loads │
│ │
│ 354 │ if (cls is None and object_hook is None and │
│ 355 │ │ │ parse_int is None and parse_float is None and │
│ 356 │ │ │ parse_constant is None and object_pairs_hook is None and not kw): │
│ ❱ 357 │ │ return _default_decoder.decode(s) │
│ 358 │ if cls is None: │
│ 359 │ │ cls = JSONDecoder │
│ 360 │ if object_hook is not None: │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/json/decoder.py:337 in decode │
│ │
│ 334 │ │ containing a JSON document). │
│ 335 │ │ │
│ 336 │ │ """ │
│ ❱ 337 │ │ obj, end = self.raw_decode(s, idx=_w(s, 0).end()) │
│ 338 │ │ end = _w(s, end).end() │
│ 339 │ │ if end != len(s): │
│ 340 │ │ │ raise JSONDecodeError("Extra data", s, end) │
│ │
│ /home/tiankang/software/anaconda3/lib/python3.8/json/decoder.py:353 in raw_decode │
│ │
│ 350 │ │ │
│ 351 │ │ """ │
│ 352 │ │ try: │
│ ❱ 353 │ │ │ obj, end = self.scan_once(s, idx) │
│ 354 │ │ except StopIteration as err: │
│ 355 │ │ │ raise JSONDecodeError("Expecting value", s, err.value) from None │
│ 356 │ │ return obj, end │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
JSONDecodeError: Unterminated string starting at: line 85 column 20 (char 60051)
```
However, when I rename `train.json.zip` to something else (like `training.json.zip`, or even `train.json`), everything works fine; when I unzip the file to `train.json`, it works as well.
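As a sanity check on the archive-reading path (a minimal sketch, not the code `datasets` uses internally), the snippet below writes a JSON file into an in-memory zip and reads it back through the archive. When the member is decompressed correctly, `json.load` succeeds; a truncated or mis-decoded stream is what produces "Unterminated string" errors like the one above:

```python
import io
import json
import zipfile

# Write a small JSON payload into an in-memory zip archive.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("train.json", json.dumps({"data": [{"text": "hello"}]}))

# Read the member back through the archive and parse it.
buf.seek(0)
with zipfile.ZipFile(buf) as zf, zf.open("train.json") as member:
    payload = json.load(io.TextIOWrapper(member, encoding="utf-8"))

print(payload["data"][0]["text"])  # hello
```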
## Environment info
```
- `datasets` version: 2.1.0
- Platform: Linux-4.4.0-131-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 7.0.0
- Pandas version: 1.4.2
```
|
closed
|
https://github.com/huggingface/datasets/issues/4306
| 2022-05-10T13:14:04
| 2022-05-10T18:58:36
| 2022-05-10T18:58:09
|
{
"login": "whatever60",
"id": 57242693,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,231,099,934
| 4,305
|
Fixes FrugalScore
|
There are two minor modifications in this PR:
1) `predictions` and `references` are swapped. Basically Frugalscore is commutative, however some tiny differences can occur if we swap the references and the predictions. I decided to swap them just to obtain the exact results as reported in the paper.
2) I switched to dynamic padding that was was used in the training, forcing the padding to `max_length` introduces errors for some reason that I ignore.
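Dynamic padding can be sketched as follows (a hypothetical helper, not the metric's actual collator): each batch is padded only to the length of its longest sequence, instead of a fixed `max_length`.

```python
# Pad every sequence in a batch to the longest sequence in that batch.
def pad_batch(batch, pad_id=0):
    longest = max(len(seq) for seq in batch)
    return [seq + [pad_id] * (longest - len(seq)) for seq in batch]

batch = [[101, 7592, 102], [101, 102]]
print(pad_batch(batch))  # [[101, 7592, 102], [101, 102, 0]]
```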
@lhoestq
|
open
|
https://github.com/huggingface/datasets/pull/4305
| 2022-05-10T12:44:06
| 2022-09-22T16:42:06
| null |
{
"login": "moussaKam",
"id": 28675016,
"type": "User"
}
|
[
{
"name": "transfer-to-evaluate",
"color": "E3165C"
}
] | true
|
[] |
1,231,047,051
| 4,304
|
Language code search does direct matches
|
## Describe the bug
Hi. Searching for bcp47 tags that are just the language prefix (e.g. `sq` or `da`) excludes datasets that have added extra information in their language metadata (e.g. `sq-AL` or `da-bornholm`). The example codes given in the [tagging app](https://huggingface.co/spaces/huggingface/datasets-tagging) encourages addition of the additional codes ("_expected format is BCP47 tags separated for ';' e.g. 'en-US;fr-FR'_") but this would lead to those datasets being hidden in datasets search.
## Steps to reproduce the bug
1. Add a dataset using a variant tag (e.g. [`sq-AL`](https://huggingface.co/datasets?languages=languages:sq-AL))
2. Look for datasets using the full code
3. Note that they're missing when just the language is searched for (e.g. [`sq`](https://huggingface.co/datasets?languages=languages:sq))
Some datasets are already affected by this - e.g. `AmazonScience/massive` is listed under `sq-AL` but not `sq`.
One workaround is for dataset creators to add an additional root language tag to dataset YAML metadata, but it's unclear how to communicate this. It might be possible to index the search on `languagecode.split('-')[0]` but I wanted to float this issue before trying to write any code :)
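The `languagecode.split('-')[0]` idea could be sketched like this (hypothetical code, not the Hub's actual search index): index every dataset under both its full BCP 47 tag and the bare language prefix, so searching `sq` also surfaces `sq-AL`.

```python
# Build a lookup from language keys to the full tags they should match.
def build_language_index(tags):
    index = {}
    for tag in tags:
        # Each tag is reachable under itself and under its language prefix.
        for key in {tag, tag.split("-")[0]}:
            index.setdefault(key, set()).add(tag)
    return index

index = build_language_index(["sq-AL", "da-bornholm", "fr-CA", "fr"])
print(sorted(index["sq"]))  # ['sq-AL']
print(sorted(index["fr"]))  # ['fr', 'fr-CA']
```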
## Expected results
Datasets using longer bcp47 tags also appear under searches for just the language code; e.g. Quebecois datasets (`fr-CA`) would come up when looking for French datasets with no region specification (`fr`), or US English (`en-US`) datasets would come up when searching for English datasets (`en`).
## Actual results
The language codes seem to be directly string matched, excluding datasets with specific language tags from non-specific searches.
## Environment info
(web app)
|
open
|
https://github.com/huggingface/datasets/issues/4304
| 2022-05-10T11:59:16
| 2022-05-10T12:38:42
| null |
{
"login": "leondz",
"id": 121934,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,230,867,728
| 4,303
|
Fix: Add missing comma
| null |
closed
|
https://github.com/huggingface/datasets/pull/4303
| 2022-05-10T09:21:38
| 2022-05-11T08:50:15
| 2022-05-11T08:50:14
|
{
"login": "mrm8488",
"id": 3653789,
"type": "User"
}
|
[] | true
|
[] |
1,230,651,117
| 4,302
|
Remove hacking license tags when mirroring datasets on the Hub
|
Currently, when mirroring datasets on the Hub, the license tags are hacked: stripped of the characters "." and "$". By contrast, this hacking is not applied to community datasets on the Hub. This generates multiple variants of the same tag on the Hub.
I guess this hacking is no longer necessary:
- it is not applied to community datasets
- all canonical datasets are validated by maintainers before being merged: CI + maintainers make sure license tags are the right ones
Fix #4298.
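For illustration, a hypothetical reproduction of the old "hacking" (not the mirroring script itself): stripping "." and "$" turns a tag into a variant that community datasets never use, so the same license shows up twice in the tag cloud.

```python
# Reproduce the character stripping applied to mirrored license tags.
def hack_license_tag(tag):
    return tag.replace(".", "").replace("$", "")

print(hack_license_tag("cc-by-4.0"))  # cc-by-40 -- no longer matches cc-by-4.0
```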
|
closed
|
https://github.com/huggingface/datasets/pull/4302
| 2022-05-10T05:52:46
| 2022-05-20T09:48:30
| 2022-05-20T09:40:20
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,230,401,256
| 4,301
|
Add ImageNet-Sketch dataset
|
This PR adds the ImageNet-Sketch dataset and resolves #3953 .
|
closed
|
https://github.com/huggingface/datasets/pull/4301
| 2022-05-09T23:38:45
| 2022-05-23T18:14:14
| 2022-05-23T18:05:29
|
{
"login": "nateraw",
"id": 32437151,
"type": "User"
}
|
[] | true
|
[] |
1,230,272,761
| 4,300
|
Add API code examples for loading methods
|
This PR adds API code examples for loading methods, let me know if I've missed any important parameters we should showcase :)
I was a bit confused about `inspect_dataset` and `inspect_metric`. The `path` parameter says it will accept a dataset identifier from the Hub. But when I try the identifier `rotten_tomatoes`, it gives me:
```py
from datasets import inspect_dataset
inspect_dataset('rotten_tomatoes', local_path='/content/rotten_tomatoes')
FileNotFoundError: Couldn't find a dataset script at /content/rotten_tomatoes/rotten_tomatoes.py or any data file in the same directory.
```
Does the user need to have an existing copy of `rotten_tomatoes.py` on their local drive (in which case it seems equivalent to the first option described for `path`)?
|
closed
|
https://github.com/huggingface/datasets/pull/4300
| 2022-05-09T21:30:26
| 2022-05-25T16:23:15
| 2022-05-25T09:20:13
|
{
"login": "stevhliu",
"id": 59462357,
"type": "User"
}
|
[
{
"name": "documentation",
"color": "0075ca"
}
] | true
|
[] |
1,230,236,782
| 4,299
|
Remove manual download from imagenet-1k
|
Remove the manual download code from `imagenet-1k` to make it a regular dataset.
|
closed
|
https://github.com/huggingface/datasets/pull/4299
| 2022-05-09T20:49:18
| 2022-05-25T14:54:59
| 2022-05-25T14:46:16
|
{
"login": "mariosasko",
"id": 47462742,
"type": "User"
}
|
[] | true
|
[] |
1,229,748,006
| 4,298
|
Normalise license names
|
**Is your feature request related to a problem? Please describe.**
When browsing datasets, the Licenses tag cloud (bottom left of e.g. https://huggingface.co/datasets) has multiple variants of the same license. This means the options exclude datasets arbitrarily, giving users artificially low recall. The dupes are probably due to small variations in the metadata.
**Describe the solution you'd like**
I'd like the licenses in metadata to follow the same standard as much as possible, to remove this problem. I'd like to go ahead and normalise the dataset metadata to follow the format & values given in [src/datasets/utils/resources/licenses.json](https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/licenses.json) .
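A rough sketch of what such a normalisation pass might look like (the allowed set below is a stand-in subset; the real one would come from `src/datasets/utils/resources/licenses.json`):

```python
# Stand-in for the keys of licenses.json.
ALLOWED_LICENSES = {"apache-2.0", "mit", "cc-by-4.0"}

# Map a free-form license string onto the canonical identifier, or None
# if it cannot be matched and needs manual review.
def normalise_license(tag):
    candidate = tag.strip().lower().replace(" ", "-")
    return candidate if candidate in ALLOWED_LICENSES else None

print(normalise_license("Apache 2.0"))   # apache-2.0
print(normalise_license("unknown-tag"))  # None
```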
**Describe alternatives you've considered**
None
**Additional context**
None
**Priority**
Low
|
closed
|
https://github.com/huggingface/datasets/issues/4298
| 2022-05-09T13:51:32
| 2022-05-20T09:51:50
| 2022-05-20T09:51:50
|
{
"login": "leondz",
"id": 121934,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
1,229,735,498
| 4,297
|
Datasets YAML tagging space is down
|
## Describe the bug
The neat hf spaces app for generating YAML tags for dataset `README.md`s is down
## Steps to reproduce the bug
1. Visit https://huggingface.co/spaces/huggingface/datasets-tagging
## Expected results
There'll be a HF spaces web app for generating dataset metadata YAML
## Actual results
There's an error message; here's the step where it breaks:
```
Step 18/29 : RUN pip install -r requirements.txt
---> Running in e88bfe7e7e0c
Defaulting to user installation because normal site-packages is not writeable
Collecting git+https://github.com/huggingface/datasets.git@update-task-list (from -r requirements.txt (line 4))
Cloning https://github.com/huggingface/datasets.git (to revision update-task-list) to /tmp/pip-req-build-bm8t0r0k
Running command git clone --filter=blob:none --quiet https://github.com/huggingface/datasets.git /tmp/pip-req-build-bm8t0r0k
WARNING: Did not find branch or tag 'update-task-list', assuming revision or ref.
Running command git checkout -q update-task-list
error: pathspec 'update-task-list' did not match any file(s) known to git
error: subprocess-exited-with-error
× git checkout -q update-task-list did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× git checkout -q update-task-list did not run successfully.
│ exit code: 1
╰─> See above for output.
```
## Environment info
- Platform: Linux / Brave
|
closed
|
https://github.com/huggingface/datasets/issues/4297
| 2022-05-09T13:45:05
| 2022-05-09T14:44:25
| 2022-05-09T14:44:25
|
{
"login": "leondz",
"id": 121934,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
1,229,554,645
| 4,296
|
Fix URL query parameters in compression hop path when streaming
|
Fix #3488.
|
open
|
https://github.com/huggingface/datasets/pull/4296
| 2022-05-09T11:18:22
| 2022-07-06T15:19:53
| null |
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,229,527,283
| 4,295
|
Fix missing lz4 dependency for tests
|
Currently, `lz4` is not defined as a dependency for tests. Therefore, all tests marked with `@require_lz4` are skipped.
|
closed
|
https://github.com/huggingface/datasets/pull/4295
| 2022-05-09T10:53:20
| 2022-05-09T11:21:22
| 2022-05-09T11:13:44
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,229,455,582
| 4,294
|
Fix CLI run_beam save_infos
|
Currently, it raises TypeError:
```
TypeError: _download_and_prepare() got an unexpected keyword argument 'save_infos'
```
|
closed
|
https://github.com/huggingface/datasets/pull/4294
| 2022-05-09T09:47:43
| 2022-05-10T07:04:04
| 2022-05-10T06:56:10
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
1,228,815,477
| 4,293
|
Fix wrong map parameter name in cache docs
|
The `load_from_cache` parameter of `map` should be `load_from_cache_file`.
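A toy illustration of why the wrong name matters (a stub, not the `datasets` implementation): a function defined with `load_from_cache_file` rejects the misspelled keyword from the old docs with a `TypeError`, which is exactly the failure a reader following the docs would hit.

```python
# Stub with the same keyword as datasets.Dataset.map.
def map_stub(function, load_from_cache_file=True):
    return function, load_from_cache_file

try:
    map_stub(str.upper, load_from_cache=False)  # wrong name from the old docs
except TypeError as exc:
    print(type(exc).__name__)  # TypeError

print(map_stub(str.upper, load_from_cache_file=False)[1])  # False
```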
|
closed
|
https://github.com/huggingface/datasets/pull/4293
| 2022-05-08T07:27:46
| 2022-06-14T16:49:00
| 2022-06-14T16:07:00
|
{
"login": "h4iku",
"id": 3812788,
"type": "User"
}
|
[] | true
|
[] |
1,228,216,788
| 4,292
|
Add API code examples for remaining main classes
|
This PR adds API code examples for the remaining functions in the Main classes. I wasn't too familiar with some of the functions (`decode_batch`, `decode_column`, `decode_example`, etc.) so please feel free to add an example of usage and I can fill in the rest :)
|
closed
|
https://github.com/huggingface/datasets/pull/4292
| 2022-05-06T18:15:31
| 2022-05-25T18:05:13
| 2022-05-25T17:56:36
|
{
"login": "stevhliu",
"id": 59462357,
"type": "User"
}
|
[
{
"name": "documentation",
"color": "0075ca"
}
] | true
|
[] |
1,227,777,500
| 4,291
|
Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message
|
### Link
https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train
### Description
The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. What did I miss?
### Owner
Yes
|
closed
|
https://github.com/huggingface/datasets/issues/4291
| 2022-05-06T12:03:27
| 2022-05-09T08:25:58
| 2022-05-09T08:25:58
|
{
"login": "leondz",
"id": 121934,
"type": "User"
}
|
[
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false
|
[] |