url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | draft | pull_request | body | reactions | timeline_url | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4696
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4696/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4696/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4696/events
|
https://github.com/huggingface/datasets/issues/4696
| 1,307,183,099
|
I_kwDODunzps5N6gf7
| 4,696
|
Cannot load LinCE dataset
|
{
"login": "finiteautomata",
"id": 167943,
"node_id": "MDQ6VXNlcjE2Nzk0Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/167943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/finiteautomata",
"html_url": "https://github.com/finiteautomata",
"followers_url": "https://api.github.com/users/finiteautomata/followers",
"following_url": "https://api.github.com/users/finiteautomata/following{/other_user}",
"gists_url": "https://api.github.com/users/finiteautomata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/finiteautomata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/finiteautomata/subscriptions",
"organizations_url": "https://api.github.com/users/finiteautomata/orgs",
"repos_url": "https://api.github.com/users/finiteautomata/repos",
"events_url": "https://api.github.com/users/finiteautomata/events{/privacy}",
"received_events_url": "https://api.github.com/users/finiteautomata/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @finiteautomata, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce your issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ...: dataset = load_dataset(\"lince\", \"ner_spaeng\")\r\nDownloading builder script: 20.8kB [00:00, 9.09MB/s] \r\nDownloading metadata: 31.2kB [00:00, 13.5MB/s] \r\nDownloading and preparing dataset lince/ner_spaeng (download: 2.93 MiB, generated: 18.45 MiB, post-processed: Unknown size, total: 21.38 MiB) to .../.cache/huggingface/datasets/lince/ner_spaeng/1.0.0/10d41747f55f0849fa84ac579ea1acfa7df49aa2015b60426bc459c111b3d589...\r\nDownloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.08M/3.08M [00:01<00:00, 2.73MB/s]\r\nDataset lince downloaded and prepared to .../.cache/huggingface/datasets/lince/ner_spaeng/1.0.0/10d41747f55f0849fa84ac579ea1acfa7df49aa2015b60426bc459c111b3d589. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 630.66it/s]\r\n\r\nIn [2]: dataset\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['idx', 'words', 'lid', 'ner'],\r\n num_rows: 33611\r\n })\r\n validation: Dataset({\r\n features: ['idx', 'words', 'lid', 'ner'],\r\n num_rows: 10085\r\n })\r\n test: Dataset({\r\n features: ['idx', 'words', 'lid', 'ner'],\r\n num_rows: 23527\r\n })\r\n})\r\n``` \r\n\r\nPlease note that for this dataset, the original data files are not hosted on the Hugging Face Hub, but on https://ritual.uh.edu\r\nAnd sometimes, the server might be temporarily unavailable, as your error message said (trying to connect to the server timed out):\r\n```\r\nConnectionError: Couldn't reach https://ritual.uh.edu/lince/libaccess/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9/ner_spaeng.zip (ConnectTimeout(MaxRetryError(\"HTTPSConnectionPool(host='ritual.uh.edu', port=443): Max retries exceeded with url: /lince/libaccess/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9/ner_spaeng.zip (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7feb1c45a690>, 'Connection to ritual.uh.edu timed out. (connect timeout=100)'))\")))\r\n```\r\nIn these cases you could:\r\n- either contact the owners of the data server where the data is hosted to inform them about the issue in their server\r\n- or re-try after waiting some time: usually these issues are just temporary",
"Great, thanks for checking out!"
] | 1,658,084,514,000
| 1,658,136,040,000
| 1,658,129,062,000
|
NONE
| null | null |
## Describe the bug
Cannot load the LinCE dataset due to a connection error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("lince", "ner_spaeng")
```
A notebook with this code and corresponding error can be found at https://colab.research.google.com/drive/1pgX3bNB9amuUwAVfPFm-XuMV5fEg-cD2
## Expected results
The dataset should load successfully.
## Actual results
```python
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
<ipython-input-2-fc551ddcebef> in <module>()
1 from datasets import load_dataset
2
----> 3 dataset = load_dataset("lince", "ner_spaeng")
10 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1682 ignore_verifications=ignore_verifications,
1683 try_from_hf_gcs=try_from_hf_gcs,
-> 1684 use_auth_token=use_auth_token,
1685 )
1686
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
703 if not downloaded_from_gcs:
704 self._download_and_prepare(
--> 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
706 )
707 # Sync info
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
1219
1220 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1221 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
1222
1223 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
769 split_dict = SplitDict(dataset_name=self.name)
770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
772
773 # Checksums verification
/root/.cache/huggingface/modules/datasets_modules/datasets/lince/10d41747f55f0849fa84ac579ea1acfa7df49aa2015b60426bc459c111b3d589/lince.py in _split_generators(self, dl_manager)
481 def _split_generators(self, dl_manager):
482 """Returns SplitGenerators."""
--> 483 lince_dir = dl_manager.download_and_extract(f"{_LINCE_URL}/{self.config.name}.zip")
484 data_dir = os.path.join(lince_dir, self.config.data_dir)
485 return [
/usr/local/lib/python3.7/dist-packages/datasets/download/download_manager.py in download_and_extract(self, url_or_urls)
429 extracted_path(s): `str`, extracted paths of given URL(s).
430 """
--> 431 return self.extract(self.download(url_or_urls))
432
433 def get_recorded_sizes_checksums(self):
/usr/local/lib/python3.7/dist-packages/datasets/download/download_manager.py in download(self, url_or_urls)
313 num_proc=download_config.num_proc,
314 disable_tqdm=not is_progress_bar_enabled(),
--> 315 desc="Downloading data files",
316 )
317 duration = datetime.now() - start_time
/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)
346 # Singleton
347 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 348 return function(data_struct)
349
350 disable_tqdm = disable_tqdm or not logging.is_progress_bar_enabled()
/usr/local/lib/python3.7/dist-packages/datasets/download/download_manager.py in _download(self, url_or_filename, download_config)
333 # append the relative path to the base_path
334 url_or_filename = url_or_path_join(self._base_path, url_or_filename)
--> 335 return cached_path(url_or_filename, download_config=download_config)
336
337 def iter_archive(self, path_or_buf: Union[str, io.BufferedReader]):
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
195 use_auth_token=download_config.use_auth_token,
196 ignore_url_params=download_config.ignore_url_params,
--> 197 download_desc=download_config.download_desc,
198 )
199 elif os.path.exists(url_or_filename):
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc)
531 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
532 if head_error is not None:
--> 533 raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})")
534 elif response is not None:
535 raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})")
ConnectionError: Couldn't reach https://ritual.uh.edu/lince/libaccess/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9/ner_spaeng.zip (ConnectTimeout(MaxRetryError("HTTPSConnectionPool(host='ritual.uh.edu', port=443): Max retries exceeded with url: /lince/libaccess/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9/ner_spaeng.zip (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7feb1c45a690>, 'Connection to ritual.uh.edu timed out. (connect timeout=100)'))")))
```
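Since the comments above attribute this to a temporary outage of the upstream server, here is a minimal wait-and-retry sketch around `load_dataset` (the retry count and delay are arbitrary choices, not from the thread):
```python
import time
from datasets import load_dataset

# Assumption: the failure is a transient ConnectTimeout, as diagnosed in the
# comment above. Retry a few times with a fixed delay before giving up.
for attempt in range(3):
    try:
        dataset = load_dataset("lince", "ner_spaeng")
        break
    except ConnectionError:
        if attempt == 2:
            raise
        time.sleep(60)  # arbitrary delay; the server may recover in minutes
```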
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4696/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4694
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4694/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4694/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4694/events
|
https://github.com/huggingface/datasets/issues/4694
| 1,306,958,380
|
I_kwDODunzps5N5pos
| 4,694
|
Distributed data parallel training for streaming datasets
|
{
"login": "cyk1337",
"id": 13767887,
"node_id": "MDQ6VXNlcjEzNzY3ODg3",
"avatar_url": "https://avatars.githubusercontent.com/u/13767887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyk1337",
"html_url": "https://github.com/cyk1337",
"followers_url": "https://api.github.com/users/cyk1337/followers",
"following_url": "https://api.github.com/users/cyk1337/following{/other_user}",
"gists_url": "https://api.github.com/users/cyk1337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyk1337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyk1337/subscriptions",
"organizations_url": "https://api.github.com/users/cyk1337/orgs",
"repos_url": "https://api.github.com/users/cyk1337/repos",
"events_url": "https://api.github.com/users/cyk1337/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyk1337/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false
| null |
[] |
[
"Hi ! According to https://huggingface.co/docs/datasets/use_with_pytorch#stream-data you can use the pytorch DataLoader with `num_workers>0` to distribute the shards across your workers (it uses `torch.utils.data.get_worker_info()` to get the worker ID and select the right subsets of shards to use)\r\n\r\n<s> EDIT: here is a code example </s>\r\n```python\r\n# ds = ds.with_format(\"torch\")\r\n# dataloader = DataLoader(ds, num_workers=num_workers)\r\n```\r\n\r\nEDIT: `with_format(\"torch\")` is not required, now you can just do\r\n```python\r\ndataloader = DataLoader(ds, num_workers=num_workers)\r\n```",
"@cyk1337 does streaming datasets with multi-gpu works for you? I am testing on one node with multiple gpus, but this is freezing, https://github.com/huggingface/datasets/issues/5123 \r\nIn case you could make this work, could you share with me your data-loading codes?\r\nthank you",
"+1",
"This has been implemented in `datasets` 2.8:\r\n```python\r\nfrom datasets.distributed import split_dataset_by_node\r\n\r\nds = split_dataset_by_node(ds, rank=rank, world_size=world_size)\r\n```\r\n\r\ndocs: https://huggingface.co/docs/datasets/use_with_pytorch#distributed",
"i'm having hanging issues with this when using DDP and allocating the datasets with `split_dataset_by_node` 🤔\r\n\r\n--- \r\n### edit\r\nI don't want to pollute this thread, but for the sake of following up, I observed hanging close to the final iteration of the dataloader. I think this was happening on the final shard. First, I removed the final shard and things worked. Then (including all shards), I reordered the list of shards: `load_dataset('json', data_files=reordered, streaming=True)` and no hang. \r\n\r\nI won't open an issue yet bc I am not quite sure about this observation.",
"@wconnell would you mind opening a different bug issue and giving more details?\r\nhttps://github.com/huggingface/datasets/issues/new?assignees=&labels=&template=bug-report.yml\r\n\r\nThanks."
] | 1,658,021,383,000
| 1,682,533,269,000
| null |
NONE
| null | null |
### Feature request
Is there any documentation for using `load_dataset(streaming=True)` with (multi-node, multi-GPU) DDP training?
### Motivation
Given a bunch of data files, they are expected to be split across the different GPUs. Is there a guide or documentation for this?
### Your contribution
Does it require manually splitting the data files for each worker in `DatasetBuilder._split_generators()`? What is `IterableDatasetShard` expected to do?
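For reference, the API that eventually addressed this (the `split_dataset_by_node` comment above) can be sketched as follows. `RANK` and `WORLD_SIZE` are assumed to be set by the launcher (e.g. `torchrun`), "c4" is only an example dataset, and `datasets>=2.8` is required:
```python
import os

from datasets import load_dataset
from datasets.distributed import split_dataset_by_node
from torch.utils.data import DataLoader

# Each DDP process keeps only its own subset of the dataset's shards.
rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])

ds = load_dataset("c4", "en", split="train", streaming=True)
ds = split_dataset_by_node(ds, rank=rank, world_size=world_size)
loader = DataLoader(ds, batch_size=8)  # batch size is an arbitrary choice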
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4694/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4694/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4692
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4692/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4692/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4692/events
|
https://github.com/huggingface/datasets/issues/4692
| 1,306,609,680
|
I_kwDODunzps5N4UgQ
| 4,692
|
Unable to cast a column with `Image()` by using the `cast_column()` feature
|
{
"login": "skrishnan99",
"id": 28833916,
"node_id": "MDQ6VXNlcjI4ODMzOTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/28833916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skrishnan99",
"html_url": "https://github.com/skrishnan99",
"followers_url": "https://api.github.com/users/skrishnan99/followers",
"following_url": "https://api.github.com/users/skrishnan99/following{/other_user}",
"gists_url": "https://api.github.com/users/skrishnan99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skrishnan99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skrishnan99/subscriptions",
"organizations_url": "https://api.github.com/users/skrishnan99/orgs",
"repos_url": "https://api.github.com/users/skrishnan99/repos",
"events_url": "https://api.github.com/users/skrishnan99/events{/privacy}",
"received_events_url": "https://api.github.com/users/skrishnan99/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] |
[
"Hi, thanks for reporting! A PR (https://github.com/huggingface/datasets/pull/4614) has already been opened to address this issue."
] | 1,657,925,763,000
| 1,658,237,784,000
| 1,658,237,784,000
|
NONE
| null | null |
## Describe the bug
When I create a dataset, add a column to it with `dataset.add_column`, and then try to cast a column containing image paths with `Image()` via `cast_column()`, I get the following error:
```
TypeError: Couldn't cast array of type
string
to
{'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='string', id=None)}
```
When I cast the same column without calling `add_column` in the previous step, it works as expected.
## Steps to reproduce the bug
```python
from datasets import Dataset, Image
data_dict = {
"img_path": ["https://picsum.photos/200/300"]
}
dataset = Dataset.from_dict(data_dict)
# NOTE: comment out this line and `cast_column` works properly
dataset = dataset.add_column("yeet", [1])
# NOTE: this line fails if `add_column` was called before it
dataset = dataset.cast_column("img_path", Image())
# NOTE: my current workaround, which works with or without `add_column`.
# When running it, comment out the `cast_column` line above.
# new_features = dataset.features.copy()
# new_features["img_path"] = Image()
# dataset = dataset.cast(new_features)
print(dataset)
print(dataset.features)
print(dataset[0])
```
## Expected results
`cast_column` should successfully cast a column containing image paths to `Image()` features even after the dataset has been modified with `add_column` in a previous step.
## Actual results
```
Traceback (most recent call last):
File "/home/surya/Desktop/hf_bug_test.py", line 14, in <module>
dataset = dataset.cast_column("img_path", Image())
File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/fingerprint.py", line 458, in wrapper
out = func(self, *args, **kwargs)
File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1580, in cast_column
dataset._data = dataset._data.cast(dataset.features.arrow_schema)
File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 1487, in cast
new_tables.append(subtable.cast(subschema, *args, **kwargs))
File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 834, in cast
return InMemoryTable(table_cast(self.table, *args, **kwargs))
File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 1897, in table_cast
return cast_table_to_schema(table, schema)
File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 1880, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 1880, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 1673, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 1673, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 1846, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
string
to
{'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='string', id=None)}
```
## Environment info
- `datasets` version: 2.3.2
- Platform: Ubuntu 20.04.3 LTS
- Python version: 3.9.7
- PyArrow version: 7.0.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4692/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4691
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4691/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4691/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4691/events
|
https://github.com/huggingface/datasets/issues/4691
| 1,306,389,656
|
I_kwDODunzps5N3eyY
| 4,691
|
Dataset Viewer issue for rajistics/indian_food_images
|
{
"login": "rajshah4",
"id": 6808012,
"node_id": "MDQ6VXNlcjY4MDgwMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6808012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajshah4",
"html_url": "https://github.com/rajshah4",
"followers_url": "https://api.github.com/users/rajshah4/followers",
"following_url": "https://api.github.com/users/rajshah4/following{/other_user}",
"gists_url": "https://api.github.com/users/rajshah4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajshah4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajshah4/subscriptions",
"organizations_url": "https://api.github.com/users/rajshah4/orgs",
"repos_url": "https://api.github.com/users/rajshah4/repos",
"events_url": "https://api.github.com/users/rajshah4/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajshah4/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false
|
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi, thanks for reporting. I triggered a refresh of the preview for this dataset, and it works now. I'm not sure what occurred.\r\n<img width=\"1019\" alt=\"Capture d’écran 2022-07-18 à 11 01 52\" src=\"https://user-images.githubusercontent.com/1676121/179541327-f62ecd5e-a18a-4d91-b316-9e2ebde77a28.png\">\r\n\r\n"
] | 1,657,911,795,000
| 1,658,156,523,000
| 1,658,156,523,000
|
NONE
| null | null |
### Link
https://huggingface.co/datasets/rajistics/indian_food_images/viewer/rajistics--indian_food_images/train
### Description
I have a train/test split in my dataset
<img width="410" alt="Screen Shot 2022-07-15 at 11 44 42 AM" src="https://user-images.githubusercontent.com/6808012/179293215-7b419ec3-3527-46f2-8dad-adbc5568cfa0.png">
The dataset viewer works for the test split (images of Indian food), but does not show my train split. My guess is that some corrupt image file is causing this, but I have no idea.
The original dataset was pulled from here: https://www.kaggle.com/datasets/l33tc0d3r/indian-food-classification?resource=download-directory
### Owner
Yes
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4691/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4684
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4684/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4684/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4684/events
|
https://github.com/huggingface/datasets/issues/4684
| 1,305,554,654
|
I_kwDODunzps5N0S7e
| 4,684
|
How to assign new values to Dataset?
|
{
"login": "beyondguo",
"id": 37113676,
"node_id": "MDQ6VXNlcjM3MTEzNjc2",
"avatar_url": "https://avatars.githubusercontent.com/u/37113676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/beyondguo",
"html_url": "https://github.com/beyondguo",
"followers_url": "https://api.github.com/users/beyondguo/followers",
"following_url": "https://api.github.com/users/beyondguo/following{/other_user}",
"gists_url": "https://api.github.com/users/beyondguo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/beyondguo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/beyondguo/subscriptions",
"organizations_url": "https://api.github.com/users/beyondguo/orgs",
"repos_url": "https://api.github.com/users/beyondguo/repos",
"events_url": "https://api.github.com/users/beyondguo/events{/privacy}",
"received_events_url": "https://api.github.com/users/beyondguo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] |
[
"Hi! One option is use `map` with a function that overwrites the labels (`dset = dset.map(lamba _: {\"label\": 0}, features=dset.features`)). Or you can use the `remove_column` + `add_column` combination (`dset = dset.remove_columns(\"label\").add_column(\"label\", [0]*len(data)).cast(dset.features)`, but note that this approach creates an in-memory table for the added column instead of writing to disk, which could be problematic for large datasets.",
"Hi! I tried your proposed solution, but it does not solve my problem unfortunately. I am working with a set of protein sequences that have been tokenized with ESM, but some sequences are longer than `max_length`, they have been truncated in the tokenization. So now I want to truncate my labels as well, but that does not work with a mapping (e.g. `dset.map` as you suggested). Specifically, what I did was the following:\r\n\r\n```\r\ndef postprocess_tokenize(tokenized_data):\r\n \"\"\"\r\n adjust label lengths if they dont match.\r\n \"\"\"\r\n if len(tokenized_data['input_ids']) < len(tokenized_data['labels']):\r\n new_labels = tokenized_data['labels'][:len(tokenized_data['input_ids'])]\r\n tokenized_data[\"labels\"] = new_labels\r\n return tokenized_data\r\n\r\ntokenized_data = tokenized_data.map(postprocess_tokenize, batched=True) # this does not adjust the labels...\r\n```\r\n\r\nAny tips on how to do this properly?\r\n\r\nMore generally, I am wondering why the DataCollator supports padding but does not support truncation? Seems odd to me.\r\n\r\nThanks in advance!"
] | 1,657,858,677,000
| 1,679,327,441,000
| 1,665,402,818,000
|
NONE
| null | null |

Hi, if I want to change some values in the dataset or add new columns to it, how can I do that?
For example, I want to change all the labels of the SST2 dataset to `0`:
```python
from datasets import load_dataset
data = load_dataset('glue','sst2')
data['train']['label'] = [0]*len(data)
```
I will get the error:
```
TypeError: 'Dataset' object does not support item assignment
```
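For reference, the `map`-based fix from the first comment above, written out as a runnable sketch:
```python
from datasets import load_dataset

data = load_dataset("glue", "sst2")
# Overwrite every label with 0; passing `features=` keeps the original schema.
data["train"] = data["train"].map(
    lambda _: {"label": 0}, features=data["train"].features
)
```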
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4684/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4682
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4682/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4682/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4682/events
|
https://github.com/huggingface/datasets/issues/4682
| 1,304,788,215
|
I_kwDODunzps5NxXz3
| 4,682
|
weird issue/bug with columns (dataset iterable/stream mode)
|
{
"login": "eunseojo",
"id": 12104720,
"node_id": "MDQ6VXNlcjEyMTA0NzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/12104720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eunseojo",
"html_url": "https://github.com/eunseojo",
"followers_url": "https://api.github.com/users/eunseojo/followers",
"following_url": "https://api.github.com/users/eunseojo/following{/other_user}",
"gists_url": "https://api.github.com/users/eunseojo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eunseojo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eunseojo/subscriptions",
"organizations_url": "https://api.github.com/users/eunseojo/orgs",
"repos_url": "https://api.github.com/users/eunseojo/repos",
"events_url": "https://api.github.com/users/eunseojo/events{/privacy}",
"received_events_url": "https://api.github.com/users/eunseojo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false
| null |
[] |
[] | 1,657,805,207,000
| 1,657,805,207,000
| null |
CONTRIBUTOR
| null | null |
I have a dataset online (CloverSearch/cc-news-mutlilingual) that has a bunch of columns, two of which are "score_title_maintext" and "score_title_description". The original files are JSONL-formatted. I was trying to iterate through it in streaming mode and grab all "score_title_description" values, but I kept getting a key-not-found error after a certain point of iteration. I found that some JSON objects in the file don't have "score_title_description", and in SOME cases this returns None while in others it raises a KeyError. Why is there an inconsistency here, and how can I fix it?
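A defensive-access sketch (an assumption, not a fix from this thread) that treats absent keys the same as explicit nulls; the split name and default config are assumptions:
```python
from datasets import load_dataset

# Split name "train" is assumed here.
ds = load_dataset("CloverSearch/cc-news-mutlilingual", split="train", streaming=True)

for example in ds:
    # dict.get returns None when the key is missing, matching the behavior
    # seen for objects that carry an explicit null.
    score = example.get("score_title_description")
```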
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4682/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4681
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4681/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4681/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4681/events
|
https://github.com/huggingface/datasets/issues/4681
| 1,304,617,484
|
I_kwDODunzps5NwuIM
| 4,681
|
IndexError when loading ImageFolder
|
{
"login": "johko",
"id": 2843485,
"node_id": "MDQ6VXNlcjI4NDM0ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2843485?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johko",
"html_url": "https://github.com/johko",
"followers_url": "https://api.github.com/users/johko/followers",
"following_url": "https://api.github.com/users/johko/following{/other_user}",
"gists_url": "https://api.github.com/users/johko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johko/subscriptions",
"organizations_url": "https://api.github.com/users/johko/orgs",
"repos_url": "https://api.github.com/users/johko/repos",
"events_url": "https://api.github.com/users/johko/events{/privacy}",
"received_events_url": "https://api.github.com/users/johko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi, thanks for reporting! If there are no examples in ImageFolder, the `label` column is of type `ClassLabel(names=[])`, which leads to an error in [this line](https://github.com/huggingface/datasets/blob/c15b391942764152f6060b59921b09cacc5f22a6/src/datasets/arrow_writer.py#L387) as `asdict(info)` calls `Features({..., \"label\": {'num_classes': 0, 'names': [], 'id': None, '_type': 'ClassLabel'}})`, which then calls `require_decoding` [here](https://github.com/huggingface/datasets/blob/c15b391942764152f6060b59921b09cacc5f22a6/src/datasets/features/features.py#L1516) on the dict value it does not expect.\r\n\r\nI see two ways to fix this:\r\n* custom `asdict` where `dict_factory` is also applied on the `dict` object itself besides dataclasses (the built-in implementation calls `type(dict_obj)` - this means we also need to fix `Features.to_dict` btw) \r\n* implement `DatasetInfo.to_dict` (though adding `to_dict` to a data class is a bit weird IMO)\r\n\r\n@lhoestq Which one of these approaches do you like more?\r\n",
"Small pref for the first option, it feels weird to know that `Features()` can be called with a dictionary of types defined as dictionaries instead of type instances."
] | 1,657,796,275,000
| 1,658,752,674,000
| 1,658,752,674,000
|
NONE
| null | null |
## Describe the bug
Loading an image dataset with `imagefolder` throws `IndexError: list index out of range` when the given folder contains a non-image file (like a csv).
## Steps to reproduce the bug
Put a csv file in a folder with images and load it:
```python
import datasets
datasets.load_dataset("imagefolder", data_dir=path/to/folder)
```
## Expected results
I would expect a better error message, like `Unsupported file`, or the dataset loader simply ignoring every file that is not an image in this case.
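A possible workaround sketch (not from this thread): pass the image files explicitly via `data_files` so stray non-image files are never scanned; the folder path and extension are placeholders:
```python
import glob

from datasets import load_dataset

# Collect only image files, skipping e.g. a stray CSV in the same folder.
image_files = glob.glob("path/to/folder/**/*.jpg", recursive=True)
ds = load_dataset("imagefolder", data_files=image_files)
```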
## Actual results
Here is the whole traceback:
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.11.0-051100-generic-x86_64-with-glibc2.27
- Python version: 3.9.9
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4681/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4680
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4680/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4680/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4680/events
|
https://github.com/huggingface/datasets/issues/4680
| 1,304,534,770
|
I_kwDODunzps5NwZ7y
| 4,680
|
Dataset Viewer issue for codeparrot/xlcost-text-to-code
|
{
"login": "loubnabnl",
"id": 44069155,
"node_id": "MDQ6VXNlcjQ0MDY5MTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loubnabnl",
"html_url": "https://github.com/loubnabnl",
"followers_url": "https://api.github.com/users/loubnabnl/followers",
"following_url": "https://api.github.com/users/loubnabnl/following{/other_user}",
"gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions",
"organizations_url": "https://api.github.com/users/loubnabnl/orgs",
"repos_url": "https://api.github.com/users/loubnabnl/repos",
"events_url": "https://api.github.com/users/loubnabnl/events{/privacy}",
"received_events_url": "https://api.github.com/users/loubnabnl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"There seems to be an issue with the `C++-snippet-level` config:\r\n\r\n```python\r\n>>> from datasets import get_dataset_split_names\r\n>>> get_dataset_split_names(\"codeparrot/xlcost-text-to-code\", \"C++-snippet-level\")\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 352, in get_dataset_config_info\r\n info.splits = {\r\nTypeError: 'NoneType' object is not iterable\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 404, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 359, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```\r\n\r\nI remove the dataset-viewer tag since it's not directly related.\r\n\r\nPinging @huggingface/datasets ",
"Thanks I found that this subset wasn't properly defined the the config, I fixed it. Now I can see the subsets but I get this error for the viewer\r\n````\r\nStatus code: 400\r\nException: Status400Error\r\nMessage: The split cache is empty.\r\n```",
"Yes, the cache is being refreshed, hopefully, it will work in some minutes for all the splits. Some are already here:\r\n\r\nhttps://huggingface.co/datasets/codeparrot/xlcost-text-to-code/viewer/Python-snippet-level/train\r\n\r\n<img width=\"1533\" alt=\"Capture d’écran 2022-07-18 à 12 04 06\" src=\"https://user-images.githubusercontent.com/1676121/179553933-64d874fa-ada9-4b82-900e-082619523c20.png\">\r\n",
"I think all the splits are working as expected now",
"Perfect, thank you!"
] | 1,657,791,950,000
| 1,658,162,220,000
| 1,658,160,276,000
|
NONE
| null | null |
### Link
https://huggingface.co/datasets/codeparrot/xlcost-text-to-code
### Description
Error
```
Server Error
Status code: 400
Exception: TypeError
Message: 'NoneType' object is not iterable
```
Before I made a minor change to the dataset script (removing some comments), the viewer was working, but not properly: it wasn't showing the dataset subsets. The data can still be loaded successfully.
Thanks!
### Owner
Yes
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4680/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4678
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4678/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4678/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4678/events
|
https://github.com/huggingface/datasets/issues/4678
| 1,303,741,432
|
I_kwDODunzps5NtYP4
| 4,678
|
Can't pass streaming dataset to dataloader after take()
|
{
"login": "zankner",
"id": 39166683,
"node_id": "MDQ6VXNlcjM5MTY2Njgz",
"avatar_url": "https://avatars.githubusercontent.com/u/39166683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zankner",
"html_url": "https://github.com/zankner",
"followers_url": "https://api.github.com/users/zankner/followers",
"following_url": "https://api.github.com/users/zankner/following{/other_user}",
"gists_url": "https://api.github.com/users/zankner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zankner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zankner/subscriptions",
"organizations_url": "https://api.github.com/users/zankner/orgs",
"repos_url": "https://api.github.com/users/zankner/repos",
"events_url": "https://api.github.com/users/zankner/events{/privacy}",
"received_events_url": "https://api.github.com/users/zankner/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] |
[
"Hi! Calling `take` on an iterable/streamable dataset makes it not possible to shard the dataset, which in turn disables multi-process loading (attempts to split the workload over the shards), so to go past this limitation, you can either use single-process loading in `DataLoader` (`num_workers=None`) or fetch the first `50_000/batch_size` batches in the loop."
] | 1,657,733,658,000
| 1,657,804,041,000
| null |
NONE
| null | null |
## Describe the bug
I am trying to pass a streaming version of c4 to a dataloader, but it can't be passed after I call `dataset.take(n)`. Some functions, such as `shuffle()`, can be applied without breaking the dataloader, but `take()` cannot.
## Steps to reproduce the bug
```python
import datasets
import torch
dset = datasets.load_dataset(path='c4', name='en', split="train", streaming=True)
dset = dset.take(50_000)
dset = dset.with_format("torch")
num_workers = 8
batch_size = 512
loader = torch.utils.data.DataLoader(dataset=dset,
batch_size=batch_size,
num_workers=num_workers)
for batch in loader:
...
```
## Expected results
No error thrown when iterating over the dataloader
## Actual results
```
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch
    data.append(next(self.dataset_iter))
  File "/root/.local/lib/python3.9/site-packages/datasets/formatting/dataset_wrappers/torch_iterable_dataset.py", line 48, in __iter__
    for key, example in self._iter_shard(shard_idx):
  File "/root/.local/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 586, in _iter_shard
    yield from ex_iterable.shard_data_sources(shard_idx)
  File "/root/.local/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 60, in shard_data_sources
    raise NotImplementedError(f"{type(self)} doesn't implement shard_data_sources yet")
NotImplementedError: <class 'datasets.iterable_dataset.TakeExamplesIterable'> doesn't implement shard_data_sources yet
```
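Based on the maintainer's comment above, a single-process workaround sketch that emulates `take(50_000)` by stopping after enough batches:
```python
import datasets
import torch

dset = datasets.load_dataset(path="c4", name="en", split="train", streaming=True)
dset = dset.with_format("torch")

batch_size = 512
# num_workers is left at its default (0): multi-process loading is exactly
# what take() breaks, per the comment above.
loader = torch.utils.data.DataLoader(dataset=dset, batch_size=batch_size)

max_batches = 50_000 // batch_size
for i, batch in enumerate(loader):
    if i >= max_batches:
        break
    # ... training step on `batch` ...
```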
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.31
- Python version: 3.9.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4678/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/datasets/issues/4678/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4677
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4677/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4677/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4677/events
|
https://github.com/huggingface/datasets/issues/4677
| 1,302,258,440
|
I_kwDODunzps5NnuMI
| 4,677
|
Random 400 Client Error when pushing dataset
|
{
"login": "msis",
"id": 577139,
"node_id": "MDQ6VXNlcjU3NzEzOQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/577139?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/msis",
"html_url": "https://github.com/msis",
"followers_url": "https://api.github.com/users/msis/followers",
"following_url": "https://api.github.com/users/msis/following{/other_user}",
"gists_url": "https://api.github.com/users/msis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/msis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/msis/subscriptions",
"organizations_url": "https://api.github.com/users/msis/orgs",
"repos_url": "https://api.github.com/users/msis/repos",
"events_url": "https://api.github.com/users/msis/events{/privacy}",
"received_events_url": "https://api.github.com/users/msis/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] |
[
"did you ever fix this? I'm experiencing the same",
"I am having the same issue. Even the simple example from the documentation gives me the 400 Error\r\n\r\n\r\n> from datasets import load_dataset\r\n> \r\n> dataset = load_dataset(\"stevhliu/demo\")\r\n> dataset.push_to_hub(\"processed_demo\")\r\n\r\n\r\n`requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://huggingface.co/api/datasets/REDACTED/commit/main (Request ID: e-tPnYTiCdB5KPmSL86dQ)`\r\n\r\nI \"fixed\" it by initializing a new virtual environment with only datasets==2.5.2 installed.\r\n\r\nThe workaround consists of saving to disk then loading from disk and pushing to hub but from the new clean virtual environment."
] | 1,657,641,404,000
| 1,675,778,050,000
| 1,675,778,050,000
|
NONE
| null | null |
## Describe the bug
When pushing a dataset, the client errors randomly with `Bad Request for url:...`.
At the next call, a new parquet file is created for each shard.
The client may fail at any random shard.
## Steps to reproduce the bug
```python
dataset.push_to_hub("ORG/DATASET", private=True, branch="main")
```
## Expected results
Push the whole dataset to the Hub with no duplicates.
If a shard fails, the client should either retry or fail, and on the next call continue from the last failed shard.
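A sketch of the save-then-reload workaround reported in the comments above; the local path and the fresh-environment step are assumptions spelled out from that comment:
```python
from datasets import load_from_disk

# Step 1, in the original environment:
#   dataset.save_to_disk("local_copy")
# Step 2, in a fresh environment with only `datasets` installed:
dataset = load_from_disk("local_copy")
dataset.push_to_hub("ORG/DATASET", private=True, branch="main")
```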
## Actual results
```
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
testing.ipynb Cell 29 in <cell line: 1>()
----> [1](testing.ipynb?line=0) dataset.push_to_hub("ORG/DATASET", private=True, branch="main")
File ~/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py:4297, in Dataset.push_to_hub(self, repo_id, split, private, token, branch, max_shard_size, shard_size, embed_external_files)
4291 warnings.warn(
4292 "'shard_size' was renamed to 'max_shard_size' in version 2.1.1 and will be removed in 2.4.0.",
4293 FutureWarning,
4294 )
4295 max_shard_size = shard_size
-> 4297 repo_id, split, uploaded_size, dataset_nbytes, repo_files, deleted_size = self._push_parquet_shards_to_hub(
4298 repo_id=repo_id,
4299 split=split,
4300 private=private,
4301 token=token,
4302 branch=branch,
4303 max_shard_size=max_shard_size,
4304 embed_external_files=embed_external_files,
4305 )
4306 organization, dataset_name = repo_id.split("/")
4307 info_to_dump = self.info.copy()
File ~/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py:4195, in Dataset._push_parquet_shards_to_hub(self, repo_id, split, private, token, branch, max_shard_size, embed_external_files)
4193 shard.to_parquet(buffer)
4194 uploaded_size += buffer.tell()
-> 4195 _retry(
4196 api.upload_file,
4197 func_kwargs=dict(
4198 path_or_fileobj=buffer.getvalue(),
4199 path_in_repo=shard_path_in_repo,
4200 repo_id=repo_id,
4201 token=token,
4202 repo_type="dataset",
4203 revision=branch,
4204 identical_ok=False,
4205 ),
4206 exceptions=HTTPError,
4207 status_codes=[504],
4208 base_wait_time=2.0,
4209 max_retries=5,
4210 max_wait_time=20.0,
4211 )
4212 shards_path_in_repo.append(shard_path_in_repo)
4214 # Cleanup to remove unused files
File ~/.local/lib/python3.9/site-packages/datasets/utils/file_utils.py:284, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)
282 except exceptions as err:
283 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):
--> 284 raise err
285 else:
286 sleep_time = min(max_wait_time, base_wait_time * 2**retry) # Exponential backoff
File ~/.local/lib/python3.9/site-packages/datasets/utils/file_utils.py:281, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)
279 while True:
280 try:
--> 281 return func(*func_args, **func_kwargs)
282 except exceptions as err:
283 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):
File ~/.local/lib/python3.9/site-packages/huggingface_hub/hf_api.py:1967, in HfApi.upload_file(self, path_or_fileobj, path_in_repo, repo_id, token, repo_type, revision, identical_ok, commit_message, commit_description, create_pr)
1957 commit_message = (
1958 commit_message
1959 if commit_message is not None
1960 else f"Upload {path_in_repo} with huggingface_hub"
1961 )
1962 operation = CommitOperationAdd(
1963 path_or_fileobj=path_or_fileobj,
1964 path_in_repo=path_in_repo,
1965 )
-> 1967 pr_url = self.create_commit(
1968 repo_id=repo_id,
1969 repo_type=repo_type,
1970 operations=[operation],
1971 commit_message=commit_message,
1972 commit_description=commit_description,
1973 token=token,
1974 revision=revision,
1975 create_pr=create_pr,
1976 )
1977 if pr_url is not None:
1978 re_match = re.match(REGEX_DISCUSSION_URL, pr_url)
File ~/.local/lib/python3.9/site-packages/huggingface_hub/hf_api.py:1844, in HfApi.create_commit(self, repo_id, operations, commit_message, commit_description, token, repo_type, revision, create_pr, num_threads)
1836 commit_url = f"{self.endpoint}/api/{repo_type}s/{repo_id}/commit/{revision}"
1838 commit_resp = requests.post(
1839 url=commit_url,
1840 headers={"Authorization": f"Bearer {token}"},
1841 json=commit_payload,
1842 params={"create_pr": 1} if create_pr else None,
1843 )
-> 1844 _raise_for_status(commit_resp)
1845 return commit_resp.json().get("pullRequestUrl", None)
File ~/.local/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py:84, in _raise_for_status(request)
76 if request.status_code == 401:
77 # The repo was not found and the user is not Authenticated
78 raise RepositoryNotFoundError(
79 f"401 Client Error: Repository Not Found for url: {request.url}. If the"
80 " repo is private, make sure you are authenticated. (Request ID:"
81 f" {request_id})"
82 )
---> 84 _raise_with_request_id(request)
File ~/.local/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py:95, in _raise_with_request_id(request)
92 if request_id is not None and len(e.args) > 0 and isinstance(e.args[0], str):
93 e.args = (e.args[0] + f" (Request ID: {request_id})",) + e.args[1:]
---> 95 raise e
File ~/.local/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py:90, in _raise_with_request_id(request)
88 request_id = request.headers.get("X-Request-Id")
89 try:
---> 90 request.raise_for_status()
91 except Exception as e:
92 if request_id is not None and len(e.args) > 0 and isinstance(e.args[0], str):
File ~/.local/lib/python3.9/site-packages/requests/models.py:1021, in Response.raise_for_status(self)
1016 http_error_msg = (
1017 f"{self.status_code} Server Error: {reason} for url: {self.url}"
1018 )
1020 if http_error_msg:
-> 1021 raise HTTPError(http_error_msg, response=self)
HTTPError: 400 Client Error: Bad Request for url: https://huggingface.co/api/datasets/ORG/DATASET/commit/main (Request ID: a_F0IQAHJdxGKVRYyu1cF)
```
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.13.0-1025-aws-x86_64-with-glibc2.31
- Python version: 3.9.4
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4677/timeline
|
not_planned
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4676
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4676/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4676/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4676/events
|
https://github.com/huggingface/datasets/issues/4676
| 1,302,202,028
|
I_kwDODunzps5Nngas
| 4,676
|
Dataset.map gets stuck on _cast_to_python_objects
|
{
"login": "srobertjames",
"id": 662612,
"node_id": "MDQ6VXNlcjY2MjYxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/662612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srobertjames",
"html_url": "https://github.com/srobertjames",
"followers_url": "https://api.github.com/users/srobertjames/followers",
"following_url": "https://api.github.com/users/srobertjames/following{/other_user}",
"gists_url": "https://api.github.com/users/srobertjames/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srobertjames/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srobertjames/subscriptions",
"organizations_url": "https://api.github.com/users/srobertjames/orgs",
"repos_url": "https://api.github.com/users/srobertjames/repos",
"events_url": "https://api.github.com/users/srobertjames/events{/privacy}",
"received_events_url": "https://api.github.com/users/srobertjames/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false
|
{
"login": "szmoro",
"id": 5697926,
"node_id": "MDQ6VXNlcjU2OTc5MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5697926?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/szmoro",
"html_url": "https://github.com/szmoro",
"followers_url": "https://api.github.com/users/szmoro/followers",
"following_url": "https://api.github.com/users/szmoro/following{/other_user}",
"gists_url": "https://api.github.com/users/szmoro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/szmoro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/szmoro/subscriptions",
"organizations_url": "https://api.github.com/users/szmoro/orgs",
"repos_url": "https://api.github.com/users/szmoro/repos",
"events_url": "https://api.github.com/users/szmoro/events{/privacy}",
"received_events_url": "https://api.github.com/users/szmoro/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "szmoro",
"id": 5697926,
"node_id": "MDQ6VXNlcjU2OTc5MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5697926?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/szmoro",
"html_url": "https://github.com/szmoro",
"followers_url": "https://api.github.com/users/szmoro/followers",
"following_url": "https://api.github.com/users/szmoro/following{/other_user}",
"gists_url": "https://api.github.com/users/szmoro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/szmoro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/szmoro/subscriptions",
"organizations_url": "https://api.github.com/users/szmoro/orgs",
"repos_url": "https://api.github.com/users/szmoro/repos",
"events_url": "https://api.github.com/users/szmoro/events{/privacy}",
"received_events_url": "https://api.github.com/users/szmoro/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Are you able to reproduce this? My example is small enough that it should be easy to try.",
"Hi! Thanks for reporting and providing a reproducible example. Indeed, by default, `datasets` performs an expensive cast on the values returned by `map` to convert them to one of the types supported by PyArrow (the underlying storage format used by `datasets`). This cast is not needed on NumPy arrays as PyArrow supports them natively, so one way to make this transform faster is to add `return_tensors=\"np\"` to the tokenizer call. \r\n\r\nI think we should mention this in the docs (cc @stevhliu)",
"I tested this tokenize function and indeed noticed a casting. However it seems to only concerns the `offset_mapping` field, which contains a list of tuples, that is converted to a list of lists. Since `pyarrow` also supports tuples, we actually don't need to convert the tuples to lists. \r\n\r\nI think this can be changed here: \r\n\r\nhttps://github.com/huggingface/datasets/blob/ede72d3f9796339701ec59899c7c31d2427046fb/src/datasets/features/features.py#L382-L383\r\n\r\n```diff\r\n- if isinstance(obj, list): \r\n+ if isinstance(obj, (list, tuple)): \r\n```\r\n\r\nand here: \r\n\r\nhttps://github.com/huggingface/datasets/blob/ede72d3f9796339701ec59899c7c31d2427046fb/src/datasets/features/features.py#L386-L387\r\n\r\n```diff\r\n- return obj if isinstance(obj, list) else [], isinstance(obj, tuple)\r\n+ return obj, False\r\n```\r\n\r\n@srobertjames can you try applying these changes and let us know if it helps ? If so, feel free to open a Pull Request to contribute this improvement if you want :)",
"Wow, adding `return_tensors=\"np\"` sped up my example by a **factor 17x** of and completely eliminated the casting! I'd recommend not only to document it, but to make that the default.\r\n\r\nThe code at https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb does not specify `return_tensors=\"np\"` but yet avoids the casting penalty. How does it do that? (The ntbk seems to do `return_overflowing_tokens=True, return_offsets_mapping=True,`).\r\n\r\nAlso, surprisingly enough, using `return_tensors=\"pt\"` (which is my eventual application) yields this error:\r\n```\r\nTypeError: Provided `function` which is applied to all elements of table returns a `dict` of types \r\n[<class 'torch.Tensor'>, <class 'torch.Tensor'>, <class 'torch.Tensor'>, <class 'torch.Tensor'>]. \r\nWhen using `batched=True`, make sure provided `function` returns a `dict` of types like \r\n`(<class 'list'>, <class 'numpy.ndarray'>)`.\r\n```",
"Setting the output to `\"np\"` makes the whole pipeline fast because it moves the data buffers from rust to python to arrow using zero-copy, and also because it does eliminate the casting completely ;)\r\n\r\nHave you had a chance to try eliminating the tuple casting using the trick above ?",
"@lhoestq I just benchmarked the two edits to `features.py` above, and they appear to solve the problem, bringing my original example to within 20% the speed of the output `\"np\"` example. Nice!\r\n\r\nFor a pull request, do you suggest simply following https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md ?",
"Cool ! Sure feel free to follow these instructions to open a PR :) thanks !",
"#take",
"Resolved via https://github.com/huggingface/datasets/pull/4993."
] | 1,657,638,598,000
| 1,664,802,064,000
| 1,664,802,063,000
|
NONE
| null | null |
## Describe the bug
`Dataset.map`, when fed a Hugging Face tokenizer as its map function, can sometimes spend huge amounts of time doing casts. A minimal example follows.
Not all usages suffer from this. For example, I profiled the preprocessor at https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb , and it did _not_ have this problem. However, I'm at a loss to figure out how it avoids it, as the example below is simple and minimal and still has this problem.
This casting, where it occurs, causes `Dataset.map` to run approximately 7x slower than equivalent code that does not trigger it.
This may be related to https://github.com/huggingface/datasets/issues/1046 . However, the tokenizer is _not_ set to return Tensors.
## Steps to reproduce the bug
A minimal, self-contained example to reproduce is below:
```python
import transformers
from transformers import AutoTokenizer
from datasets import load_dataset
import torch
import cProfile
pretrained = 'distilbert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(pretrained)
squad = load_dataset('squad')
squad_train = squad['train']
squad_tiny = squad_train.select(range(5000))
assert isinstance(tokenizer, transformers.PreTrainedTokenizerFast)
def tokenize(ds):
tokens = tokenizer(text=ds['question'],
text_pair=ds['context'],
add_special_tokens=True,
padding='max_length',
truncation='only_second',
max_length=160,
stride=32,
return_overflowing_tokens=True,
return_offsets_mapping=True,
)
return tokens
cmd = 'squad_tiny.map(tokenize, batched=True, remove_columns=squad_tiny.column_names)'
cProfile.run(cmd, sort='tottime')
```
## Actual results
The code works, but takes 10-25 sec per batch (about 7x slower than non-casting code), with the following profile. Note that `_cast_to_python_objects` is the culprit.
```
63524075 function calls (58206482 primitive calls) in 121.836 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
5274034/40 68.751 0.000 111.060 2.776 features.py:262(_cast_to_python_objects)
42223832 24.077 0.000 33.310 0.000 {built-in method builtins.isinstance}
16338/20 5.121 0.000 111.053 5.553 features.py:361(<listcomp>)
5274135 4.747 0.000 4.749 0.000 {built-in method _abc._abc_instancecheck}
80/40 4.731 0.059 116.292 2.907 {pyarrow.lib.array}
5274135 4.485 0.000 9.234 0.000 abc.py:96(__instancecheck__)
2661564/2645196 2.959 0.000 4.298 0.000 features.py:1081(_check_non_null_non_empty_recursive)
5 2.786 0.557 2.786 0.557 {method 'encode_batch' of 'tokenizers.Tokenizer' objects}
2668052 0.930 0.000 0.930 0.000 {built-in method builtins.len}
5000 0.930 0.000 0.938 0.000 tokenization_utils_fast.py:187(_convert_encoding)
5 0.750 0.150 0.808 0.162 {method 'to_pydict' of 'pyarrow.lib.Table' objects}
1 0.444 0.444 121.749 121.749 arrow_dataset.py:2501(_map_single)
40 0.375 0.009 116.291 2.907 arrow_writer.py:151(__arrow_array__)
10 0.066 0.007 0.066 0.007 {method 'write_batch' of 'pyarrow.lib._CRecordBatchWriter' objects}
1 0.060 0.060 121.835 121.835 fingerprint.py:409(wrapper)
11387/5715 0.049 0.000 0.175 0.000 {built-in method builtins.getattr}
36 0.049 0.001 0.049 0.001 {pyarrow._compute.call_function}
15000 0.040 0.000 0.040 0.000 _collections_abc.py:719(__iter__)
3 0.023 0.008 0.023 0.008 {built-in method _imp.create_dynamic}
77 0.020 0.000 0.020 0.000 {built-in method builtins.dir}
37 0.019 0.001 0.019 0.001 socket.py:543(send)
15 0.017 0.001 0.017 0.001 tokenization_utils_fast.py:460(<listcomp>)
432/421 0.015 0.000 0.024 0.000 traitlets.py:1388(_notify_observers)
5000 0.015 0.000 0.018 0.000 _collections_abc.py:672(keys)
51 0.014 0.000 0.042 0.001 traitlets.py:276(getmembers)
5 0.014 0.003 3.775 0.755 tokenization_utils_fast.py:392(_batch_encode_plus)
3/1 0.014 0.005 0.035 0.035 {built-in method _imp.exec_dynamic}
5 0.012 0.002 0.950 0.190 tokenization_utils_fast.py:438(<listcomp>)
31626 0.012 0.000 0.012 0.000 {method 'append' of 'list' objects}
1532/1001 0.011 0.000 0.189 0.000 traitlets.py:643(get)
5 0.009 0.002 3.796 0.759 arrow_dataset.py:2631(apply_function_on_filtered_inputs)
51 0.009 0.000 0.062 0.001 traitlets.py:1766(traits)
5 0.008 0.002 3.784 0.757 tokenization_utils_base.py:2632(batch_encode_plus)
368 0.007 0.000 0.044 0.000 traitlets.py:1715(_get_trait_default_generator)
26 0.007 0.000 0.022 0.001 traitlets.py:1186(setup_instance)
51 0.006 0.000 0.010 0.000 traitlets.py:1781(<listcomp>)
80/32 0.006 0.000 0.052 0.002 table.py:1758(cast_array_to_feature)
684 0.006 0.000 0.007 0.000 {method 'items' of 'dict' objects}
4344/1794 0.006 0.000 0.192 0.000 traitlets.py:675(__get__)
...
```
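For reference, the `return_tensors="np"` workaround from the discussion above eliminates the cast (the reporter measured a ~17x speedup). A minimal sketch of the adjusted map function, reusing the same setup as the reproduction script:
```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')
squad_tiny = load_dataset('squad', split='train').select(range(5000))

def tokenize_np(ds):
    # Same arguments as tokenize() above, plus return_tensors='np':
    # NumPy arrays are handed to PyArrow without the expensive
    # _cast_to_python_objects pass.
    return tokenizer(text=ds['question'],
                     text_pair=ds['context'],
                     add_special_tokens=True,
                     padding='max_length',
                     truncation='only_second',
                     max_length=160,
                     stride=32,
                     return_overflowing_tokens=True,
                     return_offsets_mapping=True,
                     return_tensors='np')

squad_tiny.map(tokenize_np, batched=True, remove_columns=squad_tiny.column_names)
```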
## Environment info
I observed this on both Google Colab and my local workstation:
### Google Colab
- `datasets` version: 2.3.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
### Local
- `datasets` version: 2.3.2
- Platform: Windows-7-6.1.7601-SP1
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4676/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4675
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4675/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4675/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4675/events
|
https://github.com/huggingface/datasets/issues/4675
| 1,302,193,649
|
I_kwDODunzps5NneXx
| 4,675
|
Unable to use dataset with PyTorch dataloader
|
{
"login": "BlueskyFR",
"id": 25421460,
"node_id": "MDQ6VXNlcjI1NDIxNDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/25421460?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BlueskyFR",
"html_url": "https://github.com/BlueskyFR",
"followers_url": "https://api.github.com/users/BlueskyFR/followers",
"following_url": "https://api.github.com/users/BlueskyFR/following{/other_user}",
"gists_url": "https://api.github.com/users/BlueskyFR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BlueskyFR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BlueskyFR/subscriptions",
"organizations_url": "https://api.github.com/users/BlueskyFR/orgs",
"repos_url": "https://api.github.com/users/BlueskyFR/repos",
"events_url": "https://api.github.com/users/BlueskyFR/events{/privacy}",
"received_events_url": "https://api.github.com/users/BlueskyFR/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] |
[
"Hi! `para_crawl` has a single column of type `Translation`, which stores translation dictionaries. These dictionaries can be stored in a NumPy array but not in a PyTorch tensor since PyTorch only supports numeric types. In `datasets`, the conversion to `torch` works as follows: \r\n1. convert PyArrow table to NumPy arrays \r\n2. convert NumPy arrays to Torch tensors. \r\n\r\nThe 2nd step is problematic for your case as `datasets` attempts to convert the array of dictionaries to a PyTorch tensor. One way to fix this is to use the [preprocessing logic](https://github.com/huggingface/transformers/blob/8581a798c0a48fca07b29ce2ca2ef55adcae8c7e/examples/pytorch/translation/run_translation.py#L440-L458) from the Transformers translation script. And on our side, I think we can replace a NumPy array of dicts with a dict of NumPy array if the feature type is `Translation`/`TranslationVariableLanguages` (one array for each language) to get the official PyTorch error message for strings in such case."
] | 1,657,638,244,000
| 1,657,808,266,000
| null |
NONE
| null | null |
## Describe the bug
When using `.with_format("torch")`, the returned dataset cannot be consumed by a PyTorch `DataLoader`: please see the code below.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
ds = load_dataset(
"para_crawl",
name="enfr",
cache_dir="/tmp/test/",
split="train",
keep_in_memory=True,
)
dataloader = DataLoader(ds.with_format("torch"), num_workers=32)
print(next(iter(dataloader)))
```
Is there something I am doing wrong? The documentation does not say much about the behavior of `.with_format()` so I feel like I am a bit stuck here :-/
Thanks in advance for your help!
## Expected results
The code should run with no error
## Actual results
```
AttributeError: 'str' object has no attribute 'dtype'
```
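For context, the fix suggested in the comment above is to preprocess the `Translation` column into numeric features before switching to the `torch` format, along the lines of the Transformers translation example. A minimal sketch, assuming a seq2seq tokenizer is appropriate for the task (the model name here is purely illustrative):
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")  # hypothetical choice

ds = load_dataset("para_crawl", name="enfr", split="train")

def preprocess(batch):
    # Each row of "translation" is a dict like {"en": ..., "fr": ...};
    # tokenizing turns the strings into numeric columns PyTorch can handle.
    sources = [pair["en"] for pair in batch["translation"]]
    targets = [pair["fr"] for pair in batch["translation"]]
    model_inputs = tokenizer(sources, truncation=True, padding="max_length", max_length=64)
    labels = tokenizer(targets, truncation=True, padding="max_length", max_length=64)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

ds = ds.map(preprocess, batched=True, remove_columns=ds.column_names)
dataloader = DataLoader(ds.with_format("torch"), batch_size=8)
print(next(iter(dataloader)))
```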
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-4.18.0-348.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.4
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4675/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4674
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4674/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4674/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4674/events
|
https://github.com/huggingface/datasets/issues/4674
| 1,301,294,844
|
I_kwDODunzps5NkC78
| 4,674
|
Issue loading datasets -- pyarrow.lib has no attribute
|
{
"login": "margotwagner",
"id": 39107794,
"node_id": "MDQ6VXNlcjM5MTA3Nzk0",
"avatar_url": "https://avatars.githubusercontent.com/u/39107794?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/margotwagner",
"html_url": "https://github.com/margotwagner",
"followers_url": "https://api.github.com/users/margotwagner/followers",
"following_url": "https://api.github.com/users/margotwagner/following{/other_user}",
"gists_url": "https://api.github.com/users/margotwagner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/margotwagner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/margotwagner/subscriptions",
"organizations_url": "https://api.github.com/users/margotwagner/orgs",
"repos_url": "https://api.github.com/users/margotwagner/repos",
"events_url": "https://api.github.com/users/margotwagner/events{/privacy}",
"received_events_url": "https://api.github.com/users/margotwagner/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] |
[
"Hi @margotwagner, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce your bug: in an environment with datasets-2.3.2 and pyarrow-8.0.0, I can load the datasets without any problem:\r\n```python\r\n>>> ds = load_dataset(\"glue\", \"cola\")\r\n>>> ds\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 8551\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1043\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1063\r\n })\r\n})\r\n\r\n>>> import pyarrow\r\n>>> pyarrow.__version__\r\n8.0.0\r\n>>> from pyarrow.lib import IpcReadOptions\r\n>>> IpcReadOptions\r\npyarrow.lib.IpcReadOptions\r\n```\r\n\r\nI think you may have a problem in your Python environment: maybe you have also an old version of pyarrow that has precedence when importing it.\r\n\r\nCould you please check this (just after you tried to load the dataset and got the error)?\r\n```python\r\n>>> import pyarrow\r\n>>> pyarrow.__version__\r\n``` "
] | 1,657,577,444,000
| 1,677,607,615,000
| 1,677,607,615,000
|
NONE
| null | null |
## Describe the bug
I am trying to load sentiment analysis datasets from the Hugging Face Hub, but for any dataset I try to load via `load_dataset`, I get the same error:
`AttributeError: module 'pyarrow.lib' has no attribute 'IpcReadOptions'`
## Steps to reproduce the bug
```python
dataset = load_dataset("glue", "cola")
```
## Expected results
Download datasets without issue.
## Actual results
`AttributeError: module 'pyarrow.lib' has no attribute 'IpcReadOptions'`
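As suggested in the comment above, a quick way to check whether a stale PyArrow installation is shadowing the expected one is a diagnostic sketch like this:
```python
import pyarrow

# If this prints a version older than the one you installed, another
# pyarrow on sys.path is taking precedence.
print(pyarrow.__version__)
print(pyarrow.__file__)

from pyarrow.lib import IpcReadOptions  # fails on old pyarrow builds
```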
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.5
- PyArrow version: 8.0.0
- Pandas version: 1.1.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4674/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4673
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4673/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4673/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4673/events
|
https://github.com/huggingface/datasets/issues/4673
| 1,301,010,331
|
I_kwDODunzps5Ni9eb
| 4,673
|
load_datasets on csv returns everything as a string
|
{
"login": "courtneysprouse",
"id": 25102613,
"node_id": "MDQ6VXNlcjI1MTAyNjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/25102613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/courtneysprouse",
"html_url": "https://github.com/courtneysprouse",
"followers_url": "https://api.github.com/users/courtneysprouse/followers",
"following_url": "https://api.github.com/users/courtneysprouse/following{/other_user}",
"gists_url": "https://api.github.com/users/courtneysprouse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/courtneysprouse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/courtneysprouse/subscriptions",
"organizations_url": "https://api.github.com/users/courtneysprouse/orgs",
"repos_url": "https://api.github.com/users/courtneysprouse/repos",
"events_url": "https://api.github.com/users/courtneysprouse/events{/privacy}",
"received_events_url": "https://api.github.com/users/courtneysprouse/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] |
[
"Hi @courtneysprouse, thanks for reporting.\r\n\r\nYes, you are right: by default the \"csv\" loader loads all columns as strings. \r\n\r\nYou could tweak this behavior by passing the `feature` argument to `load_dataset`, but it is also true that currently it is not possible to perform some kind of casts, due to lacking of implementation in PyArrow. For example:\r\n```python\r\nimport datasets\r\n\r\nfeatures = datasets.Features(\r\n {\r\n \"tokens\": datasets.Sequence(datasets.Value(\"string\")),\r\n \"ner_tags\": datasets.Sequence(datasets.Value(\"int32\")),\r\n }\r\n)\r\n\r\nnew_conll = datasets.load_dataset(\"csv\", data_files=\"ner_conll.csv\", features=features)\r\n```\r\ngives `ArrowNotImplementedError` error:\r\n```\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowNotImplementedError: Unsupported cast from string to list using function cast_list\r\n```\r\n\r\nOn the other hand, if you just would like to save and afterwards load your dataset, you could use `save_to_disk` and `load_from_disk` instead. These functions preserve all data types.\r\n```python\r\n>>> orig_conll.save_to_disk(\"ner_conll\")\r\n\r\n>>> from datasets import load_from_disk\r\n\r\n>>> new_conll = load_from_disk(\"ner_conll\")\r\n>>> new_conll\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n num_rows: 14042\r\n })\r\n validation: Dataset({\r\n features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n num_rows: 3251\r\n })\r\n test: Dataset({\r\n features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n num_rows: 3454\r\n })\r\n})\r\n>>> new_conll[\"train\"][0]\r\n{'chunk_tags': [11, 21, 11, 12, 21, 22, 11, 12, 0],\r\n 'id': '0',\r\n 'ner_tags': [3, 0, 7, 0, 0, 0, 7, 0, 0],\r\n 'pos_tags': [22, 42, 16, 21, 35, 37, 16, 21, 7],\r\n 'tokens': ['EU',\r\n 'rejects',\r\n 'German',\r\n 'call',\r\n 'to',\r\n 'boycott',\r\n 'British',\r\n 'lamb',\r\n '.']}\r\n>>> new_conll[\"train\"].features\r\n{'chunk_tags': Sequence(feature=ClassLabel(num_classes=23, names=['O', 'B-ADJP', 'I-ADJP', 'B-ADVP', 'I-ADVP', 'B-CONJP', 'I-CONJP', 'B-INTJ', 'I-INTJ', 'B-LST', 'I-LST', 'B-NP', 'I-NP', 'B-PP', 'I-PP', 'B-PRT', 'I-PRT', 'B-SBAR', 'I-SBAR', 'B-UCP', 'I-UCP', 'B-VP', 'I-VP'], id=None), length=-1, id=None),\r\n 'id': Value(dtype='string', id=None),\r\n 'ner_tags': Sequence(feature=ClassLabel(num_classes=9, names=['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC'], id=None), length=-1, id=None),\r\n 'pos_tags': Sequence(feature=ClassLabel(num_classes=47, names=['\"', \"''\", '#', '$', '(', ')', ',', '.', ':', '``', 'CC', 'CD', 'DT', 'EX', 'FW', 'IN', 'JJ', 'JJR', 'JJS', 'LS', 'MD', 'NN', 'NNP', 'NNPS', 'NNS', 'NN|SYM', 'PDT', 'POS', 'PRP', 'PRP$', 'RB', 'RBR', 'RBS', 'RP', 'SYM', 'TO', 'UH', 'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ', 'WDT', 'WP', 'WP$', 'WRB'], id=None), length=-1, id=None),\r\n 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}\r\n```",
"Hi @albertvillanova!\r\n\r\nThanks so much for your suggestions! That worked! "
] | 1,657,560,624,000
| 1,657,632,789,000
| 1,657,632,788,000
|
NONE
| null | null |
## Describe the bug
If you use:
`conll_dataset.to_csv("ner_conll.csv")`
It will create a CSV file with all of your data as expected; however, when you load it with:
`conll_dataset = load_dataset("csv", data_files="ner_conll.csv")`
everything is read in as a string. For example, if I look at everything in 'ner_tags' I get back `['[3 0 7 0 0 0 7 0 0]', '[1 2]', '[5 0]']` instead of what I originally saved, which was `[[3, 0, 7, 0, 0, 0, 7, 0, 0], [1, 2], [5, 0]]`.
I think there may be something funky going on with the CSV delimiter.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
#load original conll dataset
orig_conll = load_dataset("conll2003")
#save original conll as a csv
orig_conll.to_csv("ner_conll.csv")
#reload conll data as a csv
new_conll = load_dataset("csv", data_files="ner_conll.csv")
```
## Expected results
I would expect the data to be returned as the data type I saved it in, i.e. if I save a list of ints
`[[3, 0, 7, 0, 0, 0, 7, 0, 0]]`, I shouldn't get back a string `['[3 0 7 0 0 0 7 0 0]']`.
I also get back a string when I pass a list of strings `['EU', 'rejects', 'German', 'call', 'to', 'boycott', 'British', 'lamb', '.']`
## Actual results
A list of strings `['[3 0 7 0 0 0 7 0 0]', '[1 2]', '[5 0]']`
A string "['EU' 'rejects' 'German' 'call' 'to' 'boycott' 'British' 'lamb' '.']"
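For reference, the round-trip suggested in the comment above, which preserves the original feature types (unlike the CSV round-trip), looks roughly like this:
```python
from datasets import load_dataset, load_from_disk

orig_conll = load_dataset("conll2003")

# save_to_disk writes Arrow files, which keep the full type information:
# sequences of ints stay sequences of ints instead of being stringified.
orig_conll.save_to_disk("ner_conll")
new_conll = load_from_disk("ner_conll")
print(new_conll["train"][0]["ner_tags"])  # [3, 0, 7, 0, 0, 0, 7, 0, 0]
```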
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyArrow version: 8.0.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4673/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4671
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4671/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4671/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4671/events
|
https://github.com/huggingface/datasets/issues/4671
| 1,300,385,909
|
I_kwDODunzps5NglB1
| 4,671
|
Dataset Viewer issue for wmt16
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thanks for reporting, @lewtun.\r\n\r\n~We can't load the dataset locally, so I think this is an issue with the loading script (not the viewer).~\r\n\r\n We are investigating...",
"Recently, there was a merged PR related to this dataset:\r\n- #4554\r\n\r\nWe are looking at this...",
"Indeed, the above mentioned PR fixed the loading script (it was not working before).\r\n\r\nI'm forcing the refresh of the Viewer.",
"Please note that the above mentioned PR also made an enhancement in the `datasets` library, required by this loading script. This enhancement will only be available to the Viewer once we make our next release.",
"OK, it's working now.\r\n\r\nhttps://huggingface.co/datasets/wmt16/viewer/ro-en/test\r\n\r\n<img width=\"1434\" alt=\"Capture d’écran 2022-09-08 à 10 15 55\" src=\"https://user-images.githubusercontent.com/1676121/189071665-17d2d149-9b22-42bf-93ac-1a966c3f637a.png\">\r\n",
"Thank you @severo !!"
] | 1,657,528,451,000
| 1,663,075,622,000
| 1,662,624,966,000
|
MEMBER
| null | null |
### Link
https://huggingface.co/datasets/wmt16
### Description
[Reported](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/12#62cb83f14c7f35284e796f9c) by a user of AutoTrain Evaluate. AFAIK this dataset was working 1-2 weeks ago, and I'm not sure how to interpret this error.
```
Status code: 400
Exception: NotImplementedError
Message: This is a abstract method
```
Thanks!
### Owner
No
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4671/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/datasets/issues/4671/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4670
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4670/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4670/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4670/events
|
https://github.com/huggingface/datasets/issues/4670
| 1,299,984,246
|
I_kwDODunzps5NfC92
| 4,670
|
Can't extract files from `.7z` zipfile using `download_and_extract`
|
{
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] |
[
"Hi @bhavitvyamalik, thanks for reporting.\r\n\r\nYes, currently we do not support 7zip archive compression: I think we should.\r\n\r\nAs a workaround, you could uncompress it explicitly, like done in e.g. `samsum` dataset: \r\n\r\nhttps://github.com/huggingface/datasets/blob/fedf891a08bfc77041d575fad6c26091bc0fce52/datasets/samsum/samsum.py#L106-L110\r\n",
"Related to this issue: https://github.com/huggingface/datasets/issues/3541",
"Sure, let me look into and check what can be done. Will keep you guys updated here!",
"Initially, I thought of solving this without any external dependency. Almost everywhere I saw `lzma` can be used for this but there is a caveat that lzma doesn’t work with 7z archives but only single files. In my case the 7z archive has multiple files so it didn't work. Is it fine to use external library here?",
"Hi @bhavitvyamalik, thanks for your investigation.\r\n\r\nOn Monday, I started a PR that will eventually close this issue as well: I'm linking it to this.\r\n- #4672\r\n\r\nLet me know what you think. "
] | 1,657,477,009,000
| 1,657,890,127,000
| 1,657,890,127,000
|
CONTRIBUTOR
| null | null |
## Describe the bug
I'm adding a new dataset that is distributed as a `.7z` archive on Google Drive and contains 3 JSON files. I'm able to download the archive using `download_and_extract`, but after downloading it throws this error:
```
>>> dataset = load_dataset("./datasets/mantis/")
Using custom data configuration default
Downloading and preparing dataset mantis/default to /Users/bhavitvyamalik/.cache/huggingface/datasets/mantis/default/1.1.0/611affa804ec53e2055a335cc1b8b213bb5a0b5142d919967729d5ee23c6bab4...
Downloading data: 100%|█████████████████████████████████████████████████████████| 77.2M/77.2M [00:23<00:00, 3.28MB/s]
/Users/bhavitvyamalik/.cache/huggingface/datasets/downloads/fc3d70123c9de8407587a59aa426c37819cf2bf016795d33270e8a1d558a34e6
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/bhavitvyamalik/Desktop/work/hf/datasets/src/datasets/load.py", line 1745, in load_dataset
use_auth_token=use_auth_token,
File "/Users/bhavitvyamalik/Desktop/work/hf/datasets/src/datasets/builder.py", line 595, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/Users/bhavitvyamalik/Desktop/work/hf/datasets/src/datasets/builder.py", line 690, in _download_and_prepare
) from None
OSError: Cannot find data file.
Original error:
[Errno 20] Not a directory: '/Users/bhavitvyamalik/.cache/huggingface/datasets/downloads/fc3d70123c9de8407587a59aa426c37819cf2bf016795d33270e8a1d558a34e6/merged_train.json'
```
just before generating the splits. I checked the `fc3d70123c9de8407587a59aa426c37819cf2bf016795d33270e8a1d558a34e6` file and it's a `7z` archive (like the downloaded Google Drive file), which means it didn't get extracted. Do I need to extract it separately and then pass the paths for the train, dev, and test files in `SplitGenerator`?
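A sketch of the explicit-extraction workaround pointed to in the comments above (the `samsum` loading script does something similar); it assumes the `py7zr` package is installed and that `_URL` points at the archive:
```python
import os
import py7zr

def _split_generators(self, dl_manager):
    # download() fetches the .7z archive without trying to extract it;
    # we then extract it ourselves, since 7z is not natively supported.
    archive_path = dl_manager.download(_URL)
    extract_dir = os.path.join(os.path.dirname(archive_path), "extracted")
    os.makedirs(extract_dir, exist_ok=True)
    with py7zr.SevenZipFile(archive_path, mode="r") as archive:
        archive.extractall(path=extract_dir)
    # ... then build datasets.SplitGenerator entries from the JSON files
    # under extract_dir (e.g. merged_train.json).
    ...
```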
## Environment info
- `datasets` version: 1.18.4.dev0
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.8
- PyArrow version: 5.0.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4670/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4669
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4669/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4669/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4669/events
|
https://github.com/huggingface/datasets/issues/4669
| 1,299,848,003
|
I_kwDODunzps5NehtD
| 4,669
|
loading oscar-corpus/OSCAR-2201 raises an error
|
{
"login": "vitalyshalumov",
"id": 33824221,
"node_id": "MDQ6VXNlcjMzODI0MjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/33824221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vitalyshalumov",
"html_url": "https://github.com/vitalyshalumov",
"followers_url": "https://api.github.com/users/vitalyshalumov/followers",
"following_url": "https://api.github.com/users/vitalyshalumov/following{/other_user}",
"gists_url": "https://api.github.com/users/vitalyshalumov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vitalyshalumov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vitalyshalumov/subscriptions",
"organizations_url": "https://api.github.com/users/vitalyshalumov/orgs",
"repos_url": "https://api.github.com/users/vitalyshalumov/repos",
"events_url": "https://api.github.com/users/vitalyshalumov/events{/privacy}",
"received_events_url": "https://api.github.com/users/vitalyshalumov/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] |
[
"I had to use the appropriate token for use_auth_token. Thank you."
] | 1,657,436,970,000
| 1,657,531,669,000
| 1,657,531,669,000
|
NONE
| null | null |
## Describe the bug
`load_dataset('oscar-2201', 'af')` raises an error:
```
Traceback (most recent call last):
  File "/usr/lib/python3.8/code.py", line 90, in runcode
    exec(code, self.locals)
  File "<input>", line 1, in <module>
  File "..python3.8/site-packages/datasets/load.py", line 1656, in load_dataset
    builder_instance = load_dataset_builder(
  File ".../lib/python3.8/site-packages/datasets/load.py", line 1439, in load_dataset_builder
    dataset_module = dataset_module_factory(
  File ".../lib/python3.8/site-packages/datasets/load.py", line 1189, in dataset_module_factory
    raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at .../oscar-2201/oscar-2201.py or any data file in the same directory. Couldn't find 'oscar-2201' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/oscar-2201/oscar-2201.py
```
I've tried other permutations (see the steps below), with the same unfortunate result.
## Steps to reproduce the bug
```python
oscar_22 = load_dataset('oscar-2201', 'af', use_auth_token=True)
oscar_22 = load_dataset('oscar-corpus/OSCAR-2201', 'af', use_auth_token=True)
oscar_22 = load_dataset('oscar-2201', 'af')
oscar_22 = load_dataset('oscar-corpus/OSCAR-2201')
```
## Expected results
The dataset loads successfully.
## Actual results
The same `FileNotFoundError` traceback as shown above.
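Per the resolution in the comment above, the call succeeds once the canonical repo id is used together with a valid access token (the token value here is a placeholder):
```python
from datasets import load_dataset

# Passing the actual token string (or running `huggingface-cli login` first
# and then using use_auth_token=True) resolves the FileNotFoundError.
oscar_22 = load_dataset("oscar-corpus/OSCAR-2201", "af", use_auth_token="<your_hf_token>")
```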
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-5.13.0-37-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4669/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4668
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4668/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4668/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4668/events
|
https://github.com/huggingface/datasets/issues/4668
| 1,299,735,893
|
I_kwDODunzps5NeGVV
| 4,668
|
Dataset Viewer issue for hungnm/multilingual-amazon-review-sentiment-processed
|
{
"login": "hungnmai",
"id": 21364546,
"node_id": "MDQ6VXNlcjIxMzY0NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/21364546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hungnmai",
"html_url": "https://github.com/hungnmai",
"followers_url": "https://api.github.com/users/hungnmai/followers",
"following_url": "https://api.github.com/users/hungnmai/following{/other_user}",
"gists_url": "https://api.github.com/users/hungnmai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hungnmai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hungnmai/subscriptions",
"organizations_url": "https://api.github.com/users/hungnmai/orgs",
"repos_url": "https://api.github.com/users/hungnmai/repos",
"events_url": "https://api.github.com/users/hungnmai/events{/privacy}",
"received_events_url": "https://api.github.com/users/hungnmai/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false
|
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] |
[
"It seems like a private dataset. The viewer is currently not supported on the private datasets."
] | 1,657,389,853,000
| 1,657,525,667,000
| 1,657,525,667,000
|
NONE
| null | null |
### Link
https://huggingface.co/hungnm/multilingual-amazon-review-sentiment
### Description
_No response_
### Owner
Yes
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4668/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4667
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4667/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4667/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4667/events
|
https://github.com/huggingface/datasets/issues/4667
| 1,299,735,703
|
I_kwDODunzps5NeGSX
| 4,667
|
Dataset Viewer issue for hungnm/multilingual-amazon-review-sentiment-processed
|
{
"login": "hungnmai",
"id": 21364546,
"node_id": "MDQ6VXNlcjIxMzY0NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/21364546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hungnmai",
"html_url": "https://github.com/hungnmai",
"followers_url": "https://api.github.com/users/hungnmai/followers",
"following_url": "https://api.github.com/users/hungnmai/following{/other_user}",
"gists_url": "https://api.github.com/users/hungnmai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hungnmai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hungnmai/subscriptions",
"organizations_url": "https://api.github.com/users/hungnmai/orgs",
"repos_url": "https://api.github.com/users/hungnmai/repos",
"events_url": "https://api.github.com/users/hungnmai/events{/privacy}",
"received_events_url": "https://api.github.com/users/hungnmai/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] |
closed
| false
|
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] |
[] | 1,657,389,795,000
| 1,657,525,635,000
| 1,657,525,635,000
|
NONE
| null | null |
### Link
_No response_
### Description
_No response_
### Owner
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4667/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4666
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4666/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4666/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4666/events
|
https://github.com/huggingface/datasets/issues/4666
| 1,299,732,238
|
I_kwDODunzps5NeFcO
| 4,666
|
Issues with concatenating datasets
|
{
"login": "ChenghaoMou",
"id": 32014649,
"node_id": "MDQ6VXNlcjMyMDE0NjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/32014649?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChenghaoMou",
"html_url": "https://github.com/ChenghaoMou",
"followers_url": "https://api.github.com/users/ChenghaoMou/followers",
"following_url": "https://api.github.com/users/ChenghaoMou/following{/other_user}",
"gists_url": "https://api.github.com/users/ChenghaoMou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChenghaoMou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChenghaoMou/subscriptions",
"organizations_url": "https://api.github.com/users/ChenghaoMou/orgs",
"repos_url": "https://api.github.com/users/ChenghaoMou/repos",
"events_url": "https://api.github.com/users/ChenghaoMou/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChenghaoMou/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] |
[
"Hi! I agree we should improve the features equality checks to account for this particular case. However, your code fails due to `answer_start` having the dtype `int64` instead of `int32` after loading from JSON (it's not possible to embed type precision info into a JSON file; `save_to_disk` does that for arrow files), which would lead to the concatenation error as PyArrow does not support this sort of type promotion. This can be fixed as follows:\r\n```python\r\ntemp = load_dataset(\"json\", data_files={\"train\": \"output.jsonl\"}, features=squad[\"train\"].features)\r\n``` ",
"That makes sense. I totally missed the `int64` and `int32` part. Thanks for pointing it out! Will close this issue for now."
] | 1,657,388,714,000
| 1,657,646,175,000
| 1,657,646,174,000
|
NONE
| null | null |
## Describe the bug
It is impossible to concatenate datasets if a feature is a sequence of dicts in one dataset and a dict of sequences in another. But based on the documentation, it should be automatically converted.
> A [datasets.Sequence](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Sequence) with an internal dictionary feature will be automatically converted into a dictionary of lists. This behavior is implemented to have a compatibility layer with the TensorFlow Datasets library but may be unwanted in some cases. If you don’t want this behavior, you can use a python list instead of the [datasets.Sequence](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Sequence).
## Steps to reproduce the bug
```python
from datasets import concatenate_datasets, load_dataset
squad = load_dataset("squad_v2")
squad["train"].to_json("output.jsonl", lines=True)
temp = load_dataset("json", data_files={"train": "output.jsonl"})
concatenate_datasets([temp["train"], squad["train"]])
```
## Expected results
No error executing that code
## Actual results
```
ValueError: The features can't be aligned because the key answers of features {'id': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'context': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None)} has unexpected type - Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None) (expected either {'text': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'answer_start': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)} or Value("null").
```
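For reference, a minimal sketch of the workaround suggested in the comments: reload the JSON with the original features so `answer_start` keeps its `int32` dtype instead of being inferred as `int64`.
```python
from datasets import concatenate_datasets, load_dataset

squad = load_dataset("squad_v2")
squad["train"].to_json("output.jsonl", lines=True)

# Passing the original features avoids the int32 -> int64 drift that the
# JSON round-trip introduces, so the features align and concatenation works.
temp = load_dataset(
    "json", data_files={"train": "output.jsonl"}, features=squad["train"].features
)
concatenate_datasets([temp["train"], squad["train"]])
```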
## Environment info
- `datasets` version: 2.3.2
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.8.11
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4666/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4666/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4665
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4665/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4665/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4665/events
|
https://github.com/huggingface/datasets/issues/4665
| 1,299,652,638
|
I_kwDODunzps5NdyAe
| 4,665
|
Unable to create dataset having Python dataset script only
|
{
"login": "aleSuglia",
"id": 1479733,
"node_id": "MDQ6VXNlcjE0Nzk3MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aleSuglia",
"html_url": "https://github.com/aleSuglia",
"followers_url": "https://api.github.com/users/aleSuglia/followers",
"following_url": "https://api.github.com/users/aleSuglia/following{/other_user}",
"gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions",
"organizations_url": "https://api.github.com/users/aleSuglia/orgs",
"repos_url": "https://api.github.com/users/aleSuglia/repos",
"events_url": "https://api.github.com/users/aleSuglia/events{/privacy}",
"received_events_url": "https://api.github.com/users/aleSuglia/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @aleSuglia, thanks for reporting.\r\n\r\nWe are having a look at it. \r\n\r\nWe transfer this issue to the Community tab of the corresponding Hub dataset: https://huggingface.co/datasets/Heriot-WattUniversity/dialog-babi/discussions"
] | 1,657,367,146,000
| 1,657,523,409,000
| 1,657,523,401,000
|
CONTRIBUTOR
| null | null |
## Describe the bug
Hi there,
I'm trying to add the following dataset to Hugging Face Datasets: https://huggingface.co/datasets/Heriot-WattUniversity/dialog-babi/blob/
I'm trying to do so using the CLI commands, but it seems that this command generates the wrong `dataset_info.json` file (you can find it in the repo already):
```
datasets-cli test Heriot-WattUniversity/dialog-babi/dialog_babi.py --save_infos --all-configs
```
whereas it errors when I remove the Python script:
```
datasets-cli test Heriot-WattUniversity/dialog-babi/ --save_infos --all-configs
```
The error message is the following:
```
FileNotFoundError: Unable to resolve any data file that matches '['**']' at /Users/as2180/workspace/Heriot-WattUniversity/dialog-babi with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']
```
## Environment info
- `datasets` version: 2.3.2
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.9.9
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4665/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4661
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4661/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4661/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4661/events
|
https://github.com/huggingface/datasets/issues/4661
| 1,298,374,944
|
I_kwDODunzps5NY6Eg
| 4,661
|
Concurrency bug when using same cache among several jobs
|
{
"login": "ioana-blue",
"id": 17202292,
"node_id": "MDQ6VXNlcjE3MjAyMjky",
"avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ioana-blue",
"html_url": "https://github.com/ioana-blue",
"followers_url": "https://api.github.com/users/ioana-blue/followers",
"following_url": "https://api.github.com/users/ioana-blue/following{/other_user}",
"gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions",
"organizations_url": "https://api.github.com/users/ioana-blue/orgs",
"repos_url": "https://api.github.com/users/ioana-blue/repos",
"events_url": "https://api.github.com/users/ioana-blue/events{/privacy}",
"received_events_url": "https://api.github.com/users/ioana-blue/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] |
[
"I can confirm that if I run one job first that processes the dataset, then I can run any jobs in parallel with no problem (no write-concurrency anymore...). ",
"Hi! That's weird. It seems like the error points to the `mkstemp` function, but the official docs state the following:\r\n```\r\nThere are no race conditions in the file’s creation, assuming that the platform properly implements the [os.O_EXCL](https://docs.python.org/3/library/os.html#os.O_EXCL) flag for [os.open()](https://docs.python.org/3/library/os.html#os.open)\r\n```\r\nSo this could mean your platform doesn't support that flag.\r\n\r\n~~Can you please check if wrapping the temp file creation (the line `tmp_file = tempfile.NamedTemporaryFile(\"wb\", dir=os.path.dirname(cache_file_name), delete=False)` in `_map_single`) with the `multiprocess.Lock` fixes the issue?~~\r\nPerhaps wrapping the temp file creation in `_map_single` with `filelock` could work:\r\n```python\r\nwith FileLock(lock_path):\r\n tmp_file = tempfile.NamedTemporaryFile(\"wb\", dir=os.path.dirname(cache_file_name), delete=False)\r\n```\r\nCan you please check if that helps?"
] | 1,657,245,491,000
| 1,657,905,083,000
| null |
NONE
| null | null |
## Describe the bug
I used to see this bug with an older version of `datasets`. It seems to persist.
This is my concrete scenario: I launch several evaluation jobs on a cluster where the file system and the cache directory used by the Hugging Face libraries are shared. The evaluation jobs read the same *.csv files. If my jobs all get scheduled at pretty much the same time, there are all kinds of weird concurrency errors. Sometimes it crashes silently. This time I got lucky: it crashed with a stack trace that I can share, and maybe you can get to the bottom of this. If you don't have a similar setup available, it may be hard to reproduce, as you really need two jobs accessing the same file at the same time to see this type of bug.
## Steps to reproduce the bug
I'm running a modified version of the `run_glue.py` script adapted to my use case. I've seen the same problem when running some GLUE datasets as well (so it's not specific to loading the datasets from CSV files).
## Expected results
No crash, concurrent access to the (intermediate) files just fine.
## Actual results
Crashes due to races/concurrency bugs.
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-4.18.0-348.23.1.el8_5.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 8.0.0
- Pandas version: 1.1.0
Stack trace that I just got with the crash (I've obfuscated some names, it should still be quite informative):
```
Running tokenizer on dataset: 0%| | 0/3 [00:00<?, ?ba/s]
Traceback (most recent call last):
File "../../src/models//run_*******.py", line 600, in <module>
main()
File "../../src/models//run_*******.py", line 444, in main
raw_datasets = raw_datasets.map(
File "/*******//envs/tr-crt/lib/python3.8/site-packages/datasets/dataset_dict.py", line 770, in map
{
File "/*******//envs/tr-crt/lib/python3.8/site-packages/datasets/dataset_dict.py", line 771, in <dictcomp>
k: dataset.map(
File "/*******//envs/tr-crt/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2376, in map
return self._map_single(
File "/*******/envs/tr-crt/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 551, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/*******//envs/tr-crt/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 518, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/*******/envs/tr-crt/lib/python3.8/site-packages/datasets/fingerprint.py", line 458, in wrapper
out = func(self, *args, **kwargs)
File "/*******//envs/tr-crt/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2776, in _map_single
buf_writer, writer, tmp_file = init_buffer_and_writer()
File "/*******//envs/tr-crt/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2696, in init_buffer_and_writer
tmp_file = tempfile.NamedTemporaryFile("wb", dir=os.path.dirname(cache_file_name), delete=False)
File "/*******//envs/tr-crt/lib/python3.8/tempfile.py", line 541, in NamedTemporaryFile
(fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
File "/*******//envs/tr-crt/lib/python3.8/tempfile.py", line 250, in _mkstemp_inner
fd = _os.open(file, flags, 0o600)
FileNotFoundError: [Errno 2] No such file or directory: '/*******/cache-transformers//transformers/csv/default-ef9cd184210742a7/0.0.0/51cce309a08df9c4d82ffd9363bbe090bf173197fc01a71b034e8594995a1a58/tmps8l6j5yc'
```
As I ran hundreds of experiments last year for an empirical paper, I ran into this type of bug several times. I found several band-aid workarounds, e.g., run one job first that caches the dataset (eliminates the concurrency), or use unique caches per job (eliminates the concurrency, but increases storage), and then everything works fine.
I'd like to help you fix this bug, as it's really annoying to always apply the workarounds. Let me know what other info from my side could help you figure out the issue.
Thanks for your help!
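A minimal sketch of the `filelock` workaround suggested in the comments, assuming a hypothetical shared cache path (the real fix would live inside `Dataset._map_single` in `datasets`):
```python
import os
import tempfile

from filelock import FileLock

# Hypothetical cache file on the shared file system; in `datasets` this is
# the `cache_file_name` computed inside `Dataset._map_single`.
cache_file_name = "/shared/cache/csv/default/0.0.0/cache-train.arrow"

# Serialize temp-file creation across jobs so concurrent `map` calls on the
# same cache directory don't race inside `tempfile.NamedTemporaryFile`.
with FileLock(cache_file_name + ".lock"):
    tmp_file = tempfile.NamedTemporaryFile(
        "wb", dir=os.path.dirname(cache_file_name), delete=False
    )
```
Alternatively, the per-job unique-cache workaround mentioned above can be approximated by passing a distinct `cache_file_name` to `map` in each job, at the cost of extra storage.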
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4661/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/datasets/issues/4661/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4658
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4658/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4658/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4658/events
|
https://github.com/huggingface/datasets/issues/4658
| 1,297,001,390
|
I_kwDODunzps5NTquu
| 4,658
|
Transfer CI tests to GitHub Actions
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[] | 1,657,181,450,000
| 1,657,624,705,000
| 1,657,624,705,000
|
MEMBER
| null | null |
Let's try CI tests using GitHub Actions to see if they are more stable than on CircleCI.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4658/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4657
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4657/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4657/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4657/events
|
https://github.com/huggingface/datasets/issues/4657
| 1,296,743,133
|
I_kwDODunzps5NSrrd
| 4,657
|
Add SQuAD2.0 Dataset
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
closed
| false
| null |
[] |
[
"Hey, It's already present [here](https://huggingface.co/datasets/squad_v2) ",
"Hi! This dataset is indeed already available on the Hub. Closing."
] | 1,657,163,976,000
| 1,657,642,492,000
| 1,657,642,492,000
|
NONE
| null | null |
## Adding a Dataset
- **Name:** *SQuAD2.0*
- **Description:** *Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.*
- **Paper:** *https://aclanthology.org/P18-2124.pdf*
- **Data:** *https://rajpurkar.github.io/SQuAD-explorer/*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4657/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4656
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4656/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4656/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4656/events
|
https://github.com/huggingface/datasets/issues/4656
| 1,296,740,266
|
I_kwDODunzps5NSq-q
| 4,656
|
Add Amazon-QA Dataset
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
closed
| false
| null |
[] |
[
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/Amazon-QA)."
] | 1,657,163,711,000
| 1,657,765,212,000
| 1,657,765,212,000
|
NONE
| null | null |
## Adding a Dataset
- **Name:** *Amazon-QA*
- **Description:** *The dataset is .jsonl format, where each line in the file is a json string that corresponds to a question, existing answers to the question and the extracted review snippets (relevant to the question).*
- **Paper:** *https://github.com/amazonqa/amazonqa/tree/master/paper*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/amazon-qa.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4656/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4655
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4655/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4655/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4655/events
|
https://github.com/huggingface/datasets/issues/4655
| 1,296,720,896
|
I_kwDODunzps5NSmQA
| 4,655
|
Simple Wikipedia
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
closed
| false
| null |
[] |
[
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/simple-wiki)."
] | 1,657,162,286,000
| 1,657,764,993,000
| 1,657,764,993,000
|
NONE
| null | null |
## Adding a Dataset
- **Name:** *Simple Wikipedia*
- **Description:** *Two different versions of the data set now exist. Both were generated by aligning Simple English Wikipedia and English Wikipedia. A complete description of the extraction process can be found in "Simple English Wikipedia: A New Simplification Task", William Coster and David Kauchak (2011).*
- **Paper:** *https://aclanthology.org/P11-2117/*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/SimpleWiki.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4655/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4654
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4654/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4654/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4654/events
|
https://github.com/huggingface/datasets/issues/4654
| 1,296,716,119
|
I_kwDODunzps5NSlFX
| 4,654
|
Add Quora Question Triplets Dataset
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
closed
| false
| null |
[] |
[
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/QQP_triplets)."
] | 1,657,161,822,000
| 1,657,764,830,000
| 1,657,764,830,000
|
NONE
| null | null |
## Adding a Dataset
- **Name:** *Quora Question Triplets*
- **Description:** *This dataset consists of over 400,000 lines of potential question duplicate pairs. Each line contains IDs for each question in the pair, the full text for each question, and a binary value that indicates whether the line truly contains a duplicate pair.*
- **Paper:**
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/quora_duplicates_triplets.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4654/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4653
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4653/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4653/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4653/events
|
https://github.com/huggingface/datasets/issues/4653
| 1,296,702,834
|
I_kwDODunzps5NSh1y
| 4,653
|
Add Altlex dataset
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
closed
| false
| null |
[] |
[
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/altlex)."
] | 1,657,160,582,000
| 1,657,764,759,000
| 1,657,764,759,000
|
NONE
| null | null |
## Adding a Dataset
- **Name:** *Altlex*
- **Description:** *Git repository for software associated with the 2016 ACL paper "Identifying Causal Relations Using Parallel Wikipedia Articles.”*
- **Paper:** *https://aclanthology.org/P16-1135.pdf*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/altlex.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4653/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4652
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4652/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4652/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4652/events
|
https://github.com/huggingface/datasets/issues/4652
| 1,296,697,498
|
I_kwDODunzps5NSgia
| 4,652
|
Add Sentence Compression Dataset
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
closed
| false
| null |
[] |
[
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/sentence-compression)."
] | 1,657,160,026,000
| 1,657,764,708,000
| 1,657,764,708,000
|
NONE
| null | null |
## Adding a Dataset
- **Name:** *Sentence Compression*
- **Description:** *Large corpus of uncompressed and compressed sentences from news articles.*
- **Paper:** *https://www.aclweb.org/anthology/D13-1155/*
- **Data:** *https://github.com/google-research-datasets/sentence-compression/tree/master/data*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4652/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4651
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4651/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4651/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4651/events
|
https://github.com/huggingface/datasets/issues/4651
| 1,296,689,414
|
I_kwDODunzps5NSekG
| 4,651
|
Add Flickr 30k Dataset
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
closed
| false
| null |
[] |
[
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/flickr30k-captions)."
] | 1,657,159,148,000
| 1,657,764,585,000
| 1,657,764,585,000
|
NONE
| null | null |
## Adding a Dataset
- **Name:** *Flickr 30k*
- **Description:** *To produce the denotation graph, we have created an image caption corpus consisting of 158,915 crowd-sourced captions describing 31,783 images. This is an extension of our previous Flickr 8k Dataset. The new images and captions focus on people involved in everyday activities and events.*
- **Paper:** *https://transacl.org/ojs/index.php/tacl/article/view/229/33*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/flickr30k_captions.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4651/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4650
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4650/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4650/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4650/events
|
https://github.com/huggingface/datasets/issues/4650
| 1,296,680,037
|
I_kwDODunzps5NScRl
| 4,650
|
Add SPECTER dataset
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
open
| false
| null |
[] |
[
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/SPECTER)"
] | 1,657,158,092,000
| 1,657,764,469,000
| null |
NONE
| null | null |
## Adding a Dataset
- **Name:** *SPECTER*
- **Description:** *SPECTER: Document-level Representation Learning using Citation-informed Transformers*
- **Paper:** *https://doi.org/10.18653/v1/2020.acl-main.207*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/specter_train_triples.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4650/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4649
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4649/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4649/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4649/events
|
https://github.com/huggingface/datasets/issues/4649
| 1,296,673,712
|
I_kwDODunzps5NSauw
| 4,649
|
Add PAQ dataset
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
closed
| false
| null |
[] |
[
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/PAQ_pairs)"
] | 1,657,157,382,000
| 1,657,764,387,000
| 1,657,764,387,000
|
NONE
| null | null |
## Adding a Dataset
- **Name:** *PAQ*
- **Description:** *This repository contains code and models to support the research paper PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them*
- **Paper:** *https://arxiv.org/abs/2102.07033*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/PAQ_pairs.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4649/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4648
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4648/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4648/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4648/events
|
https://github.com/huggingface/datasets/issues/4648
| 1,296,659,335
|
I_kwDODunzps5NSXOH
| 4,648
|
Add WikiAnswers dataset
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
closed
| false
| null |
[] |
[
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/WikiAnswers)"
] | 1,657,155,997,000
| 1,657,764,220,000
| 1,657,764,220,000
|
NONE
| null | null |
## Adding a Dataset
- **Name:** *WikiAnswers*
- **Description:** *The WikiAnswers corpus contains clusters of questions tagged by WikiAnswers users as paraphrases. Each cluster optionally contains an answer provided by WikiAnswers users.*
- **Paper:** *https://dl.acm.org/doi/10.1145/2623330.2623677*
- **Data:** *https://github.com/afader/oqa#wikianswers-corpus*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4648/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4647
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4647/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4647/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4647/events
|
https://github.com/huggingface/datasets/issues/4647
| 1,296,311,270
|
I_kwDODunzps5NRCPm
| 4,647
|
Add Reddit dataset
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
open
| false
| null |
[] |
[] | 1,657,136,958,000
| 1,657,136,958,000
| null |
NONE
| null | null |
## Adding a Dataset
- **Name:** *Reddit comments (2015-2018)*
- **Description:** *Reddit is an American social news aggregation website, where users can post links, and take part in discussions on these posts. These threaded discussions provide a large corpus, which is converted into a conversational dataset using the tools in this directory.*
- **Paper:** *https://arxiv.org/abs/1904.06472*
- **Data:** *https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4647/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4642
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4642/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4642/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4642/events
|
https://github.com/huggingface/datasets/issues/4642
| 1,295,748,083
|
I_kwDODunzps5NO4vz
| 4,642
|
Streaming issue for ccdv/pubmed-summarization
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thanks for reporting @lewtun.\r\n\r\nI confirm there is an issue with streaming: it does not stream locally. ",
"Oh, after investigation, the source of the issue is in the Hub dataset loading script.\r\n\r\nI'm opening a PR on the Hub dataset.",
"I've opened a PR on their Hub dataset to support streaming: https://huggingface.co/datasets/ccdv/pubmed-summarization/discussions/2"
] | 1,657,109,587,000
| 1,657,117,054,000
| 1,657,117,054,000
|
MEMBER
| null | null |
### Link
https://huggingface.co/datasets/ccdv/pubmed-summarization
### Description
This was reported by a [user of AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/7). It seems like streaming doesn't work due to the way the dataset loading script is defined?
```
Status code: 400
Exception: FileNotFoundError
Message: https://huggingface.co/datasets/ccdv/pubmed-summarization/resolve/main/train.zip/train.txt
```
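The `train.zip/train.txt` path in the error suggests the script joins a plain path into the extracted archive, which cannot be resolved when streaming over HTTP. A hedged sketch of the streaming-friendly pattern using `dl_manager.iter_archive` (the class name, feature schema, and member file name are assumptions, not the actual script):
```python
import datasets

_URL = "https://huggingface.co/datasets/ccdv/pubmed-summarization/resolve/main/train.zip"


class PubmedSummarization(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    def _info(self):
        # Assumed minimal schema for illustration only.
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # Download without extracting and iterate the archive lazily, instead
        # of joining "train.zip/train.txt"-style paths that only work locally.
        archive = dl_manager.download(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        # `iter_archive` yields (path inside archive, binary file object).
        for path, f in files:
            if path.endswith("train.txt"):
                for idx, line in enumerate(f):
                    yield idx, {"text": line.decode("utf-8")}
```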
### Owner
No
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4642/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4641
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4641/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4641/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4641/events
|
https://github.com/huggingface/datasets/issues/4641
| 1,295,633,250
|
I_kwDODunzps5NOcti
| 4,641
|
Dataset Viewer issue for kmfoda/booksum
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thanks for reporting, @lewtun.\r\n\r\nIt works locally in streaming mode:\r\n```\r\n{'bid': 27681,\r\n 'is_aggregate': True,\r\n 'source': 'cliffnotes',\r\n 'chapter_path': 'all_chapterized_books/27681-chapters/chapters_1_to_2.txt',\r\n 'summary_path': 'finished_summaries/cliffnotes/The Last of the Mohicans/section_1_part_0.txt',\r\n 'book_id': 'The Last of the Mohicans.chapters 1-2',\r\n 'summary_id': 'chapters 1-2',\r\n 'content': None,\r\n 'summary': '{\"name\": \"Chapters 1-2\", \"url\": \"https://web.archive.org/web/20201101053205/https://www.cliffsnotes.com/literature/l/the-last-of-the-mohicans/summary-and-analysis/chapters-12\", \"summary\": \"Before any characters appear, the time and geography are made clear. Though it is the last war that England and France waged for a country that neither would retain, the wilderness between the forces still has to be...\r\n```\r\n\r\nI'm forcing the refresh of the preview. ",
"The preview appears as expected once the refresh forced.",
"Thank you @albertvillanova 🤗 !"
] | 1,657,103,896,000
| 1,657,113,928,000
| 1,657,108,686,000
|
MEMBER
| null | null |
### Link
https://huggingface.co/datasets/kmfoda/booksum
### Description
A [user of AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/9) discovered this dataset cannot be streamed due to:
```
Status code: 400
Exception: ClientResponseError
Message: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/kmfoda/booksum/resolve/47953f583d6967f086cb16a2f4d2346e9834024d/test.csv')
```
I'm not sure why it says "Unauthorized" since it's just a bunch of CSV files in a repo.
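For reference, a minimal sketch to check streaming access locally (assuming the repo is public and reachable):
```python
from datasets import load_dataset

# stream the test split instead of downloading the full CSV files
ds = load_dataset("kmfoda/booksum", split="test", streaming=True)
print(next(iter(ds)))
```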
### Owner
No
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4641/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4639
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4639/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4639/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4639/events
|
https://github.com/huggingface/datasets/issues/4639
| 1,295,367,322
|
I_kwDODunzps5NNbya
| 4,639
|
Add HaGRID -- HAnd Gesture Recognition Image Dataset
|
{
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
open
| false
| null |
[] |
[] | 1,657,093,292,000
| 1,657,093,292,000
| null |
MEMBER
| null | null |
## Adding a Dataset
- **Name:** HaGRID -- HAnd Gesture Recognition Image Dataset
- **Description:** We introduce a large image dataset HaGRID (HAnd Gesture Recognition Image Dataset) for hand gesture recognition (HGR) systems. You can use it for image classification or image detection tasks. The proposed dataset allows building HGR systems, which can be used in video conferencing services (Zoom, Skype, Discord, Jazz, etc.), home automation systems, the automotive sector, etc.
- **Paper:** https://arxiv.org/abs/2206.08219
- **Data:** https://github.com/hukenovs/hagrid
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4639/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4637
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4637/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4637/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4637/events
|
https://github.com/huggingface/datasets/issues/4637
| 1,294,818,236
|
I_kwDODunzps5NLVu8
| 4,637
|
The "all" split breaks streaming
|
{
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thanks for reporting @cakiki.\r\n\r\nYes, this is a bug. We are investigating it.",
"@albertvillanova Nice! Let me know if it's something I can fix my self; would love to contribtue!",
"@cakiki I was working on this but if you would like to contribute, go ahead. I will close my PR. ;)\r\n\r\nFor the moment I just pushed the test (to see if it impacts other tests).",
"It impacted the test `test_generator_based_download_and_prepare` and I have fixed this.\r\n\r\nSo that you can copy the test I implemented in my PR and then implement a fix for this issue that passes the test `tests/test_builder.py::test_builder_as_streaming_dataset`.",
"Hi @cakiki are you still interested in working on this? Are you planning to open a PR?",
"Hi @albertvillanova ! Sorry it took so long; I wanted to spend this weekend working on it."
] | 1,657,058,209,000
| 1,657,893,570,000
| null |
CONTRIBUTOR
| null | null |
## Describe the bug
Not sure if this is a bug or just the way streaming works, but setting `streaming=True` did not work when setting `split="all"`.
## Steps to reproduce the bug
The following works:
```python
ds = load_dataset('super_glue', 'wsc.fixed', split='all')
```
The following throws `ValueError: Bad split: all. Available splits: ['train', 'validation', 'test']`:
```python
ds = load_dataset('super_glue', 'wsc.fixed', split='all', streaming=True)
```
## Expected results
An iterator over all splits.
## Actual results
I had to do the following to achieve the desired result:
```python
from itertools import chain
ds = load_dataset('super_glue', 'wsc.fixed', streaming=True)
it = chain.from_iterable(ds.values())
```
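The same workaround can be wrapped into a small helper; this is only a sketch that yields plain dicts rather than a single `IterableDataset`, and the split order is simply the iteration order of the splits dict:
```python
from itertools import chain

from datasets import load_dataset

def stream_all_splits(path, name=None):
    # chain the streaming splits one after another, in iteration order
    ds = load_dataset(path, name, streaming=True)
    return chain.from_iterable(ds[split] for split in ds)

for example in stream_all_splits('super_glue', 'wsc.fixed'):
    break  # smoke test: take the first example
```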
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-4.15.0-176-generic-x86_64-with-glibc2.31
- Python version: 3.10.5
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4637/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4636
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4636/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4636/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4636/events
|
https://github.com/huggingface/datasets/issues/4636
| 1,294,547,836
|
I_kwDODunzps5NKTt8
| 4,636
|
Add info in docs about behavior of download_config.num_proc
|
{
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[] | 1,657,040,460,000
| 1,659,004,832,000
| 1,659,004,832,000
|
CONTRIBUTOR
| null | null |
**Is your feature request related to a problem? Please describe.**
I went to override `download_config.num_proc` and was confused about what was happening under the hood. It would be nice to have the behavior documented a bit better so folks know what's happening when they use it.
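For context, a minimal sketch of how the parameter is overridden (the dataset name is a placeholder):
```python
from datasets import DownloadConfig, load_dataset

# ask for 8 download workers instead of the default 16; note that when the
# dataset has fewer files to download than workers, multiprocessing is skipped
dl_config = DownloadConfig(num_proc=8)
ds = load_dataset("some_dataset_with_many_files", download_config=dl_config)
```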
**Describe the solution you'd like**
- Add note about how the default number of workers is 16. Related code:
https://github.com/huggingface/datasets/blob/7bcac0a6a0fc367cc068f184fa132b8de8dfa11d/src/datasets/download/download_manager.py#L299-L302
- Add note that if the number of workers is higher than the number of files to download, it won't use multiprocessing.
**Describe alternatives you've considered**
Maybe it would also be nice to set `num_proc` = `num_files` when `num_proc` > `num_files`.
**Additional context**
...
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4636/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4635
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4635/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4635/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4635/events
|
https://github.com/huggingface/datasets/issues/4635
| 1,294,475,931
|
I_kwDODunzps5NKCKb
| 4,635
|
Dataset Viewer issue for vadis/sv-ident
|
{
"login": "e-tornike",
"id": 20404466,
"node_id": "MDQ6VXNlcjIwNDA0NDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/20404466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/e-tornike",
"html_url": "https://github.com/e-tornike",
"followers_url": "https://api.github.com/users/e-tornike/followers",
"following_url": "https://api.github.com/users/e-tornike/following{/other_user}",
"gists_url": "https://api.github.com/users/e-tornike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/e-tornike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/e-tornike/subscriptions",
"organizations_url": "https://api.github.com/users/e-tornike/orgs",
"repos_url": "https://api.github.com/users/e-tornike/repos",
"events_url": "https://api.github.com/users/e-tornike/events{/privacy}",
"received_events_url": "https://api.github.com/users/e-tornike/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thanks for reporting, @e-tornike \r\n\r\nSome context:\r\n- #4527 \r\n\r\nThe dataset loads locally in streaming mode:\r\n```python\r\nIn [2]: from datasets import load_dataset; ds = load_dataset(\"vadis/sv-ident\", split=\"validation\", streaming=True); item = next(iter(ds)); item\r\nUsing custom data configuration default\r\nOut[2]: \r\n{'sentence': 'Im Falle von Umweltbelastungen kann selten eindeutig entschieden werden, ob Unbedenklichkeitswerte bereits erreicht oder überschritten sind, die die menschliche Gesundheit oder andere Wohlfahrts»güter« beeinträchtigen.',\r\n 'is_variable': 0,\r\n 'variable': [],\r\n 'research_data': [],\r\n 'doc_id': '51971',\r\n 'uuid': 'ee3d7f88-1a3e-4a59-997f-e986b544a604',\r\n 'lang': 'de'}\r\n```",
"~~I have forced the refresh of the split in the preview without success.~~\r\n\r\nI have forced the refresh of the split in the preview, and now it works.",
"Preview seems to work now. \r\n\r\nhttps://huggingface.co/datasets/vadis/sv-ident/viewer/default/validation",
"OK, thank you @e-tornike.\r\n\r\nApparently, after forcing the refresh, we just had to wait a little until it is effectively refreshed. ",
"I'm closing this issue as it was solved after forcing the refresh of the split in the preview.",
"Thanks a lot! :)"
] | 1,657,036,093,000
| 1,657,091,613,000
| 1,657,091,534,000
|
NONE
| null | null |
### Link
https://huggingface.co/datasets/vadis/sv-ident/viewer/default/validation
### Description
Error message when loading validation split in the viewer:
```
Status code: 400
Exception: Status400Error
Message: The split cache is empty.
```
### Owner
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4635/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4634
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4634/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4634/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4634/events
|
https://github.com/huggingface/datasets/issues/4634
| 1,294,405,251
|
I_kwDODunzps5NJw6D
| 4,634
|
Can't load the Hausa audio dataset
|
{
"login": "moro23",
"id": 19976800,
"node_id": "MDQ6VXNlcjE5OTc2ODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19976800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moro23",
"html_url": "https://github.com/moro23",
"followers_url": "https://api.github.com/users/moro23/followers",
"following_url": "https://api.github.com/users/moro23/following{/other_user}",
"gists_url": "https://api.github.com/users/moro23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moro23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moro23/subscriptions",
"organizations_url": "https://api.github.com/users/moro23/orgs",
"repos_url": "https://api.github.com/users/moro23/repos",
"events_url": "https://api.github.com/users/moro23/events{/privacy}",
"received_events_url": "https://api.github.com/users/moro23/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Could you provide the error details. It is difficult to debug otherwise. Also try other config. `ha` is not a valid."
] | 1,657,032,456,000
| 1,663,078,052,000
| 1,663,078,052,000
|
NONE
| null | null |
from datasets import load_dataset
common_voice_train = load_dataset("common_voice", "ha", split="train+validation")
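As the comment above suggests trying another config, a quick sketch to list the language configs the script actually exposes:
```python
from datasets import get_dataset_config_names

print(get_dataset_config_names("common_voice"))
```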
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4634/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4632
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4632/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4632/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4632/events
|
https://github.com/huggingface/datasets/issues/4632
| 1,294,166,880
|
I_kwDODunzps5NI2tg
| 4,632
|
'sort' method sorts one column only
|
{
"login": "shachardon",
"id": 42108562,
"node_id": "MDQ6VXNlcjQyMTA4NTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/42108562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shachardon",
"html_url": "https://github.com/shachardon",
"followers_url": "https://api.github.com/users/shachardon/followers",
"following_url": "https://api.github.com/users/shachardon/following{/other_user}",
"gists_url": "https://api.github.com/users/shachardon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shachardon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shachardon/subscriptions",
"organizations_url": "https://api.github.com/users/shachardon/orgs",
"repos_url": "https://api.github.com/users/shachardon/repos",
"events_url": "https://api.github.com/users/shachardon/events{/privacy}",
"received_events_url": "https://api.github.com/users/shachardon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false
| null |
[] |
[
"Hi ! `ds.sort()` does sort the full dataset, not just one column:\r\n```python\r\nfrom datasets import *\r\n\r\nds = Dataset.from_dict({\"foo\": [3, 2, 1], \"bar\": [\"c\", \"b\", \"a\"]})\r\nprint(d.sort(\"foo\").to_pandas()\r\n# foo bar\r\n# 0 1 a\r\n# 1 2 b\r\n# 2 3 c\r\n```\r\n\r\nWhat made you think it was not the case ? Did you experience a situation where it was only sorting one column ?",
"Hi! thank you for your quick reply!\r\nI wanted to sort the `cnn_dailymail` dataset by the length of the labels (num of characters). I added a new column to the dataset (`ds.add_column`) with the lengths and then sorted by this new column. Only the new length column was sorted, the reset left in their original order. ",
"That's unexpected, can you share the code you used to get this ?"
] | 1,657,020,326,000
| 1,657,195,592,000
| null |
NONE
| null | null |
The 'sort' method changes the order of one column only (the one defined by the argument 'column'), thus creating a mismatch between a sample's fields. I would expect it to change the order of the samples as a whole, based on the 'column' order.
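For reference, a minimal check of the expected behavior on a toy two-column dataset (not the reporter's original code):
```python
from datasets import Dataset

ds = Dataset.from_dict({"length": [3, 1, 2], "text": ["c", "a", "b"]})
sorted_ds = ds.sort("length")
# expectation: rows are reordered as whole samples, keeping fields aligned
print(sorted_ds["text"])  # expected: ['a', 'b', 'c']
```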
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4632/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4629
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4629/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4629/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4629/events
|
https://github.com/huggingface/datasets/issues/4629
| 1,293,418,800
|
I_kwDODunzps5NGAEw
| 4,629
|
Rename repo default branch to main
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 4296013012,
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance",
"name": "maintenance",
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks"
}
] |
closed
| false
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] |
[] | 1,656,954,970,000
| 1,657,122,597,000
| 1,657,122,597,000
|
MEMBER
| null | null |
Rename repository default branch to `main` (instead of current `master`).
Once renamed, users will have to manually update their local repos:
- [ ] Upstream:
```
git branch -m master main
git fetch upstream main
git branch -u upstream/main main
git remote set-head upstream -a
```
- [ ] Origin:
Rename the fork default branch as well at: https://github.com/USERNAME/datasets/settings/branches
Then:
```
git fetch origin main
git remote set-head origin -a
```
CC: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4629/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4629/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4626
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4626/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4626/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4626/events
|
https://github.com/huggingface/datasets/issues/4626
| 1,293,256,269
|
I_kwDODunzps5NFYZN
| 4,626
|
Add non-commercial licensing info for datasets for which we removed tags
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false
| null |
[] |
[
"yep plus `license_details` also makes sense for this IMO"
] | 1,656,945,163,000
| 1,657,290,449,000
| null |
MEMBER
| null | null |
We removed several YAML tags saying that certain datasets can't be used for commercial purposes: https://github.com/huggingface/datasets/pull/4613#discussion_r911919753
Reason for this is that we only allow tags that are part of our [supported list of licenses](https://github.com/huggingface/datasets/blob/84fc3ad73c85de4eda5d152dfede7671491449cb/src/datasets/utils/resources/standard_licenses.tsv)
We should update the Licensing Information section of the concerned dataset cards, now that the non-commercial tag doesn't exist anymore for certain datasets
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4626/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4626/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4623
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4623/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4623/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4623/events
|
https://github.com/huggingface/datasets/issues/4623
| 1,293,042,894
|
I_kwDODunzps5NEkTO
| 4,623
|
Loading MNIST as Pytorch Dataset
|
{
"login": "jameschapman19",
"id": 56592797,
"node_id": "MDQ6VXNlcjU2NTkyNzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/56592797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jameschapman19",
"html_url": "https://github.com/jameschapman19",
"followers_url": "https://api.github.com/users/jameschapman19/followers",
"following_url": "https://api.github.com/users/jameschapman19/following{/other_user}",
"gists_url": "https://api.github.com/users/jameschapman19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jameschapman19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jameschapman19/subscriptions",
"organizations_url": "https://api.github.com/users/jameschapman19/orgs",
"repos_url": "https://api.github.com/users/jameschapman19/repos",
"events_url": "https://api.github.com/users/jameschapman19/events{/privacy}",
"received_events_url": "https://api.github.com/users/jameschapman19/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] |
[
"Hi ! We haven't implemented the conversion from images data to PyTorch tensors yet I think\r\n\r\ncc @mariosasko ",
"So I understand:\r\n\r\nset_format() does not properly do the conversion to pytorch tensors from PIL images.\r\n\r\nSo that someone who stumbles on this can use the package:\r\n\r\n```python\r\ndataset = load_dataset(\"mnist\", split=\"train\")\r\ndef transform_func(examples):\r\n examples[\"image\"] = [np.array(img) for img in examples[\"image\"]]\r\n return examples\r\ndataset = dataset.with_transform(transform_func)\r\ndataset[0]\r\n``` ",
"This then appears to work with pytorch dataloaders as:\r\n```\r\ndataloader=torch.utils.data.DataLoader(dataset,batch_size=1)\r\n```\r\n\r\nand tensorflow as:\r\n```\r\ndataset=dataset.to_tf_dataset(batch_size=1)\r\n```",
"Hi! `set_transform`/`with_transform` is indeed the correct solution for the conversion. Improving this part of the API is one of the things I'm working on currently, so stay tuned!"
] | 1,656,934,390,000
| 1,656,945,650,000
| null |
NONE
| null | null |
## Describe the bug
Conversion of the MNIST dataset to PyTorch fails with the error below.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("mnist", split="train")
dataset.set_format('torch')
dataset[0]
print()
```
## Expected results
Expect to see torch tensors image and label
## Actual results
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2020.3.3\plugins\python\helpers\pydev\pydevd.py", line 1491, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files\JetBrains\PyCharm 2020.3.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/chapm/PycharmProjects/multiviewdata/multiviewdata/huggingface/mnist.py", line 13, in <module>
dataset[0]
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\arrow_dataset.py", line 2154, in __getitem__
return self._getitem(
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\arrow_dataset.py", line 2139, in _getitem
formatted_output = format_table(
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\formatting.py", line 532, in format_table
return formatter(pa_table, query_type=query_type)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\formatting.py", line 281, in __call__
return self.format_row(pa_table)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 58, in format_row
return self.recursive_tensorize(row)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 54, in recursive_tensorize
return map_nested(self._recursive_tensorize, data_struct, map_list=False)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 356, in map_nested
mapped = [
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 357, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 309, in _single_map_nested
return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 309, in <dictcomp>
return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 293, in _single_map_nested
return function(data_struct)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 51, in _recursive_tensorize
return self._tensorize(data_struct)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 38, in _tensorize
if np.issubdtype(value.dtype, np.integer):
AttributeError: 'bytes' object has no attribute 'dtype'
python-BaseException
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Windows-10-10.0.22579-SP0
- Python version: 3.9.2
- PyArrow version: 8.0.0
- Pandas version: 1.4.1
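For reference, the workaround from the comments above can be combined into a single sketch (the batch size and the `np.array` conversion are illustrative choices):
```python
import numpy as np
import torch
from datasets import load_dataset

dataset = load_dataset("mnist", split="train")

def transform_func(examples):
    # convert PIL images to numpy arrays so the default collate can tensorize them
    examples["image"] = [np.array(img) for img in examples["image"]]
    return examples

dataset = dataset.with_transform(transform_func)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32)
batch = next(iter(dataloader))  # batch["image"] and batch["label"] are tensors
```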
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4623/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4623/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4621
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4621/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4621/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4621/events
|
https://github.com/huggingface/datasets/issues/4621
| 1,293,030,128
|
I_kwDODunzps5NEhLw
| 4,621
|
ImageFolder raises an error with parameters drop_metadata=True and drop_labels=False when metadata.jsonl is present
|
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
] |
[] | 1,656,933,704,000
| 1,657,895,064,000
| 1,657,895,064,000
|
CONTRIBUTOR
| null | null |
## Describe the bug
If you pass `drop_metadata=True` and `drop_labels=False` when a `data_dir` contains at least one `metadata.jsonl` file, you will get a KeyError. This is probably not a very useful case, but we shouldn't get an error anyway. Asking users to move metadata files manually outside `data_dir` or to pass features manually (when there is a tool that can infer them automatically) doesn't look like a good idea to me either.
## Steps to reproduce the bug
### Clone an example dataset from the Hub
```bash
git clone https://huggingface.co/datasets/nateraw/test-imagefolder-metadata
```
### Try to load it
```python
from datasets import load_dataset
ds = load_dataset("test-imagefolder-metadata", drop_metadata=True, drop_labels=False)
```
or even just
```python
ds = load_dataset("test-imagefolder-metadata", drop_metadata=True)
```
as `drop_labels=False` is a default value.
## Expected results
A DatasetDict object with two features: `"image"` and `"label"`.
## Actual results
```
Traceback (most recent call last):
File "/home/polina/workspace/datasets/debug.py", line 18, in <module>
ds = load_dataset(
File "/home/polina/workspace/datasets/src/datasets/load.py", line 1732, in load_dataset
builder_instance.download_and_prepare(
File "/home/polina/workspace/datasets/src/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/polina/workspace/datasets/src/datasets/builder.py", line 1227, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/home/polina/workspace/datasets/src/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/polina/workspace/datasets/src/datasets/builder.py", line 1218, in _prepare_split
example = self.info.features.encode_example(record)
File "/home/polina/workspace/datasets/src/datasets/features/features.py", line 1596, in encode_example
return encode_nested_example(self, example)
File "/home/polina/workspace/datasets/src/datasets/features/features.py", line 1165, in encode_nested_example
{
File "/home/polina/workspace/datasets/src/datasets/features/features.py", line 1165, in <dictcomp>
{
File "/home/polina/workspace/datasets/src/datasets/utils/py_utils.py", line 249, in zip_dict
yield key, tuple(d[key] for d in dicts)
File "/home/polina/workspace/datasets/src/datasets/utils/py_utils.py", line 249, in <genexpr>
yield key, tuple(d[key] for d in dicts)
KeyError: 'label'
```
## Environment info
`datasets` master branch
- `datasets` version: 2.3.3.dev0
- Platform: Linux-5.14.0-1042-oem-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 6.0.1
- Pandas version: 1.4.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4621/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/datasets/issues/4621/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4620
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4620/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4620/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4620/events
|
https://github.com/huggingface/datasets/issues/4620
| 1,292,797,878
|
I_kwDODunzps5NDoe2
| 4,620
|
Data type is not recognized when using datetime.time
|
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @mariosasko ",
"Hi, thanks for reporting! I'm investigating the issue."
] | 1,656,922,418,000
| 1,657,202,231,000
| 1,657,202,231,000
|
CONTRIBUTOR
| null | null |
## Describe the bug
Creating a dataset from a pandas dataframe with `datetime.time` format generates an error.
## Steps to reproduce the bug
```python
import pandas as pd
from datetime import time
from datasets import Dataset
df = pd.DataFrame({"feature_name": [time(1, 1, 1)]})
dataset = Dataset.from_pandas(df)
```
## Expected results
The dataset should be created.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 823, in from_pandas
return cls(table, info=info, split=split)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 679, in __init__
inferred_features = Features.from_arrow_schema(arrow_table.schema)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1551, in from_arrow_schema
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1551, in <dictcomp>
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1315, in generate_from_arrow_type
return Value(dtype=_arrow_to_datasets_dtype(pa_type))
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 83, in _arrow_to_datasets_dtype
return f"time64[{arrow_type.unit}]"
AttributeError: 'pyarrow.lib.DataType' object has no attribute 'unit'
```
## Environment info
- `datasets` version: 2.3.3.dev0
- Platform: Linux-5.13.0-1031-aws-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.4.2
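Until the type is handled, one possible workaround sketch is to serialize the values before conversion (storing strings instead of `time` objects):
```python
import pandas as pd
from datetime import time

from datasets import Dataset

df = pd.DataFrame({"feature_name": [time(1, 1, 1)]})
# cast to strings so Arrow infers a plain string column instead of time64
df["feature_name"] = df["feature_name"].astype(str)
dataset = Dataset.from_pandas(df)
```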
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4620/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4620/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4619
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4619/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4619/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4619/events
|
https://github.com/huggingface/datasets/issues/4619
| 1,292,107,275
|
I_kwDODunzps5NA_4L
| 4,619
|
np arrays get turned into native lists
|
{
"login": "ZhaofengWu",
"id": 11954789,
"node_id": "MDQ6VXNlcjExOTU0Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhaofengWu",
"html_url": "https://github.com/ZhaofengWu",
"followers_url": "https://api.github.com/users/ZhaofengWu/followers",
"following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions",
"organizations_url": "https://api.github.com/users/ZhaofengWu/orgs",
"repos_url": "https://api.github.com/users/ZhaofengWu/repos",
"events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhaofengWu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] |
[
"If you add the line `dataset2.set_format('np')` before calling `dataset2[0]['tmp']` it should return `np.ndarray`.\r\nI believe internally it will not store it as a list, it is only returning a list when you index it.\r\n\r\n```\r\nIn [1]: import datasets, numpy as np\r\nIn [2]: dataset = datasets.load_dataset(\"glue\", \"mrpc\")[\"validation\"]\r\nIn [3]: dataset2 = dataset.map(lambda x: {\"tmp\": np.array([0.5])}, batched=False)\r\nIn [4]: dataset2[0][\"tmp\"]\r\nOut[4]: [0.5]\r\n\r\nIn [5]: dataset2.set_format('np')\r\n\r\nIn [6]: dataset2[0][\"tmp\"]\r\nOut[6]: array([0.5])\r\n```",
"I see, thanks! Any idea if the default numpy → list conversion might cause precision loss?",
"I'm not super familiar with our datasets works internally, but I think your `np` array will be stored in a `pyarrow` format, and then you take a view of this as a python array. In which case, I think the precision should be preserved."
] | 1,656,784,497,000
| 1,656,880,027,000
| null |
NONE
| null | null |
## Describe the bug
When attaching an `np.array` field, it seems that it automatically gets turned into a list (see below). Why is this happening? Could it lose precision? Is there a way to make sure this doesn't happen?
## Steps to reproduce the bug
```python
>>> import datasets, numpy as np
>>> dataset = datasets.load_dataset("glue", "mrpc")["validation"]
Reusing dataset glue (...)
100%|███████████████████████████████████████████████| 3/3 [00:00<00:00, 1360.61it/s]
>>> dataset2 = dataset.map(lambda x: {"tmp": np.array([0.5])}, batched=False)
100%|██████████████████████████████████████████| 408/408 [00:00<00:00, 10819.97ex/s]
>>> dataset2[0]["tmp"]
[0.5]
>>> type(dataset2[0]["tmp"])
<class 'list'>
```
## Expected results
`dataset2[0]["tmp"]` should be an `np.ndarray`.
## Actual results
It's a list.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: mac, though I'm pretty sure it happens on a linux machine too
- Python version: 3.9.7
- PyArrow version: 6.0.1
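On the precision question, a small sketch to check the round-trip dtype (assuming the default float inference; float64 in and float64 out means no loss):
```python
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({"x": [0.5]})
ds.set_format("np")
assert ds[0]["x"].dtype == np.float64  # Python floats are stored as Arrow doubles
```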
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4619/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4619/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4618
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4618/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4618/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4618/events
|
https://github.com/huggingface/datasets/issues/4618
| 1,292,078,225
|
I_kwDODunzps5NA4yR
| 4,618
|
contribute data loading for object detection datasets with yolo data format
|
{
"login": "faizankshaikh",
"id": 8406903,
"node_id": "MDQ6VXNlcjg0MDY5MDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8406903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faizankshaikh",
"html_url": "https://github.com/faizankshaikh",
"followers_url": "https://api.github.com/users/faizankshaikh/followers",
"following_url": "https://api.github.com/users/faizankshaikh/following{/other_user}",
"gists_url": "https://api.github.com/users/faizankshaikh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faizankshaikh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faizankshaikh/subscriptions",
"organizations_url": "https://api.github.com/users/faizankshaikh/orgs",
"repos_url": "https://api.github.com/users/faizankshaikh/repos",
"events_url": "https://api.github.com/users/faizankshaikh/events{/privacy}",
"received_events_url": "https://api.github.com/users/faizankshaikh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false
| null |
[] |
[
"Hi! The `imagefolder` script is already quite complex, so a standalone script sounds better. Also, I suggest we create an org on the Hub (e.g. `hf-loaders`) and store such scripts there for easier maintenance rather than having them as packaged modules (IMO only very generic loaders should be packaged). WDYT @lhoestq @albertvillanova @polinaeterna?",
"@mariosasko sounds good to me!\r\n",
"Thank you for the suggestion @mariosasko . I agree with the point, but I have a few doubts\r\n\r\n1. How would the user access the script if it's not a part of the core codebase?\r\n2. Could you direct me as to what will be the tasks I have to do to contribute to the code? As per my understanding, it would be like\r\n 1. Create a new org \"hf-loaders\" and add you (and more HF people) to the org\r\n 2. Add data loader script as a (model?)\r\n 3. Test it with a dataset on HF hub\r\n3. We should maybe brainstorm as to which public datasets have this format (YOLO type) and are the most important ones to test the script with. We can even add the datasets on HF Hub alongside the script",
"1. Like this: `load_dataset(\"hf-loaders/yolo\", data_files=...)`\r\n2. The steps would be:\r\n 1. Create a new org `hf-community-loaders` (IMO a better name than \"hf-loaders\") and add me (as an admin)\r\n 2. Create a new dataset repo `yolo` and add the loading script to it (`yolo.py`)\r\n 3. Open a discussion to request our review\r\n4. I like this idea. Another option is to add snippets that describe how to load such datasets using the `yolo` loader."
] | 1,656,775,319,000
| 1,658,412,644,000
| null |
NONE
| null | null |
**Is your feature request related to a problem? Please describe.**
At the moment, HF datasets loads [image classification datasets](https://huggingface.co/docs/datasets/image_process) out-of-the-box. There could be a data loader for loading standard object detection datasets ([original discussion here](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/discussions/2))
**Describe the solution you'd like**
I wrote a [custom script](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/blob/main/detect_chess_pieces.py) to load a dataset that uses the YOLO data format.
**Describe alternatives you've considered**
The script can either be a standalone dataset builder, or a modified version of `ImageFolder`
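A minimal sketch of the label parsing such a loader needs, assuming the standard YOLO layout (one `.txt` file per image, each line holding `class_id x_center y_center width height` with coordinates normalized to [0, 1]); the function name is illustrative, not part of any existing API:
```python
from pathlib import Path

def parse_yolo_labels(label_path):
    """Parse one YOLO-format annotation file into a list of box dicts."""
    objects = []
    for line in Path(label_path).read_text().splitlines():
        if not line.strip():
            continue
        class_id, x_center, y_center, width, height = line.split()
        objects.append({
            "class_id": int(class_id),
            # Coordinates are normalized relative to the image size.
            "bbox": [float(x_center), float(y_center), float(width), float(height)],
        })
    return objects
```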
**Additional context**
I would be happy to contribute to this, but I would do it at a very slow pace (maybe a month or two) as I have my exams approaching 😄
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4618/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4612
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4612/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4612/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4612/events
|
https://github.com/huggingface/datasets/issues/4612
| 1,290,984,660
|
I_kwDODunzps5M8tzU
| 4,612
|
Release 2.3.0 broke custom iterable datasets
|
{
"login": "aapot",
"id": 19529125,
"node_id": "MDQ6VXNlcjE5NTI5MTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/19529125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aapot",
"html_url": "https://github.com/aapot",
"followers_url": "https://api.github.com/users/aapot/followers",
"following_url": "https://api.github.com/users/aapot/following{/other_user}",
"gists_url": "https://api.github.com/users/aapot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aapot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aapot/subscriptions",
"organizations_url": "https://api.github.com/users/aapot/orgs",
"repos_url": "https://api.github.com/users/aapot/repos",
"events_url": "https://api.github.com/users/aapot/events{/privacy}",
"received_events_url": "https://api.github.com/users/aapot/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] |
[
"Apparently, `fsspec` does not allow access to attribute-based modules anymore, such as `fsspec.async`.\r\n\r\nHowever, this is a fairly simple fix:\r\n- Change the import to: `from fsspec import asyn`;\r\n- Change line 18 to: `asyn.iothread[0] = None`;\r\n- Change line 19 to `asyn.loop[0] = None`.",
"Hi! I think it's easier to replace `import fsspec` with `import fsspec.asyn` and leave the rest unchanged. @gugarosa Are you interested in submitting a PR?",
"Perfect, it is even better!\r\n\r\nJust submitted the PR: #4630.\r\n\r\nThank you!"
] | 1,656,657,967,000
| 1,657,033,701,000
| 1,657,033,701,000
|
NONE
| null | null |
## Describe the bug
Trying to iterate over examples from a custom iterable dataset fails due to a bug introduced in `torch_iterable_dataset.py` in release 2.3.0.
## Steps to reproduce the bug
```python
next(iter(custom_iterable_dataset))
```
## Expected results
`next(iter(custom_iterable_dataset))` should return examples from the dataset
## Actual results
```
/usr/local/lib/python3.7/dist-packages/datasets/formatting/dataset_wrappers/torch_iterable_dataset.py in _set_fsspec_for_multiprocess()
16 See https://github.com/fsspec/gcsfs/issues/379
17 """
---> 18 fsspec.asyn.iothread[0] = None
19 fsspec.asyn.loop[0] = None
20
AttributeError: module 'fsspec' has no attribute 'asyn'
```
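A minimal sketch of the fix discussed in the comments — importing the submodule explicitly so the attribute access succeeds (assuming the goal is only to reset fsspec's shared event-loop state for multiprocessing):
```python
import fsspec.asyn  # importing the submodule makes `fsspec.asyn` resolvable

def _set_fsspec_for_multiprocess() -> None:
    # Reset fsspec's shared IO thread and event loop so that each worker
    # process creates its own (see https://github.com/fsspec/gcsfs/issues/379).
    fsspec.asyn.iothread[0] = None
    fsspec.asyn.loop[0] = None
```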
## Environment info
- `datasets` version: 2.3.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 8.0.0
- Pandas version: 1.3.5
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4612/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4612/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4610
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4610/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4610/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4610/events
|
https://github.com/huggingface/datasets/issues/4610
| 1,290,603,827
|
I_kwDODunzps5M7Q0z
| 4,610
|
codeparrot/github-code failing to load
|
{
"login": "PyDataBlog",
"id": 29863388,
"node_id": "MDQ6VXNlcjI5ODYzMzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/29863388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PyDataBlog",
"html_url": "https://github.com/PyDataBlog",
"followers_url": "https://api.github.com/users/PyDataBlog/followers",
"following_url": "https://api.github.com/users/PyDataBlog/following{/other_user}",
"gists_url": "https://api.github.com/users/PyDataBlog/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PyDataBlog/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PyDataBlog/subscriptions",
"organizations_url": "https://api.github.com/users/PyDataBlog/orgs",
"repos_url": "https://api.github.com/users/PyDataBlog/repos",
"events_url": "https://api.github.com/users/PyDataBlog/events{/privacy}",
"received_events_url": "https://api.github.com/users/PyDataBlog/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] |
[
"I believe the issue is in `codeparrot/github-code`. `base_path` param is missing - https://huggingface.co/datasets/codeparrot/github-code/blob/main/github-code.py#L169\r\n\r\nFunction definition has changed.\r\nhttps://github.com/huggingface/datasets/blob/0e1c629cfb9f9ba124537ba294a0ec451584da5f/src/datasets/data_files.py#L547\r\n\r\n@mariosasko could you please confirm my finding? And are there any changes that need to be done from my side?",
"Good catch ! We recently did a breaking change in `get_patterns_in_dataset_repository`, I think we can revert it",
"> Good catch ! We recently did a breaking change in `get_patterns_in_dataset_repository`, I think we can revert it\n\nI can't wait for that releasee. Broke my application",
"This simple workaround should fix: https://huggingface.co/datasets/codeparrot/github-code/discussions/2\r\n\r\n`get_patterns_in_dataset_repository` can treat whether `base_path=None`, so we just need to make sure that codeparrot/github-code `_split_generators` calls with such an argument.",
"I am afraid your suggested change @gugarosa will break compatibility with older datasets versions that don't have `base_path` argument in `get_patterns_in_dataset_repository`, as a workaround while the issue gets resolved in `datasets` can you downgrade your datasets version to `<=2.1.0` ? \r\n@lvwerra do you think we should adapt the script to check the datasets version before calling `get_patterns_in_dataset_repository`?",
"Actually I think it's just simpler to fix it in the dataset itself, let me open a PR\r\n\r\nEDIT: PR opened here: https://huggingface.co/datasets/codeparrot/github-code/discussions/3",
"PR is merged, it's working now ! Closing this one :)",
"> I am afraid your suggested change @gugarosa will break compatibility with older datasets versions that don't have `base_path` argument in `get_patterns_in_dataset_repository`, as a workaround while the issue gets resolved in `datasets` can you downgrade your datasets version to `<=2.1.0` ?\r\n> @lvwerra do you think we should adapt the script to check the datasets version before calling `get_patterns_in_dataset_repository`?\r\n\r\nYou are definitely right, sorry about it. I always keep forgetting that we need to keep in mind users from past versions, my bad."
] | 1,656,620,688,000
| 1,657,031,053,000
| 1,657,012,796,000
|
NONE
| null | null |
## Describe the bug
codeparrot/github-code fails to load with a `TypeError: get_patterns_in_dataset_repository() missing 1 required positional argument: 'base_path'`
## Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = load_dataset("codeparrot/github-code")
```
## Expected results
loaded dataset object
## Actual results
```python
[3]: dataset = load_dataset("codeparrot/github-code")
No config specified, defaulting to: github-code/all-all
Downloading and preparing dataset github-code/all-all to /home/bebr/.cache/huggingface/datasets/codeparrot___github-code/all-all/0.0.0/a55513bc0f81db773f9896c7aac225af0cff5b323bb9d2f68124f0a8cc3fb817...
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [3], in <cell line: 1>()
----> 1 dataset = load_dataset("codeparrot/github-code")
File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/load.py:1679, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1676 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1678 # Download and prepare data
-> 1679 builder_instance.download_and_prepare(
1680 download_config=download_config,
1681 download_mode=download_mode,
1682 ignore_verifications=ignore_verifications,
1683 try_from_hf_gcs=try_from_hf_gcs,
1684 use_auth_token=use_auth_token,
1685 )
1687 # Build dataset for splits
1688 keep_in_memory = (
1689 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1690 )
File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
702 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
703 if not downloaded_from_gcs:
--> 704 self._download_and_prepare(
705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
706 )
707 # Sync info
708 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/builder.py:1221, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos)
1220 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1221 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/builder.py:771, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
769 split_dict = SplitDict(dataset_name=self.name)
770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
773 # Checksums verification
774 if verify_infos and dl_manager.record_checksums:
File ~/.cache/huggingface/modules/datasets_modules/datasets/codeparrot--github-code/a55513bc0f81db773f9896c7aac225af0cff5b323bb9d2f68124f0a8cc3fb817/github-code.py:169, in GithubCode._split_generators(self, dl_manager)
162 def _split_generators(self, dl_manager):
164 hfh_dataset_info = HfApi(datasets.config.HF_ENDPOINT).dataset_info(
165 _REPO_NAME,
166 timeout=100.0,
167 )
--> 169 patterns = datasets.data_files.get_patterns_in_dataset_repository(hfh_dataset_info)
170 data_files = datasets.data_files.DataFilesDict.from_hf_repo(
171 patterns,
172 dataset_info=hfh_dataset_info,
173 )
175 files = dl_manager.download_and_extract(data_files["train"])
TypeError: get_patterns_in_dataset_repository() missing 1 required positional argument: 'base_path'
```
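A minimal sketch of the workaround discussed in the comments, mirroring the fix later merged into the dataset script: pass the newly required `base_path` argument explicitly (`None` is accepted):
```python
import datasets
from huggingface_hub import HfApi

hfh_dataset_info = HfApi(datasets.config.HF_ENDPOINT).dataset_info(
    "codeparrot/github-code", timeout=100.0
)
# Newer versions of `datasets` require `base_path`; `None` is a valid value here.
patterns = datasets.data_files.get_patterns_in_dataset_repository(
    hfh_dataset_info, base_path=None
)
```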
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.18.7-arch1-1-x86_64-with-glibc2.35
- Python version: 3.10.5
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4610/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4609
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4609/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4609/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4609/events
|
https://github.com/huggingface/datasets/issues/4609
| 1,290,392,083
|
I_kwDODunzps5M6dIT
| 4,609
|
librispeech dataset has to download whole subset when specifying the split to use
|
{
"login": "sunhaozhepy",
"id": 73462159,
"node_id": "MDQ6VXNlcjczNDYyMTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/73462159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunhaozhepy",
"html_url": "https://github.com/sunhaozhepy",
"followers_url": "https://api.github.com/users/sunhaozhepy/followers",
"following_url": "https://api.github.com/users/sunhaozhepy/following{/other_user}",
"gists_url": "https://api.github.com/users/sunhaozhepy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sunhaozhepy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sunhaozhepy/subscriptions",
"organizations_url": "https://api.github.com/users/sunhaozhepy/orgs",
"repos_url": "https://api.github.com/users/sunhaozhepy/repos",
"events_url": "https://api.github.com/users/sunhaozhepy/events{/privacy}",
"received_events_url": "https://api.github.com/users/sunhaozhepy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] |
[
"Hi! You can use streaming to fetch only a subset of the data:\r\n```python\r\nraw_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"train.100\", streaming=True)\r\n```\r\nAlso, we plan to make it possible to download a particular split in the non-streaming mode, but this task is not easy due to how our dataset scripts are structured.",
"Hi,\r\n\r\nThat's a great help. Thank you very much."
] | 1,656,607,104,000
| 1,657,662,272,000
| 1,657,662,272,000
|
NONE
| null | null |
## Describe the bug
The librispeech dataset downloads the whole subset even when only one split is specified.
## Steps to reproduce the bug
see below
# Sample code to reproduce the bug
```
!pip install datasets
from datasets import load_dataset
raw_dataset = load_dataset("librispeech_asr", "clean", split="train.100")
```
## Expected results
The split "train.clean.100" is downloaded.
## Actual results
All four splits in the "clean" subset are downloaded.
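A minimal sketch of the streaming workaround suggested in the comments, which fetches examples on the fly instead of downloading every split:
```python
from datasets import load_dataset

raw_dataset = load_dataset(
    "librispeech_asr", "clean", split="train.100", streaming=True
)
print(next(iter(raw_dataset)))  # yields one example without a full download
```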
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4609/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4606
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4606/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4606/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4606/events
|
https://github.com/huggingface/datasets/issues/4606
| 1,290,083,534
|
I_kwDODunzps5M5RzO
| 4,606
|
evaluation result changes after `datasets` version change
|
{
"login": "thnkinbtfly",
"id": 70014488,
"node_id": "MDQ6VXNlcjcwMDE0NDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/70014488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thnkinbtfly",
"html_url": "https://github.com/thnkinbtfly",
"followers_url": "https://api.github.com/users/thnkinbtfly/followers",
"following_url": "https://api.github.com/users/thnkinbtfly/following{/other_user}",
"gists_url": "https://api.github.com/users/thnkinbtfly/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thnkinbtfly/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thnkinbtfly/subscriptions",
"organizations_url": "https://api.github.com/users/thnkinbtfly/orgs",
"repos_url": "https://api.github.com/users/thnkinbtfly/repos",
"events_url": "https://api.github.com/users/thnkinbtfly/events{/privacy}",
"received_events_url": "https://api.github.com/users/thnkinbtfly/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] |
[
"Hi! The GH/no-namespace datasets versioning is synced with the version of the `datasets` lib, which means that the `wikiann` script was modified between the two compared versions. In this scenario, you can ensure reproducibility by pinning the script version, which is done by passing `revision=\"x.y.z\"` (e.g. `revision=\"2.2.0\"`) to `load_dataset.`\r\n"
] | 1,656,593,006,000
| 1,656,956,852,000
| null |
NONE
| null | null |
## Describe the bug
evaluation result changes after `datasets` version change
## Steps to reproduce the bug
1. Train a model on WikiAnn
2. reload the checkpoint -> test accuracy becomes the same as eval accuracy
3. such behavior is gone after downgrading `datasets`
https://colab.research.google.com/drive/1kYz7-aZRGdayaq-gDTt30tyEgsKlpYOw?usp=sharing
## Expected results
evaluation result shouldn't change before/after `datasets` version changes
## Actual results
evaluation result changes before/after `datasets` version changes
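A minimal sketch of the reproducibility fix suggested in the comments — pinning the dataset script to a fixed revision; the `"en"` config name here is illustrative:
```python
from datasets import load_dataset

# Pin the dataset script to the version shipped with datasets 2.2.0 so that
# results stay stable across library upgrades.
dataset = load_dataset("wikiann", "en", revision="2.2.0")
```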
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: colab
- Python version: 3.7.13
- PyArrow version: 6.0.1
Q. How could the evaluation result change before/after `datasets` version changes?
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4606/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4605
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4605/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4605/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4605/events
|
https://github.com/huggingface/datasets/issues/4605
| 1,290,058,970
|
I_kwDODunzps5M5Lza
| 4,605
|
Dataset Viewer issue for boris/gis_filtered
|
{
"login": "WaterKnight1998",
"id": 41203448,
"node_id": "MDQ6VXNlcjQxMjAzNDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/41203448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WaterKnight1998",
"html_url": "https://github.com/WaterKnight1998",
"followers_url": "https://api.github.com/users/WaterKnight1998/followers",
"following_url": "https://api.github.com/users/WaterKnight1998/following{/other_user}",
"gists_url": "https://api.github.com/users/WaterKnight1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WaterKnight1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WaterKnight1998/subscriptions",
"organizations_url": "https://api.github.com/users/WaterKnight1998/orgs",
"repos_url": "https://api.github.com/users/WaterKnight1998/repos",
"events_url": "https://api.github.com/users/WaterKnight1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/WaterKnight1998/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Yes, this dataset is \"gated\": you first have to go to https://huggingface.co/datasets/boris/gis_filtered and click \"Access repository\" (if you accept to share your contact information with the repository authors).",
"I already did that, it returns error when using streaming",
"Oh, sorry, I misread. Looking at it. Maybe @huggingface/datasets or @SBrandeis ",
"I could reproduce the error, even though I provided my token and accepted the gate form. It looks like an error from `datasets`",
"This is indeed a bug in `datasets`. Parquet datasets in gated/private repositories can't be streamed properly, which caused the viewer to fail. I opened a PR at https://github.com/huggingface/datasets/pull/4608"
] | 1,656,591,814,000
| 1,657,110,859,000
| 1,657,110,859,000
|
NONE
| null | null |
### Link
https://huggingface.co/datasets/boris/gis_filtered/viewer/boris--gis_filtered/train
### Description
When I try to access this from the website I get this error:
Status code: 400
Exception: ClientResponseError
Message: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/boris/gis_filtered/resolve/80b805053ce61d4eb487b6b8d9095d775c2c466e/data/train/0000.parquet')
If I try to load with code I also get the same issue:
```python
dataset2_train=load_dataset("boris/gis_filtered", use_auth_token=os.environ["HF_TOKEN"],split="train",streaming=True)
dataset2_validation=load_dataset("boris/gis_filtered", use_auth_token=os.environ["HF_TOKEN"], split="validation",streaming=True)
```
### Owner
No
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4605/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4605/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4603
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4603/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4603/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4603/events
|
https://github.com/huggingface/datasets/issues/4603
| 1,289,963,331
|
I_kwDODunzps5M40dD
| 4,603
|
CI fails recurrently and randomly on Windows
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] |
[] | 1,656,586,798,000
| 1,656,595,345,000
| 1,656,595,345,000
|
MEMBER
| null | null |
As reported by @lhoestq,
The windows CI is currently flaky: some dependencies like `aiobotocore`, `multiprocess` and `seqeval` sometimes fail to install.
In particular, it seems that building the wheels fails. Here is an example of the logs:
```
Building wheel for seqeval (setup.py): started
Running command 'C:\tools\miniconda3\envs\py37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"'; __file__='"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6'
No parent package detected, impossible to derive `name`
running bdist_wheel
running build
running build_py
package init file 'seqeval\__init__.py' not found (or not a regular file)
package init file 'seqeval\metrics\__init__.py' not found (or not a regular file)
C:\tools\miniconda3\envs\py37\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
setuptools.SetuptoolsDeprecationWarning,
installing to build\bdist.win-amd64\wheel
running install
running install_lib
warning: install_lib: 'build\lib' does not exist -- no Python modules to install
running install_egg_info
running egg_info
creating UNKNOWN.egg-info
writing UNKNOWN.egg-info\PKG-INFO
writing dependency_links to UNKNOWN.egg-info\dependency_links.txt
writing top-level names to UNKNOWN.egg-info\top_level.txt
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
reading manifest file 'UNKNOWN.egg-info\SOURCES.txt'
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
Copying UNKNOWN.egg-info to build\bdist.win-amd64\wheel\.\UNKNOWN-0.0.0-py3.7.egg-info
running install_scripts
creating build\bdist.win-amd64\wheel\UNKNOWN-0.0.0.dist-info\WHEEL
creating 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6\UNKNOWN-0.0.0-py3-none-any.whl' and adding 'build\bdist.win-amd64\wheel' to it
adding 'UNKNOWN-0.0.0.dist-info/METADATA'
adding 'UNKNOWN-0.0.0.dist-info/WHEEL'
adding 'UNKNOWN-0.0.0.dist-info/top_level.txt'
adding 'UNKNOWN-0.0.0.dist-info/RECORD'
removing build\bdist.win-amd64\wheel
Building wheel for seqeval (setup.py): finished with status 'done'
Created wheel for seqeval: filename=UNKNOWN-0.0.0-py3-none-any.whl size=963 sha256=67eb93a6e1ff4796c5882a13f9fa25bb0d3d103796e2525f9cecf3b2ef26d4b1
Stored in directory: c:\users\circleci\appdata\local\pip\cache\wheels\05\96\ee\7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7
WARNING: Built wheel for seqeval is invalid: Wheel has unexpected file name: expected 'seqeval', got 'UNKNOWN'
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4603/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4597
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4597/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4597/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4597/events
|
https://github.com/huggingface/datasets/issues/4597
| 1,288,672,007
|
I_kwDODunzps5Mz5MH
| 4,597
|
Streaming issue for financial_phrasebank
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 4069435429,
"node_id": "LA_kwDODunzps7yjqgl",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hosted-on-google-drive",
"name": "hosted-on-google-drive",
"color": "8B51EF",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @huggingface/datasets: it seems like https://www.researchgate.net/ is flaky for datasets hosting (I put the \"hosted-on-google-drive\" tag since it's the same kind of issue I think)",
"Let's see if their license allows hosting their data on the Hub.",
"License is Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0).\r\n\r\nWe can host their data on the Hub."
] | 1,656,506,743,000
| 1,656,667,776,000
| 1,656,667,776,000
|
MEMBER
| null | null |
### Link
https://huggingface.co/datasets/financial_phrasebank/viewer/sentences_allagree/train
### Description
As reported by a community member using [AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/5#62bc217436d0e5d316a768f0), there seems to be a problem streaming this dataset:
```
Server error
Status code: 400
Exception: Exception
Message: Give up after 5 attempts with ConnectionError
```
### Owner
No
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4597/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4596
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4596/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4596/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4596/events
|
https://github.com/huggingface/datasets/issues/4596
| 1,288,381,735
|
I_kwDODunzps5MyyUn
| 4,596
|
Dataset Viewer issue for universal_dependencies
|
{
"login": "Jordy-VL",
"id": 16034009,
"node_id": "MDQ6VXNlcjE2MDM0MDA5",
"avatar_url": "https://avatars.githubusercontent.com/u/16034009?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jordy-VL",
"html_url": "https://github.com/Jordy-VL",
"followers_url": "https://api.github.com/users/Jordy-VL/followers",
"following_url": "https://api.github.com/users/Jordy-VL/following{/other_user}",
"gists_url": "https://api.github.com/users/Jordy-VL/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jordy-VL/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jordy-VL/subscriptions",
"organizations_url": "https://api.github.com/users/Jordy-VL/orgs",
"repos_url": "https://api.github.com/users/Jordy-VL/repos",
"events_url": "https://api.github.com/users/Jordy-VL/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jordy-VL/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false
|
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thanks, looking at it!",
"Finally fixed! We updated the dataset viewer and it fixed the issue.\r\n\r\nhttps://huggingface.co/datasets/universal_dependencies/viewer/aqz_tudet/train\r\n\r\n<img width=\"1561\" alt=\"Capture d’écran 2022-09-07 à 13 29 18\" src=\"https://user-images.githubusercontent.com/1676121/188867795-4f7dd438-d4f2-46cd-8a92-20a37fb2d6bc.png\">\r\n"
] | 1,656,492,629,000
| 1,662,550,168,000
| 1,662,550,167,000
|
NONE
| null | null |
### Link
https://huggingface.co/datasets/universal_dependencies
### Description
invalid json response body at https://datasets-server.huggingface.co/splits?dataset=universal_dependencies reason: Unexpected token I in JSON at position 0
### Owner
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4596/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4596/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4595
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4595/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4595/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4595/events
|
https://github.com/huggingface/datasets/issues/4595
| 1,288,275,976
|
I_kwDODunzps5MyYgI
| 4,595
|
Dataset Viewer issue with False positive PII redaction
|
{
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The value is in the data, it's not an issue with the \"dataset-viewer\".\r\n\r\n<img width=\"1161\" alt=\"Capture d’écran 2022-06-29 à 10 25 51\" src=\"https://user-images.githubusercontent.com/1676121/176389325-4d2a9a7f-1583-45b8-aa7a-960ffaa6a36a.png\">\r\n\r\n Maybe open a PR: https://huggingface.co/datasets/cakiki/rosetta-code/discussions\r\n",
"This was indeed a scraping issue which I assumed was a display issue; sorry about that!"
] | 1,656,486,957,000
| 1,656,491,381,000
| 1,656,491,269,000
|
CONTRIBUTOR
| null | null |
### Link
https://huggingface.co/datasets/cakiki/rosetta-code
### Description
Hello, I just noticed an entry being redacted that shouldn't have been:
`RootMeanSquare@Range[10]` is being displayed as `[email protected][10]`
### Owner
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4595/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4594
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4594/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4594/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4594/events
|
https://github.com/huggingface/datasets/issues/4594
| 1,288,070,023
|
I_kwDODunzps5MxmOH
| 4,594
|
load_from_disk suggests incorrect fix when used to load DatasetDict
|
{
"login": "dvsth",
"id": 11157811,
"node_id": "MDQ6VXNlcjExMTU3ODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/11157811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dvsth",
"html_url": "https://github.com/dvsth",
"followers_url": "https://api.github.com/users/dvsth/followers",
"following_url": "https://api.github.com/users/dvsth/following{/other_user}",
"gists_url": "https://api.github.com/users/dvsth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dvsth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dvsth/subscriptions",
"organizations_url": "https://api.github.com/users/dvsth/orgs",
"repos_url": "https://api.github.com/users/dvsth/repos",
"events_url": "https://api.github.com/users/dvsth/events{/privacy}",
"received_events_url": "https://api.github.com/users/dvsth/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] |
[] | 1,656,466,801,000
| 1,656,475,424,000
| 1,656,475,424,000
|
NONE
| null | null |
Edit: Please feel free to remove this issue. The problem was not the error message but the fact that `DatasetDict.load_from_disk` does not support loading nested splits, i.e. when one of the splits is itself a `DatasetDict`. If nesting splits is an antipattern, perhaps `load_from_disk` could raise a warning indicating that?
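A minimal sketch of the nesting pattern described above (data and split names are illustrative); `DatasetDict.load_from_disk` expects every split to be a plain `Dataset`, so this cannot be round-tripped:
```python
from datasets import Dataset, DatasetDict

inner = DatasetDict({
    "a": Dataset.from_dict({"x": [1]}),
    "b": Dataset.from_dict({"x": [2]}),
})
outer = DatasetDict({
    "train": Dataset.from_dict({"x": [0]}),
    "eval": inner,  # a "split" that is itself a DatasetDict
})

outer.save_to_disk("nested_dataset")
# DatasetDict.load_from_disk("nested_dataset") then fails, because nested
# splits are not supported when loading.
```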
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4594/timeline
|
not_planned
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4592
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4592/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4592/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4592/events
|
https://github.com/huggingface/datasets/issues/4592
| 1,288,029,377
|
I_kwDODunzps5MxcTB
| 4,592
|
Issue with jalFaizy/detect_chess_pieces when running datasets-cli test
|
{
"login": "faizankshaikh",
"id": 8406903,
"node_id": "MDQ6VXNlcjg0MDY5MDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8406903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faizankshaikh",
"html_url": "https://github.com/faizankshaikh",
"followers_url": "https://api.github.com/users/faizankshaikh/followers",
"following_url": "https://api.github.com/users/faizankshaikh/following{/other_user}",
"gists_url": "https://api.github.com/users/faizankshaikh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faizankshaikh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faizankshaikh/subscriptions",
"organizations_url": "https://api.github.com/users/faizankshaikh/orgs",
"repos_url": "https://api.github.com/users/faizankshaikh/repos",
"events_url": "https://api.github.com/users/faizankshaikh/events{/privacy}",
"received_events_url": "https://api.github.com/users/faizankshaikh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @faizankshaikh\r\n\r\nPlease note that we have recently launched the Community feature, specifically targeted to create Discussions (about issues/questions/asking-for-help) on each Dataset on the Hub:\r\n- Blog post: https://huggingface.co/blog/community-update\r\n- Docs: https://huggingface.co/docs/hub/repositories-pull-requests-discussions\r\n\r\nThe Discussion tab for your \"jalFaizy/detect_chess_pieces\" dataset is here: https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/discussions\r\nYou can use it to ask for help by pinging the Datasets maintainers: see our docs here: https://huggingface.co/docs/datasets/master/en/share#ask-for-a-help-and-reviews\r\n\r\nI'm transferring this discussion to your Discussion tab and trying to address it: https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/discussions/1",
"Thank you @albertvillanova , I will keep that in mind.\r\n\r\nJust a quick note - I posted the issue on Github because the dataset viewer suggested me to \"open an issue for direct support\". Maybe it can be updated with your suggestion\r\n\r\n\r\n\r\n\r\n",
"Thank you pointing this out: yes, definitely, we should fix the error message. We are working on this."
] | 1,656,461,754,000
| 1,656,498,603,000
| 1,656,488,967,000
|
NONE
| null | null |
### Link
https://huggingface.co/datasets/jalFaizy/detect_chess_pieces
### Description
I am trying to write an appropriate data loader for [a custom dataset](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces) using [this script](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/blob/main/detect_chess_pieces.py).
When I run the command
`$ datasets-cli test "D:\workspace\HF\detect_chess_pieces" --save_infos --all_configs`
It gives the following error
```
Using custom data configuration default
Traceback (most recent call last):
File "c:\users\faiza\anaconda3\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\users\faiza\anaconda3\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\faiza\anaconda3\Scripts\datasets-cli.exe\__main__.py", line 7, in <module>
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\commands\datasets_cli.py", line 39, in main
service.run()
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\commands\test.py", line 132, in run
for j, builder in enumerate(get_builders()):
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\commands\test.py", line 125, in get_builders
yield builder_cls(
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 1148, in __init__
super().__init__(*args, **kwargs)
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 306, in __init__
info = self.get_exported_dataset_info()
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 405, in get_exported_dataset_info
return self.get_all_exported_dataset_infos().get(self.config.name, DatasetInfo())
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 390, in get_all_exported_dataset_infos
return DatasetInfosDict.from_directory(cls.get_imported_module_dir())
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 309, in from_directory
dataset_infos_dict = {
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 310, in <dictcomp>
config_name: DatasetInfo.from_dict(dataset_info_dict)
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 272, in from_dict
return cls(**{k: v for k, v in dataset_info_dict.items() if k in field_names})
File "<string>", line 20, in __init__
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 160, in __post_init__
templates = [
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 161, in <listcomp>
template if isinstance(template, TaskTemplate) else task_template_from_dict(template)
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\tasks\__init__.py", line 43, in task_template_from_dict
return template.from_dict(task_template_dict)
AttributeError: 'NoneType' object has no attribute 'from_dict'
```
My assumption is that there is some kind of issue in how the "task_templates" are read, because the same error occurs even if I set them to None or omit the argument entirely.
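A minimal sketch of what the traceback points at, simplified from `datasets/tasks/__init__.py` (illustrative, not verbatim): an unrecognized `task` name makes the registry lookup return `None`, and the following `from_dict` call then raises the `AttributeError`:
```python
# Simplified, assumed shape of the lookup in datasets/tasks/__init__.py.
def task_template_from_dict(task_template_dict):
    task_name = task_template_dict.get("task")
    template = NAME2TEMPLATE.get(task_name)  # None for unknown task names
    return template.from_dict(task_template_dict)  # AttributeError if template is None
```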
### Owner
Yes
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4592/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4591
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4591/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4591/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4591/events
|
https://github.com/huggingface/datasets/issues/4591
| 1,288,021,332
|
I_kwDODunzps5MxaVU
| 4,591
|
Can't push Images to hub with manual Dataset
|
{
"login": "cceyda",
"id": 15624271,
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cceyda",
"html_url": "https://github.com/cceyda",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"repos_url": "https://api.github.com/users/cceyda/repos",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi, thanks for reporting! This issue stems from the changes introduced in https://github.com/huggingface/datasets/pull/4282 (cc @lhoestq), in which list casts are ignored if they don't change the list type (required to preserve `null` values). And `push_to_hub` does a special cast to embed external image files but doesn't change the types, hence the failure."
] | 1,656,460,883,000
| 1,657,281,696,000
| 1,657,281,695,000
|
CONTRIBUTOR
| null | null |
## Describe the bug
If I create a dataset that includes an 'Image' feature manually, the decoded images are not pushed when pushing to the Hub;
instead, loading afterwards looks for each image at the local path where it is (or used to be).
This doesn't (or at least didn't use to) happen with imagefolder. I want to build the dataset manually because it is complicated.
This happens even though the dataset looks like it holds decoded images:

and I use `embed_external_files=True` when calling `push_to_hub` (same result with `False`)
## Steps to reproduce the bug
```python
from PIL import Image
from datasets import Image as ImageFeature
from datasets import Features, Dataset, load_dataset
#manually create dataset
feats = Features(
    {
        "images": [ImageFeature()],  # same even if explicitly ImageFeature(decode=True)
        "input_image": ImageFeature(),
    }
)
test_data = {
    "images": [[Image.open("test.jpg"), Image.open("test.jpg"), Image.open("test.jpg")]],
    "input_image": [Image.open("test.jpg")],
}
test_dataset = Dataset.from_dict(test_data, features=feats)
print(test_dataset)
test_dataset.push_to_hub("ceyda/image_test_public",private=False,token="",embed_external_files=True)
# clear the cache: rm -r ~/.cache/huggingface
# remove "test.jpg" to confirm that loading then looks for the image at the local path
test_dataset=load_dataset("ceyda/image_test_public",use_auth_token="")
print(test_dataset)
print(test_dataset['train'][0])
```
## Expected results
Should be able to push the image bytes if the dataset has `Image(decode=True)`.
## Actual results
Errors out because it tries to decode the file from the no-longer-existing local path.
```
----> print(test_dataset['train'][0])
File ~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py:2154, in Dataset.__getitem__(self, key)
2152 def __getitem__(self, key): # noqa: F811
2153 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2154 return self._getitem(
2155 key,
2156 )
File ~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py:2139, in Dataset._getitem(self, key, decoded, **kwargs)
2137 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs)
2138 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2139 formatted_output = format_table(
2140 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2141 )
2142 return formatted_output
File ~/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
...
-> 3068 fp = builtins.open(filename, "rb")
3069 exclusive_fp = True
3071 try:
FileNotFoundError: [Errno 2] No such file or directory: 'test.jpg'
```
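A possible interim workaround (a hedged sketch, not an official recommendation) is to store the raw image bytes in the examples yourself, so nothing refers back to a local path; the `Image` feature accepts `{"bytes": ..., "path": ...}` dicts:
```python
from datasets import Dataset, Features
from datasets import Image as ImageFeature

# Read the bytes up front so the pushed dataset carries them directly
# instead of a reference to "test.jpg" on the local filesystem.
with open("test.jpg", "rb") as f:
    img_bytes = f.read()

feats = Features({"input_image": ImageFeature()})
ds = Dataset.from_dict(
    {"input_image": [{"bytes": img_bytes, "path": None}]},
    features=feats,
)
ds.push_to_hub("ceyda/image_test_public", embed_external_files=True)
```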
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.4.0-1074-azure-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4591/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4589
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4589/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4589/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4589/events
|
https://github.com/huggingface/datasets/issues/4589
| 1,287,600,029
|
I_kwDODunzps5Mvzed
| 4,589
|
Permission denied: '/home/.cache' when load_dataset with local script
|
{
"login": "jiangh0",
"id": 24559732,
"node_id": "MDQ6VXNlcjI0NTU5NzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/24559732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangh0",
"html_url": "https://github.com/jiangh0",
"followers_url": "https://api.github.com/users/jiangh0/followers",
"following_url": "https://api.github.com/users/jiangh0/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangh0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangh0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangh0/subscriptions",
"organizations_url": "https://api.github.com/users/jiangh0/orgs",
"repos_url": "https://api.github.com/users/jiangh0/repos",
"events_url": "https://api.github.com/users/jiangh0/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangh0/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] |
[] | 1,656,433,563,000
| 1,656,483,988,000
| 1,656,483,908,000
|
NONE
| null | null | null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4589/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4581
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4581/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4581/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4581/events
|
https://github.com/huggingface/datasets/issues/4581
| 1,286,362,907
|
I_kwDODunzps5MrFcb
| 4,581
|
Dataset Viewer issue for pn_summary
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"linked to https://github.com/huggingface/datasets/issues/4580#issuecomment-1168373066?",
"Note that I refreshed twice this dataset, and I still have (another) error on one of the splits\r\n\r\n```\r\nStatus code: 400\r\nException: ClientResponseError\r\nMessage: 403, message='Forbidden', url=URL('https://doc-14-4c-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/pgotjmcuh77q0lk7p44rparfrhv459kp/1656403650000/11771870722949762109/*/16OgJ_OrfzUF_i3ftLjFn9kpcyoi7UJeO?e=download')\r\n```\r\n\r\nLike the three splits are processed in parallel by the workers, I imagine that the Google hosting is rate-limiting us.\r\n\r\ncc @albertvillanova \r\n\r\n",
"Exactly, Google Drive bans our loading scripts.\r\n\r\nWhen possible, we should host somewhere else."
] | 1,656,363,372,000
| 1,656,427,323,000
| 1,656,427,323,000
|
MEMBER
| null | null |
### Link
https://huggingface.co/datasets/pn_summary/viewer/1.0.0/validation
### Description
Getting an index error on the `validation` and `test` splits:
```
Server error
Status code: 400
Exception: IndexError
Message: list index out of range
```
### Owner
No
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4581/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4580
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4580/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4580/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4580/events
|
https://github.com/huggingface/datasets/issues/4580
| 1,286,312,912
|
I_kwDODunzps5Mq5PQ
| 4,580
|
Dataset Viewer issue for multi_news
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thanks for reporting, @lewtun.\r\n\r\nI forced the refreshing of the preview and it worked OK for train and validation splits.\r\n\r\nI guess the error has to do with the data files being hosted at Google Drive: this gives errors when requested automatically using scripts.\r\nWe should host them to fix the error. Let's see if the license allows that.",
"I guess we can host the data: https://github.com/Alex-Fabbri/Multi-News/blob/master/LICENSE.txt"
] | 1,656,361,525,000
| 1,656,425,328,000
| 1,656,425,328,000
|
MEMBER
| null | null |
### Link
https://huggingface.co/datasets/multi_news
### Description
Not sure what the index error is referring to here:
```
Status code: 400
Exception: IndexError
Message: list index out of range
```
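For a rough local approximation of what the viewer does (hedged; the viewer's exact code path differs), streaming the split and pulling the first example exercises the same download step:
```python
from datasets import load_dataset

# Stream the split as the viewer does; with the Google Drive hosting
# mentioned in the comments above, this is where rate-limiting or
# download failures would surface locally.
ds = load_dataset("multi_news", split="train", streaming=True)
print(next(iter(ds)))
```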
### Owner
No
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4580/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4578
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4578/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4578/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4578/events
|
https://github.com/huggingface/datasets/issues/4578
| 1,286,086,400
|
I_kwDODunzps5MqB8A
| 4,578
|
[Multi Configs] Use directories to differentiate between subsets/configurations
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false
| null |
[] |
[
"I want to be able to create folders in a model.",
"How to set new split names, instead of train/test/validation? For example, I have a local dataset, consists of several subsets, named \"A\", \"B\", and \"C\". How can I create a huggingface dataset, with splits A/B/C ?\r\n\r\nThe document in https://huggingface.co/docs/datasets/dataset_script only tells me how to create datasets with subsets that is hosted on another server. How to do it if my datasets are local?",
"> The document in https://huggingface.co/docs/datasets/dataset_script only tells me how to create datasets with subsets that is hosted on another server. How to do it if my datasets are local?\r\n\r\nIt works the same - you just need to use local paths instead of URLs"
] | 1,656,348,911,000
| 1,672,744,721,000
| null |
MEMBER
| null | null |
Currently, to define several subsets/configurations of your dataset, you need to use a dataset script.
However, it would be nice to have a no-code way to do this.
For example, we could specify different configurations of a dataset (e.g., if a dataset contains different languages) with one directory per configuration.
These structures are not supported right now, but would be nice to have:
```
my_dataset_repository/
├── README.md
├── en/
│ ├── train.csv
│ └── test.csv
└── fr/
├── train.csv
└── test.csv
```
Or with one directory per split:
```
my_dataset_repository/
├── README.md
├── en/
│ ├── train/
│ │ ├── shard_0.csv
│ │ └── shard_1.csv
│ └── test/
│ ├── shard_0.csv
│ └── shard_1.csv
└── fr/
├── train/
│ ├── shard_0.csv
│ └── shard_1.csv
└── test/
├── shard_0.csv
└── shard_1.csv
```
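With such a layout, loading a configuration could plausibly look like this (hypothetical usage, since the feature is only proposed here; the subdirectory name doubles as the configuration name):
```python
from datasets import load_dataset

# Hypothetical once directory-based configurations are supported:
en = load_dataset("my_dataset_repository", "en")
fr = load_dataset("my_dataset_repository", "fr")
```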
cc @stevhliu @albertvillanova
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4578/reactions",
"total_count": 10,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 4,
"eyes": 2
}
|
https://api.github.com/repos/huggingface/datasets/issues/4578/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4575
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4575/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4575/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4575/events
|
https://github.com/huggingface/datasets/issues/4575
| 1,285,446,700
|
I_kwDODunzps5Mnlws
| 4,575
|
Problem about wmt17 zh-en dataset
|
{
"login": "winterfell2021",
"id": 85819194,
"node_id": "MDQ6VXNlcjg1ODE5MTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/85819194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/winterfell2021",
"html_url": "https://github.com/winterfell2021",
"followers_url": "https://api.github.com/users/winterfell2021/followers",
"following_url": "https://api.github.com/users/winterfell2021/following{/other_user}",
"gists_url": "https://api.github.com/users/winterfell2021/gists{/gist_id}",
"starred_url": "https://api.github.com/users/winterfell2021/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/winterfell2021/subscriptions",
"organizations_url": "https://api.github.com/users/winterfell2021/orgs",
"repos_url": "https://api.github.com/users/winterfell2021/repos",
"events_url": "https://api.github.com/users/winterfell2021/events{/privacy}",
"received_events_url": "https://api.github.com/users/winterfell2021/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Running into the same error with `wmt17/zh-en`, `wmt18/zh-en` and `wmt19/zh-en`.",
"@albertvillanova @lhoestq Could you take a look at this issue?",
"@winterfell2021 Hi, I wonder where the code you provided should be added. I tried to add them in the `datasets/table.py` in `array_cast` function, however, the 'zh' item is none.",
"I found some 'zh' item is none while 'c[hn]' is not.\r\nSo the code may change to:\r\n```python\r\nif 'c[hn]' in str(array.type):\r\n py_array = array.to_pylist()\r\n data_list = []\r\n for vo in py_array:\r\n tmp = {\r\n 'en': vo['en'],\r\n }\r\n if vo.get('zh'):\r\n tmp['zh'] = vo['zh']\r\n else:\r\n tmp['zh'] = vo['c[hn]']\r\n data_list.append(tmp)\r\n array = pa.array(data_list, type=pa.struct([\r\n pa.field('en', pa.string()),\r\n pa.field('zh', pa.string()),\r\n ]))\r\n```",
"I just pushed a fix, we'll do a new release of `datasets` soon to include this fix. In the meantime you can use the fixed dataset by passing `revision=\"main\"` to `load_dataset`"
] | 1,656,318,942,000
| 1,661,248,862,000
| 1,661,248,821,000
|
NONE
| null | null |
It seems that in the casia2015 subset, some samples look like `{'c[hn]': 'xxx', 'en': 'aa'}` (i.e., the 'zh' key is garbled as 'c[hn]').
So loading the wmt17 zh-en dataset with `data = load_dataset('wmt17', "zh-en")` raises the following exception:
```
Traceback (most recent call last):
File "train.py", line 78, in <module>
data = load_dataset(args.dataset, "zh-en")
File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 1684, in load_dataset
use_auth_token=use_auth_token,
File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 705, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 1221, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 1215, in _prepare_split
num_examples, num_bytes = writer.finalize()
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 533, in finalize
self.write_examples_on_file()
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 410, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 503, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 230, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 198, in __arrow_array__
out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1675, in wrapper
return func(array, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1846, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1675, in wrapper
return func(array, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1756, in array_cast
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}")
TypeError: Couldn't cast array of type
struct<c[hn]: string, en: string, zh: string>
to
struct<en: string, zh: string>
```
So a workaround is to rewrite the offending array manually before the cast:
```python
import pyarrow as pa

# Rename the malformed 'c[hn]' field to 'zh' before casting to struct<en, zh>.
if 'c[hn]' in str(array.type):
    py_array = array.to_pylist()
    data_list = []
    for vo in py_array:
        tmp = {
            'en': vo['en'],
        }
        # Use .get(): per the comment above, 'zh' can be present but None.
        if vo.get('zh'):
            tmp['zh'] = vo['zh']
        else:
            tmp['zh'] = vo['c[hn]']
        data_list.append(tmp)
    array = pa.array(data_list, type=pa.struct([
        pa.field('en', pa.string()),
        pa.field('zh', pa.string()),
    ]))
```
Therefore, a corrected version of the original casia2015 file may need to be uploaded.
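Per the maintainer comment above, a fix was pushed to the repository before the next release, so the interim workaround (assuming the fix is on the default branch) is:
```python
from datasets import load_dataset

# Load the fixed loading script from the main branch until the next
# release of `datasets` includes the fix.
data = load_dataset("wmt17", "zh-en", revision="main")
```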
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4575/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4575/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4572
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4572/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4572/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4572/events
|
https://github.com/huggingface/datasets/issues/4572
| 1,285,022,499
|
I_kwDODunzps5Ml-Mj
| 4,572
|
Dataset Viewer issue for mlsum
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thanks for reporting, @lewtun.\r\n\r\nAfter investigation, it seems that the server https://gitlab.lip6.fr does not allow HTTP Range requests.\r\n\r\nWe are trying to find a workaround..."
] | 1,656,275,057,000
| 1,658,407,201,000
| 1,658,407,201,000
|
MEMBER
| null | null |
### Link
https://huggingface.co/datasets/mlsum/viewer/de/train
### Description
There seems to be a problem with the download/streaming of this dataset:
```
Server error
Status code: 400
Exception: BadZipFile
Message: File is not a zip file
```
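Per the comment above, the suspected root cause is that the hosting server rejects HTTP Range requests, which streaming relies on. A quick hedged check with plain `requests` (the URL is a hypothetical placeholder for the actual MLSUM archive):
```python
import requests

# Hypothetical placeholder; substitute the real MLSUM archive URL.
url = "https://gitlab.lip6.fr/path/to/mlsum_archive.zip"
# 206 Partial Content -> the server honors Range requests;
# plain 200 -> it ignored the Range header (the suspected cause here).
resp = requests.get(url, headers={"Range": "bytes=0-0"}, stream=True)
print(resp.status_code, resp.headers.get("Content-Range"))
```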
### Owner
No
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4572/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4571
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4571/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4571/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4571/events
|
https://github.com/huggingface/datasets/issues/4571
| 1,284,883,289
|
I_kwDODunzps5MlcNZ
| 4,571
|
Dataset Viewer issue for gsarti/flores_101
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Related to https://github.com/huggingface/datasets/issues/4562#issuecomment-1166911751\r\n\r\nI'll assign @albertvillanova ",
"I'm just wondering why we don't have this dataset under:\r\n- the `facebook` namespace\r\n- or the canonical dataset `flores`: why does this only have 2 languages?"
] | 1,656,242,349,000
| 1,662,624,998,000
| null |
MEMBER
| null | null |
### Link
https://huggingface.co/datasets/gsarti/flores_101
### Description
It seems like streaming isn't supported for this dataset:
```
Server Error
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://dl.fbaipublicfiles.com/flores101/dataset/flores101_dataset.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.
```
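The error message itself names the fix: in the loading script, stream the TAR archive with `dl_manager.iter_archive`, which yields `(path, file_object)` pairs. A partial hedged sketch (the builder, feature, and file-matching logic are illustrative, not the actual flores_101 script):
```python
import datasets

_URL = "https://dl.fbaipublicfiles.com/flores101/dataset/flores101_dataset.tar.gz"

class Flores101(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"sentence": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        archive = dl_manager.download(_URL)  # download only; no extraction
        return [
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        # iter_archive streams (member_path, file-like) pairs from the TAR.
        for path, f in files:
            if path.endswith(".dev"):  # illustrative member filter
                for idx, line in enumerate(f):
                    yield f"{path}-{idx}", {"sentence": line.decode("utf-8").strip()}
```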
### Owner
No
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4571/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4570
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4570/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4570/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4570/events
|
https://github.com/huggingface/datasets/issues/4570
| 1,284,846,168
|
I_kwDODunzps5MlTJY
| 4,570
|
Dataset sharding non-contiguous?
|
{
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] |
[
"This was silly; I was sure I'd looked for a `contiguous` argument, and was certain there wasn't one the first time I looked :smile:\r\n\r\nSorry about that.",
"Hi! You can pass `contiguous=True` to `.shard()` get contiguous shards. More info on this and the default behavior can be found in the [docs](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Dataset.shard).\r\n\r\nEDIT: Answered as you closed the thread 😄 ",
"Hahaha I'm sorry; my excuse is: it's Sunday. (Which makes me all the more grateful for your response :smiley: ",
"@mariosasko Sorry for reviving this, but I was curious as to why `contiguous=False` was the default. This might be a personal bias, but I feel that a user would expect the opposite to be the default. :thinking: ",
"This project started as a fork of TFDS, and `contiguous=False` is the default behavior [there](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shard)."
] | 1,656,232,445,000
| 1,656,586,847,000
| 1,656,254,180,000
|
CONTRIBUTOR
| null | null |
## Describe the bug
I'm not sure if this is a bug; it's more likely normal behavior, but I wanted to double-check.
Is it normal that `Dataset.shard` does not produce chunks that, when concatenated, reproduce the original ordering of the sharded dataset?
This might be related to this pull request (https://github.com/huggingface/datasets/pull/4466), but I admit I did not look closely into the changes made.
## Steps to reproduce the bug
```python
import os  # needed for the directory listing at the end

# Assumed import location for this helper (datasets 2.x):
from datasets.utils.py_utils import convert_file_size_to_int

max_shard_size = convert_file_size_to_int('300MB')
dataset_nbytes = dataset.data.nbytes
num_shards = int(dataset_nbytes / max_shard_size) + 1
num_shards = max(num_shards, 1)
print(f"{num_shards=}")
for shard_index in range(num_shards):
    shard = dataset.shard(num_shards=num_shards, index=shard_index)
    shard.to_parquet(f"tokenized/tokenized-{shard_index:03d}.parquet")
os.listdir('tokenized/')
```
## Expected results
I expected the shards to preserve the order of the original dataset's data, e.g., `dataset[10]` being the same as `shard_1[10]`.
## Actual results
Only the first element is the same, i.e. `dataset[0]` is the same as `shard_1[0]`.
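As the comments above point out, `.shard()` defaults to `contiguous=False` (strided shards, a default inherited from TFDS); passing `contiguous=True` gives the expected behavior:
```python
from datasets import Dataset

dataset = Dataset.from_dict({"x": list(range(10))})
# contiguous=True -> block shards; the default (False) interleaves rows.
shards = [dataset.shard(num_shards=2, index=i, contiguous=True) for i in range(2)]
print(shards[0]["x"], shards[1]["x"])  # [0, 1, 2, 3, 4] [5, 6, 7, 8, 9]
```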
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-4.15.0-176-generic-x86_64-with-glibc2.31
- Python version: 3.10.4
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4570/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4569
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4569/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4569/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4569/events
|
https://github.com/huggingface/datasets/issues/4569
| 1,284,833,694
|
I_kwDODunzps5MlQGe
| 4,569
|
Dataset Viewer issue for sst2
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @lewtun, thanks for reporting.\r\n\r\nI have checked locally and refreshed the preview and it seems working smooth now:\r\n```python\r\nIn [8]: ds\r\nOut[8]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 67349\r\n })\r\n validation: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 872\r\n })\r\n test: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 1821\r\n })\r\n})\r\n```\r\n\r\nCould you confirm? ",
"Thanks @albertvillanova - it is indeed working now (not sure what caused the error in the first place). Closing this :)"
] | 1,656,228,774,000
| 1,656,311,868,000
| 1,656,311,868,000
|
MEMBER
| null | null |
### Link
https://huggingface.co/datasets/sst2
### Description
Not sure what is causing this; however, it seems that `load_dataset("sst2")` also hangs (even though it downloads the files without problems):
```
Status code: 400
Exception: Exception
Message: Give up after 5 attempts with ConnectionError
```
### Owner
No
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4569/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4569/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4568
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4568/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4568/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4568/events
|
https://github.com/huggingface/datasets/issues/4568
| 1,284,655,624
|
I_kwDODunzps5MkkoI
| 4,568
|
XNLI cache reload is very slow
|
{
"login": "Muennighoff",
"id": 62820084,
"node_id": "MDQ6VXNlcjYyODIwMDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Muennighoff",
"html_url": "https://github.com/Muennighoff",
"followers_url": "https://api.github.com/users/Muennighoff/followers",
"following_url": "https://api.github.com/users/Muennighoff/following{/other_user}",
"gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions",
"organizations_url": "https://api.github.com/users/Muennighoff/orgs",
"repos_url": "https://api.github.com/users/Muennighoff/repos",
"events_url": "https://api.github.com/users/Muennighoff/events{/privacy}",
"received_events_url": "https://api.github.com/users/Muennighoff/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] |
[
"Hi,\r\nCould you tell us how you are running this code?\r\nI tested on my machine (M1 Mac). And it is running fine both on and off internet.\r\n\r\n<img width=\"1033\" alt=\"Screen Shot 2022-07-03 at 1 32 25 AM\" src=\"https://user-images.githubusercontent.com/8711912/177026364-4ad7cedb-e524-4513-97f7-7961bbb34c90.png\">\r\nTested on both stable and dev version. ",
"Sure, I was running it on a Linux machine.\r\nI found that if I turn the Internet off, it would still try to make a HTTPS call which would slow down the cache loading. If you can't reproduce then we can close the issue.",
"Hi @Muennighoff! You can set the env variable `HF_DATASETS_OFFLINE` to `1` to avoid this behavior in offline mode. More info is available [here](https://huggingface.co/docs/datasets/master/en/loading#offline)."
] | 1,656,175,436,000
| 1,656,944,980,000
| 1,656,944,980,000
|
CONTRIBUTOR
| null | null |
### Reproduce
Using `2.3.3.dev0`:
`from datasets import load_dataset`
`load_dataset("xnli", "en")`
Turn off the Internet.
`load_dataset("xnli", "en")`
I eventually cancelled the second `load_dataset` because it took very long. It would be great to have something like an `only_load_from_cache` option so the library avoids trying to download when there is no Internet. If I leave it running, it does work, but it takes much longer than when there is Internet. I would expect loading from the cache to take the same amount of time regardless of whether there is Internet.
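As the comments above note, the existing switch for this is the `HF_DATASETS_OFFLINE` environment variable, set before `datasets` is imported:
```python
import os

# Force offline mode so load_dataset serves from the local cache and
# skips the HEAD/download attempts that hang without a connection.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

ds = load_dataset("xnli", "en")
```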
```
---------------------------------------------------------------------------
gaierror Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/urllib3/connection.py in _new_conn(self)
174 conn = connection.create_connection(
--> 175 (self._dns_host, self.port), self.timeout, **extra_kw
176 )
/opt/conda/lib/python3.7/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
71
---> 72 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
73 af, socktype, proto, canonname, sa = res
/opt/conda/lib/python3.7/socket.py in getaddrinfo(host, port, family, type, proto, flags)
751 addrlist = []
--> 752 for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
753 af, socktype, proto, canonname, sa = res
gaierror: [Errno -3] Temporary failure in name resolution
During handling of the above exception, another exception occurred:
KeyboardInterrupt Traceback (most recent call last)
/tmp/ipykernel_33/3594208039.py in <module>
----> 1 load_dataset("xnli", "en")
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1673 revision=revision,
1674 use_auth_token=use_auth_token,
-> 1675 **config_kwargs,
1676 )
1677
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1494 download_mode=download_mode,
1495 data_dir=data_dir,
-> 1496 data_files=data_files,
1497 )
1498
/opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1182 download_config=download_config,
1183 download_mode=download_mode,
-> 1184 dynamic_modules_path=dynamic_modules_path,
1185 ).get_module()
1186 elif path.count("/") == 1: # community dataset on the Hub
/opt/conda/lib/python3.7/site-packages/datasets/load.py in __init__(self, name, revision, download_config, download_mode, dynamic_modules_path)
506 self.dynamic_modules_path = dynamic_modules_path
507 assert self.name.count("/") == 0
--> 508 increase_load_count(name, resource_type="dataset")
509
510 def download_loading_script(self, revision: Optional[str]) -> str:
/opt/conda/lib/python3.7/site-packages/datasets/load.py in increase_load_count(name, resource_type)
166 if not config.HF_DATASETS_OFFLINE and config.HF_UPDATE_DOWNLOAD_COUNTS:
167 try:
--> 168 head_hf_s3(name, filename=name + ".py", dataset=(resource_type == "dataset"))
169 except Exception:
170 pass
/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py in head_hf_s3(identifier, filename, use_cdn, dataset, max_retries)
93 return http_head(
94 hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset),
---> 95 max_retries=max_retries,
96 )
97
/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py in http_head(url, proxies, headers, cookies, allow_redirects, timeout, max_retries)
445 allow_redirects=allow_redirects,
446 timeout=timeout,
--> 447 max_retries=max_retries,
448 )
449 return response
/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py in _request_with_retry(method, url, max_retries, base_wait_time, max_wait_time, timeout, **params)
366 tries += 1
367 try:
--> 368 response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)
369 success = True
370 except (requests.exceptions.ConnectTimeout, requests.exceptions.ConnectionError) as err:
/opt/conda/lib/python3.7/site-packages/requests/api.py in request(method, url, **kwargs)
59 # cases, and look like a memory leak in others.
60 with sessions.Session() as session:
---> 61 return session.request(method=method, url=url, **kwargs)
62
63
/opt/conda/lib/python3.7/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
527 }
528 send_kwargs.update(settings)
--> 529 resp = self.send(prep, **send_kwargs)
530
531 return resp
/opt/conda/lib/python3.7/site-packages/requests/sessions.py in send(self, request, **kwargs)
643
644 # Send the request
--> 645 r = adapter.send(request, **kwargs)
646
647 # Total elapsed time of the request (approximately)
/opt/conda/lib/python3.7/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
448 decode_content=False,
449 retries=self.max_retries,
--> 450 timeout=timeout
451 )
452
/opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
708 body=body,
709 headers=headers,
--> 710 chunked=chunked,
711 )
712
/opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
384 # Trigger any extra validation we need to do.
385 try:
--> 386 self._validate_conn(conn)
387 except (SocketTimeout, BaseSSLError) as e:
388 # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
/opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn)
1038 # Force connect early to allow us to validate the connection.
1039 if not getattr(conn, "sock", None): # AppEngine might not have `.sock`
-> 1040 conn.connect()
1041
1042 if not conn.is_verified:
/opt/conda/lib/python3.7/site-packages/urllib3/connection.py in connect(self)
356 def connect(self):
357 # Add certificate verification
--> 358 self.sock = conn = self._new_conn()
359 hostname = self.host
360 tls_in_tls = False
/opt/conda/lib/python3.7/site-packages/urllib3/connection.py in _new_conn(self)
173 try:
174 conn = connection.create_connection(
--> 175 (self._dns_host, self.port), self.timeout, **extra_kw
176 )
177
KeyboardInterrupt:
```
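The interrupted frames above all sit under `increase_load_count`, which the code guards with `if not config.HF_DATASETS_OFFLINE and config.HF_UPDATE_DOWNLOAD_COUNTS:`, so the hang happens in a telemetry HEAD request rather than in the dataset download itself. A minimal workaround sketch, assuming the flag is controlled by a same-named environment variable (an assumption based on the check visible in the frame above):
```python
# Hedged sketch: skip the download-count HEAD request that hangs above.
# The env var name mirrors config.HF_UPDATE_DOWNLOAD_COUNTS and is an
# assumption; it must be set before `datasets` is imported.
import os

os.environ["HF_UPDATE_DOWNLOAD_COUNTS"] = "0"

from datasets import load_dataset

ds = load_dataset("squad")  # placeholder dataset name, not from this issue
```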
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4568/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4566
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4566/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4566/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4566/events
|
https://github.com/huggingface/datasets/issues/4566
| 1,284,397,594
|
I_kwDODunzps5Mjloa
| 4,566
|
Document link #load_dataset_enhancing_performance points to nowhere
|
{
"login": "subercui",
"id": 11674033,
"node_id": "MDQ6VXNlcjExNjc0MDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/11674033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/subercui",
"html_url": "https://github.com/subercui",
"followers_url": "https://api.github.com/users/subercui/followers",
"following_url": "https://api.github.com/users/subercui/following{/other_user}",
"gists_url": "https://api.github.com/users/subercui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/subercui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/subercui/subscriptions",
"organizations_url": "https://api.github.com/users/subercui/orgs",
"repos_url": "https://api.github.com/users/subercui/repos",
"events_url": "https://api.github.com/users/subercui/events{/privacy}",
"received_events_url": "https://api.github.com/users/subercui/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] |
[
"Hi! This is indeed the link the docstring should point to. Are you interested in submitting a PR to fix this?",
"https://github.com/huggingface/datasets/blame/master/docs/source/cache.mdx#L93\r\n\r\nThere seems already an anchor here. Somehow it doesn't work. I am not very familiar with how this online documentation works."
] | 1,656,119,899,000
| 1,674,578,020,000
| 1,674,578,020,000
|
NONE
| null | null |
## Describe the bug

The [load_dataset_enhancing_performance](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#load_dataset_enhancing_performance) link [here](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Dataset.load_from_disk.keep_in_memory) points to nowhere; I guess it should point to https://huggingface.co/docs/datasets/v2.3.2/en/cache#improve-performance?
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4566/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4565
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4565/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4565/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4565/events
|
https://github.com/huggingface/datasets/issues/4565
| 1,284,141,666
|
I_kwDODunzps5MinJi
| 4,565
|
Add UFSC OCPap dataset
|
{
"login": "johnnv1",
"id": 20444345,
"node_id": "MDQ6VXNlcjIwNDQ0MzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnnv1",
"html_url": "https://github.com/johnnv1",
"followers_url": "https://api.github.com/users/johnnv1/followers",
"following_url": "https://api.github.com/users/johnnv1/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions",
"organizations_url": "https://api.github.com/users/johnnv1/orgs",
"repos_url": "https://api.github.com/users/johnnv1/repos",
"events_url": "https://api.github.com/users/johnnv1/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnnv1/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
closed
| false
| null |
[] |
[
"I will add this directly on the hub (same as #4486)—in https://huggingface.co/lapix"
] | 1,656,101,274,000
| 1,657,134,182,000
| 1,657,134,182,000
|
NONE
| null | null |
## Adding a Dataset
- **Name:** UFSC OCPap: Papanicolaou Stained Oral Cytology Dataset (v4)
- **Description:** The UFSC OCPap dataset comprises 9,797 labeled images of 1200x1600 pixels acquired from 5 slides of cancer-diagnosed and 3 healthy oral brush samples, from distinct patients.
- **Paper:** https://dx.doi.org/10.2139/ssrn.4119212
- **Data:** https://data.mendeley.com/datasets/dr7ydy9xbk/1
- **Motivation:** real data of pap stained oral cytology samples
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4565/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4565/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4562
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4562/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4562/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4562/events
|
https://github.com/huggingface/datasets/issues/4562
| 1,283,779,557
|
I_kwDODunzps5MhOvl
| 4,562
|
Dataset Viewer issue for allocine
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"I removed my assignment as @huggingface/datasets should be able to answer better than me\r\n",
"Let me have a look...",
"Thanks for the quick fix @albertvillanova ",
"Note that the underlying issue is that datasets containing TAR files are not streamable out of the box: they need being iterated with `dl_manager.iter_archive` to avoid performance issues because they access their file content *sequentially* (no random access).",
"> Note that the underlying issue is that datasets containing TAR files are not streamable out of the box: they need being iterated with `dl_manager.iter_archive` to avoid performance issues because they access their file content _sequentially_ (no random access).\r\n\r\nAh thanks for the clarification! I'll look out for this next time and implement the fix myself :)"
] | 1,656,078,638,000
| 1,656,311,972,000
| 1,656,089,081,000
|
MEMBER
| null | null |
### Link
https://huggingface.co/datasets/allocine
### Description
Not sure if this is a problem with `bz2` compression, but I thought these datasets could be streamed:
```
Status code: 400
Exception: AttributeError
Message: 'TarContainedFile' object has no attribute 'readable'
```
### Owner
No
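As a rough sketch of the `dl_manager.iter_archive` pattern mentioned in the comments above (the URL, class name, and fields are illustrative assumptions, not the actual allocine loading script):
```python
# Illustrative sketch only: stream a TAR archive sequentially with
# dl_manager.iter_archive instead of random-access file handles.
import datasets

_URL = "https://example.com/reviews.tar.bz2"  # placeholder URL

class TarStreamedDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"review": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        archive = dl_manager.download(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        # iter_archive yields (path, file_object) pairs in archive order,
        # so the TAR is read sequentially (no random access).
        for idx, (path, f) in enumerate(files):
            if path.endswith(".txt"):
                yield idx, {"review": f.read().decode("utf-8")}
```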
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4562/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4556
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4556/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4556/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4556/events
|
https://github.com/huggingface/datasets/issues/4556
| 1,283,462,881
|
I_kwDODunzps5MgBbh
| 4,556
|
Dataset Viewer issue for conll2003
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false
|
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Fixed, thanks."
] | 1,656,060,918,000
| 1,656,064,239,000
| 1,656,064,239,000
|
MEMBER
| null | null |
### Link
https://huggingface.co/datasets/conll2003/viewer/conll2003/test
### Description
Seems like a cache problem with this config / split:
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/conll2003/__init__.py'
```
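This class of error means the cached copy of the loading script went missing between module resolution and use; the viewer itself was fixed server-side, per the comment above. A hedged recovery sketch for a local environment (the cache path is the library default and an assumption):
```python
# Hedged sketch: drop the cached dataset module so the loading script is
# re-fetched on the next load; the cache location is an assumed default.
import shutil
from pathlib import Path

modules_cache = (
    Path.home() / ".cache" / "huggingface" / "modules"
    / "datasets_modules" / "datasets" / "conll2003"
)
shutil.rmtree(modules_cache, ignore_errors=True)
```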
### Owner
No
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4556/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4555
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4555/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4555/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4555/events
|
https://github.com/huggingface/datasets/issues/4555
| 1,283,451,651
|
I_kwDODunzps5Mf-sD
| 4,555
|
Dataset Viewer issue for xtreme
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false
|
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Fixed, thanks."
] | 1,656,060,368,000
| 1,656,064,245,000
| 1,656,064,245,000
|
MEMBER
| null | null |
### Link
https://huggingface.co/datasets/xtreme/viewer/PAN-X.de/test
### Description
There seems to be a problem with the cache of this config / split:
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/xtreme/349258adc25bb45e47de193222f95e68a44f7a7ab53c4283b3f007208a11bf7e/xtreme.py'
```
### Owner
No
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4555/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4550
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4550/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4550/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4550/events
|
https://github.com/huggingface/datasets/issues/4550
| 1,282,374,441
|
I_kwDODunzps5Mb3sp
| 4,550
|
imdb source error
|
{
"login": "Muhtasham",
"id": 20128202,
"node_id": "MDQ6VXNlcjIwMTI4MjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/20128202?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Muhtasham",
"html_url": "https://github.com/Muhtasham",
"followers_url": "https://api.github.com/users/Muhtasham/followers",
"following_url": "https://api.github.com/users/Muhtasham/following{/other_user}",
"gists_url": "https://api.github.com/users/Muhtasham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Muhtasham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muhtasham/subscriptions",
"organizations_url": "https://api.github.com/users/Muhtasham/orgs",
"repos_url": "https://api.github.com/users/Muhtasham/repos",
"events_url": "https://api.github.com/users/Muhtasham/events{/privacy}",
"received_events_url": "https://api.github.com/users/Muhtasham/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] |
[
"Thanks for reporting, @Muhtasham.\r\n\r\nIndeed IMDB dataset is not accessible from yesterday, because the data is hosted on the data owners servers at Stanford (http://ai.stanford.edu/) and these are down due to a power outage originated by a fire: https://twitter.com/StanfordAILab/status/1539472302399623170?s=20&t=1HU1hrtaXprtn14U61P55w\r\n\r\nAs a temporary workaroud, you can load the IMDB dataset with this tweak:\r\n```python\r\nds = load_dataset(\"imdb\", revision=\"tmp-fix-imdb\")\r\n```\r\n"
] | 1,655,989,372,000
| 1,655,992,025,000
| 1,655,992,024,000
|
NONE
| null | null |
## Describe the bug
imdb dataset not loading
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("imdb")
```
## Expected results
## Actual results
```bash
06/23/2022 14:45:18 - INFO - datasets.builder - Dataset not on Hf google storage. Downloading and preparing it from source
06/23/2022 14:46:34 - INFO - datasets.utils.file_utils - HEAD request to http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz timed out, retrying... [1.0]
.....
ConnectionError: Couldn't reach http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz (ConnectTimeout(MaxRetryError("HTTPConnectionPool(host='ai.stanford.edu', port=80): Max retries exceeded with url: /~amaas/data/sentiment/aclImdb_v1.tar.gz (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7f2d750cf690>, 'Connection to ai.stanford.edu timed out. (connect timeout=100)'))")))
```
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4550/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4549
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4549/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4549/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4549/events
|
https://github.com/huggingface/datasets/issues/4549
| 1,282,312,975
|
I_kwDODunzps5MbosP
| 4,549
|
FileNotFoundError when passing a data_file inside a directory starting with double underscores
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] |
[
"I have consistently experienced this bug on GitHub actions when bumping to `2.3.2`",
"We're working on a fix ;)"
] | 1,655,986,764,000
| 1,656,599,898,000
| 1,656,599,898,000
|
MEMBER
| null | null |
Bug experienced in the `accelerate` CI: https://github.com/huggingface/accelerate/runs/7016055148?check_suite_focus=true
This is related to https://github.com/huggingface/datasets/pull/4505 and the changes from https://github.com/huggingface/datasets/pull/4412
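A hypothetical reproduction sketch based on the title (paths and file contents are illustrative; in the affected versions the `__`-prefixed directory was reportedly excluded by the data files patterns):
```python
# Hypothetical reproduction sketch: pass a data file that lives inside a
# directory starting with double underscores; names here are illustrative.
import json
import os

from datasets import load_dataset

os.makedirs("/tmp/__fixtures__", exist_ok=True)
with open("/tmp/__fixtures__/data.jsonl", "w") as f:
    f.write(json.dumps({"text": "hello"}) + "\n")

# In the affected versions this reportedly raised FileNotFoundError.
ds = load_dataset("json", data_files="/tmp/__fixtures__/data.jsonl")
```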
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4549/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4549/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4548
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4548/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4548/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4548/events
|
https://github.com/huggingface/datasets/issues/4548
| 1,282,218,096
|
I_kwDODunzps5MbRhw
| 4,548
|
Metadata.jsonl for Imagefolder is ignored if it's in a parent directory of the splits directories or doesn't have the "{split}_" prefix
|
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] |
[
"I agree it would be nice to support this. It doesn't fit really well in the current data_files.py, where files of each splits are separated in different folder though, maybe we have to modify a bit the logic here. \r\n\r\nOne idea would be to extend `get_patterns_in_dataset_repository` and `get_patterns_locally` to additionally check for `metadata.json`, but feel free to comment if you have better ideas (I feel like we're reaching the limits of what the current implementation IMO, so we could think of a different way of resolving the data files if necessary)"
] | 1,655,981,937,000
| 1,656,584,132,000
| 1,656,584,132,000
|
CONTRIBUTOR
| null | null |
If the data contains a single `metadata.jsonl` file shared by several splits, it won't be included in a dataset's `data_files` and will therefore be ignored.
This happens when a directory is structured as follows:
```
train/
file_1.jpg
file_2.jpg
test/
file_3.jpg
file_4.jpg
metadata.jsonl
```
or as follows:
```
train_file_1.jpg
train_file_2.jpg
test_file_3.jpg
test_file_4.jpg
metadata.jsonl
```
The same happens for HF repos, because the file is ignored by the patterns [here](https://github.com/huggingface/datasets/blob/master/src/datasets/data_files.py#L29).
@lhoestq @mariosasko Do you think it's better to add this functionality in `data_files.py` or just specifically in the imagefolder/audiofolder code? `data_files.py` would be more general, but I don't know if there are any other cases where that might be needed.
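A simplified illustration of why the shared file gets dropped (a stand-in pattern, not the library's actual resolver):
```python
# Simplified stand-in for the split patterns: a split-scoped glob never
# matches a metadata.jsonl that sits above the split directories.
import fnmatch

files = ["train/file_1.jpg", "test/file_3.jpg", "metadata.jsonl"]
print([f for f in files if fnmatch.fnmatch(f, "train*")])
# -> ['train/file_1.jpg']; metadata.jsonl is assigned to no split
```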
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4548/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4544
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4544/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4544/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4544/events
|
https://github.com/huggingface/datasets/issues/4544
| 1,280,500,340
|
I_kwDODunzps5MUuJ0
| 4,544
|
[CI] seqeval installation fails sometimes on python 3.6
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] |
[] | 1,655,915,723,000
| 1,655,979,224,000
| 1,655,979,224,000
|
MEMBER
| null | null |
The CI sometimes fails to install seqeval, which causes the `seqeval` metric tests to fail.
The installation fails because of this error:
```
Collecting seqeval
Downloading seqeval-1.2.2.tar.gz (43 kB)
|███████▌ | 10 kB 42.1 MB/s eta 0:00:01
|███████████████ | 20 kB 53.3 MB/s eta 0:00:01
|██████████████████████▌ | 30 kB 67.2 MB/s eta 0:00:01
|██████████████████████████████ | 40 kB 76.1 MB/s eta 0:00:01
|████████████████████████████████| 43 kB 10.0 MB/s
Preparing metadata (setup.py) ... - error
ERROR: Command errored out with exit status 1:
command: /home/circleci/.pyenv/versions/3.6.15/bin/python3.6 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py'"'"'; __file__='"'"'/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-pf54_vqy
cwd: /tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/
Complete output (22 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py", line 56, in <module>
'Programming Language :: Python :: Implementation :: PyPy'
File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/__init__.py", line 143, in setup
return distutils.core.setup(**attrs)
File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/distutils/core.py", line 108, in setup
_setup_distribution = dist = klass(attrs)
File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/dist.py", line 442, in __init__
k: v for k, v in attrs.items()
File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/distutils/dist.py", line 281, in __init__
self.finalize_options()
File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/dist.py", line 601, in finalize_options
ep.load()(self, ep.name, value)
File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2346, in load
return self.resolve()
File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2352, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/.eggs/setuptools_scm-7.0.2-py3.6.egg/setuptools_scm/__init__.py", line 5
from __future__ import annotations
^
SyntaxError: future feature annotations is not defined
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/9d/2d/233c79d5b4e5ab1dbf111242299153f3caddddbb691219f363ad55ce783d/seqeval-1.2.2.tar.gz#sha256=f28e97c3ab96d6fcd32b648f6438ff2e09cfba87f05939da9b3970713ec56e6f (from https://pypi.org/simple/seqeval/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
```
for example in https://app.circleci.com/pipelines/github/huggingface/datasets/12665/workflows/93878eb9-a923-4b35-b2e7-c5e9b22f10ad/jobs/75300
Here is a diff of the pip install logs until the error is reached: https://www.diffchecker.com/VkQDLeQT
This could be caused by the latest updates of setuptools-scm
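The last frame pinpoints the root cause: setuptools_scm 7.0.2 uses PEP 563 postponed annotations, which Python 3.6 cannot parse. A minimal sketch reproducing just that failure mode:
```python
# Minimal reproduction: `from __future__ import annotations` (PEP 563)
# only parses on Python 3.7+, so compiling it on 3.6 raises SyntaxError.
import sys

src = "from __future__ import annotations\n"
try:
    compile(src, "<setuptools_scm/__init__.py>", "exec")
    print(f"parses fine on Python {sys.version.split()[0]}")
except SyntaxError as err:
    print(f"SyntaxError on Python {sys.version.split()[0]}: {err.msg}")
```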
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4544/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4542
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4542/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4542/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4542/events
|
https://github.com/huggingface/datasets/issues/4542
| 1,280,269,445
|
I_kwDODunzps5MT1yF
| 4,542
|
[to_tf_dataset] Use Feather for better compatibility with TensorFlow ?
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] |
open
| false
| null |
[] |
[
"This has so much potential to be great! Also I think you tagged some poor random dude on the internet whose name is also Joao, lol, edited that for you! ",
"cc @sayakpaul here too, since he was interested in our new approaches to converting datasets!",
"Noted and I will look into the thread in detail tomorrow once I log back in. ",
"@lhoestq I have used TFRecords with `tf.data` for both vision and text and I can say that they are quite performant. I haven't worked with Feather yet as similarly as I have with TFRecords. If you haven't started the benchmarking script yet, I can prepare a Colab notebook that loads Feather files, converts them into a `tf.data` pipeline, and does some basic preprocessing. \r\n\r\nBut in my limited understanding, Feather might be better suited for CSV files. Not yet sure if it's good for modalities like images. ",
"> Not yet sure if it's good for modalities like images.\r\n\r\nWe store images pretty much the same way as tensorflow_datasets (i.e. storing the encoded image bytes, or a path to the local image, so that the image can be decoded on-the-fly), so as long as we use something similar as TFDS for image decoding it should be ok",
"So for image datasets, we could potentially store the paths in the feather format and decode and read them on the fly? But it introduces an I/O redundancy of having to read the images every time.\r\n\r\nWith caching it could be somewhat mitigated but it's not a good solution for bigger image datasets. ",
"> So for image datasets, we could potentially store the paths in the feather format and decode and read them on the fly?\r\n\r\nhopefully yes :) \r\n\r\nI double-checked the TFDS source code and they always save the bytes actually, not the path. Anyway we'll see if we run into issues or not (as a first step we can require the bytes to be in the feather file)",
"Yes. For images, TFDS actually prepares TFRecords first for encoding and then reuses them for every subsequent call. ",
"@lhoestq @Rocketknight1 I worked on [this PoC](https://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59) that\r\n\r\n* Creates Feather files from a medium resolution dataset (`tf_flowers`).\r\n* Explores different options with TensorFlow IO to load the Feather files. \r\n\r\nI haven't benchmarked those different options yet. There's also a gotcha that I have noted in the PoC. I hope it gets us started but I'm sorry if this is redundant. ",
"Cool thanks ! If I understand correctly in your PoC you store the flattened array of pixels in the feather file. This will take a lot of disk space.\r\n\r\nMaybe we could just save the encoded bytes and let users apply a `map` to decode/transform them into the format they need for training ? Users can use tf.image to do so for example",
"@lhoestq this is what I tried:\r\n\r\n```py\r\ndef read_image(path):\r\n with open(path, \"rb\") as f:\r\n return f.read()\r\n\r\n\r\ntotal_images_written = 0\r\n\r\nfor step in tqdm.tnrange(int(math.ceil(len(image_paths) / batch_size))):\r\n batch_image_paths = image_paths[step * batch_size : (step + 1) * batch_size]\r\n batch_image_labels = all_integer_labels[step * batch_size : (step + 1) * batch_size]\r\n\r\n data = [read_image(path) for path in batch_image_paths]\r\n table = pa.Table.from_arrays([data, batch_image_labels], [\"data\", \"labels\"])\r\n write_feather(table, f\"/tmp/flowers_feather_{step}.feather\", chunksize=chunk_size)\r\n total_images_written += len(batch_image_paths)\r\n print(f\"Total images written: {total_images_written}.\")\r\n\r\n del data\r\n```\r\n\r\nI got the feather files done (no resizing required as you can see):\r\n\r\n```sh\r\nls -lh /tmp/*.feather\r\n\r\n-rw-r--r-- 1 sayakpaul wheel 64M Jun 24 09:28 /tmp/flowers_feather_0.feather\r\n-rw-r--r-- 1 sayakpaul wheel 59M Jun 24 09:28 /tmp/flowers_feather_1.feather\r\n-rw-r--r-- 1 sayakpaul wheel 51M Jun 24 09:28 /tmp/flowers_feather_2.feather\r\n-rw-r--r-- 1 sayakpaul wheel 45M Jun 24 09:28 /tmp/flowers_feather_3.feather\r\n```\r\n\r\nNow there seems to be a problem with `tfio.arrow`:\r\n\r\n```py\r\nimport tensorflow_io.arrow as arrow_io\r\n\r\n\r\ndataset = arrow_io.ArrowFeatherDataset(\r\n [\"/tmp/flowers_feather_0.feather\"],\r\n columns=(0, 1),\r\n output_types=(tf.string, tf.int64),\r\n output_shapes=([], []),\r\n batch_mode=\"auto\",\r\n)\r\n\r\nprint(dataset.element_spec) \r\n```\r\n\r\nPrints:\r\n\r\n```\r\n(TensorSpec(shape=(None,), dtype=tf.string, name=None),\r\n TensorSpec(shape=(None,), dtype=tf.int64, name=None))\r\n```\r\n\r\nBut when I do `sample = next(iter(dataset))` it goes into:\r\n\r\n```py\r\nInternalError Traceback (most recent call last)\r\nInput In [30], in <cell line: 1>()\r\n----> 1 sample = next(iter(dataset))\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/data/ops/iterator_ops.py:766, in OwnedIterator.__next__(self)\r\n 764 def __next__(self):\r\n 765 try:\r\n--> 766 return self._next_internal()\r\n 767 except errors.OutOfRangeError:\r\n 768 raise StopIteration\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/data/ops/iterator_ops.py:749, in OwnedIterator._next_internal(self)\r\n 746 # TODO(b/77291417): This runs in sync mode as iterators use an error status\r\n 747 # to communicate that there is no more data to iterate over.\r\n 748 with context.execution_mode(context.SYNC):\r\n--> 749 ret = gen_dataset_ops.iterator_get_next(\r\n 750 self._iterator_resource,\r\n 751 output_types=self._flat_output_types,\r\n 752 output_shapes=self._flat_output_shapes)\r\n 754 try:\r\n 755 # Fast path for the case `self._structure` is not a nested structure.\r\n 756 return self._element_spec._from_compatible_tensor_list(ret) # pylint: disable=protected-access\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/ops/gen_dataset_ops.py:3017, in iterator_get_next(iterator, output_types, output_shapes, name)\r\n 3015 return _result\r\n 3016 except _core._NotOkStatusException as e:\r\n-> 3017 _ops.raise_from_not_ok_status(e, name)\r\n 3018 except _core._FallbackException:\r\n 3019 pass\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:7164, in raise_from_not_ok_status(e, name)\r\n 7162 def raise_from_not_ok_status(e, name):\r\n 7163 
e.message += (\" name: \" + name if name is not None else \"\")\r\n-> 7164 raise core._status_to_exception(e) from None\r\n\r\nInternalError: Invalid: INVALID_ARGUMENT: arrow data type 0x7ff9899d8038 is not supported: Type error: Arrow data type is not supported [Op:IteratorGetNext]\r\n```\r\n\r\nSome additional notes:\r\n\r\n* I can actually decode an image encoded with `read_image()` (shown earlier):\r\n\r\n ```py\r\n sample_image_path = image_paths[0]\r\n encoded_image = read_image(sample_image_path)\r\n image = tf.image.decode_png(encoded_image, 3)\r\n print(image.shape)\r\n ```\r\n\r\n* If the above `tf.data.Dataset` object would have succeeded my plan was to just map the decoder like so:\r\n\r\n ```py\r\n autotune = tf.data.AUTOTUNE\r\n dataset = dataset.map(lambda x, y: (tf.image.decode_png(x, 3), y), num_parallel_calls=autotune)\r\n ```",
"@lhoestq I think I was able to make it work in the way you were envisioning. Here's the PoC:\r\nhttps://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb\r\n\r\nSome details:\r\n\r\n* I am currently serializing the images as strings with `base64`). In comparison to the flattened arrays as before, the size of the individual feather files has reduced (144 MB -> 85 MB, largest).\r\n* When decoding, I am first decoding the base64 string and then decoding that string (with `tf.io.decode_base64`) as an image with `tf.image.decode_png()`. \r\n* The entire workflow (from generating the Feather files to loading them and preparing the batched `tf.data` pipeline) involves the following libraries: `pyarraow`, `tensorflow-io`, and `tensorflow`. \r\n\r\nCc: @Rocketknight1 @gante ",
"Cool thanks ! Too bad the Arrow binary type doesn't seem to be supported in `arrow_io.ArrowFeatherDataset` :/ We would also need it to support Arrow struct type. Indeed images in `datasets` are represented using an Arrow type\r\n```python\r\npa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n```\r\nnot sure yet how hard it is to support this though.\r\n\r\nChanging the typing on our side would create concerning breaking changes, that's why it would be awesome if it could work using these types",
"If the ArrowFeatherDataset doesn't yet support it, I guess our hands are a bit tied at the moment. \r\n\r\nIIUC, in my [latest PoC notebook](https://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb), you wanted to see each entry in the feather file to be represented like so?\r\n\r\n```\r\npa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n``` \r\n\r\nIn that case, `pa.binary()` isn't yet supported.",
"> IIUC, in my [latest PoC notebook](https://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb), you wanted to see each entry in the feather file to be represented like so?\r\n> \r\n> pa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n\r\nYea because that's the data format we're using. If we were to use base64, then we would have to process the full dataset to convert it, which can take some time. Converting to TFRecords would be simpler than converting to base64 in Feather files.\r\n\r\nMaybe it would take too much time to be worth exploring, but according to https://github.com/tensorflow/io/issues/1361#issuecomment-819029002 it's possible to add support for binary type in ArrowFeatherDataset. What do you think ? Any other alternative in mind ?",
"> Maybe it would take too much time to be worth exploring, but according to https://github.com/tensorflow/io/issues/1361#issuecomment-819029002 it's possible to add support for binary type in ArrowFeatherDataset.\r\n\r\nShould be possible as per the comment but there hasn't been any progress and it's been more than a year. \r\n\r\n> If we were to use base64, then we would have to process the full dataset to convert it, which can take some time.\r\n\r\nI don't understand this. I would think TFRecords would also need something similar but I need the context you're coming from. \r\n\r\n> What do you think ? Any other alternative in mind ?\r\n\r\nTFRecords since the TensorFlow ecosystem has developed good support for it over the years. ",
"> I don't understand this. I would think TFRecords would also need something similar but I need the context you're coming from.\r\n\r\nUsers already have a copy of the dataset in Arrow format (we can change this to Feather). So to load the Arrow/feather files to a TF dataset we need TF IO or something like that. Otherwise the user has to convert all the files from Arrow to TFRecords to use TF data efficiently. But the conversion needs resources: CPU, disk, time. Converting the images to base64 require the same sort of resources.\r\n\r\nSo the issue we're trying to tackle is how to load the Arrow data in TF without having to convert anything ^^",
"Yeah, it looks like in its current state the tfio support for `Feather` is incomplete, so we'd end up having to write a lot of it, or do a conversion that defeats the whole point (because if we're going to convert the whole dataset we might as well convert to `TFRecord`).",
"Understood @lhoestq. Thanks for explaining!\r\n\r\nAgreed with @Rocketknight1. ",
"@lhoestq Although I think this is a dead-end for now unfortunately, because of the limitations at TF's end, we could still explore automatic conversion to TFRecord, or I could dive into refining `to_tf_dataset()` to yield unbatched samples and/or load samples with multiprocessing to improve throughput. Do you have any preferences there?",
"> @lhoestq Although I think this is a dead-end for now unfortunately, because of the limitations at TF's end, we could still explore automatic conversion to TFRecord, or I could dive into refining `to_tf_dataset()` to yield unbatched samples and/or load samples with multiprocessing to improve throughput. Do you have any preferences there?\r\n\r\nHappy to take part there @Rocketknight1.",
"If `to_tf_dataset` can be unbatched, then it should be fairly easy for users to convert the TF dataset to TFRecords right ?",
"@lhoestq why one would convert to TFRecords after unbatching? ",
"> If to_tf_dataset can be unbatched, then it should be fairly easy for users to convert the TF dataset to TFRecords right ?\r\n\r\nSort of! A `tf.data.Dataset` is more like an iterator, and does not support sample indexing. `to_tf_dataset()` creates an iterator, but to convert that to `TFRecord`, the user would have to iterate over the whole thing and manually save the stream of samples to files. ",
"Someone would like to try to dive into tfio to fix this ? Sounds like a good opportunity to learn what are the best ways to load a dataset for TF, and also the connections between Arrow and TF.\r\n\r\nIf we can at least have the Arrow `binary` type working for TF that would be awesome already (issue https://github.com/tensorflow/io/issues/1361)\r\n\r\nalso cc @nateraw in case you'd be interested ;)",
"> Sounds like a good opportunity to learn what are the best ways to load a dataset for TF\r\n\r\nThe recommended way would likely be a combination of TFRecords and `tf.data`. \r\n\r\nExploring the connection between Arrow and TensorFlow is definitely worth pursuing though. But I am not sure about the implications of storing images in a format supported by Arrow. I guess we'll know more once we have at least figured out the support for `binary` type for TFIO. I will spend some time on it and keep this thread updated. ",
"I am currently working on a fine-tuning notebook for the TFSegFormer model (Semantic Segmentation). The resolution is high for both the input images and the labels - (512, 512, 3). Here's the [Colab Notebook](https://colab.research.google.com/drive/1jAtR7Z0lYX6m6JsDI5VByh5vFaNhHIbP?usp=sharing) (it's a WIP so please bear that in mind).\r\n\r\nI think the current implementation of `to_tf_dataset()` does create a bottleneck here since the GPU utilization is quite low. ",
"Here's a notebook showing the performance difference: https://colab.research.google.com/gist/sayakpaul/d7ca67c90beb47e354942c9d8c0bd8ef/scratchpad.ipynb. \r\n\r\nNote that I acknowledge that it's not an apples-to-apples comparison in many aspects (the dataset isn't the same, data serialization format isn't the same, etc.) but this is the best I could do. ",
"Thanks ! I think the speed difference can be partly explained: you use ds.shuffle in your dataset, which is an exact shuffling (compared to TFDS which does buffer shuffling): it slows down query time by 2x to 10x since it has to play with data that are not contiguous.\r\n\r\nThe rest of the speed difference seems to be caused by image decoding (from 330µs/image to 30ms/image)",
"Fair enough. Can do one without shuffling too. But it's an important one to consider I guess. "
] | 1,655,908,920,000
| 1,665,477,945,000
| null |
MEMBER
| null | null |
To have better performance in TensorFlow, it is important to provide lists of data files in supported formats. For example, sharded TFRecord datasets are extremely performant, because tf.data can better leverage parallelism in this case and load one file at a time into memory.
It seems that using `tensorflow_io` we could have something similar for `to_tf_dataset` if we provide sharded Feather files: https://www.tensorflow.org/io/api_docs/python/tfio/arrow/ArrowFeatherDataset
Feather is a format almost equivalent to the Arrow IPC Stream format we're using in `datasets`: Feather V2 is equivalent to Arrow IPC File format, which is an extension of the stream format (it has an extra footer). Therefore we could store datasets as Feather instead of Arrow IPC Stream format without breaking the whole library.
Here are a few points to explore
- [ ] check the performance of ArrowFeatherDataset in tf.data
- [ ] check what would change if we were to switch to Feather if needed, in particular check that those are fine: memory mapping, typing, writing, reading to python objects, etc.
We would also need to implement sharding when loading a dataset (this will be done anyway for #546)
cc @Rocketknight1 @gante feel free to comment in case I missed anything !
I'll share some files and scripts, so that we can benchmark performance of Feather files with tf.data
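To make the benchmark concrete, here is a minimal sketch of what loading a Feather shard through `tfio.arrow.ArrowFeatherDataset` could look like. This is an untested sketch assuming `tensorflow-io` is installed; the file name, columns and dtypes are illustrative placeholders, not the actual `datasets` cache layout.
```python
# Minimal sketch, assuming tensorflow-io is installed; the file name,
# columns and dtypes are illustrative placeholders.
import pyarrow as pa
from pyarrow import feather
import tensorflow as tf
from tensorflow_io import arrow as arrow_io

# Write a tiny Feather shard (a stand-in for a dataset shard on disk).
table = pa.table({
    "label": pa.array([0, 1, 0], type=pa.int64()),
    "value": pa.array([0.1, 0.2, 0.3], type=pa.float32()),
})
feather.write_feather(table, "shard-00000.feather")

# Load it as a tf.data pipeline; with several shards, tf.data could
# interleave the per-file datasets to read them in parallel.
ds = arrow_io.ArrowFeatherDataset(
    ["shard-00000.feather"],
    columns=(0, 1),                      # column indices in the Arrow table
    output_types=(tf.int64, tf.float32),
)
for label, value in ds:
    print(label.numpy(), value.numpy())
```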
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4542/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4540
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4540/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4540/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4540/events
|
https://github.com/huggingface/datasets/issues/4540
| 1,280,142,942
|
I_kwDODunzps5MTW5e
| 4,540
|
Avoid splitting by `.py` for the file.
|
{
"login": "espoirMur",
"id": 18573157,
"node_id": "MDQ6VXNlcjE4NTczMTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/18573157?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/espoirMur",
"html_url": "https://github.com/espoirMur",
"followers_url": "https://api.github.com/users/espoirMur/followers",
"following_url": "https://api.github.com/users/espoirMur/following{/other_user}",
"gists_url": "https://api.github.com/users/espoirMur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/espoirMur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/espoirMur/subscriptions",
"organizations_url": "https://api.github.com/users/espoirMur/orgs",
"repos_url": "https://api.github.com/users/espoirMur/repos",
"events_url": "https://api.github.com/users/espoirMur/events{/privacy}",
"received_events_url": "https://api.github.com/users/espoirMur/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false
|
{
"login": "VijayKalmath",
"id": 20517962,
"node_id": "MDQ6VXNlcjIwNTE3OTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/20517962?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VijayKalmath",
"html_url": "https://github.com/VijayKalmath",
"followers_url": "https://api.github.com/users/VijayKalmath/followers",
"following_url": "https://api.github.com/users/VijayKalmath/following{/other_user}",
"gists_url": "https://api.github.com/users/VijayKalmath/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VijayKalmath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VijayKalmath/subscriptions",
"organizations_url": "https://api.github.com/users/VijayKalmath/orgs",
"repos_url": "https://api.github.com/users/VijayKalmath/repos",
"events_url": "https://api.github.com/users/VijayKalmath/events{/privacy}",
"received_events_url": "https://api.github.com/users/VijayKalmath/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "VijayKalmath",
"id": 20517962,
"node_id": "MDQ6VXNlcjIwNTE3OTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/20517962?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VijayKalmath",
"html_url": "https://github.com/VijayKalmath",
"followers_url": "https://api.github.com/users/VijayKalmath/followers",
"following_url": "https://api.github.com/users/VijayKalmath/following{/other_user}",
"gists_url": "https://api.github.com/users/VijayKalmath/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VijayKalmath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VijayKalmath/subscriptions",
"organizations_url": "https://api.github.com/users/VijayKalmath/orgs",
"repos_url": "https://api.github.com/users/VijayKalmath/repos",
"events_url": "https://api.github.com/users/VijayKalmath/events{/privacy}",
"received_events_url": "https://api.github.com/users/VijayKalmath/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @espoirMur, thanks for reporting.\r\n\r\nYou are right: that code line could be improved and made more generically valid.\r\n\r\nOn the other hand, I would suggest using `os.path.splitext` instead.\r\n\r\nAre you willing to open a PR? :)",
"I will have a look.. \r\n\r\nThis weekend .. ",
"@albertvillanova , Can you have a look at #4590. \r\n\r\nThanks ",
"#self-assign"
] | 1,655,904,415,000
| 1,657,199,864,000
| 1,657,199,864,000
|
NONE
| null | null |
https://github.com/huggingface/datasets/blob/90b3a98065556fc66380cafd780af9b1814b9426/src/datasets/load.py#L272
Hello,
Thank you for this library.
I was using it and hit one edge case: my home folder name ends with `.py` (it is `/home/espoir.py`), so anytime I run the code to load a local module, the line linked above fails because, after splitting, it tries to save the code to my home directory.
Steps to reproduce:
- If you have a home folder which ends with `.py`
- load a module with a local folder
`qa_dataset = load_dataset("src/data/build_qa_dataset.py")`
it fails.
A possible workaround would be to use pathlib at the mentioned line
`meta_path = Path(importable_local_file).parent.joinpath("metadata.json")`; this can alleviate the issue.
Let me know what your thoughts are on this, and I can try to fix it with a PR.
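For illustration, here is a minimal sketch of the failure mode and of the `os.path.splitext` fix suggested in the comments above; the paths are hypothetical.
```python
# Minimal sketch of the failure mode; the paths are hypothetical.
import os
from pathlib import Path

path = "/home/espoir.py/src/data/build_qa_dataset.py"

# Splitting on ".py" cuts the path at the home folder, not at the extension:
print(path.split(".py")[0])       # /home/espoir

# os.path.splitext only strips the final extension:
print(os.path.splitext(path)[0])  # /home/espoir.py/src/data/build_qa_dataset

# The pathlib alternative proposed above:
print(Path(path).parent.joinpath("metadata.json"))
# /home/espoir.py/src/data/metadata.json
```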
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4540/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4538
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4538/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4538/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4538/events
|
https://github.com/huggingface/datasets/issues/4538
| 1,279,409,786
|
I_kwDODunzps5MQj56
| 4,538
|
Dataset Viewer issue for Pile of Law
|
{
"login": "Breakend",
"id": 1609857,
"node_id": "MDQ6VXNlcjE2MDk4NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1609857?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Breakend",
"html_url": "https://github.com/Breakend",
"followers_url": "https://api.github.com/users/Breakend/followers",
"following_url": "https://api.github.com/users/Breakend/following{/other_user}",
"gists_url": "https://api.github.com/users/Breakend/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Breakend/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Breakend/subscriptions",
"organizations_url": "https://api.github.com/users/Breakend/orgs",
"repos_url": "https://api.github.com/users/Breakend/repos",
"events_url": "https://api.github.com/users/Breakend/events{/privacy}",
"received_events_url": "https://api.github.com/users/Breakend/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false
|
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @Breakend, yes – we'll propose a solution today",
"Thanks so much, I appreciate it!",
"Thanks so much for adding the docs. I was able to successfully hide the viewer using the \r\n```\r\nviewer: false\r\n```\r\nflag in the README.md of the dataset. I'm closing the issue because this is resolved. Thanks again!",
"Awesome! Thanks for confirming. cc @severo ",
"Just for the record:\r\n\r\n- the doc\r\n \r\n<img width=\"1430\" alt=\"Capture d’écran 2022-06-27 à 09 29 27\" src=\"https://user-images.githubusercontent.com/1676121/175884089-bca6c0d5-6387-473e-98ca-86a910ede4bd.png\">\r\n\r\n- the dataset main page\r\n\r\n<img width=\"1134\" alt=\"Capture d’écran 2022-06-27 à 09 29 05\" src=\"https://user-images.githubusercontent.com/1676121/175884152-5f285bf0-3471-45de-927a-e141b00ebb33.png\">\r\n\r\n- the dataset viewer page\r\n\r\n<img width=\"567\" alt=\"Capture d’écran 2022-06-27 à 09 29 16\" src=\"https://user-images.githubusercontent.com/1676121/175884191-ab6a297b-1c11-417e-bbde-0b7623278a79.png\">\r\n"
] | 1,655,866,120,000
| 1,656,315,023,000
| 1,656,282,382,000
|
NONE
| null | null |
### Link
https://huggingface.co/datasets/pile-of-law/pile-of-law
### Description
Hi, I would like to turn off the dataset viewer for our dataset without enabling access requests. To comply with upstream dataset creator requests/licenses, we would like to make sure that the data is not indexed by search engines and so would like to turn off dataset previews. But we do not want to collect user emails because it would violate single blind review, allowing us to deduce potential reviewers' identities. Is there a way that we can turn off the dataset viewer without collecting identity information?
Thanks so much!
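For reference, the fix that ended up working (see the comments above) is a single flag in the YAML front matter at the top of the dataset's `README.md`:
```
---
viewer: false
---
```
This disables the viewer without enabling access requests, so no user emails are collected.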
### Owner
Yes
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4538/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4538/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4533
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4533/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4533/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4533/events
|
https://github.com/huggingface/datasets/issues/4533
| 1,277,211,490
|
I_kwDODunzps5MILNi
| 4,533
|
Timestamp not returned as datetime objects in streaming mode
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] |
[] | 1,655,746,127,000
| 1,655,915,349,000
| 1,655,915,349,000
|
MEMBER
| null | null |
As reported in (internal) https://github.com/huggingface/datasets-server/issues/397
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("ett", name="h2", split="test", streaming=True)
>>> d = next(iter(dataset))
>>> d['start']
Timestamp('2016-07-01 00:00:00')
```
while loading in non-streaming mode it returns `datetime.datetime(2016, 7, 1, 0, 0)`
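Until streaming returns the same type, a user-side workaround could be converting the pandas `Timestamp` back to a plain datetime; a minimal sketch:
```python
# Minimal sketch of a user-side workaround: a pandas Timestamp exposes
# to_pydatetime() to recover a standard datetime.datetime.
from datasets import load_dataset

dataset = load_dataset("ett", name="h2", split="test", streaming=True)
d = next(iter(dataset))
start = d["start"].to_pydatetime()  # datetime.datetime(2016, 7, 1, 0, 0)
```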
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4533/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4533/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4531
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4531/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4531/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4531/events
|
https://github.com/huggingface/datasets/issues/4531
| 1,277,054,172
|
I_kwDODunzps5MHkzc
| 4,531
|
Dataset Viewer issue for CSV datasets
|
{
"login": "merveenoyan",
"id": 53175384,
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/merveenoyan",
"html_url": "https://github.com/merveenoyan",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false
|
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] |
[
"this should now be fixed",
"Confirmed, it's fixed now. Thanks for reporting, and thanks @coyotte508 for fixing it\r\n\r\n<img width=\"1123\" alt=\"Capture d’écran 2022-06-21 à 10 28 05\" src=\"https://user-images.githubusercontent.com/1676121/174753833-1b453a5a-6a90-4717-bca1-1b5fc6b75e4a.png\">\r\n"
] | 1,655,736,984,000
| 1,655,800,126,000
| 1,655,800,107,000
|
CONTRIBUTOR
| null | null |
### Link
https://huggingface.co/datasets/scikit-learn/breast-cancer-wisconsin
### Description
I'm populating CSV datasets [here](https://huggingface.co/scikit-learn) but the viewer is not enabled and it looks for a dataset loading script; the datasets aren't in the queue either.
You can replicate the problem by simply uploading any CSV dataset.
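If the repository contains only CSV files, `load_dataset` should pick the packaged `csv` builder automatically, independently of the viewer; a minimal sketch using the repo from the link above:
```python
# Minimal sketch; loading goes through the packaged csv builder,
# no dataset loading script needed.
from datasets import load_dataset

ds = load_dataset("scikit-learn/breast-cancer-wisconsin")
print(ds["train"][0])
```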
### Owner
Yes
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4531/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4529
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4529/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4529/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4529/events
|
https://github.com/huggingface/datasets/issues/4529
| 1,276,729,303
|
I_kwDODunzps5MGVfX
| 4,529
|
Ecoset
|
{
"login": "DiGyt",
"id": 34550289,
"node_id": "MDQ6VXNlcjM0NTUwMjg5",
"avatar_url": "https://avatars.githubusercontent.com/u/34550289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DiGyt",
"html_url": "https://github.com/DiGyt",
"followers_url": "https://api.github.com/users/DiGyt/followers",
"following_url": "https://api.github.com/users/DiGyt/following{/other_user}",
"gists_url": "https://api.github.com/users/DiGyt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DiGyt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DiGyt/subscriptions",
"organizations_url": "https://api.github.com/users/DiGyt/orgs",
"repos_url": "https://api.github.com/users/DiGyt/repos",
"events_url": "https://api.github.com/users/DiGyt/events{/privacy}",
"received_events_url": "https://api.github.com/users/DiGyt/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
open
| false
| null |
[] |
[
"Hi! Very cool dataset! I answered your questions on the forum. Also, feel free to comment `#self-assign` on this issue to self-assign it."
] | 1,655,721,574,000
| 1,655,828,236,000
| null |
NONE
| null | null |
## Adding a Dataset
- **Name:** *Ecoset*
- **Description:** *https://www.kietzmannlab.org/ecoset/*
- **Paper:** *https://doi.org/10.1073/pnas.2011417118*
- **Data:** *https://codeocean.com/capsule/9570390/tree/v1*
- **Motivation:**
**Ecoset** was created as a clean and ecologically valid alternative to **Imagenet**.
It is a large image recognition dataset, similar to Imagenet in size and structure. However, the authors of ecoset claim several improvements over Imagenet, like:
- more ecologically valid classes (e.g. not over-focussed on distinguishing different dog breeds)
- less NSFW content
- 'pre-packed image recognition models' that come with the dataset and can be used for validation of other models.
I am working for one of the authors of the paper with the aim of bringing Ecoset to huggingface datasets. Therefore I can work on this issue personally, but could use some help from devs and experienced users if the dataset is of interest to them. I phrased some of my questions on [discuss.huggingface](https://discuss.huggingface.co/t/handling-large-image-datasets/19373).
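As a possible starting point for the questions linked above, here is a hedged sketch of loading a local class-per-folder image tree with the generic `imagefolder` builder; the path is hypothetical, and this is not how the final Ecoset script has to look.
```python
# Minimal sketch with a hypothetical local path; "imagefolder" infers
# the labels from the folder names.
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="/path/to/ecoset/train")
print(ds["train"][0]["image"], ds["train"][0]["label"])
```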
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4529/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4529/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4528
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4528/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4528/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4528/events
|
https://github.com/huggingface/datasets/issues/4528
| 1,276,679,155
|
I_kwDODunzps5MGJPz
| 4,528
|
Memory leak when iterating a Dataset
|
{
"login": "NouamaneTazi",
"id": 29777165,
"node_id": "MDQ6VXNlcjI5Nzc3MTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NouamaneTazi",
"html_url": "https://github.com/NouamaneTazi",
"followers_url": "https://api.github.com/users/NouamaneTazi/followers",
"following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}",
"gists_url": "https://api.github.com/users/NouamaneTazi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NouamaneTazi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NouamaneTazi/subscriptions",
"organizations_url": "https://api.github.com/users/NouamaneTazi/orgs",
"repos_url": "https://api.github.com/users/NouamaneTazi/repos",
"events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}",
"received_events_url": "https://api.github.com/users/NouamaneTazi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] |
[
"Is someone assigned to this issue?",
"The same issue is being debugged here: https://github.com/huggingface/datasets/issues/4883\r\n",
"Here is a modified repro example that makes it easier to see the leak:\r\n\r\n```\r\n$ cat ds2.py\r\nimport gc, sys\r\nimport time\r\nfrom datasets import load_dataset\r\nimport os, psutil\r\n\r\nprocess = psutil.Process(os.getpid())\r\n\r\nprint(process.memory_info().rss/2**20)\r\n\r\ncorpus = load_dataset(\"BeIR/msmarco\", 'corpus', keep_in_memory=False, streaming=False)['corpus']\r\ncorpus = corpus.select(range(200000))\r\n\r\nprint(process.memory_info().rss/2**20)\r\n\r\nbatch = None\r\n\r\nmem_before_start = psutil.Process(os.getpid()).memory_info().rss / 2**20\r\n\r\nstep = 20000\r\nfor i in range(0, 10*step, step):\r\n mem_before = psutil.Process(os.getpid()).memory_info().rss / 2**20\r\n batch = corpus[i:i+step]\r\n import objgraph\r\n #objgraph.show_refs([batch])\r\n #objgraph.show_refs([corpus])\r\n #sys.exit()\r\n gc.collect()\r\n\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss / 2**20\r\n print(f\"{i:6d} {mem_after - mem_before:12.4f} {mem_after - mem_before_start:12.4f}\")\r\n\r\n```\r\n\r\nLet's run:\r\n\r\n```\r\n$ python ds2.py\r\n 0 36.5391 36.5391\r\n 20000 10.4609 47.0000\r\n 40000 5.9766 52.9766\r\n 60000 7.8906 60.8672\r\n 80000 6.0586 66.9258\r\n100000 8.4453 75.3711\r\n120000 6.7422 82.1133\r\n140000 8.5664 90.6797\r\n160000 5.7344 96.4141\r\n180000 8.3398 104.7539\r\n```\r\n\r\nYou can see the last column of total RSS memory keeps on growing in MBs. The mid column is by how much it was grown during a single iteration of the repro script (20000 items)",
"@NouamaneTazi, please check my analysis here https://github.com/huggingface/datasets/issues/4883#issuecomment-1242599722 so if you agree with my research this Issue can be closed as well.\r\n\r\nI also made a suggestion at how to proceed to hunt for a real leak here https://github.com/huggingface/datasets/issues/4883#issuecomment-1242600626\r\n\r\nyou may find this one to be useful as well https://github.com/huggingface/datasets/issues/4883#issuecomment-1242597966",
"Amazing job! Thanks for taking time to debug this 🤗\r\n\r\nFor my side, I tried to do some more research as well, but to no avail. https://github.com/huggingface/datasets/issues/4883#issuecomment-1243415957"
] | 1,655,719,394,000
| 1,662,972,699,000
| 1,662,972,699,000
|
MEMBER
| null | null |
## Describe the bug
It seems that memory never gets freed after iterating a `Dataset` (using `.map()` or a simple `for` loop)
## Steps to reproduce the bug
```python
import gc
import logging
import time
import pyarrow
from datasets import load_dataset
from tqdm import trange
import os, psutil
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
process = psutil.Process(os.getpid())
print(process.memory_info().rss) # output: 633507840 bytes
corpus = load_dataset("BeIR/msmarco", 'corpus', keep_in_memory=False, streaming=False)['corpus'] # or "BeIR/trec-covid" for a smaller dataset
print(process.memory_info().rss) # output: 698601472 bytes
logger.info("Applying method to all examples in all splits")
for i in trange(0, len(corpus), 1000):
    batch = corpus[i:i+1000]
    data = pyarrow.total_allocated_bytes()
    if data > 0:
        logger.info(f"{i}/{len(corpus)}: {data}")

print(process.memory_info().rss) # output: 3788247040 bytes
del batch
gc.collect()
print(process.memory_info().rss) # output: 3788247040 bytes
logger.info("Done...")
time.sleep(100)
```
## Expected results
Limited memory usage, and memory to be freed after processing
## Actual results
Memory leak

You can see how the memory allocation keeps increasing until it reaches a steady state when we hit the `time.sleep(100)`, which showcases that even the garbage collector couldn't free the allocated memory
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-5.4.0-90-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4528/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4527
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4527/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4527/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4527/events
|
https://github.com/huggingface/datasets/issues/4527
| 1,276,583,536
|
I_kwDODunzps5MFx5w
| 4,527
|
Dataset Viewer issue for vadis/sv-ident
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false
|
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Fixed, thanks!\r\n![Uploading Capture d’écran 2022-06-21 à 18.42.40.png…]()\r\n\r\n"
] | 1,655,714,862,000
| 1,655,829,766,000
| 1,655,829,765,000
|
MEMBER
| null | null |
### Link
https://huggingface.co/datasets/vadis/sv-ident
### Description
The dataset preview does not work:
```
Server Error
Status code: 400
Exception: Status400Error
Message: The dataset does not exist.
```
However, the dataset is streamable and works locally:
```python
In [1]: from datasets import load_dataset; ds = load_dataset("sv-ident.py", split="train", streaming=True); item = next(iter(ds)); item
Using custom data configuration default
Out[1]:
{'sentence': 'Our point, however, is that so long as downward (favorable) comparisons overwhelm the potential for unfavorable comparisons, system justification should be a likely outcome amongst the disadvantaged.',
'is_variable': 1,
'variable': ['exploredata-ZA5400_VarV66', 'exploredata-ZA5400_VarV53'],
'research_data': ['ZA5400'],
'doc_id': '73106',
'uuid': 'b9fbb80f-3492-4b42-b9d5-0254cc33ac10',
'lang': 'en'}
```
CC: @e-tornike
### Owner
No
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4527/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4527/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4526
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4526/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4526/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4526/events
|
https://github.com/huggingface/datasets/issues/4526
| 1,276,580,185
|
I_kwDODunzps5MFxFZ
| 4,526
|
split cache used when processing different split
|
{
"login": "gpucce",
"id": 32967787,
"node_id": "MDQ6VXNlcjMyOTY3Nzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gpucce",
"html_url": "https://github.com/gpucce",
"followers_url": "https://api.github.com/users/gpucce/followers",
"following_url": "https://api.github.com/users/gpucce/following{/other_user}",
"gists_url": "https://api.github.com/users/gpucce/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gpucce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gpucce/subscriptions",
"organizations_url": "https://api.github.com/users/gpucce/orgs",
"repos_url": "https://api.github.com/users/gpucce/repos",
"events_url": "https://api.github.com/users/gpucce/events{/privacy}",
"received_events_url": "https://api.github.com/users/gpucce/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] |
[
"I was not able to reproduce this behavior (I tried without using pytorch lightning though, since I don't know what code you ran in pytorch lightning to get this).\r\n\r\nIf you can provide a MWE that would be perfect ! :)",
"Hi, I think the issue happened because I was loading datasets under an `if` ... `else` statement and the condition would change the dataset I would need to load but instead the cached one was always returned. However, I believe that is expected behaviour, if so I'll close the issue.\r\n\r\nOtherwise I will try to provide a MWE"
] | 1,655,714,698,000
| 1,656,425,098,000
| null |
CONTRIBUTOR
| null | null |
## Describe the bug
```
from datasets import load_dataset

# `some_function` stands for any mapping function
ds1 = load_dataset('squad', split='validation')
ds2 = load_dataset('squad', split='train')
ds1 = ds1.map(some_function)
ds2 = ds2.map(some_function)
assert ds1 == ds2  # bug: both maps returned the same cached result
```
This happens when ds1 and ds2 are created in `pytorch_lightning.DataModule` through
```
class myDataModule:
    def train_dataloader(self):
        ds = load_dataset('squad', split='train')
        ds = ds.map(some_function)
        return [ds]

    def val_dataloader(self):
        ds = load_dataset('squad', split="validation")
        ds = ds.map(some_function)
        return [ds]
```
I don't know if it depends on `pytorch_lightning` or `datasets` but setting `ds.map(some_function, load_from_cache_file=False)` fixes the issue.
If this is not enough to replicate, I will try to provide an MWE; I don't have time now, so I thought I would open the issue first!
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4526/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4526/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4525
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4525/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4525/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4525/events
|
https://github.com/huggingface/datasets/issues/4525
| 1,276,491,386
|
I_kwDODunzps5MFbZ6
| 4,525
|
Out of memory error on workers while running Beam+Dataflow
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] |
[
"Some naive ideas to cope with this:\r\n- enable more RAM on each worker\r\n- force the spanning of more workers\r\n- others?",
"@albertvillanova We were finally able to process the full NQ dataset on our machines using 600 gb with 5 workers. Maybe these numbers will work for you as well.",
"Thanks a lot for the hint, @seirasto.\r\n\r\nI have one question: what runner did you use? Direct, Apache Flink/Nemo/Samza/Spark, Google Dataflow...? Thank you.",
"I asked my colleague who ran the code and he said apache beam.",
"@albertvillanova Since we have already processed the NQ dataset on our machines can we upload it to datasets so the NQ PR can be merged?",
"Maybe @lhoestq can give a more accurate answer as I am not sure about the authentication requirements to upload those files to our cloud bucket.\r\n\r\nAnyway I propose to continue this discussion on the dedicated PR for Natural questions dataset:\r\n- #4368",
"> I asked my colleague who ran the code and he said apache beam.\r\n\r\nHe looked into it further and he just used DirectRunner. @albertvillanova ",
"OK, thank you @seirasto for your hint.\r\n\r\nThat explains why you did not encounter the out of memory error: this only appears when the processing is distributed (on workers memory) and DirectRunner does not distribute the processing (all is done in a single machine). "
] | 1,655,710,092,000
| 1,656,581,637,000
| null |
MEMBER
| null | null |
## Describe the bug
While running the preprocessing of the natural_questions dataset (see PR #4368), there is an issue for the "default" config (train+dev files).
Previously we ran the preprocessing for the "dev" config (only dev files) with success.
Train data files are larger than dev ones and apparently workers run out of memory while processing them.
Any help/hint is welcome!
Error message:
```
Data channel closed, unable to receive additional data from SDK sdk-0-0
```
Info from the Diagnostics tab:
```
Out of memory: Killed process 1882 (python) total-vm:6041764kB, anon-rss:3290928kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:9520kB oom_score_adj:900
The worker VM had to shut down one or more processes due to lack of memory.
```
## Additional information
### Stack trace
```
Traceback (most recent call last):
File "/home/albert_huggingface_co/natural_questions/venv/bin/datasets-cli", line 8, in <module>
sys.exit(main())
File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/commands/datasets_cli.py", line 39, in main
service.run()
File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/commands/run_beam.py", line 127, in run
builder.download_and_prepare(
File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/builder.py", line 1389, in _download_and_prepare
pipeline_results.wait_until_finish()
File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/apache_beam/runners/dataflow/dataflow_runner.py", line 1667, in wait_until_finish
raise DataflowRuntimeException(
apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error:
Data channel closed, unable to receive additional data from SDK sdk-0-0
```
### Logs
```
Error message from worker: Data channel closed, unable to receive additional data from SDK sdk-0-0
Workflow failed. Causes: S30:train/ReadAllFromText/ReadAllFiles/Reshard/ReshufflePerKey/GroupByKey/Read+train/ReadAllFromText/ReadAllFiles/Reshard/ReshufflePerKey/GroupByKey/GroupByWindow+train/ReadAllFromText/ReadAllFiles/Reshard/ReshufflePerKey/FlatMap(restore_timestamps)+train/ReadAllFromText/ReadAllFiles/Reshard/RemoveRandomKeys+train/ReadAllFromText/ReadAllFiles/ReadRange+train/Map(_parse_example)+train/Encode+train/Count N. Examples+train/Get values/Values+train/Save to parquet/Write/WriteImpl/WindowInto(WindowIntoFn)+train/Save to parquet/Write/WriteImpl/WriteBundles+train/Save to parquet/Write/WriteImpl/Pair+train/Save to parquet/Write/WriteImpl/GroupByKey/Write failed., The job failed because a work item has failed 4 times. Look in previous log entries for the cause of each one of the 4 failures. For more information, see https://cloud.google.com/dataflow/docs/guides/common-errors. The work item was attempted on these workers: beamapp-alberthuggingface-06170554-5p23-harness-t4v9 Root cause: Data channel closed, unable to receive additional data from SDK sdk-0-0, beamapp-alberthuggingface-06170554-5p23-harness-t4v9 Root cause: The worker lost contact with the service., beamapp-alberthuggingface-06170554-5p23-harness-bwsj Root cause: The worker lost contact with the service., beamapp-alberthuggingface-06170554-5p23-harness-5052 Root cause: The worker lost contact with the service.
```
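For reference, the "more RAM per worker / more workers" ideas discussed in the comments map to Dataflow pipeline options roughly as follows; the machine type and worker counts are illustrative, not a tested configuration.
```python
# Illustrative Dataflow options; the values are placeholders, not a tested setup.
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",
    project="my-gcp-project",      # hypothetical project id
    region="us-central1",
    machine_type="n1-highmem-8",   # more RAM per worker
    num_workers=10,                # start with more workers
    max_num_workers=50,            # allow autoscaling beyond that
)
```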
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4525/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4524
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4524/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4524/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4524/events
|
https://github.com/huggingface/datasets/issues/4524
| 1,275,909,186
|
I_kwDODunzps5MDNRC
| 4,524
|
Downloading via Apache Pipeline, client cancelled (org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException)
|
{
"login": "dan-the-meme-man",
"id": 45244059,
"node_id": "MDQ6VXNlcjQ1MjQ0MDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/45244059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dan-the-meme-man",
"html_url": "https://github.com/dan-the-meme-man",
"followers_url": "https://api.github.com/users/dan-the-meme-man/followers",
"following_url": "https://api.github.com/users/dan-the-meme-man/following{/other_user}",
"gists_url": "https://api.github.com/users/dan-the-meme-man/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dan-the-meme-man/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dan-the-meme-man/subscriptions",
"organizations_url": "https://api.github.com/users/dan-the-meme-man/orgs",
"repos_url": "https://api.github.com/users/dan-the-meme-man/repos",
"events_url": "https://api.github.com/users/dan-the-meme-man/events{/privacy}",
"received_events_url": "https://api.github.com/users/dan-the-meme-man/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] |
[
"Hi @dan-the-meme-man, thanks for reporting.\r\n\r\nWe are investigating a similar issue but with Beam+Dataflow (instead of Beam+Flink): \r\n- #4525\r\n\r\nIn order to go deeper into the root cause, we need as much information as possible: logs from the main process + logs from the workers are very informative.\r\n\r\nIn the case of the issue with Beam+Dataflow, the logs from the workers report an out of memory issue.",
"As I continued working on this today, I came to suspect that it is in fact an out of memory issue - I have a few more notebooks that I've left running, and if they produce the same error, I will try to get the logs. In the meantime, if there's any chance that there is a repo out there with those three languages already as .arrow files, or if you know about how much memory would be needed to actually download those sets, please let me know!"
] | 1,655,595,405,000
| 1,655,771,900,000
| null |
NONE
| null | null |
## Describe the bug
When downloading some `wikipedia` languages (in particular, I'm having a hard time with Spanish, Cebuano, and Russian) via FlinkRunner, I encounter the exception in the title. I have been playing with package versions a lot, because unfortunately, the different dependencies required by these packages seem to be incompatible in terms of versions (dill and requests, for instance). It should be noted that the following code runs for several hours without issue, executing the `load_dataset()` function, before the exception occurs.
## Steps to reproduce the bug
```python
# bash commands
!pip install datasets
!pip install apache-beam[interactive]
!pip install mwparserfromhell
!pip install dill==0.3.5.1
!pip install requests==2.23.0
# imports
import os
from datasets import load_dataset
import apache_beam as beam
import mwparserfromhell
from google.colab import drive
import dill
import requests
# mount drive
drive_dir = os.path.join(os.getcwd(), 'drive')
drive.mount(drive_dir)
# confirming the versions of these two packages are the ones that are suggested by the outputs from the bash commands
print(dill.__version__)
print(requests.__version__)
lang = 'es' # or 'ru' or 'ceb' - these are the ones causing the issue
lang_dir = os.path.join(drive_dir, 'path/to/my/folder', lang)
if not os.path.exists(lang_dir):
    x = None
    x = load_dataset('wikipedia', '20220301.' + lang, beam_runner='Flink',
                     split='train')
    x.save_to_disk(lang_dir)
```
## Expected results
Although some warnings are generally produced by this code (run in Colab Notebook), most languages I've tried have been successfully downloaded. It should simply go through without issue, but for these languages, I am continually encountering this error.
## Actual results
Traceback below:
```
Exception in thread run_worker_3-1:
Traceback (most recent call last):
File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/usr/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 234, in run
for work_request in self._control_stub.Control(get_responses()):
File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 426, in __next__
return self._next()
File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 826, in _next
raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "Socket closed"
debug_error_string = "{"created":"@1655593643.871830638","description":"Error received from peer ipv4:127.0.0.1:44441","file":"src/core/lib/surface/call.cc","file_line":952,"grpc_message":"Socket closed","grpc_status":14}"
>
Traceback (most recent call last):
File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 426, in __getitem__
self._cache[target_window] = self._side_input_data.view_fn(raw_view)
File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 391, in <lambda>
lambda iterable: from_runtime_iterable(iterable, view_options))
File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 512, in _from_runtime_iterable
head = list(itertools.islice(it, 2))
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1228, in _lazy_iterator
self._underlying.get_raw(state_key, continuation_token))
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1019, in get_raw
continuation_token=continuation_token)))
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1060, in _blocking_request
raise RuntimeError(response.error)
RuntimeError: Unknown process bundle instruction id '26'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 267, in _execute
response = task()
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 340, in <lambda>
lambda: self.create_worker().do_instruction(request), request)
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 581, in do_instruction
getattr(request, request_type), request.instruction_id)
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 618, in process_bundle
bundle_processor.process_bundle(instruction_id))
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 996, in process_bundle
element.data)
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 221, in process_encoded
self.output(decoded_value)
File "apache_beam/runners/worker/operations.py", line 346, in apache_beam.runners.worker.operations.Operation.output
File "apache_beam/runners/worker/operations.py", line 348, in apache_beam.runners.worker.operations.Operation.output
File "apache_beam/runners/worker/operations.py", line 215, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive
File "apache_beam/runners/worker/operations.py", line 707, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam/runners/worker/operations.py", line 708, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam/runners/common.py", line 1200, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam/runners/common.py", line 1281, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 426, in __getitem__
self._cache[target_window] = self._side_input_data.view_fn(raw_view)
File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 391, in <lambda>
lambda iterable: from_runtime_iterable(iterable, view_options))
File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 512, in _from_runtime_iterable
head = list(itertools.islice(it, 2))
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1228, in _lazy_iterator
self._underlying.get_raw(state_key, continuation_token))
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1019, in get_raw
continuation_token=continuation_token)))
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1060, in _blocking_request
raise RuntimeError(response.error)
RuntimeError: Unknown process bundle instruction id '26' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles']
ERROR:apache_beam.runners.worker.sdk_worker:Error processing instruction 26. Original traceback is
Traceback (most recent call last):
File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 426, in __getitem__
self._cache[target_window] = self._side_input_data.view_fn(raw_view)
File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 391, in <lambda>
lambda iterable: from_runtime_iterable(iterable, view_options))
File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 512, in _from_runtime_iterable
head = list(itertools.islice(it, 2))
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1228, in _lazy_iterator
self._underlying.get_raw(state_key, continuation_token))
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1019, in get_raw
continuation_token=continuation_token)))
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1060, in _blocking_request
raise RuntimeError(response.error)
RuntimeError: Unknown process bundle instruction id '26'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 267, in _execute
response = task()
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 340, in <lambda>
lambda: self.create_worker().do_instruction(request), request)
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 581, in do_instruction
getattr(request, request_type), request.instruction_id)
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 618, in process_bundle
bundle_processor.process_bundle(instruction_id))
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 996, in process_bundle
element.data)
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 221, in process_encoded
self.output(decoded_value)
File "apache_beam/runners/worker/operations.py", line 346, in apache_beam.runners.worker.operations.Operation.output
File "apache_beam/runners/worker/operations.py", line 348, in apache_beam.runners.worker.operations.Operation.output
File "apache_beam/runners/worker/operations.py", line 215, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive
File "apache_beam/runners/worker/operations.py", line 707, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam/runners/worker/operations.py", line 708, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam/runners/common.py", line 1200, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam/runners/common.py", line 1281, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 426, in __getitem__
self._cache[target_window] = self._side_input_data.view_fn(raw_view)
File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 391, in <lambda>
lambda iterable: from_runtime_iterable(iterable, view_options))
File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 512, in _from_runtime_iterable
head = list(itertools.islice(it, 2))
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1228, in _lazy_iterator
self._underlying.get_raw(state_key, continuation_token))
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1019, in get_raw
continuation_token=continuation_token)))
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1060, in _blocking_request
raise RuntimeError(response.error)
RuntimeError: Unknown process bundle instruction id '26' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles']
ERROR:root:org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException: CANCELLED: client cancelled
ERROR:apache_beam.runners.worker.data_plane:Failed to read inputs in the data plane.
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/data_plane.py", line 634, in _read_inputs
for elements in elements_iterator:
File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 426, in __next__
return self._next()
File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 826, in _next
raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.CANCELLED
details = "Multiplexer hanging up"
debug_error_string = "{"created":"@1655593654.436885887","description":"Error received from peer ipv4:127.0.0.1:43263","file":"src/core/lib/surface/call.cc","file_line":952,"grpc_message":"Multiplexer hanging up","grpc_status":1}"
>
Exception in thread read_grpc_client_inputs:
Traceback (most recent call last):
File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/usr/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/data_plane.py", line 651, in <lambda>
target=lambda: self._read_inputs(elements_iterator),
File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/data_plane.py", line 634, in _read_inputs
for elements in elements_iterator:
File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 426, in __next__
return self._next()
File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 826, in _next
raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.CANCELLED
details = "Multiplexer hanging up"
debug_error_string = "{"created":"@1655593654.436885887","description":"Error received from peer ipv4:127.0.0.1:43263","file":"src/core/lib/surface/call.cc","file_line":952,"grpc_message":"Multiplexer hanging up","grpc_status":1}"
>
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[/tmp/ipykernel_219/3869142325.py](https://localhost:8080/#) in <module>
18 x = None
19 x = load_dataset('wikipedia', '20220301.' + lang, beam_runner='Flink',
---> 20 split='train')
21 x.save_to_disk(lang_dir)
3 frames
[/usr/local/lib/python3.7/dist-packages/apache_beam/runners/portability/portable_runner.py](https://localhost:8080/#) in wait_until_finish(self, duration)
604
605 if self._runtime_exception:
--> 606 raise self._runtime_exception
607
608 return self._state
RuntimeError: Pipeline BeamApp-root-0618220708-b3b59a0e_d8efcf67-9119-4f76-b013-70de7b29b54d failed in state FAILED: org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException: CANCELLED: client cancelled
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
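The Flink failure above aside, a possible workaround for some languages is to avoid running the Beam pipeline at all: a few `20220301.*` Wikipedia configs are already pre-processed on the Hub, and for the rest a local runner can be attempted. A hedged sketch, assuming the English config is among the pre-processed ones ("20220301.xx" below is a placeholder, not a real config name):

```python
from datasets import load_dataset

# Pre-processed config: downloads the prepared data, no Beam pipeline runs.
ds = load_dataset("wikipedia", "20220301.en", split="train")

# For a language without a pre-processed config, a local runner can be tried
# (slow and memory-hungry):
# ds = load_dataset("wikipedia", "20220301.xx", beam_runner="DirectRunner", split="train")
```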
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4524/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4522
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4522/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4522/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4522/events
|
https://github.com/huggingface/datasets/issues/4522
| 1,274,929,328
|
I_kwDODunzps5L_eCw
| 4,522
|
Try to reduce the number of datasets that require manual download
|
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] |
[] | 1,655,466,123,000
| 1,655,466,768,000
| null |
CONTRIBUTOR
| null | null |
> Currently, 41 canonical datasets require manual download. I checked their scripts and I'm pretty sure this number can be reduced to ≈ 30 by not relying on bash scripts to download data, hosting data directly on the Hub when the license permits, etc. Then, we will mostly be left with datasets with restricted access, which we can ignore
from https://github.com/huggingface/datasets-server/issues/12#issuecomment-1026920432
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4522/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4521
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4521/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4521/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4521/events
|
https://github.com/huggingface/datasets/issues/4521
| 1,274,919,437
|
I_kwDODunzps5L_boN
| 4,521
|
Datasets method `.map` not hashing
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] |
[
"Fix posted: https://github.com/huggingface/datasets/issues/4506#issuecomment-1157417219",
"Didn't realize it's a bug when I asked the question yesterday! Free free to post an answer if you are sure the cause has been addressed.\r\n\r\nhttps://stackoverflow.com/questions/72664827/can-pickle-dill-foo-but-not-lambda-x-foox",
"Thank @nalzok . That works for me:\r\n\r\n`pip install \"dill<0.3.5\"`"
] | 1,655,465,470,000
| 1,659,614,896,000
| 1,656,422,585,000
|
CONTRIBUTOR
| null | null |
## Describe the bug
Datasets method `.map` not hashing, even with an empty no-op function
## Steps to reproduce the bug
```python
from datasets import load_dataset
# download 9MB dummy dataset
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean")
def prepare_dataset(batch):
    return batch


ds = ds.map(
    prepare_dataset,
    num_proc=1,
    desc="preprocess train dataset",
)
```
## Expected results
Hashed and cached dataset preprocessing
## Actual results
Does not hash properly:
```
Parameter 'function'=<function prepare_dataset at 0x7fccb68e9280> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.3.dev0
- Platform: Linux-5.11.0-1028-gcp-x86_64-with-glibc2.31
- Python version: 3.9.12
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
cc @lhoestq
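Per the comments above, pinning `dill` below 0.3.5 is the confirmed fix. Until a release includes it, a hedged sketch of a generic escape hatch: `Dataset.map` accepts an explicit `new_fingerprint`, which can be used for caching instead of hashing the function (the split name below is an assumption):

```python
from datasets import load_dataset

# Confirmed workaround from the thread (run in the shell):
#   pip install "dill<0.3.5"

# Generic escape hatch: supply the fingerprint yourself.
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")

def prepare_dataset(batch):
    return batch

# With new_fingerprint set, datasets can use it for caching rather than
# relying on hashing the (unhashable) function.
ds = ds.map(prepare_dataset, new_fingerprint="prepare_dataset-v1")
```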
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4521/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4520
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4520/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4520/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4520/events
|
https://github.com/huggingface/datasets/issues/4520
| 1,274,879,180
|
I_kwDODunzps5L_RzM
| 4,520
|
Failure to hash `dataclasses` - results in functions that cannot be hashed or cached in `.map`
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] |
[
"I think this has been fixed by #4516, let me know if you encounter this again :)\r\n\r\nI re-ran your code in 3.7 and 3.9 and it works fine",
"Thank you!"
] | 1,655,462,837,000
| 1,656,427,637,000
| 1,656,425,069,000
|
CONTRIBUTOR
| null | null |
Dataclasses cannot be hashed, so any function that uses them cannot be hashed or cached by the `.map` method. Dataclasses are used extensively in Transformers example scripts (c.f. [CTC example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py)). Since dataclasses cannot be hashed, one has to assign dataclass attributes to separate variables prior to passing them to the `.map` method:
```python
phoneme_language = data_args.phoneme_language
```
in the example https://github.com/huggingface/transformers/blob/3c7e56fbb11f401de2528c1dcf0e282febc031cd/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L603-L630
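A minimal sketch of this extraction pattern (names assumed), pulling a plain value out of the dataclass before `.map` so the mapped closure only captures hashable primitives:

```python
phoneme_language = data_args.phoneme_language  # plain str: picklable and hashable

def prepare(example):
    # The closure captures the string, not the DataTrainingArguments instance.
    example["phoneme_language"] = phoneme_language
    return example

# dataset = dataset.map(prepare)  # fingerprinting can now hash `prepare`
```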
## Steps to reproduce the bug
```python
from dataclasses import dataclass, field

from datasets.fingerprint import Hasher


@dataclass
class DataTrainingArguments:
    """
    Arguments pertaining to what data we are going to input our model for training and eval.
    """

    phoneme_language: str = field(
        default=None, metadata={"help": "The name of the phoneme language to use."}
    )


data_args = DataTrainingArguments(phoneme_language="foo")
Hasher.hash(data_args)

phoneme_language = data_args.phoneme_language
Hasher.hash(phoneme_language)
```
## Expected results
A hash.
## Actual results
<details>
<summary> Traceback </summary>
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Input In [1], in <cell line: 16>()
10 phoneme_language: str = field(
11 default=None, metadata={"help": "The name of the phoneme language to use."}
12 )
14 data_args = DataTrainingArguments(phoneme_language ="foo")
---> 16 Hasher.hash(data_args)
18 phoneme_language = data_args. phoneme_language
20 Hasher.hash(phoneme_language)
File ~/datasets/src/datasets/fingerprint.py:237, in Hasher.hash(cls, value)
235 return cls.dispatch[type(value)](cls, value)
236 else:
--> 237 return cls.hash_default(value)
File ~/datasets/src/datasets/fingerprint.py:230, in Hasher.hash_default(cls, value)
228 @classmethod
229 def hash_default(cls, value: Any) -> str:
--> 230 return cls.hash_bytes(dumps(value))
File ~/datasets/src/datasets/utils/py_utils.py:564, in dumps(obj)
562 file = StringIO()
563 with _no_cache_fields(obj):
--> 564 dump(obj, file)
565 return file.getvalue()
File ~/datasets/src/datasets/utils/py_utils.py:539, in dump(obj, file)
537 def dump(obj, file):
538 """pickle an object to a file"""
--> 539 Pickler(file, recurse=True).dump(obj)
540 return
File ~/hf/lib/python3.8/site-packages/dill/_dill.py:620, in Pickler.dump(self, obj)
618 raise PicklingError(msg)
619 else:
--> 620 StockPickler.dump(self, obj)
621 return
File /usr/lib/python3.8/pickle.py:487, in _Pickler.dump(self, obj)
485 if self.proto >= 4:
486 self.framer.start_framing()
--> 487 self.save(obj)
488 self.write(STOP)
489 self.framer.end_framing()
File /usr/lib/python3.8/pickle.py:603, in _Pickler.save(self, obj, save_persistent_id)
599 raise PicklingError("Tuple returned by %s must have "
600 "two to six elements" % reduce)
602 # Save the reduce() output and finally memoize the object
--> 603 self.save_reduce(obj=obj, *rv)
File /usr/lib/python3.8/pickle.py:687, in _Pickler.save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj)
684 raise PicklingError(
685 "args[0] from __newobj__ args has the wrong class")
686 args = args[1:]
--> 687 save(cls)
688 save(args)
689 write(NEWOBJ)
File /usr/lib/python3.8/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id)
558 f = self.dispatch.get(t)
559 if f is not None:
--> 560 f(self, obj) # Call unbound method with explicit self
561 return
563 # Check private dispatch table if any, or else
564 # copyreg.dispatch_table
File ~/hf/lib/python3.8/site-packages/dill/_dill.py:1838, in save_type(pickler, obj, postproc_list)
1836 postproc_list = []
1837 postproc_list.append((setattr, (obj, '__qualname__', obj_name)))
-> 1838 _save_with_postproc(pickler, (_create_type, (
1839 type(obj), obj.__name__, obj.__bases__, _dict
1840 )), obj=obj, postproc_list=postproc_list)
1841 log.info("# %s" % _t)
1842 else:
File ~/hf/lib/python3.8/site-packages/dill/_dill.py:1140, in _save_with_postproc(pickler, reduction, is_pickler_dill, obj, postproc_list)
1137 pickler._postproc[id(obj)] = postproc_list
1139 # TODO: Use state_setter in Python 3.8 to allow for faster cPickle implementations
-> 1140 pickler.save_reduce(*reduction, obj=obj)
1142 if is_pickler_dill:
1143 # pickler.x -= 1
1144 # print(pickler.x*' ', 'pop', obj, id(obj))
1145 postproc = pickler._postproc.pop(id(obj))
File /usr/lib/python3.8/pickle.py:692, in _Pickler.save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj)
690 else:
691 save(func)
--> 692 save(args)
693 write(REDUCE)
695 if obj is not None:
696 # If the object is already in the memo, this means it is
697 # recursive. In this case, throw away everything we put on the
698 # stack, and fetch the object back from the memo.
File /usr/lib/python3.8/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id)
558 f = self.dispatch.get(t)
559 if f is not None:
--> 560 f(self, obj) # Call unbound method with explicit self
561 return
563 # Check private dispatch table if any, or else
564 # copyreg.dispatch_table
File /usr/lib/python3.8/pickle.py:901, in _Pickler.save_tuple(self, obj)
899 write(MARK)
900 for element in obj:
--> 901 save(element)
903 if id(obj) in memo:
904 # Subtle. d was not in memo when we entered save_tuple(), so
905 # the process of saving the tuple's elements must have saved
(...)
909 # could have been done in the "for element" loop instead, but
910 # recursive tuples are a rare thing.
911 get = self.get(memo[id(obj)][0])
File /usr/lib/python3.8/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id)
558 f = self.dispatch.get(t)
559 if f is not None:
--> 560 f(self, obj) # Call unbound method with explicit self
561 return
563 # Check private dispatch table if any, or else
564 # copyreg.dispatch_table
File ~/hf/lib/python3.8/site-packages/dill/_dill.py:1251, in save_module_dict(pickler, obj)
1248 if is_dill(pickler, child=False) and pickler._session:
1249 # we only care about session the first pass thru
1250 pickler._first_pass = False
-> 1251 StockPickler.save_dict(pickler, obj)
1252 log.info("# D2")
1253 return
File /usr/lib/python3.8/pickle.py:971, in _Pickler.save_dict(self, obj)
968 self.write(MARK + DICT)
970 self.memoize(obj)
--> 971 self._batch_setitems(obj.items())
File /usr/lib/python3.8/pickle.py:997, in _Pickler._batch_setitems(self, items)
995 for k, v in tmp:
996 save(k)
--> 997 save(v)
998 write(SETITEMS)
999 elif n:
File /usr/lib/python3.8/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id)
558 f = self.dispatch.get(t)
559 if f is not None:
--> 560 f(self, obj) # Call unbound method with explicit self
561 return
563 # Check private dispatch table if any, or else
564 # copyreg.dispatch_table
File ~/datasets/src/datasets/utils/py_utils.py:862, in save_function(pickler, obj)
859 if state_dict:
860 state = state, state_dict
--> 862 dill._dill._save_with_postproc(
863 pickler,
864 (
865 dill._dill._create_function,
866 (obj.__code__, globs, obj.__name__, obj.__defaults__, closure),
867 state,
868 ),
869 obj=obj,
870 postproc_list=postproc_list,
871 )
872 else:
873 closure = obj.func_closure
File ~/hf/lib/python3.8/site-packages/dill/_dill.py:1153, in _save_with_postproc(pickler, reduction, is_pickler_dill, obj, postproc_list)
1151 dest, source = reduction[1]
1152 if source:
-> 1153 pickler.write(pickler.get(pickler.memo[id(dest)][0]))
1154 pickler._batch_setitems(iter(source.items()))
1155 else:
1156 # Updating with an empty dictionary. Same as doing nothing.
KeyError: 140434581781568
```
</details>
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.3.dev0
- Platform: Linux-5.11.0-1028-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
cc @lhoestq
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4520/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4514
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4514/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4514/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4514/events
|
https://github.com/huggingface/datasets/issues/4514
| 1,273,505,230
|
I_kwDODunzps5L6CXO
| 4,514
|
Allow .JPEG as a file extension
|
{
"login": "DiGyt",
"id": 34550289,
"node_id": "MDQ6VXNlcjM0NTUwMjg5",
"avatar_url": "https://avatars.githubusercontent.com/u/34550289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DiGyt",
"html_url": "https://github.com/DiGyt",
"followers_url": "https://api.github.com/users/DiGyt/followers",
"following_url": "https://api.github.com/users/DiGyt/following{/other_user}",
"gists_url": "https://api.github.com/users/DiGyt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DiGyt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DiGyt/subscriptions",
"organizations_url": "https://api.github.com/users/DiGyt/orgs",
"repos_url": "https://api.github.com/users/DiGyt/repos",
"events_url": "https://api.github.com/users/DiGyt/events{/privacy}",
"received_events_url": "https://api.github.com/users/DiGyt/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] |
[
"Hi, thanks for reporting! I've opened a PR with the fix.",
"Wow, that was quick! Thank you very much 🙏 "
] | 1,655,382,980,000
| 1,655,713,126,000
| 1,655,399,500,000
|
NONE
| null | null |
## Describe the bug
When loading image data, HF datasets recognizes the `.jpg` and `.jpeg` file extensions, but not uppercase variants such as `.JPEG`. As the `.JPEG` naming convention is used in important datasets such as ImageNet, I would welcome it if the corresponding extensions like `.JPEG` or `.JPG` were allowed.
## Steps to reproduce the bug
```python
# use bash to create 2 sham datasets with jpeg and JPEG ext
!mkdir dataset_a
!mkdir dataset_b
!wget https://upload.wikimedia.org/wikipedia/commons/7/71/Dsc_%28179253513%29.jpeg -O example_img.jpeg
!cp example_img.jpeg ./dataset_a/
!mv example_img.jpeg ./dataset_b/example_img.JPEG
from datasets import load_dataset
# working
df1 = load_dataset("./dataset_a", ignore_verifications=True)
#not working
df2 = load_dataset("./dataset_b", ignore_verifications=True)
# show
print(df1, df2)
```
## Expected results
```
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 1
})
}) DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 1
})
})
```
## Actual results
```
FileNotFoundError: Unable to resolve any data file that matches '['**']' at /..PATH../dataset_b with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']
```
I know that it can be annoying to allow seemingly arbitrary numbers of file extensions. But I think this one would be really welcome.
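Until uppercase extensions are supported, a hedged workaround sketch is to normalize the extensions on disk before loading (paths match the reproduction above):

```python
from pathlib import Path

from datasets import load_dataset

# Rename *.JPEG to *.jpeg so the files match the supported extension list.
for path in Path("dataset_b").rglob("*.JPEG"):
    path.rename(path.with_suffix(".jpeg"))

df2 = load_dataset("./dataset_b", ignore_verifications=True)
```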
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4514/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4508
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4508/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4508/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4508/events
|
https://github.com/huggingface/datasets/issues/4508
| 1,272,718,921
|
I_kwDODunzps5L3CZJ
| 4,508
|
cast_storage method from datasets.features
|
{
"login": "romainremyb",
"id": 67968596,
"node_id": "MDQ6VXNlcjY3OTY4NTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/67968596?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/romainremyb",
"html_url": "https://github.com/romainremyb",
"followers_url": "https://api.github.com/users/romainremyb/followers",
"following_url": "https://api.github.com/users/romainremyb/following{/other_user}",
"gists_url": "https://api.github.com/users/romainremyb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/romainremyb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/romainremyb/subscriptions",
"organizations_url": "https://api.github.com/users/romainremyb/orgs",
"repos_url": "https://api.github.com/users/romainremyb/repos",
"events_url": "https://api.github.com/users/romainremyb/events{/privacy}",
"received_events_url": "https://api.github.com/users/romainremyb/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi! We've recently added a check to the `ClassLabel` type to ensure the values are in the valid label range `-1, 0, ..., num_classes-1` (-1 is used for missing values). The error in your case happens only if the `labels` column is of type `Sequence(ClassLabel(...))` before the `map` call and can be avoided by calling `dataset = dataset.cast_column(\"labels\", Sequence(Value(\"int\")))` beforehand. The token-classification examples in Transformers introduce a new `labels` column, so their type is also `Sequence(Value(\"int\"))`, which doesn't lead to an error as this type unbounded. ",
"I'm fine with re-adding support for all negative values for unknown/missing labels @mariosasko, wdyt ?"
] | 1,655,326,042,000
| 1,655,387,647,000
| 1,655,387,647,000
|
NONE
| null | null |
## Describe the bug
A bug occurs when mapping a function over a dataset object. I ran the same code with the same data yesterday and it worked just fine. It also works when I run it locally on an older version of `datasets`.
## Steps to reproduce the bug
Steps are:
- load any dataset
- write a preprocessing function such as the "tokenize_and_align_labels" function from https://huggingface.co/docs/transformers/tasks/token_classification
- map the function over the dataset and get "ValueError: Class label -100 less than -1" from the `cast_storage` method in `datasets.features`
# Sample code to reproduce the bug

```python
from transformers import AutoTokenizer


def tokenize_and_align_labels(examples):
    tokenized_inputs = tokenizer(
        examples["tokens"],
        truncation=True,
        is_split_into_words=True,
        max_length=38,
        padding="max_length",
    )
    labels = []
    for i, label in enumerate(examples["labels"]):
        word_ids = tokenized_inputs.word_ids(batch_index=i)  # Map tokens to their respective word.
        previous_word_idx = None
        label_ids = []
        for word_idx in word_ids:  # Set the special tokens to -100.
            if word_idx is None:
                label_ids.append(-100)
            elif word_idx != previous_word_idx:  # Only label the first token of a given word.
                label_ids.append(label[word_idx])
            else:
                label_ids.append(-100)
            previous_word_idx = word_idx
        labels.append(label_ids)
    tokenized_inputs["labels"] = labels
    return tokenized_inputs


tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
dt = dataset.map(tokenize_and_align_labels, batched=True)
```
## Expected results
The function should be mapped over the dataset without error, as it is on older versions.
## Actual results
"ValueError: Class label -100 less than -1" from the `cast_storage` method in `datasets.features`.
## Environment info
Everything works fine on older installations of datasets/transformers.
The issue arises when installing datasets on Google Colab under Python 3.7.
I can't manage to find the exact output you're requiring, but the version printed is datasets-2.3.2.
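Based on the maintainer comment above (the new `ClassLabel` range check), a hedged sketch of the suggested workaround, casting the `labels` column to an unbounded integer type before mapping (`int64` is assumed here as the concrete dtype):

```python
from datasets import Sequence, Value

# Cast away the ClassLabel bound so -100 padding values are accepted.
dataset = dataset.cast_column("labels", Sequence(Value("int64")))
dt = dataset.map(tokenize_and_align_labels, batched=True)
```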
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4508/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4507
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4507/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4507/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4507/events
|
https://github.com/huggingface/datasets/issues/4507
| 1,272,615,932
|
I_kwDODunzps5L2pP8
| 4,507
|
How to let `load_dataset` return a `Dataset` instead of `DatasetDict` in customized loading script
|
{
"login": "liyucheng09",
"id": 27999909,
"node_id": "MDQ6VXNlcjI3OTk5OTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/27999909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liyucheng09",
"html_url": "https://github.com/liyucheng09",
"followers_url": "https://api.github.com/users/liyucheng09/followers",
"following_url": "https://api.github.com/users/liyucheng09/following{/other_user}",
"gists_url": "https://api.github.com/users/liyucheng09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liyucheng09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liyucheng09/subscriptions",
"organizations_url": "https://api.github.com/users/liyucheng09/orgs",
"repos_url": "https://api.github.com/users/liyucheng09/repos",
"events_url": "https://api.github.com/users/liyucheng09/events{/privacy}",
"received_events_url": "https://api.github.com/users/liyucheng09/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] |
[
"Hi @liyucheng09.\r\n\r\nUsers can pass the `split` parameter to `load_dataset`. For example, if your split name is \"train\",\r\n```python\r\nds = load_dataset(\"dataset_name\", split=\"train\")\r\n```\r\nwill return a `Dataset` instance.",
"@albertvillanova Thanks! I can't believe I didn't know this feature till now."
] | 1,655,319,394,000
| 1,655,376,008,000
| 1,655,376,008,000
|
NONE
| null | null |
If the dataset does not need splits (i.e., no training and validation split; it is more like a single table), how can I make the `load_dataset` function return a `Dataset` object directly rather than a `DatasetDict` object with only one key-value pair?
Or, to paraphrase the question: how can I skip the `_split_generators` step in `DatasetBuilder` so that `as_dataset` gives a single `Dataset` rather than a `list[Dataset]`?
Many thanks for any help.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4507/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4506
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4506/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4506/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4506/events
|
https://github.com/huggingface/datasets/issues/4506
| 1,272,516,895
|
I_kwDODunzps5L2REf
| 4,506
|
Failure to hash (and cache) a `.map(...)` (almost always) - using this method can produce incorrect results
|
{
"login": "DrMatters",
"id": 22641583,
"node_id": "MDQ6VXNlcjIyNjQxNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/22641583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DrMatters",
"html_url": "https://github.com/DrMatters",
"followers_url": "https://api.github.com/users/DrMatters/followers",
"following_url": "https://api.github.com/users/DrMatters/following{/other_user}",
"gists_url": "https://api.github.com/users/DrMatters/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DrMatters/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DrMatters/subscriptions",
"organizations_url": "https://api.github.com/users/DrMatters/orgs",
"repos_url": "https://api.github.com/users/DrMatters/repos",
"events_url": "https://api.github.com/users/DrMatters/events{/privacy}",
"received_events_url": "https://api.github.com/users/DrMatters/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Important info:\r\n\r\nAs hashes are generated randomly for functions, it leads to **false identifying some results as already hashed** (mapping function is not executed after a method update) when there's a `pytorch_lightning.seed_everything(123)`",
"@lhoestq\r\nseems like quite critical stuff for me, if I'm not making a mistake",
"Hi ! Thanks for reporting. This bug seems to appear in python 3.9 using dill 3.5.1\r\n\r\nAs a workaround you can use an older version of dill:\r\n```\r\npip install \"dill<0.3.5\"\r\n```",
"installing `dill<0.3.5` after installing `datasets` by pip results in dependency conflict with the version required for `multiprocess`. It can be solved by installing `pip install datasets \"dill<0.3.5\"` (simultaneously) on a clean environment",
"This has been fixed in https://github.com/huggingface/datasets/pull/4516, we will do a new release soon to include the fix :)"
] | 1,655,313,091,000
| 1,676,517,272,000
| 1,656,422,585,000
|
NONE
| null | null |
## Describe the bug
Sometimes I get messages about not being able to hash a method:
`Parameter 'function'=<function StupidDataModule._separate_speaker_id_from_dialogue at 0x7f1b27180d30> of the transform datasets.arrow_dataset.Dataset.
_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.`
Whilst the function looks like this:
```python
@staticmethod
def _separate_speaker_id_from_dialogue(example: arrow_dataset.Example):
    speaker_id, dialogue = tuple(zip(*(example["dialogue"])))
    example["speaker_id"] = speaker_id
    example["dialogue"] = dialogue
    return example
```
This is the first step in my preprocessing pipeline, but the hashing-failure message sometimes skips the first step and appears on a later one instead.
This error sometimes prevents cached data from being reused, so all steps are re-run.
## Steps to reproduce the bug
```python
import copy

import datasets
from datasets import arrow_dataset


def main():
    dataset = datasets.load_dataset("blended_skill_talk")
    res = dataset.map(method)
    print(res)


def method(example: arrow_dataset.Example):
    example["previous_utterance_copy"] = copy.deepcopy(example["previous_utterance"])
    return example


if __name__ == "__main__":
    main()
```
Run with:
```
python -m reproduce_error
```
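A hedged diagnostic sketch: the same `Hasher` that `.map` relies on can be called directly; if it raises, `.map` falls back to a random hash as in the warning above:

```python
from datasets.fingerprint import Hasher

def method(example):
    return example

try:
    print(Hasher.hash(method))  # success: .map can fingerprint deterministically
except Exception as exc:
    print("hashing failed; .map will fall back to a random hash:", exc)
```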
## Expected results
Dataset is mapped and cached correctly.
## Actual results
The code outputs this at some point:
`Parameter 'function'=<function method at 0x7faa83d2a160> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Ubuntu 20.04.3
- Python version: 3.9.12
- PyArrow version: 8.0.0
- Datasets version: 2.3.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4506/timeline
|
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4504
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4504/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4504/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4504/events
|
https://github.com/huggingface/datasets/issues/4504
| 1,272,418,480
|
I_kwDODunzps5L15Cw
| 4,504
|
Can you please add the Stanford dog dataset?
|
{
"login": "dgrnd4",
"id": 69434832,
"node_id": "MDQ6VXNlcjY5NDM0ODMy",
"avatar_url": "https://avatars.githubusercontent.com/u/69434832?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dgrnd4",
"html_url": "https://github.com/dgrnd4",
"followers_url": "https://api.github.com/users/dgrnd4/followers",
"following_url": "https://api.github.com/users/dgrnd4/following{/other_user}",
"gists_url": "https://api.github.com/users/dgrnd4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dgrnd4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dgrnd4/subscriptions",
"organizations_url": "https://api.github.com/users/dgrnd4/orgs",
"repos_url": "https://api.github.com/users/dgrnd4/repos",
"events_url": "https://api.github.com/users/dgrnd4/events{/privacy}",
"received_events_url": "https://api.github.com/users/dgrnd4/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
},
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
open
| false
|
{
"login": "khushmeeet",
"id": 8711912,
"node_id": "MDQ6VXNlcjg3MTE5MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8711912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khushmeeet",
"html_url": "https://github.com/khushmeeet",
"followers_url": "https://api.github.com/users/khushmeeet/followers",
"following_url": "https://api.github.com/users/khushmeeet/following{/other_user}",
"gists_url": "https://api.github.com/users/khushmeeet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khushmeeet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khushmeeet/subscriptions",
"organizations_url": "https://api.github.com/users/khushmeeet/orgs",
"repos_url": "https://api.github.com/users/khushmeeet/repos",
"events_url": "https://api.github.com/users/khushmeeet/events{/privacy}",
"received_events_url": "https://api.github.com/users/khushmeeet/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "khushmeeet",
"id": 8711912,
"node_id": "MDQ6VXNlcjg3MTE5MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8711912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khushmeeet",
"html_url": "https://github.com/khushmeeet",
"followers_url": "https://api.github.com/users/khushmeeet/followers",
"following_url": "https://api.github.com/users/khushmeeet/following{/other_user}",
"gists_url": "https://api.github.com/users/khushmeeet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khushmeeet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khushmeeet/subscriptions",
"organizations_url": "https://api.github.com/users/khushmeeet/orgs",
"repos_url": "https://api.github.com/users/khushmeeet/repos",
"events_url": "https://api.github.com/users/khushmeeet/events{/privacy}",
"received_events_url": "https://api.github.com/users/khushmeeet/received_events",
"type": "User",
"site_admin": false
}
] |
[
"would you like to give it a try, @dgrnd4? (maybe with the help of the dataset author?)",
"@julien-c i am sorry but I have no idea about how it works: can I add the dataset by myself, following \"instructions to add a new dataset\"?\r\nCan I add a dataset even if it's not mine? (it's public in the link that I wrote on the post)\r\n",
"Hi! The [ADD NEW DATASET](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md) instructions are indeed the best place to start. It's also perfectly fine to add a dataset if it's public, even if it's not yours. Let me know if you need some additional pointers.",
"If no one is working on this, I could take this up!",
"@khushmeeet this is the [link](https://huggingface.co/datasets/dgrnd4/stanford_dog_dataset) where I added the dataset already. If you can I would ask you to do this:\r\n1) The dataset it's all in TRAINING SET: can you please divide it in Training,Test and Validation Set? If you can for each class, take the 80% for the Training set and the 10% for Test and 10% Validation\r\n2) The images has different size, can you please resize all the images in 224,224,3? Look even at the last dimension \"3\" because some images has dimension 4!\r\n\r\nThank you!!",
"Hi @khushmeeet! Thanks for the interest. You can self-assign the issue by commenting `#self-assign` on it. \r\n\r\nAlso, I think we can skip @dgrnd4's steps as we try to avoid any custom processing on top of raw data. One can later copy the script and override `_post_process` in it to perform such processing on the generated dataset.",
"Thanks @mariosasko \r\n\r\n@dgrnd4 As dataset is there on Hub, and preprocessing is not recommended. I am not sure if there is any other task to do. However, I can't seem to find relevant `.py` files for this dataset in GitHub repo.",
"@khushmeeet @mariosasko The point is that the images must be processed and must have the same size in order to can be used for things for example \"Training\". ",
"@dgrnd4 Yes, but this can be done after loading (`map` to resize images and `train_test_split` to create extra splits)\r\n\r\n@khushmeeet The linked version is implemented as a no-code dataset and is generated directly from the ZIP archive, but our \"GitHub\" datasets (these are datasets without a user/org namespace on the Hub) need a generation script, and you can find one [here](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/image_classification/stanford_dogs.py). `datasets` started as a fork of TFDS, so we share similar script structure, which makes it trivial to adapt it.",
"@mariosasko The point is that if I use something like this:\r\nx_train, x_test = train_test_split(dataset, test_size=0.1) \r\n\r\nto get Train 90% and Test 10%, and then to get the Validation Set (10% of the whole 100%):\r\n\r\n```\r\ntrain_ratio = 0.80\r\nvalidation_ratio = 0.10\r\ntest_ratio = 0.10\r\n\r\nx_train, x_test, y_train, y_test = train_test_split(dataX, dataY, test_size=1 - train_ratio)\r\nx_val, x_test, y_val, y_test = train_test_split(x_test, y_test, test_size=test_ratio/(test_ratio + validation_ratio)) \r\n\r\n```\r\n\r\nThe point is that the structure of the data is:\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['image', 'label'],\r\n num_rows: 20580\r\n })\r\n})\r\n\r\n```\r\n\r\nSo how to extract images and labels?\r\n\r\nEDIT --> Split of the dataset in Train-Test-Validation:\r\n```\r\nimport datasets\r\nfrom datasets.dataset_dict import DatasetDict\r\nfrom datasets import Dataset\r\n\r\npercentage_divison_test = int(len(dataset['train'])/100 *10) # 10% --> 2058 \r\npercentage_divison_validation = int(len(dataset['train'])/100 *20) # 20% --> 4116\r\n\r\ndataset_ = datasets.DatasetDict({\"train\": Dataset.from_dict({\r\n\r\n 'image': dataset['train'][0 : len(dataset['train']) ]['image'], \r\n 'labels': dataset['train'][0 : len(dataset['train']) ]['label'] }), \r\n \r\n \"test\": Dataset.from_dict({ #20580-4116 (validation) ,20580-2058 (test)\r\n 'image': dataset['train'][len(dataset['train']) - percentage_divison_validation : len(dataset['train']) - percentage_divison_test]['image'], \r\n 'labels': dataset['train'][len(dataset['train']) - percentage_divison_validation : len(dataset['train']) - percentage_divison_test]['label'] }), \r\n \r\n \"validation\": Dataset.from_dict({ # 20580-2058 (test)\r\n 'image': dataset['train'][len(dataset['train']) - percentage_divison_test : len(dataset['train'])]['image'], \r\n 'labels': dataset['train'][len(dataset['train']) - percentage_divison_test : len(dataset['train'])]['label'] }), \r\n })\r\n```",
"@mariosasko in order to resize images I'm trying this method: \r\n```\r\nfor i in range(0,len(dataset['train'])): #len(dataset['train'])\r\n\r\n ex = dataset['train'][i] #i\r\n image = ex['image']\r\n image = image.convert(\"RGB\") # <class 'PIL.Image.Image'> <PIL.Image.Image image mode=RGB size=500x333 at 0x7F84F1948150>\r\n image_resized = image.resize(size_to_resize) # <PIL.Image.Image image mode=RGB size=224x224 at 0x7F84F17885D0>\r\n\r\n dataset['train'][i]['image'] = image_resized \r\n```\r\n\r\nBecause the DatasetDict is formed by arrows that are immutable, the changing assignment in the last line of code, doesn't work!\r\nDo you have any idea in order to get a valid result?",
"#self-assign",
"I have raised PR for adding stanford-dog dataset. I have not added any data preprocessing code. Only dataset generation script is there. Let me know any changes required, or anything to add to README."
] | 1,655,307,575,000
| 1,657,342,067,000
| null |
NONE
| null | null |
## Adding a Dataset
- **Name:** *Stanford Dogs dataset*
- **Description:** *The dataset contains 120 classes for a total of 20,580 images. You can find the dataset at http://vision.stanford.edu/aditya86/ImageNetDogs/*
- **Paper:** *Khosla et al., "Novel Dataset for Fine-Grained Image Categorization: Stanford Dogs", FGVC workshop at CVPR 2011 (linked from http://vision.stanford.edu/aditya86/ImageNetDogs/)*
- **Data:** *[Stanford Dogs dataset page](http://vision.stanford.edu/aditya86/ImageNetDogs/)*
- **Motivation:** *The dataset has been built using images and annotations from ImageNet for the task of fine-grained image categorization, which makes it useful for fine-grained classification work.*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
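
As a side note to the discussion above: the resizing and splitting can be done after loading rather than inside the loader. Below is a minimal sketch of that approach; the repo id (taken from the link in the comments), the 224x224 target size, and the seed are illustrative assumptions, not part of any official script.

```python
# Sketch only: resize with `map`, then derive 80/10/10 splits with
# `train_test_split`. The repo id and target size are assumptions.
from datasets import DatasetDict, load_dataset

dataset = load_dataset("dgrnd4/stanford_dog_dataset", split="train")

def resize(example):
    # Convert to RGB first, since some images carry a 4th (alpha) channel
    example["image"] = example["image"].convert("RGB").resize((224, 224))
    return example

dataset = dataset.map(resize)

# Carve off 20% of the rows, then halve that into validation and test (10% each)
splits = dataset.train_test_split(test_size=0.2, seed=42)
holdout = splits["test"].train_test_split(test_size=0.5, seed=42)
dataset_dict = DatasetDict(
    {"train": splits["train"], "validation": holdout["train"], "test": holdout["test"]}
)
```

Unlike the manual slicing shown in the comments, `train_test_split` shuffles before splitting, so every class should appear in each split.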
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4504/timeline
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4502
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4502/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4502/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4502/events
|
https://github.com/huggingface/datasets/issues/4502
| 1,272,353,700
|
I_kwDODunzps5L1pOk
| 4,502
|
Logic bug in arrow_writer?
|
{
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @cccntu you're right, as when `batch_examples={}` the current if-statement won't be triggered as the condition won't be satisfied, I'll prepare a PR to address it as well as add the regression tests so that this issue is handled properly.",
"Hi @alvarobartt ,\r\nThanks for answering. Do you know when and why an empty batch is passed to this function? This only happened to me when processing with multiple workers, while chunking examples, I think.",
"> Hi @alvarobartt , Thanks for answering. Do you know when and why an empty batch is passed to this function? This only happened to me when processing with multiple workers, while chunking examples, I think.\r\n\r\nSo it depends on how you're actually chunking the data as if you're not handling empty chunks `batch_examples={}` or `batch_examples=None`, you may end up running into this issue. So you could check the chunks before you actually call `ArrowWriter.write_batch`, but anyway the fix you proposed I think improves the logic of `write_batch` to avoid running into these issues.",
"Thanks, I added a if-print and I found it does return an empty examples in the chunking function that is passed to `.map()`.",
"Hi ! We consider an empty batch to look like this:\r\n```python\r\nempty_batch = {\r\n \"column_1\": [],\r\n \"column_2\": [],\r\n ...\r\n}\r\n```\r\n\r\nWhile `{}` corresponds to a batch with no columns.\r\n\r\nTherefore calling this code should fail, because the two batches don't have the same columns:\r\n```python\r\nwriter.write_batch({\"a\": [1, 2, 3]})\r\nwriter.write_batch({})\r\n```\r\n\r\nIf you want to write an empty batch, you should do this instead:\r\n```python\r\nwriter.write_batch({\"a\": [1, 2, 3]})\r\nwriter.write_batch({\"a\": []})\r\n```",
"Makes sense, then the if-statement should remain the same or is it better to handle both cases separately using `if not batch_examples or len(next(iter(batch_examples.values()))) == 0: ...`?\r\n\r\nUpdating the regressions tests with an empty batch formatted as `{\"col_1\": [], \"col_2\": []}` instead of `{}` works fine with the current if, and also with the one proposed by @cccntu.",
"> Makes sense, then the if-statement should remain the same or is it better to handle both cases separately using if not batch_examples or len(next(iter(batch_examples.values()))) == 0: ...?\r\n\r\nThere's a check later in the code that makes sure that the columns are the right ones, so I don't think we need to check for `{}` here\r\n\r\nIn particular the check `if not batch_examples or len(next(iter(batch_examples.values()))) == 0:` doesn't raise an error while it should, that why the old `if` is fine IMO\r\n\r\n> Updating the regressions tests with an empty batch formatted as {\"col_1\": [], \"col_2\": []} instead of {} works fine with the current if, and also with the one proposed by @cccntu.\r\n\r\nCool ! If you want you can update your PR to add the regression tests, to make sure that `{\"col_1\": [], \"col_2\": []}` works but not `{}`",
"Great thanks for the response! So I'll just add that regression test and remove the current if-statement.",
"Hi @lhoestq ,\r\n\r\nThanks for your explanation. Now I get it that `{}` means the columns are different. But wouldn't it be nice if the code can ignore it, like it ignores `{\"a\": []}`?\r\n\r\n\r\n--- \r\nBTW, \r\n> There's a check later in the code that makes sure that the columns are the right ones, so I don't think we need to check for {} here\r\n\r\nI remember the error happens around here:\r\nhttps://github.com/huggingface/datasets/blob/88a902d6474fae8d793542d57a4f3b0d187f3c5b/src/datasets/arrow_writer.py#L506-L507\r\nThe error says something like `arrays` and `schema` doesn't have the same length. And it's not very clear I passed a `{}`.\r\n\r\nedit: actual error message\r\n```\r\nFile \"site-packages/datasets/arrow_writer.py\", line 595, in write_batch\r\n pa_table = pa.Table.from_arrays(arrays, schema=schema)\r\n File \"pyarrow/table.pxi\", line 3557, in pyarrow.lib.Table.from_arrays\r\n File \"pyarrow/table.pxi\", line 1401, in pyarrow.lib._sanitize_arrays\r\nValueError: Schema and number of arrays unequal\r\n```",
"> But wouldn't it be nice if the code can ignore it, like it ignores {\"a\": []}?\r\n\r\nI think it would make things confusing because it doesn't follow our definition of a batch: \"the columns of a batch = the keys of the dict\". It would probably break certain behaviors as well. For example if you remove all the columns of a dataset (using `.remove_colums(...)` or `.map(..., remove_columns=...)`), the writer has to write 0 columns, and currently the only way to tell the writer to do so using `write_batch` is to pass `{}`.\r\n\r\n> The error says something like arrays and schema doesn't have the same length. And it's not very clear I passed a {}.\r\n\r\nYea the message can actually be improved indeed, it's definitely not clear. Maybe we can add a line right before the call `pa.Table.from_arrays` to make sure the keys of the batch match the field names of the schema"
] | 1,655,304,600,000
| 1,655,565,351,000
| 1,655,565,351,000
|
CONTRIBUTOR
| null | null |
https://github.com/huggingface/datasets/blob/88a902d6474fae8d793542d57a4f3b0d187f3c5b/src/datasets/arrow_writer.py#L475-L488
I got an error, and I found it's caused by `batch_examples` being `{}`. I wonder if the code should be as follows:
```diff
- if batch_examples and len(next(iter(batch_examples.values()))) == 0:
+ if not batch_examples or len(next(iter(batch_examples.values()))) == 0:
return
```
@lhoestq
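
For reference, here is a minimal sketch of the batch convention described in the comments above: every batch carries the same column keys, and an empty batch keeps its columns with zero-length lists. The output path is an arbitrary example, and the behavior of `{}` is as the maintainers explain in this thread, not a documented guarantee.

```python
# Sketch of the write_batch convention discussed in this issue; the path is arbitrary.
from datasets.arrow_writer import ArrowWriter

writer = ArrowWriter(path="out.arrow")
writer.write_batch({"a": [1, 2, 3]})  # a normal batch with column "a"
writer.write_batch({"a": []})         # an empty batch: same column, zero rows -> a no-op
# writer.write_batch({})              # a batch with no columns: expected to fail, since
#                                     # the columns no longer match the inferred schema
num_examples, num_bytes = writer.finalize()  # finalize returns the written counts
print(num_examples)  # 3
```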
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4502/timeline
|
completed
| false
|