| column | type |
|---|---|
| url | string (length 58-61) |
| repository_url | string (1 class) |
| labels_url | string (length 72-75) |
| comments_url | string (length 67-70) |
| events_url | string (length 65-68) |
| html_url | string (length 46-51) |
| id | int64 (600M-2.05B) |
| node_id | string (length 18-32) |
| number | int64 (2-6.51k) |
| title | string (length 1-290) |
| user | dict |
| labels | list (length 0-4) |
| state | string (2 classes) |
| locked | bool (1 class) |
| assignee | dict |
| assignees | list (length 0-4) |
| milestone | dict |
| comments | list (length 0-30) |
| created_at | timestamp[ns, tz=UTC] |
| updated_at | timestamp[ns, tz=UTC] |
| closed_at | timestamp[ns, tz=UTC] |
| author_association | string (3 classes) |
| active_lock_reason | float64 |
| draft | float64 (0, 1, ⌀) |
| pull_request | dict |
| body | string (length 0-228k, ⌀) |
| reactions | dict |
| timeline_url | string (length 67-70) |
| performed_via_github_app | float64 |
| state_reason | string (3 classes) |
| is_pull_request | bool (2 classes) |
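The records below can also be loaded and inspected programmatically. A minimal sketch, assuming the dump is published on the Hub (the repository id `user/github-issues` is a placeholder, not a real dataset):

```python
from datasets import load_dataset

# Load the dump and inspect the schema listed above.
# "user/github-issues" is a placeholder id.
ds = load_dataset("user/github-issues", split="train")
print(ds.features)   # column names and types
print(ds.num_rows)
```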
https://api.github.com/repos/huggingface/datasets/issues/1588
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1588/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1588/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1588/events
|
https://github.com/huggingface/datasets/pull/1588
| 769,068,227
|
MDExOlB1bGxSZXF1ZXN0NTQxMjg3OTcz
| 1,588
|
Modified hind encorp
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/56379013?v=4",
"events_url": "https://api.github.com/users/rahul-art/events{/privacy}",
"followers_url": "https://api.github.com/users/rahul-art/followers",
"following_url": "https://api.github.com/users/rahul-art/following{/other_user}",
"gists_url": "https://api.github.com/users/rahul-art/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rahul-art",
"id": 56379013,
"login": "rahul-art",
"node_id": "MDQ6VXNlcjU2Mzc5MDEz",
"organizations_url": "https://api.github.com/users/rahul-art/orgs",
"received_events_url": "https://api.github.com/users/rahul-art/received_events",
"repos_url": "https://api.github.com/users/rahul-art/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rahul-art/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rahul-art/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rahul-art"
}
|
[] |
closed
| false
| null |
[] | null |
[
"welcome, awesome "
] | 2020-12-16T16:28:14Z
| 2020-12-16T22:41:53Z
| 2020-12-16T17:20:28Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1588.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1588",
"merged_at": "2020-12-16T17:20:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1588.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1588"
}
|
Description added, unnecessary comments removed from the .py file, and README.md reformatted.
@lhoestq for #1584
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1588/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1588/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1524
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1524/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1524/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1524/events
|
https://github.com/huggingface/datasets/pull/1524
| 764,521,672
|
MDExOlB1bGxSZXF1ZXN0NTM4NTQ2MjI0
| 1,524
|
ADD: swahili dataset for language modeling
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29649801?v=4",
"events_url": "https://api.github.com/users/akshayb7/events{/privacy}",
"followers_url": "https://api.github.com/users/akshayb7/followers",
"following_url": "https://api.github.com/users/akshayb7/following{/other_user}",
"gists_url": "https://api.github.com/users/akshayb7/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/akshayb7",
"id": 29649801,
"login": "akshayb7",
"node_id": "MDQ6VXNlcjI5NjQ5ODAx",
"organizations_url": "https://api.github.com/users/akshayb7/orgs",
"received_events_url": "https://api.github.com/users/akshayb7/received_events",
"repos_url": "https://api.github.com/users/akshayb7/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/akshayb7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akshayb7/subscriptions",
"type": "User",
"url": "https://api.github.com/users/akshayb7"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-12T22:47:18Z
| 2020-12-17T16:37:16Z
| 2020-12-17T16:37:16Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1524.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1524",
"merged_at": "2020-12-17T16:37:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1524.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1524"
}
|
Add a corpus for Swahili language modelling. All tests passed locally. README updated with all information available.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1524/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1524/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5555
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5555/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5555/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5555/events
|
https://github.com/huggingface/datasets/issues/5555
| 1,592,469,938
|
I_kwDODunzps5e6ymy
| 5,555
|
`.shuffle` throwing error `ValueError: Protocol not known: parent`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10768588?v=4",
"events_url": "https://api.github.com/users/prabhakar267/events{/privacy}",
"followers_url": "https://api.github.com/users/prabhakar267/followers",
"following_url": "https://api.github.com/users/prabhakar267/following{/other_user}",
"gists_url": "https://api.github.com/users/prabhakar267/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/prabhakar267",
"id": 10768588,
"login": "prabhakar267",
"node_id": "MDQ6VXNlcjEwNzY4NTg4",
"organizations_url": "https://api.github.com/users/prabhakar267/orgs",
"received_events_url": "https://api.github.com/users/prabhakar267/received_events",
"repos_url": "https://api.github.com/users/prabhakar267/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/prabhakar267/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prabhakar267/subscriptions",
"type": "User",
"url": "https://api.github.com/users/prabhakar267"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! The indices mapping is written in the same cachedirectory as your dataset.\r\n\r\nCan you run this to show your current cache directory ?\r\n```python\r\nprint(train_dataset.cache_files)\r\n```",
"```\r\n[{'filename': '.../train/dataset.arrow'}, {'filename': '.../train/dataset.arrow'}]\r\n```\r\n\r\nThese are the actual paths where `.hf` files are stored. ",
"I'm not aware of any `.hf` file ? What are you referring to ?\r\n\r\nAlso the error says \"Protocol unknown: parent\". Is there a chance you may have ended up with a path that contains this string `parent://` ?",
"I figured out why the issue was occuring but don't know the long-term fix.\r\nThe dataset I was trying to shuffle was loaded from a saved file which had `::` delimiter in filename. When I try with the exact same file without `::` in filename, it works as expected.\r\nQuick fix is to not use colons in filename. But if this is expected behaviour, this should be clearly stated in the documentation.\r\nThanks for help @lhoestq "
] | 2023-02-20T21:33:45Z
| 2023-02-27T09:23:34Z
| null |
NONE
| null | null | null |
### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle()
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:551, in transmit_format.<locals>.wrapper(*args, **kwargs)
544 self_format = {
545 "type": self._format_type,
546 "format_kwargs": self._format_kwargs,
547 "columns": self._format_columns,
548 "output_all_columns": self._output_all_columns,
549 }
550 # apply actual function
--> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
553 # re-apply format to the output
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
476 validate_fingerprint(kwargs[fingerprint_name])
478 # Call actual function
--> 480 out = func(self, *args, **kwargs)
482 # Update fingerprint of in-place transforms + update in-place history of transforms
484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:3616, in Dataset.shuffle(self, seed, generator, keep_in_memory, load_from_cache_file, indices_cache_file_name, writer_batch_size, new_fingerprint)
3610 return self._new_dataset_with_indices(
3611 fingerprint=new_fingerprint, indices_cache_file_name=indices_cache_file_name
3612 )
3614 permutation = generator.permutation(len(self))
-> 3616 return self.select(
3617 indices=permutation,
3618 keep_in_memory=keep_in_memory,
3619 indices_cache_file_name=indices_cache_file_name if not keep_in_memory else None,
3620 writer_batch_size=writer_batch_size,
3621 new_fingerprint=new_fingerprint,
3622 )
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:551, in transmit_format.<locals>.wrapper(*args, **kwargs)
544 self_format = {
545 "type": self._format_type,
546 "format_kwargs": self._format_kwargs,
547 "columns": self._format_columns,
548 "output_all_columns": self._output_all_columns,
549 }
550 # apply actual function
--> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
553 # re-apply format to the output
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
476 validate_fingerprint(kwargs[fingerprint_name])
478 # Call actual function
--> 480 out = func(self, *args, **kwargs)
482 # Update fingerprint of in-place transforms + update in-place history of transforms
484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:3266, in Dataset.select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint)
3263 return self._select_contiguous(start, length, new_fingerprint=new_fingerprint)
3265 # If not contiguous, we need to create a new indices mapping
-> 3266 return self._select_with_indices_mapping(
3267 indices,
3268 keep_in_memory=keep_in_memory,
3269 indices_cache_file_name=indices_cache_file_name,
3270 writer_batch_size=writer_batch_size,
3271 new_fingerprint=new_fingerprint,
3272 )
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:551, in transmit_format.<locals>.wrapper(*args, **kwargs)
544 self_format = {
545 "type": self._format_type,
546 "format_kwargs": self._format_kwargs,
547 "columns": self._format_columns,
548 "output_all_columns": self._output_all_columns,
549 }
550 # apply actual function
--> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
553 # re-apply format to the output
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
476 validate_fingerprint(kwargs[fingerprint_name])
478 # Call actual function
--> 480 out = func(self, *args, **kwargs)
482 # Update fingerprint of in-place transforms + update in-place history of transforms
484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:3389, in Dataset._select_with_indices_mapping(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint)
3387 logger.info(f"Caching indices mapping at {indices_cache_file_name}")
3388 tmp_file = tempfile.NamedTemporaryFile("wb", dir=os.path.dirname(indices_cache_file_name), delete=False)
-> 3389 writer = ArrowWriter(
3390 path=tmp_file.name, writer_batch_size=writer_batch_size, fingerprint=new_fingerprint, unit="indices"
3391 )
3393 indices = indices if isinstance(indices, list) else list(indices)
3395 size = len(self)
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_writer.py:315, in ArrowWriter.__init__(self, schema, features, path, stream, fingerprint, writer_batch_size, hash_salt, check_duplicates, disable_nullable, update_features, with_metadata, unit, embed_local_files, storage_options)
312 self._disable_nullable = disable_nullable
314 if stream is None:
--> 315 fs_token_paths = fsspec.get_fs_token_paths(path, storage_options=storage_options)
316 self._fs: fsspec.AbstractFileSystem = fs_token_paths[0]
317 self._path = (
318 fs_token_paths[2][0]
319 if not is_remote_filesystem(self._fs)
320 else self._fs.unstrip_protocol(fs_token_paths[2][0])
321 )
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/fsspec/core.py:593, in get_fs_token_paths(urlpath, mode, num, name_function, storage_options, protocol, expand)
591 else:
592 urlpath = stringify_path(urlpath)
--> 593 chain = _un_chain(urlpath, storage_options or {})
594 if len(chain) > 1:
595 inkwargs = {}
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/fsspec/core.py:330, in _un_chain(path, kwargs)
328 for bit in reversed(bits):
329 protocol = split_protocol(bit)[0] or "file"
--> 330 cls = get_filesystem_class(protocol)
331 extra_kwargs = cls._get_kwargs_from_urls(bit)
332 kws = kwargs.get(protocol, {})
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/fsspec/registry.py:240, in get_filesystem_class(protocol)
238 if protocol not in registry:
239 if protocol not in known_implementations:
--> 240 raise ValueError("Protocol not known: %s" % protocol)
241 bit = known_implementations[protocol]
242 try:
ValueError: Protocol not known: parent
```
This is what the `train_dataset` object looks like
```
Dataset({
features: ['label', 'input_ids', 'attention_mask'],
num_rows: 364166
})
```
### Steps to reproduce the bug
The `train_dataset` object is created by concatenating two datasets, and then `shuffle` is called, which throws the error above.
### Expected behavior
Should shuffle the dataset properly.
### Environment info
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-1022-aws-x86_64-with-glibc2.31
- Python version: 3.9.13
- PyArrow version: 10.0.0
- Pandas version: 1.4.4
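The resolution in the comments above points at fsspec's path chaining: a minimal sketch reproducing the error outside of `datasets`, assuming a hypothetical cache path that contains both `::` and `parent://`:

```python
import fsspec

# fsspec splits paths on "::" into chained segments; a segment that looks
# like "parent://..." is treated as an (unknown) filesystem protocol.
# The path below is hypothetical, chosen only to trigger the error.
path = "/data/cache::parent://train/dataset.arrow"
try:
    fsspec.get_fs_token_paths(path)
except ValueError as err:
    print(err)  # -> Protocol not known: parent
```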
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5555/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5555/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1369
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1369/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1369/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1369/events
|
https://github.com/huggingface/datasets/pull/1369
| 760,227,776
|
MDExOlB1bGxSZXF1ZXN0NTM1MDk0NDk1
| 1,369
|
Use passed --cache_dir for modules cache
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
open
| false
| null |
[] | null |
[
"I have a question: why not using a tmp dir instead, like the DummyDataGeneratorDownloadManager does?",
"Hi @lhoestq, I am trying to understand better the logic...\r\n\r\nWhy do we have a `dynamic_module_path` besides the modules cache path?\r\n```python\r\nDYNAMIC_MODULES_PATH = os.path.join(HF_MODULES_CACHE, \"datasets_modules\")\r\n```\r\nMoreover, 2 subdirectories (for datasets and for metrics) were created inside it:\r\n```python\r\nDATASETS_PATH = os.path.join(DYNAMIC_MODULES_PATH, \"datasets\")\r\nMETRICS_PATH = os.path.join(DYNAMIC_MODULES_PATH, \"metrics\")\r\n```",
"Hi :) \r\nThe modules cache path is the path added to `sys.path`.\r\nTherefore inside we need to have a folder that is going to be a package: `datasets_modules`.\r\nThis package will contain dynamic modules, i.e. datasets and metrics modules added on-the-fly.\r\nThen we have two sub-modules `datasets_modules.datasets` and `datasets_modules.metrics`.\r\n\r\nMaybe we can make things more explicit in the code with some comments explaining the structure, and maybe better variable naming as well..\r\n\r\nAlso I wanted to say that I started to work on offline loading of modules in #1726 and actually it lead to do similar changes to what you did to control the path where modules are stored.",
"Hi @lhoestq, I see...\r\n\r\nIndeed I was also creating a draft for test_load, to clarify the expected behavior... ;)\r\n\r\nSo, for the command line:\r\n```sh\r\npython datasets-cli test datasets/<my-dataset-folder> --save_infos --all_configs --cache_dir <my-cache-dir>\r\n```\r\nthe `cache_dir` argument refers to dataset cache dir. We do not have control over the modules cache dir, but we would like to have. And if I understand well, you suggest adding another argument `dynamic_module_path`. Am I right?",
"> So, for the command line:\r\n> \r\n> ```shell\r\n> python datasets-cli test datasets/<my-dataset-folder> --save_infos --all_configs --cache_dir <my-cache-dir>\r\n> ```\r\n> \r\n> the `cache_dir` argument refers to dataset cache dir. We do not have control over the modules cache dir, but we would like to have. And if I understand well, you suggest adding another argument `dynamic_module_path`. Am I right?\r\n\r\nYes the cache_dir is used to download files and also so save the dataset arrow files.\r\nThis is indeed different from the path for dynamic modules.\r\n\r\nI suggested to have `dynamic_module_path` as a parameter but actually this is the parent directory `hf_modules_cache` that we would need (it's the one that is passed to `init_dynamic_modules ` that we need to add to `sys.path`).\r\n\r\nCurrently it's already possible to override it using the env variable `HF_MODULES_CACHE` but we can imagine having it as a parameter as well.\r\n\r\nThis way the user controls both the `cache_dir` and the `hf_modules_cache` which are the two places used by the library to read/write stuff.\r\n\r\n",
"I think #1726 is going to be merged pretty soon. Maybe can work on this as soon as it's merged to avoid doing the same things twice and to avoid conflicts ?",
"I agree. Indeed I took some of your code in one of my last commit, to try to implement the logic you described."
] | 2020-12-09T10:59:59Z
| 2022-07-06T15:19:47Z
| null |
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1369.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1369",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1369.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1369"
}
|
When passed the `--cache_dir` arg:
```shell
python datasets-cli test datasets/<my-dataset-folder> --save_infos --all_configs --cache_dir <my-cache-dir>
```
it is not used for caching the modules, which are cached in the default location at `.cache/huggingface/modules`.
With this fix, the modules will be cached at `<my-cache-dir>/modules`.
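As noted in the PR discussion, the modules cache can already be redirected with the `HF_MODULES_CACHE` environment variable. A minimal sketch, assuming a writable target directory; the variable must be set before `datasets` is imported, since the path is read at import time:

```python
import os

# Redirect the dynamic modules cache (set BEFORE importing `datasets`).
# The target directory is a placeholder.
os.environ["HF_MODULES_CACHE"] = "/my-cache-dir/modules"

from datasets import load_dataset  # noqa: E402  (import after the override)
```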
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1369/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1369/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5105
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5105/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5105/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5105/events
|
https://github.com/huggingface/datasets/issues/5105
| 1,406,078,357
|
I_kwDODunzps5Tzw2V
| 5,105
|
Specifying an existing folder in download_and_prepare deletes everything in it
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cakiki",
"id": 3664563,
"login": "cakiki",
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"repos_url": "https://api.github.com/users/cakiki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cakiki"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] | null |
[
"cc @lhoestq ",
"Thanks for reporting, @cakiki.\r\n\r\nI would say the deletion of the dir is an expected behavior though...",
"`dask.to_parquet` has an \"overwrite\" parameter and default is `False`, we could also have something similar",
"Thank you both for your feedback!\r\n\r\n@albertvillanova I think I might have have the wrong mental model of what the function was meant to do. I thought it would be an API similar to the pandas `to_XX` write methods (Like the one @lhoestq mentions) so I just assumed it would download the dataframe to whichever folder I specififed (`\"./\"` in my case) so I could load it into a dask dataframe. I absolutely did not expect it to delete everything in my local directory, including the script where I called it from :smile: \r\n\r\nI think Quentin's proposed solution sounds like a reasonable feature!",
"actually there's already a `download_mode` parameter that defaults to `REUSE_DATASET_IF_EXISTS` - so I guess it's just a matter of not deleting files unrelated to the dataset, and to overwrite existing dataset files if the download mode is `REUSE_CACHE_IF_EXISTS` or `FORCE_REDOWNLOAD`"
] | 2022-10-12T11:53:33Z
| 2022-10-20T11:53:59Z
| null |
CONTRIBUTOR
| null | null | null |
## Describe the bug
The builder correctly creates the `output_dir` folder if it doesn't exist, but if the folder exists, everything within it is deleted. Specifying `"."` as the `output_dir` deletes everything in your current directory, and it also leads to **another bug** whose traceback is the following:
```
Traceback (most recent call last)
Input In [11], in <cell line: 1>()
----> 1 rotten_tomatoes_builder.download_and_prepare(output_dir=".", max_shard_size="200MB", file_format="parquet")
File ~/BIGSCIENCE/env/lib/python3.9/site-packages/datasets/builder.py:818, in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs)
File /usr/lib/python3.9/contextlib.py:124, in _GeneratorContextManager.__exit__(self, type, value, traceback)
122 if type is None:
123 try:
--> 124 next(self.gen)
125 except StopIteration:
126 return False
File ~/BIGSCIENCE/env/lib/python3.9/site-packages/datasets/builder.py:760, in incomplete_dir(dirname)
File /usr/lib/python3.9/shutil.py:722, in rmtree(path, ignore_errors, onerror)
720 os.rmdir(path)
721 except OSError:
--> 722 onerror(os.rmdir, path, sys.exc_info())
723 else:
724 try:
725 # symlinks to directories are forbidden, see bug #1669
File /usr/lib/python3.9/shutil.py:720, in rmtree(path, ignore_errors, onerror)
718 _rmtree_safe_fd(fd, path, onerror)
719 try:
--> 720 os.rmdir(path)
721 except OSError:
722 onerror(os.rmdir, path, sys.exc_info())
OSError: [Errno 22] Invalid argument: '/home/christopher/BIGSCIENCE/.'
```
## Steps to reproduce the bug
```python
rotten_tomatoes_builder = load_dataset_builder("rotten_tomatoes")
rotten_tomatoes_builder.download_and_prepare(output_dir="./test_folder", max_shard_size="200MB", file_format="parquet")
```
If `test_folder` contains any files, they will all be deleted.
## Expected results
At minimum, a warning that all files will be deleted; preferably, the files should not be deleted at all.
## Actual results
N/A
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
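Until this behavior changes, a defensive pattern is to point `output_dir` at a fresh, dataset-specific subdirectory rather than an existing folder. A sketch based on the reproduction above (the folder name is a placeholder):

```python
from datasets import load_dataset_builder

builder = load_dataset_builder("rotten_tomatoes")
# Use a dedicated output folder so nothing unrelated can be removed
# if the directory is wiped before writing.
builder.download_and_prepare(
    output_dir="./rotten_tomatoes_parquet",
    max_shard_size="200MB",
    file_format="parquet",
)
```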
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5105/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5105/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4308
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4308/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4308/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4308/events
|
https://github.com/huggingface/datasets/pull/4308
| 1,231,217,783
|
PR_kwDODunzps43lHdP
| 4,308
|
Remove unused multiprocessing args from test CLI
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-10T14:02:15Z
| 2022-05-11T12:58:25Z
| 2022-05-11T12:50:43Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4308.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4308",
"merged_at": "2022-05-11T12:50:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4308.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4308"
}
|
Multiprocessing is not used in the test CLI.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4308/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4308/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3921
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3921/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3921/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3921/events
|
https://github.com/huggingface/datasets/pull/3921
| 1,169,749,338
|
PR_kwDODunzps40d4Mk
| 3,921
|
Fix NonMatchingChecksumError in CRD3 dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3921). All of your documentation changes will be reflected on that endpoint.",
"Unrelated test failure. This PR can be merged."
] | 2022-03-15T14:27:14Z
| 2022-03-15T15:54:27Z
| 2022-03-15T15:54:26Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3921.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3921",
"merged_at": "2022-03-15T15:54:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3921.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3921"
}
|
Fix #3051
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3921/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3921/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5592
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5592/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5592/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5592/events
|
https://github.com/huggingface/datasets/pull/5592
| 1,603,619,124
|
PR_kwDODunzps5K9dWr
| 5,592
|
Fix docstring example
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009526 / 0.011353 (-0.001827) | 0.005132 / 0.011008 (-0.005876) | 0.101312 / 0.038508 (0.062804) | 0.035703 / 0.023109 (0.012594) | 0.301788 / 0.275898 (0.025890) | 0.368411 / 0.323480 (0.044932) | 0.008163 / 0.007986 (0.000177) | 0.005462 / 0.004328 (0.001134) | 0.077282 / 0.004250 (0.073031) | 0.044139 / 0.037052 (0.007086) | 0.312280 / 0.258489 (0.053791) | 0.351870 / 0.293841 (0.058029) | 0.038266 / 0.128546 (-0.090281) | 0.012051 / 0.075646 (-0.063595) | 0.335109 / 0.419271 (-0.084163) | 0.047596 / 0.043533 (0.004064) | 0.300931 / 0.255139 (0.045792) | 0.325705 / 0.283200 (0.042505) | 0.100472 / 0.141683 (-0.041211) | 1.475037 / 1.452155 (0.022882) | 1.520059 / 1.492716 (0.027343) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211096 / 0.018006 (0.193089) | 0.442988 / 0.000490 (0.442498) | 0.003644 / 0.000200 (0.003444) | 0.000090 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027492 / 0.037411 (-0.009919) | 0.108981 / 0.014526 (0.094455) | 0.117836 / 0.176557 (-0.058720) | 0.161220 / 0.737135 (-0.575915) | 0.124765 / 0.296338 (-0.171574) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413480 / 0.215209 (0.198271) | 4.111355 / 2.077655 (2.033700) | 1.933024 / 1.504120 (0.428904) | 1.727467 / 1.541195 (0.186272) | 1.827106 / 1.468490 
(0.358616) | 0.688209 / 4.584777 (-3.896568) | 3.759672 / 3.745712 (0.013960) | 2.163806 / 5.269862 (-3.106056) | 1.473521 / 4.565676 (-3.092155) | 0.082859 / 0.424275 (-0.341416) | 0.012320 / 0.007607 (0.004713) | 0.515321 / 0.226044 (0.289277) | 5.158651 / 2.268929 (2.889722) | 2.489123 / 55.444624 (-52.955501) | 2.218910 / 6.876477 (-4.657566) | 2.257306 / 2.142072 (0.115233) | 0.861477 / 4.805227 (-3.943750) | 0.165857 / 6.500664 (-6.334807) | 0.063723 / 0.075469 (-0.011746) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.195163 / 1.841788 (-0.646625) | 14.954518 / 8.074308 (6.880210) | 14.272289 / 10.191392 (4.080897) | 0.167420 / 0.680424 (-0.513004) | 0.028907 / 0.534201 (-0.505294) | 0.450117 / 0.579283 (-0.129166) | 0.448532 / 0.434364 (0.014168) | 0.534406 / 0.540337 (-0.005931) | 0.633468 / 1.386936 (-0.753468) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007658 / 0.011353 (-0.003694) | 0.005266 / 0.011008 (-0.005742) | 0.075293 / 0.038508 (0.036785) | 0.034442 / 0.023109 (0.011333) | 0.346558 / 0.275898 (0.070660) | 0.391496 / 0.323480 (0.068017) | 0.005852 / 0.007986 (-0.002133) | 0.004121 / 0.004328 (-0.000207) | 0.074254 / 0.004250 (0.070004) | 0.048361 / 0.037052 (0.011309) | 0.344613 / 0.258489 (0.086124) | 0.401497 / 0.293841 (0.107656) | 0.037243 / 0.128546 (-0.091303) | 0.012505 / 0.075646 (-0.063142) | 0.087188 / 0.419271 (-0.332084) | 0.050114 / 0.043533 (0.006581) | 0.340454 / 0.255139 (0.085315) | 0.361087 / 0.283200 (0.077887) | 0.104692 / 0.141683 (-0.036991) | 1.419432 / 1.452155 (-0.032722) | 1.524709 / 1.492716 (0.031993) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231820 / 0.018006 (0.213814) | 0.445791 / 0.000490 (0.445301) | 0.000442 / 0.000200 (0.000242) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030445 / 0.037411 (-0.006967) | 0.111183 / 0.014526 (0.096657) | 0.123494 / 0.176557 (-0.053063) | 0.173121 / 0.737135 (-0.564014) | 0.124968 / 0.296338 (-0.171371) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428854 / 0.215209 (0.213645) | 4.270262 / 2.077655 (2.192608) | 2.012075 / 1.504120 (0.507955) | 1.826564 / 1.541195 (0.285370) | 1.931699 / 1.468490 (0.463209) | 0.728762 / 4.584777 (-3.856015) | 3.879640 / 3.745712 (0.133928) | 3.325715 / 5.269862 (-1.944147) | 1.818573 / 4.565676 (-2.747104) | 0.087879 / 0.424275 (-0.336396) | 0.012530 / 0.007607 (0.004923) | 0.530249 / 0.226044 (0.304204) | 5.286110 / 2.268929 (3.017181) | 2.566649 / 55.444624 (-52.877975) | 2.210162 / 6.876477 (-4.666315) | 2.297562 / 2.142072 (0.155490) | 0.906161 / 4.805227 (-3.899066) | 0.171914 / 6.500664 (-6.328750) | 0.064182 / 0.075469 (-0.011287) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.285781 / 1.841788 (-0.556006) | 16.159072 / 8.074308 (8.084763) | 14.087492 / 10.191392 (3.896100) | 0.148789 / 0.680424 (-0.531635) | 0.018078 / 0.534201 (-0.516123) | 0.427748 / 0.579283 (-0.151535) | 0.447079 / 0.434364 (0.012715) | 0.535917 / 0.540337 (-0.004421) | 0.627491 / 1.386936 (-0.759445) |\n\n</details>\n</details>\n\n\n"
] | 2023-02-28T18:42:37Z
| 2023-02-28T19:26:33Z
| 2023-02-28T19:19:15Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5592.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5592",
"merged_at": "2023-02-28T19:19:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5592.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5592"
}
|
Fixes #5581 to use the correct output for the `set_format` method.
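For context, a toy sketch of the `set_format` call the docstring example covers (the data here is made up; the actual corrected example lives in the PR diff):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})
# Return the "label" column as NumPy values when indexing.
ds.set_format(type="numpy", columns=["label"])
print(ds[0]["label"])
```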
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5592/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5592/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/512
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/512/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/512/comments
|
https://api.github.com/repos/huggingface/datasets/issues/512/events
|
https://github.com/huggingface/datasets/pull/512
| 681,137,164
|
MDExOlB1bGxSZXF1ZXN0NDY5NTc2NzE3
| 512
|
Delete CONTRIBUTING.md
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/56394989?v=4",
"events_url": "https://api.github.com/users/ChenZehong13/events{/privacy}",
"followers_url": "https://api.github.com/users/ChenZehong13/followers",
"following_url": "https://api.github.com/users/ChenZehong13/following{/other_user}",
"gists_url": "https://api.github.com/users/ChenZehong13/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ChenZehong13",
"id": 56394989,
"login": "ChenZehong13",
"node_id": "MDQ6VXNlcjU2Mzk0OTg5",
"organizations_url": "https://api.github.com/users/ChenZehong13/orgs",
"received_events_url": "https://api.github.com/users/ChenZehong13/received_events",
"repos_url": "https://api.github.com/users/ChenZehong13/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ChenZehong13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChenZehong13/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ChenZehong13"
}
|
[] |
closed
| false
| null |
[] | null |
[
"😱",
"Yeah, this is spammy behavior. I've reported the user handle."
] | 2020-08-18T15:33:25Z
| 2020-08-18T15:48:21Z
| 2020-08-18T15:39:07Z
|
NONE
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/512.diff",
"html_url": "https://github.com/huggingface/datasets/pull/512",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/512.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/512"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/512/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/512/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/5134
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5134/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5134/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5134/events
|
https://github.com/huggingface/datasets/issues/5134
| 1,413,623,687
|
I_kwDODunzps5UQi-H
| 5,134
|
Raise ImportError instead of OSError if required extraction library is not installed
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
},
{
"color": "DF8D62",
"default": false,
"description": "",
"id": 4614514401,
"name": "hacktoberfest",
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4",
"events_url": "https://api.github.com/users/ayushthe1/events{/privacy}",
"followers_url": "https://api.github.com/users/ayushthe1/followers",
"following_url": "https://api.github.com/users/ayushthe1/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ayushthe1",
"id": 114604338,
"login": "ayushthe1",
"node_id": "U_kgDOBtS5Mg",
"organizations_url": "https://api.github.com/users/ayushthe1/orgs",
"received_events_url": "https://api.github.com/users/ayushthe1/received_events",
"repos_url": "https://api.github.com/users/ayushthe1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ayushthe1"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4",
"events_url": "https://api.github.com/users/ayushthe1/events{/privacy}",
"followers_url": "https://api.github.com/users/ayushthe1/followers",
"following_url": "https://api.github.com/users/ayushthe1/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ayushthe1",
"id": 114604338,
"login": "ayushthe1",
"node_id": "U_kgDOBtS5Mg",
"organizations_url": "https://api.github.com/users/ayushthe1/orgs",
"received_events_url": "https://api.github.com/users/ayushthe1/received_events",
"repos_url": "https://api.github.com/users/ayushthe1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ayushthe1"
}
] | null |
[
"hey ,i would like to work on this issue . Please assign it to me.",
"hey @mariosasko , i made a pr for this issue. Could you please review it.\r\nAlso i found multiple `OSError` in `extract.py` file which i thought could be replaced too but wasn't sure about them.\r\nPlease do tell if that also needs to be done."
] | 2022-10-18T17:53:46Z
| 2022-10-25T15:56:59Z
| 2022-10-25T15:56:59Z
|
CONTRIBUTOR
| null | null | null |
According to the official Python docs, `OSError` should be thrown in the following situations:
> This exception is raised when a system function returns a system-related error, including I/O failures such as “file not found” or “disk full” (not for illegal argument types or other incidental errors).
Hence, it makes more sense to raise `ImportError` instead of `OSError` when the required extraction/decompression library is not installed.
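A minimal sketch of the requested pattern (the helper and the `zstandard` dependency are illustrative, not the actual `datasets` code):

```python
def extract_zstd(input_path: str, output_path: str) -> None:
    try:
        import zstandard  # optional extraction dependency
    except ImportError as err:
        # Raise ImportError (not OSError) when the library is missing.
        raise ImportError(
            "Please pip install zstandard to extract .zst files."
        ) from err
    with open(input_path, "rb") as src, open(output_path, "wb") as dst:
        zstandard.ZstdDecompressor().copy_stream(src, dst)
```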
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5134/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5134/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2479
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2479/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2479/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2479/events
|
https://github.com/huggingface/datasets/pull/2479
| 918,672,431
|
MDExOlB1bGxSZXF1ZXN0NjY4MDc3NTI4
| 2,479
|
❌ load_datasets ❌
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/julien-c",
"id": 326577,
"login": "julien-c",
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"repos_url": "https://api.github.com/users/julien-c/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"type": "User",
"url": "https://api.github.com/users/julien-c"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-06-11T12:14:36Z
| 2021-06-11T14:46:25Z
| 2021-06-11T14:46:25Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2479.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2479",
"merged_at": "2021-06-11T14:46:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2479.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2479"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2479/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2479/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/774
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/774/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/774/comments
|
https://api.github.com/repos/huggingface/datasets/issues/774/events
|
https://github.com/huggingface/datasets/pull/774
| 732,265,741
|
MDExOlB1bGxSZXF1ZXN0NTEyMjM0NjA0
| 774
|
[ROUGE] Add description to Rouge metric
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-10-29T12:19:32Z
| 2020-10-29T17:55:50Z
| 2020-10-29T17:55:48Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/774.diff",
"html_url": "https://github.com/huggingface/datasets/pull/774",
"merged_at": "2020-10-29T17:55:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/774.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/774"
}
|
Add information about case sensitivity to ROUGE.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/774/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/774/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3139
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3139/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3139/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3139/events
|
https://github.com/huggingface/datasets/issues/3139
| 1,033,524,079
|
I_kwDODunzps49mlNv
| 3,139
|
Fix file/directory deletion on Windows
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
] | null |
[] | 2021-10-22T12:22:08Z
| 2021-10-22T12:22:08Z
| null |
CONTRIBUTOR
| null | null | null |
Currently, on Windows, some attempts to delete a dataset file/directory will fail with a `PermissionError`.
Examples:
- download a dataset, then force-redownload it in the same session while keeping a reference to the downloaded dataset:
```python
from datasets import load_dataset
dset = load_dataset("sst", split="train")
dset = load_dataset("sst", split="train", download_mode="force_redownload")
```
- try to clean up the cache files while keeping a reference to those files (via the mapped dataset):
```python
from datasets import load_dataset
dset = load_dataset("sst", split="train")
dset_mapped = dset.map(lambda _: {"dummy_col": 1})
dset.cleanup_cache_files()
```
We should fix those.
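One possible direction, sketched below with illustrative names only (`_try_delete` and the retry parameters are not from the codebase), would be a best-effort deletion helper that retries after releasing dangling memory-mapped references:
```python
import gc
import os
import time

def _try_delete(path: str, retries: int = 3, delay: float = 0.1) -> bool:
    """Best-effort file deletion that tolerates Windows mmap locks (sketch)."""
    for _ in range(retries):
        try:
            os.remove(path)
            return True
        except PermissionError:
            # On Windows a file backing a memory-mapped Arrow table cannot be
            # deleted while a reference to it is alive; drop unreachable
            # references and retry after a short pause.
            gc.collect()
            time.sleep(delay)
    return False
```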
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3139/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3139/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/797
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/797/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/797/comments
|
https://api.github.com/repos/huggingface/datasets/issues/797/events
|
https://github.com/huggingface/datasets/issues/797
| 735,420,332
|
MDU6SXNzdWU3MzU0MjAzMzI=
| 797
|
Token classification labels are strings and we don't have the list of labels
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets",
"id": 2067401494,
"name": "Dataset discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion"
}
] |
closed
| false
| null |
[] | null |
[
"Indeed. Pinging @stefan-it here if he want to give an expert opinion :)",
"Related is https://github.com/huggingface/datasets/pull/636",
"Should definitely be a ClassLabel 👍 ",
"Already done."
] | 2020-11-03T15:33:30Z
| 2022-02-14T15:41:54Z
| 2022-02-14T15:41:53Z
|
CONTRIBUTOR
| null | null | null |
Not sure if this is an issue we want to fix or not, putting it here so it's not forgotten. Right now, in token classification datasets, the labels for NER, POS and the likes are typed as `Sequence` of `strings`, which is wrong in my opinion. These should be `Sequence` of `ClassLabel` or some type that gives easy access to the underlying labels.
The main problem for preprocessing those datasets is that the list of possible labels is not stored inside the `Dataset` object which makes converting the labels to IDs quite difficult (you either have to know the list of labels in advance or run a full pass through the dataset to get the list of labels, the `unique` method being useless with the type `Sequence[str]`).
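For context, this is roughly what a `ClassLabel`-typed schema would provide; the label names below are made up for illustration, not taken from any actual dataset:
```python
from datasets import ClassLabel, Features, Sequence, Value

# Hypothetical token classification schema using ClassLabel instead of strings.
features = Features(
    {
        "tokens": Sequence(Value("string")),
        "ner_tags": Sequence(ClassLabel(names=["O", "B-PER", "I-PER", "B-LOC", "I-LOC"])),
    }
)

# The label list now lives in the schema, so conversions become trivial:
label_feature = features["ner_tags"].feature
print(label_feature.str2int("B-PER"))  # 1
print(label_feature.int2str(1))        # 'B-PER'
```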
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/797/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/797/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/858
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/858/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/858/comments
|
https://api.github.com/repos/huggingface/datasets/issues/858/events
|
https://github.com/huggingface/datasets/pull/858
| 743,904,516
|
MDExOlB1bGxSZXF1ZXN0NTIxNzE3ODQ4
| 858
|
Add SemEval-2010 task 8
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4",
"events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}",
"followers_url": "https://api.github.com/users/JoelNiklaus/followers",
"following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}",
"gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JoelNiklaus",
"id": 3775944,
"login": "JoelNiklaus",
"node_id": "MDQ6VXNlcjM3NzU5NDQ=",
"organizations_url": "https://api.github.com/users/JoelNiklaus/orgs",
"received_events_url": "https://api.github.com/users/JoelNiklaus/received_events",
"repos_url": "https://api.github.com/users/JoelNiklaus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JoelNiklaus"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Added dummy data and encoding to open(). Now everything should be fine, hopefully :)"
] | 2020-11-16T14:57:57Z
| 2020-11-26T17:28:55Z
| 2020-11-26T17:28:55Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/858.diff",
"html_url": "https://github.com/huggingface/datasets/pull/858",
"merged_at": "2020-11-26T17:28:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/858.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/858"
}
|
Hi,
I don't know how to add dummy data, since I create the validation set out of the last 1000 examples of the train set. If you have a suggestion, I am happy to implement it.
Cheers,
Joel
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/858/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/858/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1751
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1751/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1751/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1751/events
|
https://github.com/huggingface/datasets/pull/1751
| 789,232,980
|
MDExOlB1bGxSZXF1ZXN0NTU3NjA1ODE2
| 1,751
|
Updated README for the Social Bias Frames dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mcmillanmajora",
"id": 26722925,
"login": "mcmillanmajora",
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mcmillanmajora"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-01-19T17:53:00Z
| 2021-01-20T14:56:52Z
| 2021-01-20T14:56:52Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1751.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1751",
"merged_at": "2021-01-20T14:56:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1751.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1751"
}
|
See the updated card at https://github.com/mcmillanmajora/datasets/tree/add-SBIC-card/datasets/social_bias_frames. I incorporated information from the [SBIC data statement](https://homes.cs.washington.edu/~msap/social-bias-frames/DATASTATEMENT.html), paper, and the corpus README file included with the dataset download.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1751/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1751/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2554
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2554/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2554/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2554/events
|
https://github.com/huggingface/datasets/issues/2554
| 931,453,855
|
MDU6SXNzdWU5MzE0NTM4NTU=
| 2,554
|
Multilabel metrics not supported
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4",
"events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}",
"followers_url": "https://api.github.com/users/GuillemGSubies/followers",
"following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}",
"gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/GuillemGSubies",
"id": 37592763,
"login": "GuillemGSubies",
"node_id": "MDQ6VXNlcjM3NTkyNzYz",
"organizations_url": "https://api.github.com/users/GuillemGSubies/orgs",
"received_events_url": "https://api.github.com/users/GuillemGSubies/received_events",
"repos_url": "https://api.github.com/users/GuillemGSubies/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions",
"type": "User",
"url": "https://api.github.com/users/GuillemGSubies"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Hi @GuillemGSubies, thanks for reporting.\r\n\r\nI have made a PR to fix this issue and allow metrics to be computed also for multilabel classification problems.",
"Looks nice, thank you very much! 🚀 ",
"Sorry for reopening but I just noticed that the `_compute` method for the F1 metric is still not good enough for multilabel problems:\r\n\r\nhttps://github.com/huggingface/datasets/blob/92a3ee549705aa0a107c9fa5caf463b3b3da2616/metrics/f1/f1.py#L115\r\n\r\nSomehow we should be able to change the parameter `average` at least",
"@GuillemGSubies, the parameter `average` passed to `_compute` is then passed to `f1_score`. This is right."
] | 2021-06-28T11:09:46Z
| 2021-10-13T12:29:13Z
| 2021-07-08T08:40:15Z
|
NONE
| null | null | null |
When I try to use a metric like F1 macro I get the following error:
```
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
```
There is an explicit casting here:
https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17e768a075/src/datasets/features.py#L274
And it looks like this is because here
https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17e768a075/metrics/f1/f1.py#L88
the features can only be integers, so we cannot use that F1 for multilabel. Instead, if I create the following F1 (ints replaced with sequences of ints), it will work:
```python
from sklearn.metrics import f1_score

import datasets


class F1(datasets.Metric):
    def _info(self):
        # Declare sequences of ints so multilabel (one-hot) rows are accepted.
        return datasets.MetricInfo(
            description=_DESCRIPTION,
            citation=_CITATION,
            inputs_description=_KWARGS_DESCRIPTION,
            features=datasets.Features(
                {
                    "predictions": datasets.Sequence(datasets.Value("int32")),
                    "references": datasets.Sequence(datasets.Value("int32")),
                }
            ),
            reference_urls=["https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html"],
        )

    def _compute(self, predictions, references, labels=None, pos_label=1, average="binary", sample_weight=None):
        # Forward everything to scikit-learn; for multilabel inputs, pass
        # average="macro", "micro", "weighted" or "samples" instead of "binary".
        return {
            "f1": f1_score(
                references,
                predictions,
                labels=labels,
                pos_label=pos_label,
                average=average,
                sample_weight=sample_weight,
            ),
        }
```
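A quick smoke test of this variant could look like the following; this is a sketch that assumes the `_DESCRIPTION`, `_CITATION` and `_KWARGS_DESCRIPTION` strings from the original metric script are defined:
```python
# Hypothetical multilabel usage: rows are one-hot indicator vectors.
metric = F1()
metric.add_batch(
    predictions=[[1, 0, 1], [0, 1, 0]],
    references=[[1, 0, 0], [0, 1, 0]],
)
print(metric.compute(average="macro"))  # {'f1': ...}
```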
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2554/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2554/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5389
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5389/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5389/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5389/events
|
https://github.com/huggingface/datasets/pull/5389
| 1,509,348,626
|
PR_kwDODunzps5GHsOo
| 5,389
|
Fix link in `load_dataset` docstring
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008935 / 0.011353 (-0.002417) | 0.004582 / 0.011008 (-0.006426) | 0.100950 / 0.038508 (0.062442) | 0.030305 / 0.023109 (0.007196) | 0.299759 / 0.275898 (0.023861) | 0.378577 / 0.323480 (0.055097) | 0.007834 / 0.007986 (-0.000152) | 0.003399 / 0.004328 (-0.000930) | 0.078568 / 0.004250 (0.074318) | 0.037990 / 0.037052 (0.000938) | 0.313025 / 0.258489 (0.054536) | 0.359543 / 0.293841 (0.065702) | 0.033631 / 0.128546 (-0.094916) | 0.011681 / 0.075646 (-0.063966) | 0.324542 / 0.419271 (-0.094729) | 0.041014 / 0.043533 (-0.002519) | 0.302884 / 0.255139 (0.047745) | 0.337059 / 0.283200 (0.053859) | 0.089403 / 0.141683 (-0.052280) | 1.491262 / 1.452155 (0.039108) | 1.521626 / 1.492716 (0.028910) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.172627 / 0.018006 (0.154621) | 0.419406 / 0.000490 (0.418917) | 0.001974 / 0.000200 (0.001775) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023598 / 0.037411 (-0.013814) | 0.098127 / 0.014526 (0.083601) | 0.105611 / 0.176557 (-0.070946) | 0.142612 / 0.737135 (-0.594523) | 0.121687 / 0.296338 (-0.174651) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418512 / 0.215209 (0.203303) | 4.173099 / 2.077655 (2.095444) | 1.865900 / 1.504120 (0.361780) | 1.664053 / 1.541195 (0.122858) | 1.726289 / 1.468490 
(0.257799) | 0.693214 / 4.584777 (-3.891563) | 3.499982 / 3.745712 (-0.245730) | 1.894278 / 5.269862 (-3.375583) | 1.178214 / 4.565676 (-3.387463) | 0.082391 / 0.424275 (-0.341884) | 0.012486 / 0.007607 (0.004878) | 0.532190 / 0.226044 (0.306145) | 5.286612 / 2.268929 (3.017684) | 2.316680 / 55.444624 (-53.127944) | 1.964020 / 6.876477 (-4.912457) | 2.016457 / 2.142072 (-0.125616) | 0.812290 / 4.805227 (-3.992937) | 0.149102 / 6.500664 (-6.351562) | 0.064215 / 0.075469 (-0.011254) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281919 / 1.841788 (-0.559869) | 14.107509 / 8.074308 (6.033201) | 13.892369 / 10.191392 (3.700977) | 0.146164 / 0.680424 (-0.534260) | 0.028740 / 0.534201 (-0.505460) | 0.395218 / 0.579283 (-0.184066) | 0.406321 / 0.434364 (-0.028043) | 0.460880 / 0.540337 (-0.079458) | 0.545975 / 1.386936 (-0.840961) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006797 / 0.011353 (-0.004556) | 0.004522 / 0.011008 (-0.006486) | 0.098440 / 0.038508 (0.059932) | 0.027722 / 0.023109 (0.004613) | 0.423995 / 0.275898 (0.148097) | 0.456164 / 0.323480 (0.132684) | 0.005156 / 0.007986 (-0.002830) | 0.003439 / 0.004328 (-0.000889) | 0.075307 / 0.004250 (0.071057) | 0.039599 / 0.037052 (0.002547) | 0.423671 / 0.258489 (0.165181) | 0.463841 / 0.293841 (0.170001) | 0.032473 / 0.128546 (-0.096073) | 0.011674 / 0.075646 (-0.063972) | 0.320548 / 0.419271 (-0.098723) | 0.041618 / 0.043533 (-0.001915) | 0.426133 / 0.255139 (0.170994) | 0.443018 / 0.283200 (0.159819) | 0.091103 / 0.141683 (-0.050579) | 1.468758 / 1.452155 (0.016604) | 1.532695 / 1.492716 (0.039978) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255314 / 0.018006 (0.237308) | 0.422982 / 0.000490 (0.422492) | 0.015405 / 0.000200 (0.015205) | 0.000103 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025260 / 0.037411 (-0.012152) | 0.102062 / 0.014526 (0.087537) | 0.108161 / 0.176557 (-0.068395) | 0.144205 / 0.737135 (-0.592930) | 0.111686 / 0.296338 (-0.184653) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.482633 / 0.215209 (0.267424) | 4.824777 / 2.077655 (2.747123) | 2.488626 / 1.504120 (0.984506) | 2.285410 / 1.541195 (0.744215) | 2.336793 / 1.468490 (0.868303) | 0.701894 / 4.584777 (-3.882883) | 3.506908 / 3.745712 (-0.238804) | 3.399789 / 5.269862 (-1.870072) | 1.536359 / 4.565676 (-3.029317) | 0.083621 / 0.424275 (-0.340655) | 0.012702 / 0.007607 (0.005094) | 0.581259 / 0.226044 (0.355215) | 5.829640 / 2.268929 (3.560711) | 2.932201 / 55.444624 (-52.512424) | 2.577175 / 6.876477 (-4.299301) | 2.621782 / 2.142072 (0.479710) | 0.812074 / 4.805227 (-3.993153) | 0.152840 / 6.500664 (-6.347824) | 0.067982 / 0.075469 (-0.007487) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.274915 / 1.841788 (-0.566873) | 14.345800 / 8.074308 (6.271492) | 14.242475 / 10.191392 (4.051083) | 0.143636 / 0.680424 (-0.536788) | 0.016824 / 0.534201 (-0.517377) | 0.376449 / 0.579283 (-0.202834) | 0.394219 / 0.434364 (-0.040145) | 0.435368 / 0.540337 (-0.104969) | 0.518393 / 1.386936 (-0.868544) |\n\n</details>\n</details>\n\n\n",
"I also fixed the rest of the links that point to the markdown files. \r\n\r\nPS: the CI failures are unrelated ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008641 / 0.011353 (-0.002712) | 0.004560 / 0.011008 (-0.006448) | 0.100559 / 0.038508 (0.062051) | 0.029744 / 0.023109 (0.006635) | 0.300580 / 0.275898 (0.024682) | 0.359100 / 0.323480 (0.035620) | 0.007016 / 0.007986 (-0.000970) | 0.003393 / 0.004328 (-0.000936) | 0.078649 / 0.004250 (0.074399) | 0.038138 / 0.037052 (0.001086) | 0.307730 / 0.258489 (0.049241) | 0.347678 / 0.293841 (0.053837) | 0.033630 / 0.128546 (-0.094917) | 0.011452 / 0.075646 (-0.064194) | 0.320903 / 0.419271 (-0.098369) | 0.042659 / 0.043533 (-0.000874) | 0.298886 / 0.255139 (0.043747) | 0.324371 / 0.283200 (0.041171) | 0.092582 / 0.141683 (-0.049101) | 1.490017 / 1.452155 (0.037863) | 1.512825 / 1.492716 (0.020109) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.178965 / 0.018006 (0.160958) | 0.420001 / 0.000490 (0.419512) | 0.002686 / 0.000200 (0.002486) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023568 / 0.037411 (-0.013843) | 0.097027 / 0.014526 (0.082502) | 0.104721 / 0.176557 (-0.071836) | 0.148757 / 0.737135 (-0.588378) | 0.110849 / 0.296338 (-0.185489) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415034 / 0.215209 (0.199825) | 4.155249 / 2.077655 (2.077594) | 1.837027 / 1.504120 (0.332907) | 1.627754 / 1.541195 (0.086559) | 1.687958 / 1.468490 
(0.219468) | 0.699542 / 4.584777 (-3.885235) | 3.376707 / 3.745712 (-0.369005) | 2.900778 / 5.269862 (-2.369083) | 1.556168 / 4.565676 (-3.009508) | 0.082438 / 0.424275 (-0.341837) | 0.012339 / 0.007607 (0.004732) | 0.524952 / 0.226044 (0.298907) | 5.269852 / 2.268929 (3.000924) | 2.278770 / 55.444624 (-53.165854) | 1.917987 / 6.876477 (-4.958490) | 1.955000 / 2.142072 (-0.187072) | 0.821169 / 4.805227 (-3.984058) | 0.149019 / 6.500664 (-6.351645) | 0.064604 / 0.075469 (-0.010865) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.199768 / 1.841788 (-0.642020) | 13.760897 / 8.074308 (5.686589) | 13.911550 / 10.191392 (3.720158) | 0.161727 / 0.680424 (-0.518697) | 0.028615 / 0.534201 (-0.505586) | 0.393917 / 0.579283 (-0.185366) | 0.392524 / 0.434364 (-0.041840) | 0.451763 / 0.540337 (-0.088574) | 0.536880 / 1.386936 (-0.850056) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006407 / 0.011353 (-0.004946) | 0.004420 / 0.011008 (-0.006588) | 0.097244 / 0.038508 (0.058736) | 0.027114 / 0.023109 (0.004005) | 0.412512 / 0.275898 (0.136614) | 0.448189 / 0.323480 (0.124709) | 0.005831 / 0.007986 (-0.002155) | 0.005423 / 0.004328 (0.001095) | 0.076051 / 0.004250 (0.071801) | 0.038828 / 0.037052 (0.001776) | 0.414586 / 0.258489 (0.156097) | 0.457196 / 0.293841 (0.163355) | 0.031615 / 0.128546 (-0.096931) | 0.011542 / 0.075646 (-0.064104) | 0.316967 / 0.419271 (-0.102304) | 0.041278 / 0.043533 (-0.002254) | 0.411371 / 0.255139 (0.156232) | 0.436376 / 0.283200 (0.153177) | 0.090212 / 0.141683 (-0.051471) | 1.461831 / 1.452155 (0.009677) | 1.606515 / 1.492716 (0.113799) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221453 / 0.018006 (0.203447) | 0.404140 / 0.000490 (0.403650) | 0.000422 / 0.000200 (0.000222) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024588 / 0.037411 (-0.012824) | 0.098604 / 0.014526 (0.084078) | 0.113682 / 0.176557 (-0.062874) | 0.141141 / 0.737135 (-0.595994) | 0.110069 / 0.296338 (-0.186270) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.477267 / 0.215209 (0.262058) | 4.775086 / 2.077655 (2.697431) | 2.445449 / 1.504120 (0.941329) | 2.242220 / 1.541195 (0.701025) | 2.303542 / 1.468490 (0.835051) | 0.693448 / 4.584777 (-3.891329) | 3.413319 / 3.745712 (-0.332393) | 3.052734 / 5.269862 (-2.217127) | 1.434075 / 4.565676 (-3.131602) | 0.082429 / 0.424275 (-0.341846) | 0.012594 / 0.007607 (0.004987) | 0.584259 / 0.226044 (0.358214) | 5.865098 / 2.268929 (3.596169) | 2.926301 / 55.444624 (-52.518324) | 2.572555 / 6.876477 (-4.303921) | 2.608584 / 2.142072 (0.466512) | 0.805029 / 4.805227 (-4.000198) | 0.151247 / 6.500664 (-6.349417) | 0.067142 / 0.075469 (-0.008327) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.285454 / 1.841788 (-0.556334) | 14.296425 / 8.074308 (6.222117) | 14.147278 / 10.191392 (3.955886) | 0.151698 / 0.680424 (-0.528726) | 0.016876 / 0.534201 (-0.517325) | 0.383302 / 0.579283 (-0.195981) | 0.388461 / 0.434364 (-0.045902) | 0.438286 / 0.540337 (-0.102051) | 0.525249 / 1.386936 (-0.861687) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008677 / 0.011353 (-0.002676) | 0.004863 / 0.011008 (-0.006145) | 0.096606 / 0.038508 (0.058098) | 0.034004 / 0.023109 (0.010895) | 0.296362 / 0.275898 (0.020464) | 0.323445 / 0.323480 (-0.000035) | 0.007341 / 0.007986 (-0.000644) | 0.005518 / 0.004328 (0.001189) | 0.073584 / 0.004250 (0.069334) | 0.041471 / 0.037052 (0.004419) | 0.302183 / 0.258489 (0.043694) | 0.339369 / 0.293841 (0.045528) | 0.037375 / 0.128546 (-0.091171) | 0.011827 / 0.075646 (-0.063819) | 0.330723 / 0.419271 (-0.088549) | 0.048751 / 0.043533 (0.005218) | 0.298370 / 0.255139 (0.043231) | 0.317781 / 0.283200 (0.034582) | 0.097488 / 0.141683 (-0.044195) | 1.456242 / 1.452155 (0.004088) | 1.530149 / 1.492716 (0.037433) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207053 / 0.018006 (0.189046) | 0.438165 / 0.000490 (0.437675) | 0.001161 / 0.000200 (0.000961) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025353 / 0.037411 (-0.012059) | 0.105536 / 0.014526 (0.091010) | 0.116122 / 0.176557 (-0.060434) | 0.151605 / 0.737135 (-0.585530) | 0.121777 / 0.296338 (-0.174561) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.402780 / 0.215209 (0.187571) | 4.017882 / 2.077655 (1.940227) | 1.813111 / 1.504120 (0.308991) | 1.620000 / 1.541195 (0.078805) | 1.649186 / 1.468490 
(0.180696) | 0.687523 / 4.584777 (-3.897254) | 3.712595 / 3.745712 (-0.033117) | 2.038535 / 5.269862 (-3.231326) | 1.414794 / 4.565676 (-3.150882) | 0.083357 / 0.424275 (-0.340918) | 0.012032 / 0.007607 (0.004425) | 0.502899 / 0.226044 (0.276854) | 5.038914 / 2.268929 (2.769985) | 2.250476 / 55.444624 (-53.194148) | 1.919954 / 6.876477 (-4.956523) | 1.930928 / 2.142072 (-0.211144) | 0.826634 / 4.805227 (-3.978593) | 0.161599 / 6.500664 (-6.339066) | 0.061356 / 0.075469 (-0.014113) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.228998 / 1.841788 (-0.612790) | 14.587914 / 8.074308 (6.513606) | 14.237514 / 10.191392 (4.046122) | 0.190913 / 0.680424 (-0.489510) | 0.029104 / 0.534201 (-0.505097) | 0.436160 / 0.579283 (-0.143123) | 0.431464 / 0.434364 (-0.002900) | 0.511670 / 0.540337 (-0.028668) | 0.609046 / 1.386936 (-0.777890) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006980 / 0.011353 (-0.004373) | 0.005260 / 0.011008 (-0.005748) | 0.095288 / 0.038508 (0.056780) | 0.032465 / 0.023109 (0.009356) | 0.410799 / 0.275898 (0.134901) | 0.423814 / 0.323480 (0.100334) | 0.005533 / 0.007986 (-0.002452) | 0.005764 / 0.004328 (0.001436) | 0.070713 / 0.004250 (0.066462) | 0.048193 / 0.037052 (0.011141) | 0.405742 / 0.258489 (0.147253) | 0.458773 / 0.293841 (0.164932) | 0.036415 / 0.128546 (-0.092131) | 0.012192 / 0.075646 (-0.063454) | 0.330655 / 0.419271 (-0.088617) | 0.055945 / 0.043533 (0.012412) | 0.407497 / 0.255139 (0.152358) | 0.421496 / 0.283200 (0.138296) | 0.106285 / 0.141683 (-0.035398) | 1.459837 / 1.452155 (0.007683) | 1.573147 / 1.492716 (0.080431) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205776 / 0.018006 (0.187770) | 0.441523 / 0.000490 (0.441033) | 0.003073 / 0.000200 (0.002873) | 0.000092 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029207 / 0.037411 (-0.008205) | 0.110295 / 0.014526 (0.095770) | 0.130233 / 0.176557 (-0.046324) | 0.157489 / 0.737135 (-0.579647) | 0.125374 / 0.296338 (-0.170965) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440942 / 0.215209 (0.225733) | 4.389647 / 2.077655 (2.311992) | 2.234883 / 1.504120 (0.730763) | 2.029510 / 1.541195 (0.488315) | 2.082503 / 1.468490 (0.614013) | 0.698046 / 4.584777 (-3.886731) | 3.769127 / 3.745712 (0.023415) | 2.058511 / 5.269862 (-3.211351) | 1.324302 / 4.565676 (-3.241375) | 0.085695 / 0.424275 (-0.338580) | 0.012122 / 0.007607 (0.004515) | 0.552406 / 0.226044 (0.326362) | 5.527073 / 2.268929 (3.258145) | 2.711354 / 55.444624 (-52.733270) | 2.328848 / 6.876477 (-4.547629) | 2.340750 / 2.142072 (0.198678) | 0.846300 / 4.805227 (-3.958927) | 0.167465 / 6.500664 (-6.333199) | 0.063419 / 0.075469 (-0.012050) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262452 / 1.841788 (-0.579336) | 15.043537 / 8.074308 (6.969229) | 14.212563 / 10.191392 (4.021171) | 0.170229 / 0.680424 (-0.510194) | 0.017696 / 0.534201 (-0.516505) | 0.423194 / 0.579283 (-0.156089) | 0.430908 / 0.434364 (-0.003456) | 0.491733 / 0.540337 (-0.048604) | 0.599267 / 1.386936 (-0.787669) |\n\n</details>\n</details>\n\n\n",
"Program enthusiastic "
] | 2022-12-23T13:26:31Z
| 2023-01-25T19:00:43Z
| 2023-01-24T16:33:38Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5389.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5389",
"merged_at": "2023-01-24T16:33:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5389.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5389"
}
|
Fix https://github.com/huggingface/datasets/issues/5387, fix https://github.com/huggingface/datasets/issues/4566
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5389/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5389/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2195
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2195/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2195/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2195/events
|
https://github.com/huggingface/datasets/issues/2195
| 854,070,194
|
MDU6SXNzdWU4NTQwNzAxOTQ=
| 2,195
|
KeyError: '_indices_files' in `arrow_dataset.py`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4",
"events_url": "https://api.github.com/users/samsontmr/events{/privacy}",
"followers_url": "https://api.github.com/users/samsontmr/followers",
"following_url": "https://api.github.com/users/samsontmr/following{/other_user}",
"gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/samsontmr",
"id": 15007950,
"login": "samsontmr",
"node_id": "MDQ6VXNlcjE1MDA3OTUw",
"organizations_url": "https://api.github.com/users/samsontmr/orgs",
"received_events_url": "https://api.github.com/users/samsontmr/received_events",
"repos_url": "https://api.github.com/users/samsontmr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/samsontmr"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting @samsontmr.\r\n\r\nIt seems a backward compatibility issue...",
"Thanks @samsontmr this should be fixed on master now\r\n\r\nFeel free to reopen if you're still having issues"
] | 2021-04-09T01:37:12Z
| 2021-04-09T09:55:09Z
| 2021-04-09T09:54:39Z
|
NONE
| null | null | null |
After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset.
Trace:
```
Traceback (most recent call last):
File "load_data.py", line 11, in <module>
dataset = load_from_disk(SRC)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/load.py", line 784, in load_from_disk
return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 692, in load_from_disk
dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 634, in load_from_disk
if state["_indices_files"]:
KeyError: '_indices_files'
```
I believe this is the line causing the error, since there may not be a `_indices_files` key in older versions:
https://github.com/huggingface/datasets/blob/b70141e3c5149430951773aaa0155555c5fb3e76/src/datasets/arrow_dataset.py#L634
May I suggest using `state.get()` instead of directly indexing the dictionary?
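For illustration only (this is a sketch, not the actual patch), the backward-compatible check would be:
```python
# Illustrative only: .get() returns None instead of raising KeyError,
# so states saved before "_indices_files" existed still load.
state = {"_data_files": []}  # a hypothetical older saved state lacking "_indices_files"
if state.get("_indices_files"):
    print("state has an indices mapping on disk")
```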
@lhoestq
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2195/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2195/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3393
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3393/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3393/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3393/events
|
https://github.com/huggingface/datasets/issues/3393
| 1,073,189,777
|
I_kwDODunzps4_95OR
| 3,393
|
Common Voice Belarusian Dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42713027?v=4",
"events_url": "https://api.github.com/users/wiedymi/events{/privacy}",
"followers_url": "https://api.github.com/users/wiedymi/followers",
"following_url": "https://api.github.com/users/wiedymi/following{/other_user}",
"gists_url": "https://api.github.com/users/wiedymi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wiedymi",
"id": 42713027,
"login": "wiedymi",
"node_id": "MDQ6VXNlcjQyNzEzMDI3",
"organizations_url": "https://api.github.com/users/wiedymi/orgs",
"received_events_url": "https://api.github.com/users/wiedymi/received_events",
"repos_url": "https://api.github.com/users/wiedymi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wiedymi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wiedymi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wiedymi"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",
"default": false,
"description": "",
"id": 2725241052,
"name": "speech",
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech"
}
] |
open
| false
| null |
[] | null |
[] | 2021-12-07T10:37:02Z
| 2021-12-09T15:56:03Z
| null |
NONE
| null | null | null |
## Adding a Dataset
- **Name:** *Common Voice Belarusian Dataset*
- **Description:** *[commonvoice.mozilla.org/be](https://commonvoice.mozilla.org/be)*
- **Data:** *[commonvoice.mozilla.org/be/datasets](https://commonvoice.mozilla.org/be/datasets)*
- **Motivation:** *It has more than 7GB of data, so it would be great to have it in this package so anyone can try to train something for the Belarusian language.*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3393/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3393/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1841
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1841/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1841/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1841/events
|
https://github.com/huggingface/datasets/issues/1841
| 803,561,123
|
MDU6SXNzdWU4MDM1NjExMjM=
| 1,841
|
Add ljspeech
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",
"default": false,
"description": "",
"id": 2725241052,
"name": "speech",
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech"
}
] |
closed
| false
| null |
[] | null |
[] | 2021-02-08T13:22:26Z
| 2021-03-15T05:59:02Z
| 2021-03-15T05:59:02Z
|
MEMBER
| null | null | null |
## Adding a Dataset
- **Name:** *ljspeech*
- **Description:** *This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of approximately 24 hours.
The texts were published between 1884 and 1964, and are in the public domain. The audio was recorded in 2016-17 by the LibriVox project and is also in the public domain.*
- **Paper:** *Homepage*: https://keithito.com/LJ-Speech-Dataset/
- **Data:** *https://keithito.com/LJ-Speech-Dataset/*
- **Motivation:** Important speech dataset
- **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/ljspeech
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1841/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1841/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1724
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1724/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1724/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1724/events
|
https://github.com/huggingface/datasets/issues/1724
| 784,023,338
|
MDU6SXNzdWU3ODQwMjMzMzg=
| 1,724
|
could not run models on a offline server successfully
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/49967236?v=4",
"events_url": "https://api.github.com/users/lkcao/events{/privacy}",
"followers_url": "https://api.github.com/users/lkcao/followers",
"following_url": "https://api.github.com/users/lkcao/following{/other_user}",
"gists_url": "https://api.github.com/users/lkcao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lkcao",
"id": 49967236,
"login": "lkcao",
"node_id": "MDQ6VXNlcjQ5OTY3MjM2",
"organizations_url": "https://api.github.com/users/lkcao/orgs",
"received_events_url": "https://api.github.com/users/lkcao/received_events",
"repos_url": "https://api.github.com/users/lkcao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lkcao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lkcao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lkcao"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Transferred to `datasets` based on the stack trace.",
"Hi @lkcao !\r\nYour issue is indeed related to `datasets`. In addition to installing the package manually, you will need to download the `text.py` script on your server. You'll find it (under `datasets/datasets/text`: https://github.com/huggingface/datasets/blob/master/datasets/text/text.py.\r\nThen you can change the line 221 of `run_mlm_new.py` into:\r\n```python\r\n datasets = load_dataset('/path/to/text.py', data_files=data_files)\r\n```\r\nWhere `/path/to/text.py` is the path on the server where you saved the `text.py` script.",
"We're working on including the local dataset builders (csv, text, json etc.) directly in the `datasets` package so that they can be used offline",
"The local dataset builders (csv, text , json and pandas) are now part of the `datasets` package since #1726 :)\r\nYou can now use them offline\r\n```python\r\ndatasets = load_dataset('text', data_files=data_files)\r\n```\r\n\r\nWe'll do a new release soon",
"> The local dataset builders (csv, text , json and pandas) are now part of the `datasets` package since #1726 :)\r\n> You can now use them offline\r\n> \r\n> ```python\r\n> datasets = load_dataset('text', data_files=data_files)\r\n> ```\r\n> \r\n> We'll do a new release soon\r\n\r\nso the new version release now?",
"Yes it's been available since datasets 1.3.0 !"
] | 2021-01-12T06:08:06Z
| 2022-10-05T12:39:07Z
| 2022-10-05T12:39:07Z
|
NONE
| null | null | null |
Hi, I really need your help with this.
I am trying to fine-tune a RoBERTa model on a remote server that strictly bans internet access. I installed all the packages by hand and tried to run run_mlm.py on the server. It works well on Colab, but when I run it on this offline server, it shows:

Is there anything I can do? Is it possible to download everything into the cache and upload it to the server? Please help me out...
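A minimal sketch of the offline workflow discussed here (all paths are hypothetical; per the discussion, the local builders ship with the package since datasets 1.3.0):
```python
from datasets import load_dataset

# Before datasets 1.3.0: point load_dataset at a local copy of the builder script
datasets = load_dataset("/path/to/text.py", data_files={"train": "train.txt"})  # hypothetical paths

# Since datasets 1.3.0: the text/csv/json builders are bundled and work offline
datasets = load_dataset("text", data_files={"train": "train.txt"})
```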
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1724/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1724/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5431
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5431/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5431/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5431/events
|
https://github.com/huggingface/datasets/issues/5431
| 1,535,862,621
|
I_kwDODunzps5bi2dd
| 5,431
|
CI benchmarks are broken: Unknown arguments: runnerPath, path
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[] | 2023-01-17T06:49:57Z
| 2023-01-18T06:33:24Z
| 2023-01-17T08:51:18Z
|
MEMBER
| null | null | null |
Our CI benchmarks are broken, raising an `Unknown arguments` error: https://github.com/huggingface/datasets/actions/runs/3932397079/jobs/6724905161
```
Unknown arguments: runnerPath, path
```
Stack trace:
```
100%|██████████| 500/500 [00:01<00:00, 338.98ba/s]
Updating lock file 'dvc.lock'
To track the changes with git, run:
git add dvc.lock
To enable auto staging, run:
dvc config core.autostage true
Use `dvc push` to send your updates to remote storage.
cml send-comment <markdown file>
Global Options:
--log Logging verbosity
[string] [choices: "error", "warn", "info", "debug"] [default: "info"]
--driver Git provider where the repository is hosted
[string] [choices: "github", "gitlab", "bitbucket"] [default: infer from the
environment]
--repo Repository URL or slug
[string] [default: infer from the environment]
--driver-token, --token CI driver personal/project access token (PAT)
[string] [default: infer from the environment]
--help Show help [boolean]
Options:
--target Comment type (`commit`, `pr`, `commit/f00bar`,
`pr/42`, `issue/1337`),default is automatic (`pr`
but fallback to `commit`). [string]
--watch Watch for changes and automatically update the
comment [boolean]
--publish Upload any local images found in the Markdown
report [boolean] [default: true]
--publish-url Self-hosted image server URL
[string] [default: "https://asset.cml.dev/"]
--publish-native, --native Uses driver's native capabilities to upload assets
instead of CML's storage; not available on GitHub
[boolean]
--watermark-title Hidden comment marker (used for targeting in
subsequent `cml comment update`); "{workflow}" &
"{run}" are auto-replaced [string] [default: ""]
Unknown arguments: runnerPath, path
Error: Process completed with exit code 1.
```
Issue reported to iterative/cml:
- iterative/cml#1319
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5431/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5431/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5609
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5609/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5609/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5609/events
|
https://github.com/huggingface/datasets/issues/5609
| 1,610,062,862
|
I_kwDODunzps5f95wO
| 5,609
|
`load_from_disk` vs `load_dataset` performance.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davidgilbertson",
"id": 4443482,
"login": "davidgilbertson",
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davidgilbertson"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi! We've recently made some improvements to `save_to_disk`/`list_to_disk` (100x faster in some scenarios), so it would help if you could install `datasets` directly from `main` (`pip install git+https://github.com/huggingface/datasets.git`) and re-run the \"benchmark\".",
"Great to hear! I'll give it a try when I've got a moment.",
"@mariosasko is that fix released to pip in the meantime? Asking cause im facing still the same issue (regarding loading images from local paths):\r\n```\r\ndataset = load_dataset(\"csv\", cache_dir=\"cache\", data_files=[\"/STORAGE/DATA/mijam/vit/code/list_filtered.csv\"], num_proc=16, split=\"train\").cast_column(\"image\", Image())\r\ndataset = dataset.class_encode_column(\"label\")\r\n```\r\nquite fast. \r\n\r\nThen I do `save_to_disk()` and some time later:\r\n```\r\ndataset = load_from_disk('/STORAGE/DATA/mijam/accel/saved_arrow_big')\r\n```\r\nreally slow. In theory it should be quicked since it only loads arrow files, no conversions and so on.\r\n",
"@mjamroz I assume your CSV file stores image file paths. This means `save_to_disk` needs to embed the image bytes resulting in a much bigger Arrow file (than the initial one). Maybe specifying `num_shards` to make the Arrow files smaller can help (large Arrow files on some systems take a long time to load)."
] | 2023-03-05T05:27:15Z
| 2023-07-13T18:48:05Z
| null |
NONE
| null | null | null |
### Describe the bug
I have downloaded `openwebtext` (~12GB) and filtered out a small amount of junk (it's still huge). Now, I would like to use this filtered version for future work. It seems I have two choices:
1. Use `load_dataset` each time, relying on the cache mechanism, and re-run my filtering.
2. `save_to_disk` and then use `load_from_disk` to load the filtered version.
The performance of these two approaches is wildly different:
* Using `load_dataset` takes about 20 seconds to load the dataset, and a few seconds to re-filter (thanks to the brilliant filter/map caching)
* Using `load_from_disk` takes 14 minutes! And the second time I tried, the session just crashed (on a machine with 32GB of RAM)
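For concreteness, here is a minimal sketch of the two paths being compared (the filter predicate is a hypothetical stand-in for the junk filtering mentioned above):
```python
from datasets import load_dataset, load_from_disk

# Path 1: load via the hub/cache and re-run the (cached) filter each time
ds = load_dataset("openwebtext", split="train")
ds = ds.filter(lambda ex: len(ex["text"]) > 0)  # hypothetical junk filter

# Path 2: persist the filtered dataset once and reload it later
ds.save_to_disk("openwebtext_filtered")
ds2 = load_from_disk("openwebtext_filtered")
```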
I don't know if you'd call this a bug, but it seems like there shouldn't need to be two methods to load from disk, that they shouldn't take such wildly different amounts of time, and that neither should crash. Or maybe the docs could offer some guidance about when to pick which method and why two methods exist, or simply how most people handle this.
Something I couldn't work out from reading the docs was this: can I modify a dataset from the hub, save it (locally) and use `load_dataset` to load it? This [post seemed to suggest that the answer is no](https://discuss.huggingface.co/t/save-and-load-datasets/9260).
### Steps to reproduce the bug
See above
### Expected behavior
Load times should be about the same.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5609/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5609/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3946
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3946/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3946/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3946/events
|
https://github.com/huggingface/datasets/pull/3946
| 1,171,239,287
|
PR_kwDODunzps40i1L3
| 3,946
|
Add newline to text dataset builder for controlling universal newlines mode
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3946). All of your documentation changes will be reflected on that endpoint.",
"The failing CI test has nothing to do with this PR.",
"I'm closing this PR."
] | 2022-03-16T16:11:11Z
| 2023-09-24T10:10:50Z
| 2023-09-24T10:10:47Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3946.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3946",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3946.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3946"
}
|
Fix #3804.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3946/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3946/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1824
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1824/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1824/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1824/events
|
https://github.com/huggingface/datasets/pull/1824
| 802,048,281
|
MDExOlB1bGxSZXF1ZXN0NTY4MjU3MTU3
| 1,824
|
Add OSCAR dataset card
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi @lhoestq! When are you planning to release the version with this dataset?\r\n\r\nBTW: What a huge README file :astonished:",
"Next week !",
"Closing in favor of #1833"
] | 2021-02-05T10:30:26Z
| 2021-05-05T18:24:14Z
| 2021-02-08T11:30:33Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1824.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1824",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1824.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1824"
}
|
I started adding the dataset card for OSCAR !
For now it's just basic info for all the different configurations in `Dataset Structure`.
In particular, the Data Splits section says how many samples there are for each config. The Data Instances section shows an example for each config, and it also shows the size in MB. Since the Data Instances section is very long, the user has to click to expand the info. I was able to generate it thanks to the tools made by @madlag and @yjernite :D
Cc @pjox could you help me with the other sections ? (Dataset Description, Dataset Creation, Considerations for Using the Data, Additional Information)
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1824/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1824/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/709
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/709/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/709/comments
|
https://api.github.com/repos/huggingface/datasets/issues/709/events
|
https://github.com/huggingface/datasets/issues/709
| 714,067,902
|
MDU6SXNzdWU3MTQwNjc5MDI=
| 709
|
How to use similarity settings other than "BM25" in an Elasticsearch index?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4",
"events_url": "https://api.github.com/users/nsankar/events{/privacy}",
"followers_url": "https://api.github.com/users/nsankar/followers",
"following_url": "https://api.github.com/users/nsankar/following{/other_user}",
"gists_url": "https://api.github.com/users/nsankar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nsankar",
"id": 431890,
"login": "nsankar",
"node_id": "MDQ6VXNlcjQzMTg5MA==",
"organizations_url": "https://api.github.com/users/nsankar/orgs",
"received_events_url": "https://api.github.com/users/nsankar/received_events",
"repos_url": "https://api.github.com/users/nsankar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nsankar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nsankar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nsankar"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Datasets does not use elasticsearch API to define custom similarity. If you want to use a custom similarity, the best would be to run a curl request directly to your elasticsearch instance (see sample hereafter, directly from ES documentation), then you should be able to use `my_similarity` in your configuration passed to datasets\r\n\r\n```\r\ncurl -X PUT \"localhost:9200/index?pretty\" -H 'Content-Type: application/json' -d'\r\n{\r\n \"settings\": {\r\n \"index\": {\r\n \"similarity\": {\r\n \"my_similarity\": {\r\n \"type\": \"DFR\",\r\n \"basic_model\": \"g\",\r\n \"after_effect\": \"l\",\r\n \"normalization\": \"h2\",\r\n \"normalization.h2.c\": \"3.0\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n'\r\n\r\n```"
] | 2020-10-03T11:18:49Z
| 2022-10-04T17:19:37Z
| 2022-10-04T17:19:37Z
|
NONE
| null | null | null |
**QUESTION: How should we use similarity algorithms supported by Elasticsearch other than "BM25"?**
**ES Reference**
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html
**HF doc reference:**
https://huggingface.co/docs/datasets/faiss_and_ea.html
**context :**
========
I used the latest Elasticsearch server version 7.9.2
When I set DFR, one of the other similarity algorithms supported by Elasticsearch, in the mapping, I get an error.
For example, this is the DFR setting I tried first in the mappings:
`"mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "DFR"}}},`
I get the following error
RequestError: RequestError(400, 'mapper_parsing_exception', 'Unknown Similarity type [DFR] for field [text]')
As another option, I tried declaring "my_similarity" within `settings` and then assigning it inside the mappings, as below:
```
es_config = {
    "settings": {
        "number_of_shards": 1,
        "similarity": "my_similarity": {
            "type": "DFR",
            "basic_model": "g",
            "after_effect": "l",
            "normalization": "h2",
            "normalization.h2.c": "3.0"
        },
        "analysis": {"analyzer": {"stop_standard": {"type": "standard", " stopwords": "_english_"}}},
    },
    "mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "my_similarity"}}},
}
```
For this , I got the following error
RequestError: RequestError(400, 'illegal_argument_exception', 'unknown setting [index.similarity] please check that any required plugins are installed, or check the breaking changes documentation for removed settings')
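Per the Elasticsearch documentation, a custom similarity has to be declared as a nested object under `settings` → `index` → `similarity` (not as a string value), and then referenced by name from the mapping. A sketch of a config in that shape, which could then be passed to `datasets`, e.g. as `es_index_config` in `Dataset.add_elasticsearch_index`:
```python
es_config = {
    "settings": {
        "index": {
            "similarity": {
                "my_similarity": {
                    "type": "DFR",
                    "basic_model": "g",
                    "after_effect": "l",
                    "normalization": "h2",
                    "normalization.h2.c": "3.0",
                }
            }
        }
    },
    "mappings": {
        "properties": {
            # reference the custom similarity by name from the field mapping
            "text": {"type": "text", "analyzer": "standard", "similarity": "my_similarity"}
        }
    },
}
```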
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/709/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/709/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6432
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6432/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6432/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6432/events
|
https://github.com/huggingface/datasets/issues/6432
| 1,999,258,140
|
I_kwDODunzps53KkIc
| 6,432
|
load_dataset does not load all of the data in my input file
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/121301001?v=4",
"events_url": "https://api.github.com/users/demongolem-biz2/events{/privacy}",
"followers_url": "https://api.github.com/users/demongolem-biz2/followers",
"following_url": "https://api.github.com/users/demongolem-biz2/following{/other_user}",
"gists_url": "https://api.github.com/users/demongolem-biz2/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/demongolem-biz2",
"id": 121301001,
"login": "demongolem-biz2",
"node_id": "U_kgDOBzroCQ",
"organizations_url": "https://api.github.com/users/demongolem-biz2/orgs",
"received_events_url": "https://api.github.com/users/demongolem-biz2/received_events",
"repos_url": "https://api.github.com/users/demongolem-biz2/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/demongolem-biz2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/demongolem-biz2/subscriptions",
"type": "User",
"url": "https://api.github.com/users/demongolem-biz2"
}
|
[] |
open
| false
| null |
[] | null |
[
"You should use `datasets.load_dataset` instead of `nlp.load_dataset`, as the `nlp` package is outdated.\r\n\r\nIf switching to `datasets.load_dataset` doesn't fix the issue, sharing the JSON file (feel free to replace the data with dummy data) would be nice so that we can reproduce it ourselves."
] | 2023-11-17T14:28:50Z
| 2023-11-22T17:34:58Z
| null |
NONE
| null | null | null |
### Describe the bug
I have 127 elements in my input dataset. When I take the len of the dataset after loading, it has only 124 elements.
### Steps to reproduce the bug
```python
train_dataset = nlp.load_dataset(data_args.dataset_path, name=data_args.qg_format, split=nlp.Split.TRAIN)
valid_dataset = nlp.load_dataset(data_args.dataset_path, name=data_args.qg_format, split=nlp.Split.VALIDATION)
logger.info(len(train_dataset))
logger.info(len(valid_dataset))
```
Both the train and valid inputs have 127 items; however, they both load only 124 items. The input format is JSON. At the end of the day, I am trying to create .pt files.
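One way to narrow this down (the path and file structure are hypothetical, and `datasets.load_dataset` should be used instead of the outdated `nlp` package, as noted in the discussion):
```python
import json
from datasets import load_dataset

# Count records in the raw JSON file directly
with open("train.json") as f:
    raw = json.load(f)  # assumes the file is a JSON array of objects
print(len(raw))

# Compare against what the json builder loads
ds = load_dataset("json", data_files={"train": "train.json"}, split="train")
print(len(ds))
```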
### Expected behavior
I see all 127 elements in my dataset when calling len.
### Environment info
Python 3.10. CentOS operating system. nlp==0.40, datasets==2.14.5, transformers==4.26.1
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6432/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6432/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4459
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4459/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4459/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4459/events
|
https://github.com/huggingface/datasets/pull/4459
| 1,264,636,481
|
PR_kwDODunzps45UFc8
| 4,459
|
Add and fix language tags for udhr dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-08T12:03:42Z
| 2022-06-08T12:36:24Z
| 2022-06-08T12:27:13Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4459.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4459",
"merged_at": "2022-06-08T12:27:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4459.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4459"
}
|
Related to #4362.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4459/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4459/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/298
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/298/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/298/comments
|
https://api.github.com/repos/huggingface/datasets/issues/298/events
|
https://github.com/huggingface/datasets/pull/298
| 643,603,804
|
MDExOlB1bGxSZXF1ZXN0NDM4Mzc4MDM4
| 298
|
Add searchable datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Looks very cool! Only looked at it superficially though",
"Alright I think I've checked all your comments, thanks :)\r\n\r\nMoreover I just added a way to serialize faiss indexes.\r\nThis is important because for big datasets the index construction can take some time.\r\n\r\nExamples:\r\n\r\n```python\r\nds = nlp.load_dataset('crime_and_punish', split='train')\r\nds_with_embeddings = ds.map(lambda example: {'embeddings': embed(example['line']}))\r\nds_with_embeddings.add_faiss_index(column='embeddings')\r\n# query\r\nscores, retrieved_examples = ds_with_embeddings.get_nearest_examples('embeddings', embed('my new query'), k=10)\r\n# save index\r\nds_with_embeddings.get_index('embeddings').save('my_index.faiss')\r\n```\r\n\r\n```python\r\nds = nlp.load_dataset('crime_and_punish', split='train')\r\n# load index\r\nfaiss_index = nlp.search.FaissIndex.load('my_index.faiss')\r\nds.add_faiss_index('embeddings', faiss_index=faiss_index)\r\n# query\r\nscores, retrieved_examples = ds.get_nearest_examples('embeddings', embed('my new query'), k=10)\r\n```\r\n\r\nLet me know what you think",
"Nice!\r\n\r\nHere are a few comments:\r\n\r\nI think it would be good to separate (1) the name of the column we use for indexing and (2) the name of the index itself, at least in our head. As I understand it, once the index is created, the column we used to create it is irrelevant so the column name will only be relevant in the `add_faiss_index` and we should be able to supply a different index name, e.g. `my_faiss_index`. When we reload an index, we don't really care about the column that was used to create it, right? so it's maybe better to have an `index_name` (which default to the column name for a simple user experience but it can also be something else and this should be clear in our head when we define the API).\r\n\r\nI'm wondering if we should not have a triple of methods for each retrieval engine: `add_xxx_index`, `save_xxx_index` and `load_xxx_index` when `xxx` can be `faiss` or `elasticsearch`. I'm not a fan of exposing `nlp.search.FaissIndex` unless you think there is a strong reason to have the user learn this abstraction.\r\n\r\nLast but not least, I think we should already think about hosting index on our S3. I would maybe go for something like this: host the index serialized with the cached dataset on user-provided namespaces:\r\n```python\r\nwiki_indexed = load_dataset('thom/wiki_indexed_with_dpr_faiss')\r\n```",
"I agree, I just changed to using `index_name` and having add/save/load methods",
"To summarize:\r\n\r\n\r\n```python\r\nds = nlp.load_dataset('crime_and_punish', split='train')\r\nds_with_embeddings = ds.map(lambda example: {'embeddings': embed(example['line']}))\r\nds_with_embeddings.add_faiss_index(column='embeddings')\r\n# query\r\nscores, retrieved_examples = ds_with_embeddings.get_nearest_examples('embeddings', embed('my new query'), k=10)\r\n# save index\r\nds_with_embeddings.save_faiss_index('embeddings', 'my_index.faiss')\r\n```\r\n\r\n```python\r\nds = nlp.load_dataset('crime_and_punish', split='train')\r\n# load index\r\nds.load_faiss_index('embeddings', 'my_index.faiss')\r\n# query\r\nscores, retrieved_examples = ds.get_nearest_examples('embeddings', embed('my new query'), k=10)\r\n```",
"Good to me. I understand that for now there is no check that the index matches the dataset on loading.\r\nMaybe just add a basic test on the number of examples?",
"Ok I think this one is ready now",
"Looks like the CI is having troubles to pass because of `tests/test_dataset_common.py::AWSDatasetTest::test_builder_configs_{<insert_rando_dataset_name_here>}`, `requests.exceptions.ConnectionError` :/"
] | 2020-06-23T07:33:03Z
| 2020-06-26T07:50:44Z
| 2020-06-26T07:50:43Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/298.diff",
"html_url": "https://github.com/huggingface/datasets/pull/298",
"merged_at": "2020-06-26T07:50:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/298.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/298"
}
|
# Better support for Numpy format + Add Indexed Datasets
I was working on adding Indexed Datasets but in the meantime I had to also add more support for Numpy arrays in the lib.
## Better support for Numpy format
New features:
- New fast method to convert Arrow structures to Numpy arrays (up to 100x speed-up) using Pandas.
- Allow outputting Numpy arrays in batched `.map`, which was the only missing part to fully support Numpy arrays.
Pandas offers fast zero-copy Numpy arrays conversion from Arrow structures.
Using it we can speed up the reading of memory-mapped Numpy array stored in Arrow format.
With these changes you can easily compute embeddings of texts using `.map()`. For example:
```python
def embed(text):
tokenized_example = tokenizer.encode(text, return_tensors="pt")
embeddings = bert_encoder(tokenized_examples).numpy()
return embeddings
dset_with_embeddings = dset.map(lambda example: {"embeddings": embed(example["text"])})
```
Reading the embeddings back from the Arrow format is then very fast.
PS1: Note that right now only 1d arrays are supported.
PS2: It seems possible to do without pandas but it will require more _trickery_.
PS3: I did a simple benchmark with google colab that you can view here:
https://colab.research.google.com/drive/1QlLTR6LRwYOKGJ-hTHmHyolE3wJzvfFg?usp=sharing
## Add Indexed Datasets
For many retrieval tasks it is convenient to index a dataset to be able to run fast queries.
For example for models like DPR, REALM, RAG etc. that are models for Open Domain QA, the retrieval step is very important.
Therefore I added two ways to add an index to a column of a dataset:
1) You can index it using a Dense Index like Faiss. It is used to index vectors.
Faiss is a library for efficient similarity search and clustering of dense vectors.
It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM.
2) You can index it using a Sparse Index like Elasticsearch. It is used to index text and run queries based on BM25 similarity.
Example of usage:
```python
ds = nlp.load_dataset('crime_and_punish', split='train')
ds_with_embeddings = ds.map(lambda example: {'embeddings': embed(example['line'])}) # `embed` outputs a `np.array`
ds_with_embeddings.add_vector_index(column='embeddings')
scores, retrieved_examples = ds_with_embeddings.get_nearest(column='embeddings', query=embed('my new query'), k=10)
```
```python
ds = nlp.load_dataset('crime_and_punish', split='train')
es_client = elasticsearch.Elasticsearch()
ds.add_text_index(column='line', es_client=es_client, index_name="my_es_index")
scores, retrieved_examples = ds.get_nearest(column='line', query='my new query', k=10)
```
PS4: Faiss allows to specify many options for the [index](https://github.com/facebookresearch/faiss/wiki/The-index-factory) and for [GPU settings](https://github.com/facebookresearch/faiss/wiki/Faiss-on-the-GPU). I made sure that the user has full control over those settings.
## Tests
I added tests for Faiss, Elasticsearch and indexed datasets.
I had to edit the CI config because all the test scripts were not being run by CircleCI.
------------------
I'd be really happy to have some feedbacks :)
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/298/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/298/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5946
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5946/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5946/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5946/events
|
https://github.com/huggingface/datasets/issues/5946
| 1,754,234,469
|
I_kwDODunzps5oj35l
| 5,946
|
IndexError not resolved -> IndexError: Invalid key: ?? is out of bounds for size 0 or ??
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/70565543?v=4",
"events_url": "https://api.github.com/users/syngokhan/events{/privacy}",
"followers_url": "https://api.github.com/users/syngokhan/followers",
"following_url": "https://api.github.com/users/syngokhan/following{/other_user}",
"gists_url": "https://api.github.com/users/syngokhan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/syngokhan",
"id": 70565543,
"login": "syngokhan",
"node_id": "MDQ6VXNlcjcwNTY1NTQz",
"organizations_url": "https://api.github.com/users/syngokhan/orgs",
"received_events_url": "https://api.github.com/users/syngokhan/received_events",
"repos_url": "https://api.github.com/users/syngokhan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/syngokhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/syngokhan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/syngokhan"
}
|
[] |
open
| false
| null |
[] | null |
[
"https://colab.research.google.com/#scrollTo=AQ_HCYruWIHU&fileId=https%3A//huggingface.co/dfurman/falcon-40b-chat-oasst1/blob/main/finetune_falcon40b_oasst1_with_bnb_peft.ipynb\r\n\r\nI ran the same administration exactly the same but got the same error",
"Looks related to https://discuss.huggingface.co/t/indexerror-invalid-key-16-is-out-of-bounds-for-size-0/14298/4?u=lhoestq",
"> Looks related to https://discuss.huggingface.co/t/indexerror-invalid-key-16-is-out-of-bounds-for-size-0/14298/4?u=lhoestq\n\nThe problem has not been solved, I have tried this before, but the problem is the same",
"> \r\n\r\n@syngokhan did u solve it? \r\nI am desperate ",
"data = data[\"train\"].shuffle().map(generate_and_tokenize_prompt, batched = False) # change this line to -\r\n\r\ndata[\"train\"] = data[\"train\"].shuffle().map(generate_and_tokenize_prompt, batched = False)\r\nAfter doing this change you code should run fine.",
"> > \r\n> \r\n> @syngokhan did u solve it? I am desperate\r\n\r\nrefer to my earlier comment. you will find the solution."
] | 2023-06-13T07:34:15Z
| 2023-07-14T12:04:48Z
| null |
NONE
| null | null | null |
### Describe the bug
```
in <cell line: 1>:1 │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1537 in train │
│ │
│ 1534 │ │ inner_training_loop = find_executable_batch_size( │
│ 1535 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │
│ 1536 │ │ ) │
│ ❱ 1537 │ │ return inner_training_loop( │
│ 1538 │ │ │ args=args, │
│ 1539 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │
│ 1540 │ │ │ trial=trial, │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1789 in _inner_training_loop │
│ │
│ 1786 │ │ │ │ rng_to_sync = True │
│ 1787 │ │ │ │
│ 1788 │ │ │ step = -1 │
│ ❱ 1789 │ │ │ for step, inputs in enumerate(epoch_iterator): │
│ 1790 │ │ │ │ total_batched_samples += 1 │
│ 1791 │ │ │ │ if rng_to_sync: │
│ 1792 │ │ │ │ │ self._load_rng_state(resume_from_checkpoint) │
│ │
│ /usr/local/lib/python3.10/dist-packages/accelerate/data_loader.py:377 in __iter__ │
│ │
│ 374 │ │ dataloader_iter = super().__iter__() │
│ 375 │ │ # We iterate one batch ahead to check when we are at the end │
│ 376 │ │ try: │
│ ❱ 377 │ │ │ current_batch = next(dataloader_iter) │
│ 378 │ │ except StopIteration: │
│ 379 │ │ │ yield │
│ 380 │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:633 in __next__ │
│ │
│ 630 │ │ │ if self._sampler_iter is None: │
│ 631 │ │ │ │ # TODO(https://github.com/pytorch/pytorch/issues/76750) │
│ 632 │ │ │ │ self._reset() # type: ignore[call-arg] │
│ ❱ 633 │ │ │ data = self._next_data() │
│ 634 │ │ │ self._num_yielded += 1 │
│ 635 │ │ │ if self._dataset_kind == _DatasetKind.Iterable and \ │
│ 636 │ │ │ │ │ self._IterableDataset_len_called is not None and \ │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:677 in _next_data │
│ │
│ 674 │ │
│ 675 │ def _next_data(self): │
│ 676 │ │ index = self._next_index() # may raise StopIteration │
│ ❱ 677 │ │ data = self._dataset_fetcher.fetch(index) # may raise StopIteration │
│ 678 │ │ if self._pin_memory: │
│ 679 │ │ │ data = _utils.pin_memory.pin_memory(data, self._pin_memory_device) │
│ 680 │ │ return data │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py:49 in fetch │
│ │
│ 46 │ def fetch(self, possibly_batched_index): │
│ 47 │ │ if self.auto_collation: │
│ 48 │ │ │ if hasattr(self.dataset, "__getitems__") and self.dataset.__getitems__: │
│ ❱ 49 │ │ │ │ data = self.dataset.__getitems__(possibly_batched_index) │
│ 50 │ │ │ else: │
│ 51 │ │ │ │ data = [self.dataset[idx] for idx in possibly_batched_index] │
│ 52 │ │ else: │
│ │
│ /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2782 in __getitems__ │
│ │
│ 2779 │ │
│ 2780 │ def __getitems__(self, keys: List) -> List: │
│ 2781 │ │ """Can be used to get a batch using a list of integers indices.""" │
│ ❱ 2782 │ │ batch = self.__getitem__(keys) │
│ 2783 │ │ n_examples = len(batch[next(iter(batch))]) │
│ 2784 │ │ return [{col: array[i] for col, array in batch.items()} for i in range(n_example │
│ 2785 │
│ │
│ /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2778 in __getitem__ │
│ │
│ 2775 │ │
│ 2776 │ def __getitem__(self, key): # noqa: F811 │
│ 2777 │ │ """Can be used to index columns (by string names) or rows (by integer index or i │
│ ❱ 2778 │ │ return self._getitem(key) │
│ 2779 │ │
│ 2780 │ def __getitems__(self, keys: List) -> List: │
│ 2781 │ │ """Can be used to get a batch using a list of integers indices.""" │
│ │
│ /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2762 in _getitem │
│ │
│ 2759 │ │ format_kwargs = kwargs["format_kwargs"] if "format_kwargs" in kwargs else self._ │
│ 2760 │ │ format_kwargs = format_kwargs if format_kwargs is not None else {} │
│ 2761 │ │ formatter = get_formatter(format_type, features=self._info.features, **format_kw │
│ ❱ 2762 │ │ pa_subtable = query_table(self._data, key, indices=self._indices if self._indice │
│ 2763 │ │ formatted_output = format_table( │
│ 2764 │ │ │ pa_subtable, key, formatter=formatter, format_columns=format_columns, output │
│ 2765 │ │ ) │
│ │
│ /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:578 in query_table │
│ │
│ 575 │ │ _check_valid_column_key(key, table.column_names) │
│ 576 │ else: │
│ 577 │ │ size = indices.num_rows if indices is not None else table.num_rows │
│ ❱ 578 │ │ _check_valid_index_key(key, size) │
│ 579 │ # Query the main table │
│ 580 │ if indices is None: │
│ 581 │ │ pa_subtable = _query_table(table, key) │
│ │
│ /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:531 in │
│ _check_valid_index_key │
│ │
│ 528 │ │ │ _check_valid_index_key(min(key), size=size) │
│ 529 │ elif isinstance(key, Iterable): │
│ 530 │ │ if len(key) > 0: │
│ ❱ 531 │ │ │ _check_valid_index_key(int(max(key)), size=size) │
│ 532 │ │ │ _check_valid_index_key(int(min(key)), size=size) │
│ 533 │ else: │
│ 534 │ │ _raise_bad_key_type(key) │
│ │
│ /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:521 in │
│ _check_valid_index_key │
│ │
│ 518 def _check_valid_index_key(key: Union[int, slice, range, Iterable], size: int) -> None: │
│ 519 │ if isinstance(key, int): │
│ 520 │ │ if (key < 0 and key + size < 0) or (key >= size): │
│ ❱ 521 │ │ │ raise IndexError(f"Invalid key: {key} is out of bounds for size {size}") │
│ 522 │ │ return │
│ 523 │ elif isinstance(key, slice): │
│ 524 │ │ pass
```
### Steps to reproduce the bug
```python
import json
import os
from pprint import pprint
import bitsandbytes as bnb
import pandas as pd
import torch
import torch.nn as nn
import transformers
from datasets import Dataset,load_dataset
from peft import (
LoraConfig,
PeftConfig,
PeftModel,
get_peft_model,
prepare_model_for_kbit_training
)
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
)
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
def print_trainable_parameters(model):
"""
Prints the number of trainable parameters in the model.
"""
trainable_params = 0
all_param = 0
for _, param in model.named_parameters():
all_param += param.numel()
if param.requires_grad:
trainable_params += param.numel()
print(
f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}"
)
MODEL_NAME = "tiiuae/falcon-7b"
bnb_config = BitsAndBytesConfig(
load_in_4bit = True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME,
device_map = "auto",
trust_remote_code = True,
quantization_config = bnb_config
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)
config = LoraConfig(
r = 16,
lora_alpha = 32,
target_modules = ["query_key_value"],
lora_dropout = 0.05,
bias = "none",
task_type = "CASUAL_LM"
)
model = get_peft_model(model,config)
print_trainable_parameters(model)
def generate_prompt(data_point):
return f"""
<human>: {data_point["question"]}
<assistant>: {data_point["answer"]}
""".strip()
def generate_and_tokenize_prompt(data_point):
full_prompt = generate_prompt(data_point)
tokenized_full_prompt = tokenizer(full_prompt, padding = True, truncation = True,return_tensors = None)
return dict({
"input_ids" : tokenized_full_prompt["input_ids"],
"attention_mask" : tokenized_full_prompt["attention_mask"]
})
data = data["train"].shuffle().map(generate_and_tokenize_prompt, batched = False)
OUTPUT_DIR = "experiments"
trainings_args = transformers.TrainingArguments(
per_device_train_batch_size = 1,
gradient_accumulation_steps = 4,
num_train_epochs = 1,
learning_rate = 2e-4,
fp16 = True,
save_total_limit = 3,
logging_steps = 1,
output_dir = OUTPUT_DIR,
max_steps = 80,
optim = "paged_adamw_8bit",
lr_scheduler_type = "cosine",
warmup_ratio = 0.05,
#remove_unused_columns=True
)
trainer = transformers.Trainer(
model = model,
train_dataset = data,
args = trainings_args,
data_collator = transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False
trainer.train()
```

IndexError: Invalid key: 32 is out of bounds for size 0
The dataset format is like:
[{"question": "How can I create an account?", "answer": "To create an account, click on the 'Sign Up' button on the top right corner of our website and follow the instructions to complete the registration process."}, .... ]
### Expected behavior
-
### Environment info
```
!pip install -q pip
!pip install -q bitsandbytes==0.39.0
!pip install -q torch==2.0.1
!pip install -q git+https://github.com/huggingface/transformers.git
!pip install -q git+https://github.com/huggingface/peft.git
!pip install -q git+https://github.com/huggingface/accelerate.git
!pip install -q datasets
!pip install -q loralib==0.1.1
!pip install -q einops==0.6.1
import json
import os
from pprint import pprint
import bitsandbytes as bnb
import pandas as pd
import torch
import torch.nn as nn
import transformers
from datasets import Dataset,load_dataset
from peft import (
LoraConfig,
PeftConfig,
PeftModel,
get_peft_model,
prepare_model_for_kbit_training
)
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
)
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5946/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5946/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4850
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4850/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4850/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4850/events
|
https://github.com/huggingface/datasets/pull/4850
| 1,338,702,306
|
PR_kwDODunzps49KnZ8
| 4,850
|
Fix test of _get_extraction_protocol for TAR files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-08-15T08:37:58Z
| 2022-08-15T09:42:56Z
| 2022-08-15T09:28:46Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4850.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4850",
"merged_at": "2022-08-15T09:28:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4850.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4850"
}
|
While working on another PR, I discovered an xpass test (a test that is supposed to xfail but nevertheless passes) when testing `_get_extraction_protocol`: https://github.com/huggingface/datasets/runs/7818845285?check_suite_focus=true
```
XPASS tests/test_streaming_download_manager.py::test_streaming_dl_manager_get_extraction_protocol_throws[https://foo.bar/train.tar]
```
This PR:
- refactors the test so that it tests the raise of the exceptions instead of xfailing
- fixes the test for TAR files: it does not raise an exception, but returns "tar"
- fixes some wrongly named tests: swaps `test_streaming_dl_manager_get_extraction_protocol` and `test_streaming_dl_manager_get_extraction_protocol_gg_drive`
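A sketch of the grounded part of this change (the import path is an assumption; the assertion itself follows from the PR description):
```python
from datasets.download.streaming_download_manager import _get_extraction_protocol  # import path assumed


def test_get_extraction_protocol_tar():
    # per this PR, TAR files do not raise: the protocol is inferred as "tar"
    assert _get_extraction_protocol("https://foo.bar/train.tar") == "tar"
```
The remaining xfail cases were rewritten with `pytest.raises(...)` so the expected exception is asserted explicitly rather than allowed to pass silently.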
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4850/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4850/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6422
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6422/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6422/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6422/events
|
https://github.com/huggingface/datasets/issues/6422
| 1,994,579,267
|
I_kwDODunzps524t1D
| 6,422
|
Allow to choose the `writer_batch_size` when using `save_to_disk`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38216711?v=4",
"events_url": "https://api.github.com/users/NathanGodey/events{/privacy}",
"followers_url": "https://api.github.com/users/NathanGodey/followers",
"following_url": "https://api.github.com/users/NathanGodey/following{/other_user}",
"gists_url": "https://api.github.com/users/NathanGodey/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NathanGodey",
"id": 38216711,
"login": "NathanGodey",
"node_id": "MDQ6VXNlcjM4MjE2NzEx",
"organizations_url": "https://api.github.com/users/NathanGodey/orgs",
"received_events_url": "https://api.github.com/users/NathanGodey/received_events",
"repos_url": "https://api.github.com/users/NathanGodey/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NathanGodey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NathanGodey/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NathanGodey"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"We have a config variable that controls the batch size in `save_to_disk`:\r\n```python\r\nimport datasets\r\ndatasets.config.DEFAULT_MAX_BATCH_SIZE = <smaller_batch_size>\r\n...\r\nds.save_to_disk(...)\r\n```",
"Thank you for your answer!\r\n\r\nFrom what I am reading in `https://github.com/huggingface/datasets/blob/2.14.5/src/datasets/arrow_dataset.py`, every function involved (`select`, `shard`, ...) has a default hardcoded batch size of 1000, as such:\r\n```python\r\ndef select(\r\n self,\r\n indices: Iterable,\r\n keep_in_memory: bool = False,\r\n indices_cache_file_name: Optional[str] = None,\r\n writer_batch_size: Optional[int] = 1000,\r\n new_fingerprint: Optional[str] = None,\r\n ) -> \"Dataset\":\r\n...\r\n```\r\nThen, `ArrowWriter` is instantiated with the specified `writer_batch_size`. In `ArrowWriter`, `writer_batch_size` is set to `datasets.config.DEFAULT_MAX_BATCH_SIZE` if it is `None`(https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_writer.py#L345C14-L345C31). However, in our case, it is already set to 1000 by \"parent\" methods, so it won't happen.\r\n\r\nNevertheless, due to this: \r\n```python\r\ndef _save_to_disk_single(job_id: int, shard: \"Dataset\", fpath: str, storage_options: Optional[dict]):\r\n batch_size = config.DEFAULT_MAX_BATCH_SIZE\r\n...\r\n```\r\nit seems to work. I will use it as such, but it should maybe be added to documentation? And maybe improved in next versions?"
] | 2023-11-15T11:18:34Z
| 2023-11-16T10:00:21Z
| null |
NONE
| null | null | null |
### Feature request
Add an argument in `save_to_disk` regarding batch size, which would be passed to `shard` and other methods.
### Motivation
The `Dataset.save_to_disk` method currently calls `shard` without passing a `writer_batch_size` argument, thus implicitly using the default value (1000). This can result in RAM saturation when using many processes on long text sequences or other modalities, or for specific I/O configurations.
### Your contribution
I would be glad to submit a PR, as long as it does not imply extensive test refactoring.
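For reference, a minimal sketch of the workaround discussed in the comments, assuming the global config override is what `_save_to_disk_single` reads (256 is an arbitrary example value, not a recommendation):
```python
# minimal sketch of the workaround from the comments; 256 is an arbitrary
# example value, not a recommendation
import datasets
from datasets import Dataset

datasets.config.DEFAULT_MAX_BATCH_SIZE = 256  # lowers the write batch size used by save_to_disk

ds = Dataset.from_dict({"text": ["some long text"] * 10_000})
ds.save_to_disk("my_dataset")  # shards are written with the smaller batch size
```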
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6422/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6422/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4538
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4538/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4538/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4538/events
|
https://github.com/huggingface/datasets/issues/4538
| 1,279,409,786
|
I_kwDODunzps5MQj56
| 4,538
|
Dataset Viewer issue for Pile of Law
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1609857?v=4",
"events_url": "https://api.github.com/users/Breakend/events{/privacy}",
"followers_url": "https://api.github.com/users/Breakend/followers",
"following_url": "https://api.github.com/users/Breakend/following{/other_user}",
"gists_url": "https://api.github.com/users/Breakend/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Breakend",
"id": 1609857,
"login": "Breakend",
"node_id": "MDQ6VXNlcjE2MDk4NTc=",
"organizations_url": "https://api.github.com/users/Breakend/orgs",
"received_events_url": "https://api.github.com/users/Breakend/received_events",
"repos_url": "https://api.github.com/users/Breakend/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Breakend/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Breakend/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Breakend"
}
|
[
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
] | null |
[
"Hi @Breakend, yes – we'll propose a solution today",
"Thanks so much, I appreciate it!",
"Thanks so much for adding the docs. I was able to successfully hide the viewer using the \r\n```\r\nviewer: false\r\n```\r\nflag in the README.md of the dataset. I'm closing the issue because this is resolved. Thanks again!",
"Awesome! Thanks for confirming. cc @severo ",
"Just for the record:\r\n\r\n- the doc\r\n \r\n<img width=\"1430\" alt=\"Capture d’écran 2022-06-27 à 09 29 27\" src=\"https://user-images.githubusercontent.com/1676121/175884089-bca6c0d5-6387-473e-98ca-86a910ede4bd.png\">\r\n\r\n- the dataset main page\r\n\r\n<img width=\"1134\" alt=\"Capture d’écran 2022-06-27 à 09 29 05\" src=\"https://user-images.githubusercontent.com/1676121/175884152-5f285bf0-3471-45de-927a-e141b00ebb33.png\">\r\n\r\n- the dataset viewer page\r\n\r\n<img width=\"567\" alt=\"Capture d’écran 2022-06-27 à 09 29 16\" src=\"https://user-images.githubusercontent.com/1676121/175884191-ab6a297b-1c11-417e-bbde-0b7623278a79.png\">\r\n"
] | 2022-06-22T02:48:40Z
| 2022-06-27T07:30:23Z
| 2022-06-26T22:26:22Z
|
NONE
| null | null | null |
### Link
https://huggingface.co/datasets/pile-of-law/pile-of-law
### Description
Hi, I would like to turn off the dataset viewer for our dataset without enabling access requests. To comply with upstream dataset creators' requests/licenses, we would like to make sure that the data is not indexed by search engines, and so would like to turn off dataset previews. But we do not want to collect user emails, because that would violate single-blind review by allowing us to deduce potential reviewers' identities. Is there a way that we can turn off the dataset viewer without collecting identity information?
Thanks so much!
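For reference, a minimal sketch of the fix that eventually worked (per the comments): a `viewer: false` flag in the dataset card's YAML front matter, with all other metadata omitted.
```yaml
---
# other dataset card metadata omitted for brevity
viewer: false
---
```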
### Owner
Yes
|
{
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4538/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4538/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3080
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3080/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3080/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3080/events
|
https://github.com/huggingface/datasets/issues/3080
| 1,026,380,626
|
I_kwDODunzps49LVNS
| 3,080
|
Error related to timeout keyword argument
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[] | 2021-10-14T13:10:58Z
| 2021-10-14T14:39:51Z
| 2021-10-14T14:39:51Z
|
MEMBER
| null | null | null |
## Describe the bug
As reported by @patrickvonplaten, a TypeError is raised when trying to load a dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
```
## Actual results
```
TypeError: dataset_info() got an unexpected keyword argument 'timeout'
```
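A minimal sketch of a backward-compatible calling pattern (hypothetical, not the library's actual fix), forwarding `timeout` only when the installed `huggingface_hub` client accepts it:
```python
# hypothetical sketch: forward `timeout` only if the installed client supports it
import inspect

from huggingface_hub import HfApi

api = HfApi()
kwargs = {}
if "timeout" in inspect.signature(api.dataset_info).parameters:
    kwargs["timeout"] = 100.0  # seconds; arbitrary example value
info = api.dataset_info("patrickvonplaten/librispeech_asr_dummy", **kwargs)
print(info.id)
```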
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3080/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3080/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6231
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6231/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6231/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6231/events
|
https://github.com/huggingface/datasets/pull/6231
| 1,890,863,249
|
PR_kwDODunzps5aCr8_
| 6,231
|
Overwrite legacy default config name in `dataset_infos.json` in packaged datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6231). All of your documentation changes will be reflected on that endpoint.",
"realized that this pr is still not merged, @lhoestq maybe you can take a look at it? ",
"I think https://github.com/huggingface/datasets/pull/6218 fixed the issue (a bit differently though)",
"ah actually nope, let me check",
"@lhoestq yeah the pr you're referencing doesn't fix the problem when two semantically analogous configs occur in datasets_info.json, i suggest to rewrite the legacy one if it exists during .push_to_hub",
"Only the old versions of `datasets` use the JSON file over the README and they can only load one config so the name doesn't really matter.\r\n\r\nThat's why I chose to load the info from the JSON no matter the name (no check to see if it's \"username--dataset_name\") in my previous PR.\r\n\r\nI think you can remove the old info without even checking the name. In this case maybe no need to update load.py ",
"(also minor: not checking the name makes it more robust to dataset renaming)",
"@lhoestq okay makes sense... so you think it's not a problem that in some cases we might end up with `dataset_infos.json` having two keys in it?",
"> @lhoestq okay makes sense... so you think it's not a problem that in some cases we might end up with dataset_infos.json having two keys in it?\r\n\r\nIdeally they should have only one config no ? Since old versions of `datasets` simply load the first config in the JSON.\r\nWe can overwrite it with the new default one (and no matter the name of the outdated config in the JSON)\r\n\r\n"
] | 2023-09-11T16:27:09Z
| 2023-09-26T11:19:36Z
| null |
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6231.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6231",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6231.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6231"
}
|
Currently, if we push data as the default config with `.push_to_hub` to a repo whose legacy `dataset_infos.json` file contains a legacy default config name like `{username}--{dataset_name}`, a new `"default"` key is added to `dataset_infos.json` alongside the legacy one. I think the legacy one should be dropped in this case.
Also, in `load.py` I suggest checking that a legacy config name is indeed a legacy config name, because after this fix that might no longer be the case (this check was first introduced in https://github.com/huggingface/datasets/pull/6218).
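A minimal sketch of the proposed overwrite (illustrative names, not the actual library code), treating the parsed `dataset_infos.json` as a plain dict:
```python
# illustrative only, not the actual library code: drop the legacy
# "{username}--{dataset_name}" key before writing back the new default config
def overwrite_legacy_default(dataset_infos: dict, username: str, dataset_name: str, new_info: dict) -> dict:
    legacy_name = f"{username}--{dataset_name}"
    dataset_infos.pop(legacy_name, None)  # remove the outdated default entry, if present
    dataset_infos["default"] = new_info
    return dataset_infos

# usage sketch
infos = {"user--my_dataset": {"version": "0.0.0"}}
print(overwrite_legacy_default(infos, "user", "my_dataset", {"version": "1.0.0"}))
# {'default': {'version': '1.0.0'}}
```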
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6231/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6231/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6107
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6107/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6107/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6107/events
|
https://github.com/huggingface/datasets/pull/6107
| 1,829,625,320
|
PR_kwDODunzps5W0rLR
| 6,107
|
Fix deprecation of use_auth_token in file_utils
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007678 / 0.011353 (-0.003675) | 0.004233 / 0.011008 (-0.006776) | 0.095934 / 0.038508 (0.057426) | 0.064201 / 0.023109 (0.041092) | 0.345765 / 0.275898 (0.069867) | 0.383089 / 0.323480 (0.059609) | 0.004084 / 0.007986 (-0.003902) | 0.003311 / 0.004328 (-0.001017) | 0.072367 / 0.004250 (0.068117) | 0.048252 / 0.037052 (0.011200) | 0.338340 / 0.258489 (0.079851) | 0.391627 / 0.293841 (0.097786) | 0.045203 / 0.128546 (-0.083343) | 0.013494 / 0.075646 (-0.062153) | 0.314097 / 0.419271 (-0.105174) | 0.058183 / 0.043533 (0.014650) | 0.353946 / 0.255139 (0.098807) | 0.385181 / 0.283200 (0.101981) | 0.033111 / 0.141683 (-0.108572) | 1.578489 / 1.452155 (0.126335) | 1.631660 / 1.492716 (0.138944) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202592 / 0.018006 (0.184586) | 0.506450 / 0.000490 (0.505961) | 0.004630 / 0.000200 (0.004430) | 0.000105 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024761 / 0.037411 (-0.012651) | 0.086295 / 0.014526 (0.071769) | 0.094063 / 0.176557 (-0.082494) | 0.154189 / 0.737135 (-0.582947) | 0.096273 / 0.296338 (-0.200065) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.581731 / 0.215209 (0.366522) | 5.552020 / 2.077655 (3.474365) | 2.430800 / 1.504120 (0.926680) | 2.130864 / 1.541195 (0.589669) | 2.092802 / 1.468490 
(0.624312) | 0.833956 / 4.584777 (-3.750821) | 4.840859 / 3.745712 (1.095147) | 4.267812 / 5.269862 (-1.002050) | 2.663245 / 4.565676 (-1.902432) | 0.093195 / 0.424275 (-0.331080) | 0.007942 / 0.007607 (0.000335) | 0.651457 / 0.226044 (0.425413) | 6.782986 / 2.268929 (4.514058) | 3.103307 / 55.444624 (-52.341318) | 2.373933 / 6.876477 (-4.502544) | 2.571613 / 2.142072 (0.429540) | 0.981389 / 4.805227 (-3.823839) | 0.199019 / 6.500664 (-6.301645) | 0.065828 / 0.075469 (-0.009641) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.429778 / 1.841788 (-0.412009) | 20.967563 / 8.074308 (12.893255) | 19.329723 / 10.191392 (9.138331) | 0.222048 / 0.680424 (-0.458376) | 0.033507 / 0.534201 (-0.500694) | 0.436801 / 0.579283 (-0.142482) | 0.530197 / 0.434364 (0.095833) | 0.491532 / 0.540337 (-0.048805) | 0.718216 / 1.386936 (-0.668720) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007798 / 0.011353 (-0.003555) | 0.004748 / 0.011008 (-0.006260) | 0.070847 / 0.038508 (0.032339) | 0.069338 / 0.023109 (0.046229) | 0.400890 / 0.275898 (0.124992) | 0.429482 / 0.323480 (0.106002) | 0.006469 / 0.007986 (-0.001517) | 0.003514 / 0.004328 (-0.000814) | 0.069049 / 0.004250 (0.064798) | 0.059800 / 0.037052 (0.022748) | 0.415644 / 0.258489 (0.157155) | 0.432562 / 0.293841 (0.138721) | 0.043778 / 0.128546 (-0.084768) | 0.015141 / 0.075646 (-0.060506) | 0.081521 / 0.419271 (-0.337750) | 0.054692 / 0.043533 (0.011160) | 0.404497 / 0.255139 (0.149358) | 0.419783 / 0.283200 (0.136583) | 0.029588 / 0.141683 (-0.112094) | 1.593506 / 1.452155 (0.141351) | 1.615977 / 1.492716 (0.123261) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.270981 / 0.018006 (0.252975) | 0.522074 / 0.000490 (0.521584) | 0.026568 / 0.000200 (0.026368) | 0.000126 / 0.000054 (0.000072) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031551 / 0.037411 (-0.005861) | 0.086723 / 0.014526 (0.072197) | 0.103315 / 0.176557 (-0.073242) | 0.154692 / 0.737135 (-0.582443) | 0.099472 / 0.296338 (-0.196866) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.570238 / 0.215209 (0.355029) | 5.655963 / 2.077655 (3.578308) | 2.662670 / 1.504120 (1.158550) | 2.380903 / 1.541195 (0.839709) | 2.409467 / 1.468490 (0.940977) | 0.828055 / 4.584777 (-3.756722) | 4.964698 / 3.745712 (1.218986) | 4.299995 / 5.269862 (-0.969867) | 2.824162 / 4.565676 (-1.741514) | 0.095872 / 0.424275 (-0.328403) | 0.007907 / 0.007607 (0.000300) | 0.701595 / 0.226044 (0.475551) | 7.131965 / 2.268929 (4.863036) | 3.250554 / 55.444624 (-52.194070) | 2.531916 / 6.876477 (-4.344561) | 2.717908 / 2.142072 (0.575835) | 1.014479 / 4.805227 (-3.790748) | 0.223804 / 6.500664 (-6.276861) | 0.071893 / 0.075469 (-0.003576) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.541702 / 1.841788 (-0.300086) | 21.668219 / 8.074308 (13.593911) | 18.916032 / 10.191392 (8.724640) | 0.205915 / 0.680424 (-0.474508) | 0.026356 / 0.534201 (-0.507845) | 0.429122 / 0.579283 (-0.150161) | 0.506110 / 0.434364 (0.071746) | 0.510148 / 0.540337 (-0.030190) | 0.724699 / 1.386936 (-0.662237) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006884 / 0.011353 (-0.004469) | 0.004492 / 0.011008 (-0.006516) | 0.085439 / 0.038508 (0.046931) | 0.083905 / 0.023109 (0.060796) | 0.313604 / 0.275898 (0.037706) | 0.354683 / 0.323480 (0.031203) | 0.006535 / 0.007986 (-0.001451) | 0.004318 / 0.004328 (-0.000011) | 0.066129 / 0.004250 (0.061879) | 0.057568 / 0.037052 (0.020516) | 0.317162 / 0.258489 (0.058672) | 0.372501 / 0.293841 (0.078660) | 0.031059 / 0.128546 (-0.097488) | 0.009013 / 0.075646 (-0.066634) | 0.288794 / 0.419271 (-0.130478) | 0.053326 / 0.043533 (0.009793) | 0.314318 / 0.255139 (0.059179) | 0.357505 / 0.283200 (0.074305) | 0.027020 / 0.141683 (-0.114663) | 1.530653 / 1.452155 (0.078498) | 1.599782 / 1.492716 (0.107066) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278788 / 0.018006 (0.260782) | 0.626822 / 0.000490 (0.626333) | 0.003780 / 0.000200 (0.003580) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031703 / 0.037411 (-0.005708) | 0.085654 / 0.014526 (0.071128) | 0.754858 / 0.176557 (0.578301) | 0.212251 / 0.737135 (-0.524885) | 0.171344 / 0.296338 (-0.124994) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382291 / 0.215209 (0.167082) | 3.825612 / 2.077655 (1.747958) | 1.874553 / 1.504120 (0.370433) | 1.712574 / 1.541195 (0.171379) | 1.791479 / 1.468490 
(0.322989) | 0.481005 / 4.584777 (-4.103772) | 3.530559 / 3.745712 (-0.215153) | 3.395305 / 5.269862 (-1.874557) | 2.133747 / 4.565676 (-2.431930) | 0.056139 / 0.424275 (-0.368136) | 0.007424 / 0.007607 (-0.000183) | 0.458321 / 0.226044 (0.232277) | 4.577665 / 2.268929 (2.308736) | 2.380233 / 55.444624 (-53.064392) | 2.004060 / 6.876477 (-4.872417) | 2.290712 / 2.142072 (0.148639) | 0.570157 / 4.805227 (-4.235070) | 0.131670 / 6.500664 (-6.368994) | 0.060684 / 0.075469 (-0.014785) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.294929 / 1.841788 (-0.546858) | 21.386663 / 8.074308 (13.312355) | 14.389440 / 10.191392 (4.198048) | 0.171177 / 0.680424 (-0.509247) | 0.018660 / 0.534201 (-0.515541) | 0.394385 / 0.579283 (-0.184898) | 0.424942 / 0.434364 (-0.009422) | 0.463618 / 0.540337 (-0.076719) | 0.651499 / 1.386936 (-0.735437) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007079 / 0.011353 (-0.004274) | 0.004615 / 0.011008 (-0.006393) | 0.066300 / 0.038508 (0.027792) | 0.092636 / 0.023109 (0.069527) | 0.399080 / 0.275898 (0.123182) | 0.429873 / 0.323480 (0.106393) | 0.006689 / 0.007986 (-0.001297) | 0.004358 / 0.004328 (0.000029) | 0.067155 / 0.004250 (0.062905) | 0.064040 / 0.037052 (0.026988) | 0.399905 / 0.258489 (0.141416) | 0.448237 / 0.293841 (0.154397) | 0.031985 / 0.128546 (-0.096561) | 0.009053 / 0.075646 (-0.066593) | 0.071904 / 0.419271 (-0.347368) | 0.048759 / 0.043533 (0.005227) | 0.386797 / 0.255139 (0.131658) | 0.411240 / 0.283200 (0.128040) | 0.028568 / 0.141683 (-0.113115) | 1.501037 / 1.452155 (0.048882) | 1.594560 / 1.492716 (0.101844) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.300756 / 0.018006 (0.282750) | 0.631220 / 0.000490 (0.630730) | 0.010163 / 0.000200 (0.009963) | 0.000144 / 0.000054 (0.000089) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033716 / 0.037411 (-0.003695) | 0.093562 / 0.014526 (0.079037) | 0.106975 / 0.176557 (-0.069582) | 0.161919 / 0.737135 (-0.575216) | 0.113397 / 0.296338 (-0.182942) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.410392 / 0.215209 (0.195183) | 4.094411 / 2.077655 (2.016756) | 2.085868 / 1.504120 (0.581748) | 1.959589 / 1.541195 (0.418394) | 2.096683 / 1.468490 (0.628193) | 0.494593 / 4.584777 (-4.090184) | 3.854302 / 3.745712 (0.108590) | 3.742303 / 5.269862 (-1.527558) | 2.379983 / 4.565676 (-2.185693) | 0.058640 / 0.424275 (-0.365635) | 0.008092 / 0.007607 (0.000484) | 0.486957 / 0.226044 (0.260912) | 4.855784 / 2.268929 (2.586855) | 2.654029 / 55.444624 (-52.790595) | 2.237627 / 6.876477 (-4.638850) | 2.536955 / 2.142072 (0.394882) | 0.622398 / 4.805227 (-4.182829) | 0.139212 / 6.500664 (-6.361452) | 0.062805 / 0.075469 (-0.012664) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.374862 / 1.841788 (-0.466926) | 22.797015 / 8.074308 (14.722707) | 14.393995 / 10.191392 (4.202603) | 0.196603 / 0.680424 (-0.483821) | 0.018602 / 0.534201 (-0.515599) | 0.394568 / 0.579283 (-0.184715) | 0.408792 / 0.434364 (-0.025572) | 0.486706 / 0.540337 (-0.053631) | 0.652365 / 1.386936 (-0.734571) |\n\n</details>\n</details>\n\n\n"
] | 2023-07-31T16:32:01Z
| 2023-08-03T10:13:32Z
| 2023-08-03T10:04:18Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6107.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6107",
"merged_at": "2023-08-03T10:04:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6107.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6107"
}
|
Fix issues with the deprecation of `use_auth_token` introduced by:
- #5996
in functions:
- `get_authentication_headers_for_url`
- `request_etag`
- `get_from_cache`
Currently, a `TypeError` is raised: https://github.com/huggingface/datasets-server/actions/runs/5711650666/job/15484685570?pr=1588
```
FAILED tests/job_runners/config/test_parquet_and_info.py::test__is_too_big_external_files[None-None-False] - TypeError: get_authentication_headers_for_url() got an unexpected keyword argument 'use_auth_token'
FAILED tests/job_runners/config/test_parquet_and_info.py::test_fill_builder_info[None-False] - libcommon.exceptions.FileSystemError: Could not read the parquet files: get_authentication_headers_for_url() got an unexpected keyword argument 'use_auth_token'
```
Related to:
- #6094
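For context, a minimal sketch of the kwarg-deprecation shim pattern involved (illustrative, not the exact `file_utils` code): the old `use_auth_token` name must be accepted again and forwarded to `token`.
```python
# illustrative deprecation shim, not the exact file_utils implementation
import warnings
from typing import Optional, Union

def get_authentication_headers_for_url(
    url: str, token: Optional[Union[bool, str]] = None, use_auth_token="deprecated"
) -> dict:
    if use_auth_token != "deprecated":
        warnings.warn(
            "'use_auth_token' was deprecated in favor of 'token' and will be removed.",
            FutureWarning,
        )
        token = use_auth_token
    # ... the real function builds Hub auth headers from `token` here ...
    return {"authorization": f"Bearer {token}"} if isinstance(token, str) else {}
```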
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6107/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6107/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4345
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4345/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4345/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4345/events
|
https://github.com/huggingface/datasets/pull/4345
| 1,235,062,787
|
PR_kwDODunzps43xrky
| 4,345
|
Fix never ending GH Action to build documentation
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-13T10:40:10Z
| 2022-05-13T11:29:43Z
| 2022-05-13T11:22:00Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4345.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4345",
"merged_at": "2022-05-13T11:22:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4345.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4345"
}
|
There was an unclosed code block introduced by:
- #4313
https://github.com/huggingface/datasets/pull/4313/files#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R538
This causes the "Make documentation" step in the "Build documentation" workflow to never finish.
- I think this issue should also be addressed in the `doc-builder` lib.
Fix #4346.
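A minimal sketch of the kind of guard that could catch this class of bug (a hypothetical helper, not part of `doc-builder`): flag any doc source file with an odd number of triple-backtick fences.
```python
# hypothetical lint, not part of doc-builder: report doc files whose number of
# code fences is odd, assuming fences always start a (possibly indented) line
from pathlib import Path

for path in Path("docs").rglob("*.mdx"):
    lines = path.read_text(encoding="utf-8").splitlines()
    fences = sum(1 for line in lines if line.lstrip().startswith("```"))
    if fences % 2:
        print(f"unbalanced code fences in {path}")
```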
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4345/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4345/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4389
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4389/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4389/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4389/events
|
https://github.com/huggingface/datasets/pull/4389
| 1,244,693,690
|
PR_kwDODunzps44RKMn
| 4,389
|
Fix bug in gem dataset for wiki_auto_asset_turk config
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-23T07:19:49Z
| 2022-05-23T10:38:26Z
| 2022-05-23T10:29:55Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4389.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4389",
"merged_at": "2022-05-23T10:29:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4389.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4389"
}
|
This PR fixes some URLs.
Fix #4386.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4389/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4389/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1173
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1173/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1173/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1173/events
|
https://github.com/huggingface/datasets/pull/1173
| 757,761,967
|
MDExOlB1bGxSZXF1ZXN0NTMzMDc5MTk0
| 1,173
|
add wikipedia biography dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/39712560?v=4",
"events_url": "https://api.github.com/users/alejandrocros/events{/privacy}",
"followers_url": "https://api.github.com/users/alejandrocros/followers",
"following_url": "https://api.github.com/users/alejandrocros/following{/other_user}",
"gists_url": "https://api.github.com/users/alejandrocros/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alejandrocros",
"id": 39712560,
"login": "alejandrocros",
"node_id": "MDQ6VXNlcjM5NzEyNTYw",
"organizations_url": "https://api.github.com/users/alejandrocros/orgs",
"received_events_url": "https://api.github.com/users/alejandrocros/received_events",
"repos_url": "https://api.github.com/users/alejandrocros/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alejandrocros/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alejandrocros/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alejandrocros"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Does anyone know why am I getting this \"Some checks were not successful\" message? For the _code_quality_ one, I have successfully run the flake8 command.",
"Ok, I need to update the README.md, but don't know if that will fix the errors",
"Hi @ACR0S , thanks for adding the dataset!\r\n\r\nIt looks like `black` is throwing the code quality error: you need to run `make style` with the latest version of `black` (`black --version` should return 20.8b1)\r\n\r\nWe also added a requirement to specify encodings when using the python `open` function (line 163 in the current version of your script)\r\n\r\nFinally, you will need to add the tags and field descriptions to the README as described here https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\r\n\r\nLet us know if you have any further questions!",
"Also, please leave the full template of the readme with the `[More Information Needed]` paragraphs: you don't have to fill them out now but it will make it easier for us to go back to later :) ",
"Thank you for your help, @yjernite! I have updated everything (finally run the _make style_, added the tags, the ecoding to the _open_ function and put back the empty fields in the README). Hope it works now! :)",
"LGTM!",
"merging since the CI is fixed on master"
] | 2020-12-05T19:14:50Z
| 2020-12-07T11:13:14Z
| 2020-12-07T11:13:14Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1173.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1173",
"merged_at": "2020-12-07T11:13:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1173.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1173"
}
|
My first PR containing the Wikipedia biographies dataset. I have followed all the steps in the [guide](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). It passes all the tests.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1173/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1173/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6398
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6398/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6398/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6398/events
|
https://github.com/huggingface/datasets/pull/6398
| 1,987,786,446
|
PR_kwDODunzps5fJlP7
| 6,398
|
Remove redundant condition in builders
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004475 / 0.011353 (-0.006878) | 0.002840 / 0.011008 (-0.008168) | 0.061544 / 0.038508 (0.023036) | 0.031237 / 0.023109 (0.008128) | 0.243270 / 0.275898 (-0.032628) | 0.271903 / 0.323480 (-0.051577) | 0.002906 / 0.007986 (-0.005080) | 0.003118 / 0.004328 (-0.001210) | 0.047362 / 0.004250 (0.043112) | 0.047840 / 0.037052 (0.010788) | 0.244044 / 0.258489 (-0.014445) | 0.279310 / 0.293841 (-0.014531) | 0.023408 / 0.128546 (-0.105138) | 0.007110 / 0.075646 (-0.068536) | 0.207328 / 0.419271 (-0.211943) | 0.058463 / 0.043533 (0.014930) | 0.245631 / 0.255139 (-0.009508) | 0.267755 / 0.283200 (-0.015445) | 0.018147 / 0.141683 (-0.123536) | 1.086877 / 1.452155 (-0.365278) | 1.155380 / 1.492716 (-0.337337) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091925 / 0.018006 (0.073919) | 0.299858 / 0.000490 (0.299368) | 0.000232 / 0.000200 (0.000032) | 0.000047 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018416 / 0.037411 (-0.018995) | 0.062608 / 0.014526 (0.048082) | 0.073897 / 0.176557 (-0.102660) | 0.120216 / 0.737135 (-0.616919) | 0.075788 / 0.296338 (-0.220550) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287823 / 0.215209 (0.072614) | 2.797546 / 2.077655 (0.719891) | 1.470878 / 1.504120 (-0.033242) | 1.347497 / 1.541195 (-0.193698) | 1.363837 / 
1.468490 (-0.104653) | 0.400069 / 4.584777 (-4.184708) | 2.338870 / 3.745712 (-1.406842) | 2.564075 / 5.269862 (-2.705787) | 1.568454 / 4.565676 (-2.997222) | 0.047103 / 0.424275 (-0.377172) | 0.004783 / 0.007607 (-0.002824) | 0.345244 / 0.226044 (0.119200) | 3.407752 / 2.268929 (1.138823) | 1.826552 / 55.444624 (-53.618073) | 1.536714 / 6.876477 (-5.339763) | 1.543138 / 2.142072 (-0.598934) | 0.478996 / 4.805227 (-4.326232) | 0.099580 / 6.500664 (-6.401085) | 0.041994 / 0.075469 (-0.033475) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.947106 / 1.841788 (-0.894682) | 11.391262 / 8.074308 (3.316954) | 10.531141 / 10.191392 (0.339749) | 0.141497 / 0.680424 (-0.538927) | 0.014214 / 0.534201 (-0.519987) | 0.269346 / 0.579283 (-0.309937) | 0.268129 / 0.434364 (-0.166235) | 0.309496 / 0.540337 (-0.230841) | 0.429207 / 1.386936 (-0.957729) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004770 / 0.011353 (-0.006583) | 0.002878 / 0.011008 (-0.008130) | 0.048248 / 0.038508 (0.009740) | 0.051068 / 0.023109 (0.027959) | 0.272076 / 0.275898 (-0.003822) | 0.292423 / 0.323480 (-0.031057) | 0.004016 / 0.007986 (-0.003970) | 0.002522 / 0.004328 (-0.001807) | 0.047617 / 0.004250 (0.043367) | 0.038168 / 0.037052 (0.001115) | 0.275236 / 0.258489 (0.016746) | 0.303811 / 0.293841 (0.009970) | 0.023816 / 0.128546 (-0.104730) | 0.007177 / 0.075646 (-0.068469) | 0.053453 / 0.419271 (-0.365818) | 0.032425 / 0.043533 (-0.011108) | 0.271620 / 0.255139 (0.016481) | 0.289618 / 0.283200 (0.006418) | 0.017986 / 0.141683 (-0.123697) | 1.154225 / 1.452155 (-0.297930) | 1.224244 / 1.492716 (-0.268472) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090477 / 0.018006 (0.072471) | 0.299461 / 0.000490 (0.298971) | 0.000224 / 0.000200 (0.000024) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022043 / 0.037411 (-0.015369) | 0.070327 / 0.014526 (0.055801) | 0.080132 / 0.176557 (-0.096425) | 0.120007 / 0.737135 (-0.617128) | 0.083037 / 0.296338 (-0.213301) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294538 / 0.215209 (0.079329) | 2.882791 / 2.077655 (0.805136) | 1.582923 / 1.504120 (0.078803) | 1.457091 / 1.541195 (-0.084104) | 1.536149 / 1.468490 (0.067659) | 0.401539 / 4.584777 (-4.183238) | 2.440919 / 3.745712 (-1.304793) | 2.503108 / 5.269862 (-2.766753) | 1.509216 / 4.565676 (-3.056460) | 0.046267 / 0.424275 (-0.378008) | 0.004790 / 0.007607 (-0.002817) | 0.336137 / 0.226044 (0.110093) | 3.331655 / 2.268929 (1.062726) | 1.954228 / 55.444624 (-53.490396) | 1.686637 / 6.876477 (-5.189840) | 1.650278 / 2.142072 (-0.491794) | 0.473895 / 4.805227 (-4.331333) | 0.096908 / 6.500664 (-6.403756) | 0.040387 / 0.075469 (-0.035082) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972999 / 1.841788 (-0.868789) | 11.978367 / 8.074308 (3.904059) | 10.861092 / 10.191392 (0.669699) | 0.129054 / 0.680424 (-0.551369) | 0.015988 / 0.534201 (-0.518213) | 0.268827 / 0.579283 (-0.310456) | 0.271714 / 0.434364 (-0.162649) | 0.304045 / 0.540337 (-0.236293) | 0.413158 / 1.386936 (-0.973778) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005286 / 0.011353 (-0.006067) | 0.002860 / 0.011008 (-0.008149) | 0.062449 / 0.038508 (0.023941) | 0.035346 / 0.023109 (0.012237) | 0.241685 / 0.275898 (-0.034213) | 0.268116 / 0.323480 (-0.055364) | 0.003050 / 0.007986 (-0.004935) | 0.003134 / 0.004328 (-0.001194) | 0.048818 / 0.004250 (0.044567) | 0.049187 / 0.037052 (0.012135) | 0.247395 / 0.258489 (-0.011094) | 0.280301 / 0.293841 (-0.013540) | 0.023801 / 0.128546 (-0.104745) | 0.007653 / 0.075646 (-0.067994) | 0.204185 / 0.419271 (-0.215087) | 0.071251 / 0.043533 (0.027718) | 0.244409 / 0.255139 (-0.010730) | 0.262363 / 0.283200 (-0.020836) | 0.018631 / 0.141683 (-0.123052) | 1.110152 / 1.452155 (-0.342003) | 1.165093 / 1.492716 (-0.327624) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099536 / 0.018006 (0.081530) | 0.309598 / 0.000490 (0.309109) | 0.000207 / 0.000200 (0.000007) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019213 / 0.037411 (-0.018198) | 0.069296 / 0.014526 (0.054770) | 0.074752 / 0.176557 (-0.101804) | 0.121314 / 0.737135 (-0.615822) | 0.081274 / 0.296338 (-0.215065) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281345 / 0.215209 (0.066136) | 2.755435 / 2.077655 (0.677780) | 1.453358 / 1.504120 (-0.050762) | 1.328222 / 1.541195 (-0.212973) | 1.392281 / 
1.468490 (-0.076209) | 0.410539 / 4.584777 (-4.174238) | 2.452072 / 3.745712 (-1.293640) | 2.777757 / 5.269862 (-2.492105) | 1.656719 / 4.565676 (-2.908958) | 0.046844 / 0.424275 (-0.377431) | 0.004785 / 0.007607 (-0.002822) | 0.336567 / 0.226044 (0.110522) | 3.317564 / 2.268929 (1.048635) | 1.830737 / 55.444624 (-53.613888) | 1.528464 / 6.876477 (-5.348013) | 1.620527 / 2.142072 (-0.521545) | 0.480662 / 4.805227 (-4.324565) | 0.100819 / 6.500664 (-6.399845) | 0.042501 / 0.075469 (-0.032968) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962593 / 1.841788 (-0.879195) | 12.508048 / 8.074308 (4.433740) | 11.117398 / 10.191392 (0.926006) | 0.131265 / 0.680424 (-0.549159) | 0.014469 / 0.534201 (-0.519732) | 0.271627 / 0.579283 (-0.307656) | 0.274966 / 0.434364 (-0.159398) | 0.313260 / 0.540337 (-0.227077) | 0.444741 / 1.386936 (-0.942195) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004974 / 0.011353 (-0.006379) | 0.003383 / 0.011008 (-0.007626) | 0.048792 / 0.038508 (0.010284) | 0.052821 / 0.023109 (0.029712) | 0.267123 / 0.275898 (-0.008775) | 0.293604 / 0.323480 (-0.029876) | 0.003968 / 0.007986 (-0.004018) | 0.002594 / 0.004328 (-0.001735) | 0.047690 / 0.004250 (0.043439) | 0.040236 / 0.037052 (0.003183) | 0.267805 / 0.258489 (0.009315) | 0.310543 / 0.293841 (0.016702) | 0.025707 / 0.128546 (-0.102839) | 0.008012 / 0.075646 (-0.067634) | 0.054460 / 0.419271 (-0.364812) | 0.033545 / 0.043533 (-0.009988) | 0.270166 / 0.255139 (0.015027) | 0.285965 / 0.283200 (0.002765) | 0.019391 / 0.141683 (-0.122292) | 1.144991 / 1.452155 (-0.307164) | 1.198491 / 1.492716 (-0.294225) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094757 / 0.018006 (0.076751) | 0.306712 / 0.000490 (0.306222) | 0.000218 / 0.000200 (0.000018) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020995 / 0.037411 (-0.016417) | 0.070293 / 0.014526 (0.055767) | 0.081441 / 0.176557 (-0.095116) | 0.119538 / 0.737135 (-0.617597) | 0.081454 / 0.296338 (-0.214885) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293451 / 0.215209 (0.078242) | 2.880378 / 2.077655 (0.802723) | 1.572547 / 1.504120 (0.068427) | 1.439172 / 1.541195 (-0.102023) | 1.506343 / 1.468490 (0.037853) | 0.402764 / 4.584777 (-4.182013) | 2.501341 / 3.745712 (-1.244371) | 2.538494 / 5.269862 (-2.731367) | 1.524306 / 4.565676 (-3.041371) | 0.046401 / 0.424275 (-0.377874) | 0.004781 / 0.007607 (-0.002826) | 0.349448 / 0.226044 (0.123404) | 3.416181 / 2.268929 (1.147252) | 1.964204 / 55.444624 (-53.480420) | 1.648564 / 6.876477 (-5.227912) | 1.675977 / 2.142072 (-0.466095) | 0.475717 / 4.805227 (-4.329511) | 0.098416 / 6.500664 (-6.402248) | 0.041212 / 0.075469 (-0.034257) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975928 / 1.841788 (-0.865860) | 12.066648 / 8.074308 (3.992340) | 10.943181 / 10.191392 (0.751789) | 0.149687 / 0.680424 (-0.530736) | 0.015107 / 0.534201 (-0.519094) | 0.268950 / 0.579283 (-0.310333) | 0.280419 / 0.434364 (-0.153945) | 0.305263 / 0.540337 (-0.235074) | 0.408486 / 1.386936 (-0.978450) |\n\n</details>\n</details>\n\n\n"
] | 2023-11-10T14:56:43Z
| 2023-11-14T10:49:15Z
| 2023-11-14T10:43:00Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6398.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6398",
"merged_at": "2023-11-14T10:43:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6398.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6398"
}
|
Minor refactoring to remove redundant condition.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6398/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6398/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4409
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4409/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4409/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4409/events
|
https://github.com/huggingface/datasets/pull/4409
| 1,249,083,179
|
PR_kwDODunzps44fxiH
| 4,409
|
Update: add support for using PCM bytes (#4323)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/34292279?v=4",
"events_url": "https://api.github.com/users/YooSungHyun/events{/privacy}",
"followers_url": "https://api.github.com/users/YooSungHyun/followers",
"following_url": "https://api.github.com/users/YooSungHyun/following{/other_user}",
"gists_url": "https://api.github.com/users/YooSungHyun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/YooSungHyun",
"id": 34292279,
"login": "YooSungHyun",
"node_id": "MDQ6VXNlcjM0MjkyMjc5",
"organizations_url": "https://api.github.com/users/YooSungHyun/orgs",
"received_events_url": "https://api.github.com/users/YooSungHyun/received_events",
"repos_url": "https://api.github.com/users/YooSungHyun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/YooSungHyun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YooSungHyun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/YooSungHyun"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@lhoestq Maybe I'm missing something, but what's the reason to read and encode PCM files to WAV in `Audio.encode_example`. Isn't the whole purpose of the decodable types to operate on raw files whenever possible? IMO this PR should only modify `Audio.decode_example` to support PCM files/bytes decoding.",
"Because the PCM file is not enough, we also need the `sampling_rate` associated to it. Therefore the two alternatives are either:\r\n- convert to WAV\r\n- add a `sampling_rate` field to the Audio arrow storage (not sure how it would behave for backward compatibility though)",
"But [`scipy.io.wavfile.read`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.wavfile.read.html), which is used for reading such files, returns a file's sampling rate. The only tricky part is [resampling](https://stackoverflow.com/questions/33682490/how-to-read-a-wav-file-using-scipy-at-a-different-sampling-rate) to a different sampling rate than the default one.",
"How does it get the sampling rate of a PCM file then ? According to [SO](https://stackoverflow.com/a/57027667/17517845) it's not possible to infer it from the file alone",
"> Awesome thanks ! Could you also add tests in `tests/features/test_audio.py` ?\r\n> \r\n> Maybe add a small pcm file in `tests/features/data` and check that everything works as expected in tests cases like `test_audio_encode_example_pcm` and `test_audio_decode_example_pcm` for example.\r\n\r\n@lhoestq how can i test test_audio.py? where is \"__main__\" func?\r\ndo you have some example or guideline?",
"> But [`scipy.io.wavfile.read`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.wavfile.read.html), which is used for reading such files, returns a file's sampling rate. The only tricky part is [resampling](https://stackoverflow.com/questions/33682490/how-to-read-a-wav-file-using-scipy-at-a-different-sampling-rate) to a different sampling rate than the default one.\r\n\r\n@mariosasko @lhoestq \r\nthanks for comment!\r\n\r\nFirst of all, \"PCM file\" can not read alone to any audio library.\r\n\"PCM file\" has not any audio META information header. (it just purely audio byte data. therefore, we don't have to encoding and decoding)\r\nbut, \"PCM file\" is audio extension, so we can use `datasets.Audio`\r\n\r\nif you want to read \"PCM file\" to audio file likely, it have to needs additional parameter. (channel, sampling_rate, else....)\r\nbut, in many situation, we only know sampling_rate for PCM\r\n\r\nand, if we want to use `datasets.Audio` for \"PCM file\", we must process encode_example.\r\ntherefore, i have to use sampling_rate for encoding for making wav-style byte. (we only know sampling_rate)\r\n\r\nIn my source code, I don't compare sampling rate(`datasets.Audio's self.sampling_rate` and `read pcm sampling_rate(value[\"sampling_rate\"])`) and checking mono\r\n@mariosasko ! do you want to process resampling and making mono? then i can modify my source\r\n",
"There is no \"main\" function in test scripts :) To run a test script you must use the `pytest` command:\r\n```\r\npytest tests/features/test_audio.py\r\n```\r\n\r\nto run only one function you can also do\r\n```\r\npytest tests/features/test_audio.py::test_audio_feature_type_to_arrow\r\n```\r\nfor example",
"@lhoestq\r\nmaybe, if i write test code, i have to commit test_audio.py and send pr?\r\nbecause, we need to keep `test_audio_encode_example_pcm` and `test_audio_decode_example_pcm` method after my pr merged?",
"You can add your tests in this PR with the other changes you did",
"@lhoestq \r\ntest complete & commit my test_audio.py\r\n\r\nAND, some change in my code.\r\n\r\naudio.py\r\ni think \"sampling_rate\" is already Audio object initial variable. so, we don`t have to use input parameter.\r\n\r\ntest_audio.py\r\nwe can check \"PCM\" file to path (exactly, extenstion)\r\nso, test case has to know `path`. if only have `bytes`, we don`t know that is \"PCM\" or not",
"@lhoestq\r\nand, why circleci raised exception?\r\nmaybe, [repo](https://huggingface.co/api/datasets/lhoestq/_dummy?full=true) url is not found!\r\nPLZ, CHK!",
"@lhoestq\r\nhello????",
"@lhoestq \r\ntest_audio.py\r\nif we don`t use path in pcm, test-case need to be changed\r\nso, we check path just None",
"i'm merge branch already and `multiprocess` in `setup.py` but circleci error only win version\r\n\r\nhow can i fixed it?",
"@lhoestq thx for comment!\r\ntest_audio.py test complete. it runs sucessfully\r\nand, self.get(\"sampling_rate\") -> value.get(\"sampling_rate\") changed\r\n\r\nand, some comment is not agreed to me, plz check my sub comment!",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-26T04:26:36Z
| 2022-07-07T13:27:29Z
| 2022-07-07T13:16:09Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4409.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4409",
"merged_at": "2022-07-07T13:16:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4409.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4409"
}
|
First of all, please look at #4323 for why I cannot use {"path", "array", "sampling_rate"}:
`sf.write(format="wav")` followed by `sf.read(BytesIO)` changes my PCM data values,
presumably because WAV has a header while raw PCM does not.
On variable naming: the PCM data is of "byte" type, so the name "array" does not fit, I think.
So I use the scipy library and numpy (which are Hugging Face dependencies).
Following @lhoestq's answer:
1. encode -> use the sampling_rate and the PCM bytes to build WAV-style bytes (scipy.io.wavfile.write to bytes)
2. convert the bytes the way fairseq reads raw PCM audio: [FileAudioDataset](https://github.com/facebookresearch/fairseq/blob/main/fairseq/data/audio/raw_audio_dataset.py)
3. decode -> read with wavfile.read
This way my PCM bytes are not corrupted into wrong float data, and other audio types (WAV) stay safe.
Please check!
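For illustration, a minimal sketch of step 1, assuming 16-bit little-endian mono PCM (an assumption about the input format; this is not the exact code of the PR):

```python
import io

import numpy as np
from scipy.io import wavfile


def pcm_bytes_to_wav_bytes(pcm_bytes: bytes, sampling_rate: int = 16_000) -> bytes:
    # Interpret the raw stream as 16-bit little-endian samples -- an assumption;
    # real PCM data may use other sample widths, byte orders, or channel counts.
    samples = np.frombuffer(pcm_bytes, dtype="<i2")
    # wavfile.write prepends the RIFF/WAV header that raw PCM lacks, leaving
    # the sample values themselves untouched.
    buffer = io.BytesIO()
    wavfile.write(buffer, sampling_rate, samples)
    return buffer.getvalue()
```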
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4409/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4409/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/261
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/261/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/261/comments
|
https://api.github.com/repos/huggingface/datasets/issues/261/events
|
https://github.com/huggingface/datasets/issues/261
| 636,372,380
|
MDU6SXNzdWU2MzYzNzIzODA=
| 261
|
Downloading dataset error with pyarrow.lib.RecordBatch
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5248968?v=4",
"events_url": "https://api.github.com/users/cuent/events{/privacy}",
"followers_url": "https://api.github.com/users/cuent/followers",
"following_url": "https://api.github.com/users/cuent/following{/other_user}",
"gists_url": "https://api.github.com/users/cuent/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cuent",
"id": 5248968,
"login": "cuent",
"node_id": "MDQ6VXNlcjUyNDg5Njg=",
"organizations_url": "https://api.github.com/users/cuent/orgs",
"received_events_url": "https://api.github.com/users/cuent/received_events",
"repos_url": "https://api.github.com/users/cuent/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cuent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cuent/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cuent"
}
|
[] |
closed
| false
| null |
[] | null |
[
"When you install `nlp` for the first time on a Colab runtime, it updates the `pyarrow` library that was already on colab. This update shows this message on colab:\r\n```\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly installed versions.\r\n```\r\nYou just have to restart the runtime and it should be fine.\r\nIf you don't restart, then it breaks like in your message.",
"Yeah, that worked! Thanks :) "
] | 2020-06-10T16:04:19Z
| 2020-06-11T14:35:12Z
| 2020-06-11T14:35:12Z
|
NONE
| null | null | null |
I am trying to download `sentiment140` and I have the following error
```
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
518 download_mode=download_mode,
519 ignore_verifications=ignore_verifications,
--> 520 save_infos=save_infos,
521 )
522
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
418 verify_infos = not save_infos and not ignore_verifications
419 self._download_and_prepare(
--> 420 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
421 )
422 # Sync info
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
472 try:
473 # Prepare split will record examples associated to the split
--> 474 self._prepare_split(split_generator, **prepare_split_kwargs)
475 except OSError:
476 raise OSError("Cannot find data file. " + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or ""))
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _prepare_split(self, split_generator)
652 for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
653 example = self.info.features.encode_example(record)
--> 654 writer.write(example)
655 num_examples, num_bytes = writer.finalize()
656
/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write(self, example, writer_batch_size)
143 self._build_writer(pa_table=pa.Table.from_pydict(example))
144 if writer_batch_size is not None and len(self.current_rows) >= writer_batch_size:
--> 145 self.write_on_file()
146
147 def write_batch(
/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_on_file(self)
127 else:
128 # All good
--> 129 self._write_array_on_file(pa_array)
130 self.current_rows = []
131
/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in _write_array_on_file(self, pa_array)
96 def _write_array_on_file(self, pa_array):
97 """Write a PyArrow Array"""
---> 98 pa_batch = pa.RecordBatch.from_struct_array(pa_array)
99 self._num_bytes += pa_array.nbytes
100 self.pa_writer.write_batch(pa_batch)
AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
```
I installed the latest version and ran the following command:
```python
import nlp
sentiment140 = nlp.load_dataset('sentiment140', cache_dir='/content')
```
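As the first comment above explains, the root cause is a stale pyarrow import on Colab. A quick sanity check (a sketch, not part of the original report):

```python
import pyarrow

# Colab preloads an old pyarrow; `pip install nlp` upgrades it on disk, but the
# old module stays imported until the runtime restarts, which is what surfaces
# as the AttributeError above. After a restart this prints the new version.
print(pyarrow.__version__)
```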
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/261/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/261/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3474
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3474/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3474/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3474/events
|
https://github.com/huggingface/datasets/pull/3474
| 1,086,945,384
|
PR_kwDODunzps4wMMt0
| 3,474
|
Decode images when iterating
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-12-22T15:34:49Z
| 2023-09-24T09:54:04Z
| 2021-12-28T16:08:10Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3474.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3474",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3474.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3474"
}
|
If I iterate over a vision dataset, the images are not decoded, and the dictionary with the bytes is returned.
This PR enables image decoding in `Dataset.__iter__`
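A minimal sketch of the intended behavior (the dataset name is an arbitrary placeholder, not part of this PR):

```python
from datasets import load_dataset

ds = load_dataset("beans", split="train")  # any dataset with an Image feature

for example in ds:
    # With decoding enabled in `__iter__`, this prints PIL.Image.Image
    # instead of the raw {"bytes": ..., "path": ...} dictionary.
    print(type(example["image"]))
    break
```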
Close https://github.com/huggingface/datasets/issues/3473
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3474/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3474/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3506
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3506/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3506/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3506/events
|
https://github.com/huggingface/datasets/pull/3506
| 1,091,166,595
|
PR_kwDODunzps4wZpot
| 3,506
|
Allows DatasetDict.filter to have a batching option
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomasw21",
"id": 24695242,
"login": "thomasw21",
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomasw21"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-12-30T15:22:22Z
| 2022-01-04T10:24:28Z
| 2022-01-04T10:24:27Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3506.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3506",
"merged_at": "2022-01-04T10:24:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3506.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3506"
}
|
- Related to: #3244
- Fixes: #3503
We extend `.filter(... batched: bool)` support to `DatasetDict`.
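A quick usage sketch (the dataset and predicate are illustrative only):

```python
from datasets import load_dataset

dset_dict = load_dataset("imdb")  # a DatasetDict with several splits

# `batched=True` is now forwarded to each split's `filter`; a batched
# predicate receives a batch of examples and returns one boolean each.
short_reviews = dset_dict.filter(
    lambda batch: [len(text) < 1000 for text in batch["text"]],
    batched=True,
)
```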
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3506/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3506/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5182
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5182/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5182/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5182/events
|
https://github.com/huggingface/datasets/issues/5182
| 1,431,029,547
|
I_kwDODunzps5VS8cr
| 5,182
|
Add notebook / other resource links to the task-specific data loading guides
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayakpaul",
"id": 22957388,
"login": "sayakpaul",
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayakpaul"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayakpaul",
"id": 22957388,
"login": "sayakpaul",
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayakpaul"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayakpaul",
"id": 22957388,
"login": "sayakpaul",
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayakpaul"
}
] | null |
[
"Yea this would be great! We would need an object detection tutorial notebook too if it doesn't already exist there. ",
"There is one: https://huggingface.co/docs/datasets/object_detection.\r\n\r\nI will start the work. "
] | 2022-11-01T07:57:26Z
| 2022-11-03T01:49:57Z
| 2022-11-03T01:49:57Z
|
MEMBER
| null | null | null |
Does it make sense to include links to notebooks / scripts that show how to use a dataset for training / fine-tuning a model?
For example, in https://huggingface.co/docs/datasets/image_classification we could include a mention of https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb.
Applies to https://huggingface.co/docs/datasets/object_detection as well.
Cc: @osanseviero @nateraw
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5182/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5182/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1550
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1550/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1550/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1550/events
|
https://github.com/huggingface/datasets/pull/1550
| 765,620,925
|
MDExOlB1bGxSZXF1ZXN0NTM5MDEwMDY1
| 1,550
|
Add offensive language Dravidian dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7421838?v=4",
"events_url": "https://api.github.com/users/jamespaultg/events{/privacy}",
"followers_url": "https://api.github.com/users/jamespaultg/followers",
"following_url": "https://api.github.com/users/jamespaultg/following{/other_user}",
"gists_url": "https://api.github.com/users/jamespaultg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jamespaultg",
"id": 7421838,
"login": "jamespaultg",
"node_id": "MDQ6VXNlcjc0MjE4Mzg=",
"organizations_url": "https://api.github.com/users/jamespaultg/orgs",
"received_events_url": "https://api.github.com/users/jamespaultg/received_events",
"repos_url": "https://api.github.com/users/jamespaultg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jamespaultg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamespaultg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jamespaultg"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks much!"
] | 2020-12-13T19:54:19Z
| 2020-12-18T15:52:49Z
| 2020-12-18T14:25:30Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1550.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1550",
"merged_at": "2020-12-18T14:25:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1550.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1550"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1550/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1550/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/4008
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4008/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4008/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4008/events
|
https://github.com/huggingface/datasets/pull/4008
| 1,179,591,068
|
PR_kwDODunzps409Ixp
| 4,008
|
Support streaming daily_dialog dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Yay! I love this dataset!"
] | 2022-03-24T14:23:23Z
| 2022-03-24T15:29:01Z
| 2022-03-24T14:46:58Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4008.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4008",
"merged_at": "2022-03-24T14:46:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4008.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4008"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4008/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4008/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1168
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1168/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1168/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1168/events
|
https://github.com/huggingface/datasets/pull/1168
| 757,740,780
|
MDExOlB1bGxSZXF1ZXN0NTMzMDYzNjgy
| 1,168
|
Add Naver sentiment movie corpus
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4",
"events_url": "https://api.github.com/users/jaketae/events{/privacy}",
"followers_url": "https://api.github.com/users/jaketae/followers",
"following_url": "https://api.github.com/users/jaketae/following{/other_user}",
"gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jaketae",
"id": 25360440,
"login": "jaketae",
"node_id": "MDQ6VXNlcjI1MzYwNDQw",
"organizations_url": "https://api.github.com/users/jaketae/orgs",
"received_events_url": "https://api.github.com/users/jaketae/received_events",
"repos_url": "https://api.github.com/users/jaketae/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaketae/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jaketae"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Closed via #1252 "
] | 2020-12-05T17:25:23Z
| 2020-12-07T13:34:09Z
| 2020-12-07T13:34:09Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1168.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1168",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1168.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1168"
}
|
This PR adds the [Naver sentiment movie corpus](https://github.com/e9t/nsmc), a dataset containing Korean movie reviews from Naver, the most commonly used search engine in Korea. This dataset is often used to benchmark models on Korean NLP tasks, as seen in [this paper](https://www.aclweb.org/anthology/2020.lrec-1.199.pdf).
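For context, loading it later looks roughly like this (assuming the Hub id `nsmc`, under which the corpus was ultimately added via #1252):

```python
from datasets import load_dataset

# "nsmc" as the dataset id is an assumption; see #1252.
nsmc = load_dataset("nsmc")
print(nsmc["train"][0])
```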
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1168/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1168/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4024
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4024/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4024/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4024/events
|
https://github.com/huggingface/datasets/pull/4024
| 1,180,951,817
|
PR_kwDODunzps41Bp3V
| 4,024
|
Doc: image_process small tip
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15908060?v=4",
"events_url": "https://api.github.com/users/FrancescoSaverioZuppichini/events{/privacy}",
"followers_url": "https://api.github.com/users/FrancescoSaverioZuppichini/followers",
"following_url": "https://api.github.com/users/FrancescoSaverioZuppichini/following{/other_user}",
"gists_url": "https://api.github.com/users/FrancescoSaverioZuppichini/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/FrancescoSaverioZuppichini",
"id": 15908060,
"login": "FrancescoSaverioZuppichini",
"node_id": "MDQ6VXNlcjE1OTA4MDYw",
"organizations_url": "https://api.github.com/users/FrancescoSaverioZuppichini/orgs",
"received_events_url": "https://api.github.com/users/FrancescoSaverioZuppichini/received_events",
"repos_url": "https://api.github.com/users/FrancescoSaverioZuppichini/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/FrancescoSaverioZuppichini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FrancescoSaverioZuppichini/subscriptions",
"type": "User",
"url": "https://api.github.com/users/FrancescoSaverioZuppichini"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This tip is unnecessary, i.e., Pillow will already be installed since the `Image` feature requires it for encoding and decoding. Thanks anyway.\r\n\r\ncc @stevhliu I've noticed we are missing the installation section in the doc (`pip install datasets[vision]`). I can add it myself."
] | 2022-03-25T15:44:32Z
| 2022-03-31T15:35:35Z
| 2022-03-31T15:30:20Z
|
NONE
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4024.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4024",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4024.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4024"
}
|
I've added a small tip in the `image_process` doc
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4024/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4024/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5411
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5411/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5411/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5411/events
|
https://github.com/huggingface/datasets/pull/5411
| 1,523,297,786
|
PR_kwDODunzps5G23-T
| 5,411
|
Update docs of S3 filesystem with async aiobotocore
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5677912?v=4",
"events_url": "https://api.github.com/users/maheshpec/events{/privacy}",
"followers_url": "https://api.github.com/users/maheshpec/followers",
"following_url": "https://api.github.com/users/maheshpec/following{/other_user}",
"gists_url": "https://api.github.com/users/maheshpec/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/maheshpec",
"id": 5677912,
"login": "maheshpec",
"node_id": "MDQ6VXNlcjU2Nzc5MTI=",
"organizations_url": "https://api.github.com/users/maheshpec/orgs",
"received_events_url": "https://api.github.com/users/maheshpec/received_events",
"repos_url": "https://api.github.com/users/maheshpec/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/maheshpec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maheshpec/subscriptions",
"type": "User",
"url": "https://api.github.com/users/maheshpec"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008587 / 0.011353 (-0.002766) | 0.004613 / 0.011008 (-0.006395) | 0.100446 / 0.038508 (0.061938) | 0.029606 / 0.023109 (0.006497) | 0.302102 / 0.275898 (0.026204) | 0.357364 / 0.323480 (0.033884) | 0.007031 / 0.007986 (-0.000954) | 0.003593 / 0.004328 (-0.000735) | 0.078110 / 0.004250 (0.073860) | 0.035495 / 0.037052 (-0.001557) | 0.312522 / 0.258489 (0.054033) | 0.349336 / 0.293841 (0.055495) | 0.033719 / 0.128546 (-0.094827) | 0.011449 / 0.075646 (-0.064197) | 0.321760 / 0.419271 (-0.097512) | 0.043697 / 0.043533 (0.000165) | 0.304476 / 0.255139 (0.049337) | 0.333126 / 0.283200 (0.049926) | 0.092756 / 0.141683 (-0.048927) | 1.506734 / 1.452155 (0.054579) | 1.547381 / 1.492716 (0.054664) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.178177 / 0.018006 (0.160171) | 0.427814 / 0.000490 (0.427324) | 0.002505 / 0.000200 (0.002305) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023039 / 0.037411 (-0.014372) | 0.097113 / 0.014526 (0.082587) | 0.105014 / 0.176557 (-0.071543) | 0.141185 / 0.737135 (-0.595950) | 0.108843 / 0.296338 (-0.187495) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424148 / 0.215209 (0.208939) | 4.247599 / 2.077655 (2.169944) | 2.130720 / 1.504120 (0.626600) | 1.916349 / 1.541195 (0.375154) | 1.831515 / 1.468490 
(0.363025) | 0.688301 / 4.584777 (-3.896476) | 3.381749 / 3.745712 (-0.363963) | 2.900045 / 5.269862 (-2.369817) | 1.576248 / 4.565676 (-2.989428) | 0.082354 / 0.424275 (-0.341921) | 0.012200 / 0.007607 (0.004593) | 0.525753 / 0.226044 (0.299709) | 5.277672 / 2.268929 (3.008743) | 2.603870 / 55.444624 (-52.840754) | 2.296203 / 6.876477 (-4.580273) | 2.308014 / 2.142072 (0.165942) | 0.809056 / 4.805227 (-3.996171) | 0.148122 / 6.500664 (-6.352542) | 0.066097 / 0.075469 (-0.009372) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.214059 / 1.841788 (-0.627728) | 13.671332 / 8.074308 (5.597024) | 13.694554 / 10.191392 (3.503162) | 0.151454 / 0.680424 (-0.528970) | 0.028514 / 0.534201 (-0.505687) | 0.391480 / 0.579283 (-0.187804) | 0.404499 / 0.434364 (-0.029865) | 0.458111 / 0.540337 (-0.082226) | 0.539454 / 1.386936 (-0.847482) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006795 / 0.011353 (-0.004558) | 0.004463 / 0.011008 (-0.006545) | 0.099542 / 0.038508 (0.061034) | 0.027588 / 0.023109 (0.004479) | 0.423023 / 0.275898 (0.147125) | 0.458459 / 0.323480 (0.134979) | 0.004981 / 0.007986 (-0.003005) | 0.003321 / 0.004328 (-0.001008) | 0.075727 / 0.004250 (0.071477) | 0.040541 / 0.037052 (0.003489) | 0.423724 / 0.258489 (0.165235) | 0.468334 / 0.293841 (0.174493) | 0.031732 / 0.128546 (-0.096814) | 0.011478 / 0.075646 (-0.064168) | 0.319807 / 0.419271 (-0.099465) | 0.041215 / 0.043533 (-0.002318) | 0.423060 / 0.255139 (0.167921) | 0.446157 / 0.283200 (0.162957) | 0.088884 / 0.141683 (-0.052799) | 1.553404 / 1.452155 (0.101250) | 1.607797 / 1.492716 (0.115080) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208314 / 0.018006 (0.190307) | 0.411627 / 0.000490 (0.411137) | 0.002416 / 0.000200 (0.002216) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024641 / 0.037411 (-0.012770) | 0.101047 / 0.014526 (0.086521) | 0.108410 / 0.176557 (-0.068147) | 0.142860 / 0.737135 (-0.594276) | 0.112486 / 0.296338 (-0.183852) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.485520 / 0.215209 (0.270311) | 4.864009 / 2.077655 (2.786355) | 2.541865 / 1.504120 (1.037745) | 2.339569 / 1.541195 (0.798374) | 2.378258 / 1.468490 (0.909768) | 0.698000 / 4.584777 (-3.886777) | 3.343137 / 3.745712 (-0.402575) | 1.842264 / 5.269862 (-3.427597) | 1.154707 / 4.565676 (-3.410969) | 0.082826 / 0.424275 (-0.341449) | 0.012379 / 0.007607 (0.004772) | 0.583335 / 0.226044 (0.357291) | 5.885934 / 2.268929 (3.617006) | 2.997769 / 55.444624 (-52.446856) | 2.653681 / 6.876477 (-4.222796) | 2.761656 / 2.142072 (0.619583) | 0.799883 / 4.805227 (-4.005344) | 0.151398 / 6.500664 (-6.349266) | 0.067445 / 0.075469 (-0.008024) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.292009 / 1.841788 (-0.549779) | 13.976180 / 8.074308 (5.901872) | 14.219469 / 10.191392 (4.028077) | 0.127810 / 0.680424 (-0.552614) | 0.016919 / 0.534201 (-0.517282) | 0.376401 / 0.579283 (-0.202882) | 0.388563 / 0.434364 (-0.045801) | 0.444904 / 0.540337 (-0.095433) | 0.532290 / 1.386936 (-0.854646) |\n\n</details>\n</details>\n\n\n"
] | 2023-01-06T23:19:17Z
| 2023-01-18T11:18:59Z
| 2023-01-18T11:12:04Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5411.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5411",
"merged_at": "2023-01-18T11:12:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5411.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5411"
}
|
[s3fs has migrated to all async calls](https://github.com/fsspec/s3fs/commit/0de2c6fb3d87c08ea694de96dca0d0834034f8bf).
Updating the documentation to use `AioSession` with s3fs, both for the download manager and for working with datasets
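For illustration, a minimal sketch of the updated pattern (bucket path and profile name are placeholders, and the exact `load_from_disk` keyword has varied across `datasets` versions):

```python
import aiobotocore.session
import s3fs
from datasets import load_from_disk

# s3fs is fully async now, so it expects an aiobotocore AioSession
# rather than a plain synchronous botocore session.
s3_session = aiobotocore.session.AioSession(profile="my-profile")
fs = s3fs.S3FileSystem(session=s3_session)

dataset = load_from_disk("s3://my-bucket/datasets/imdb/train", fs=fs)
```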
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5411/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5411/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2071
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2071/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2071/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2071/events
|
https://github.com/huggingface/datasets/issues/2071
| 833,950,824
|
MDU6SXNzdWU4MzM5NTA4MjQ=
| 2,071
|
Multiprocessing is slower than single process
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"dupe of #1992"
] | 2021-03-17T16:08:58Z
| 2021-03-18T09:10:23Z
| 2021-03-18T09:10:23Z
|
CONTRIBUTOR
| null | null | null |
```python
# benchmark_filter.py
import logging
import sys
import time
from datasets import load_dataset, set_caching_enabled
if __name__ == "__main__":
set_caching_enabled(False)
logging.basicConfig(level=logging.DEBUG)
bc = load_dataset("bookcorpus")
now = time.time()
try:
bc["train"].filter(lambda x: len(x["text"]) < 64, num_proc=int(sys.argv[1]))
except Exception as e:
print(f"cancelled: {e}")
elapsed = time.time() - now
print(elapsed)
```
Running `python benchmark_filter.py 1` (20min+) is faster than `python benchmark_filter.py 2` (2hrs+)
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2071/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2071/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/615
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/615/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/615/comments
|
https://api.github.com/repos/huggingface/datasets/issues/615/events
|
https://github.com/huggingface/datasets/issues/615
| 699,410,773
|
MDU6SXNzdWU2OTk0MTA3NzM=
| 615
|
Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Related: https://issues.apache.org/jira/browse/ARROW-9773\r\n\r\nIt's definitely a size thing. I took a smaller dataset with 87000 rows and did:\r\n```\r\nfor i in range(10,1000,20):\r\n table = pa.concat_tables([dset._data]*i)\r\n table.take([0])\r\n```\r\nand it broke at around i=300.\r\n\r\nAlso when `_indices` is not None, this breaks indexing by slice. E.g. `dset.shuffle()[:1]` breaks.\r\n\r\nLuckily so far I haven't seen `_indices.column(0).take` break, which means it doesn't break `select` or anything like that which is where the speed really matters, it's just `_getitem`. So I'm currently working around it by just doing the arrow v0 method in `_getitem`:\r\n```\r\n#if PYARROW_V0:\r\ndata_subset = pa.concat_tables(\r\n self._data.slice(indices_array[i].as_py(), 1) for i in range(len(indices_array))\r\n)\r\n#else:\r\n #data_subset = self._data.take(indices_array)\r\n```",
"Let me know if you meet other offset overflow issues @joeddav ",
"Will this problem be solved in newer version?",
"This specific issue has been fixed in https://github.com/huggingface/datasets/pull/645\r\n\r\nIf you still have this error, could you open a new issue and explain how to reproduce the error ?",
"same error here in version 2.1.0",
"Facing the same issue. \r\nSteps to reproduce: (dataset is a few GB big so try in colab maybe)\r\nDatasets version - 2.11.0\r\n```\r\nimport datasets\r\nimport re\r\n\r\nds = datasets.load_dataset('nishanthc/dnd_map_dataset_v0.1', split = 'train')\r\n\r\ndef get_text_caption(example):\r\n regex_pattern = r'\\s\\d+x\\d+|,\\sLQ|,\\sgrid|\\.\\w+$'\r\n example['text_caption'] = re.sub(regex_pattern, '', example['picture_text'])\r\n return example\r\n\r\nds = ds.map(get_text_caption)\r\n```\r\n\r\nI am trying to apply a regex to remove certain patterns from a text column. Not sure why this error is showing up.",
"Got this error on a very large data set (900m rows, 35 cols) performing a similar batch map operation.",
"There is a solution that has been proposed here: https://github.com/huggingface/datasets/issues/5783",
"@lhoestq I ran into this problem with load_dataset. What should I do\r\n",
"What version of `datasets` are you using ? Feel free to open a new issue with some details (e.g. what dataset you loaded, what code you ran etc)",
"@lhoestq It's been solved,thanks"
] | 2020-09-11T14:50:38Z
| 2023-09-21T07:59:23Z
| 2020-09-19T16:46:31Z
|
MEMBER
| null | null | null |
How to reproduce:
```python
from datasets import load_dataset
wiki = load_dataset("wikipedia", "20200501.en", split="train")
wiki[[0]]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<ipython-input-13-381aedc9811b> in <module>
----> 1 wikipedia[[0]]
~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in __getitem__(self, key)
1069 format_columns=self._format_columns,
1070 output_all_columns=self._output_all_columns,
-> 1071 format_kwargs=self._format_kwargs,
1072 )
1073
~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs)
1037 )
1038 else:
-> 1039 data_subset = self._data.take(indices_array)
1040
1041 if format_type is not None:
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.take()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/compute.py in take(data, indices, boundscheck)
266 """
267 options = TakeOptions(boundscheck)
--> 268 return call_function('take', [data, indices], options)
269
270
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/_compute.pyx in pyarrow._compute.call_function()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/_compute.pyx in pyarrow._compute.Function.call()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: offset overflow while concatenating arrays
```
It seems to work fine with small datasets or with pyarrow 0.17.1.
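
A self-contained sketch of the workaround mentioned in the comments, concatenating one-row slices instead of calling `Table.take` (the toy table below is illustrative; the real failure only appears on very large tables):
```python
import pyarrow as pa

table = pa.table({"text": ["a", "b", "c", "d"]})
indices_array = pa.array([0, 2])

# Equivalent to table.take(indices_array), but avoids the int32 offset
# overflow hit when taking from a huge concatenated table:
data_subset = pa.concat_tables(
    table.slice(indices_array[i].as_py(), 1) for i in range(len(indices_array))
)
print(data_subset.to_pydict())  # {'text': ['a', 'c']}
```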
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/615/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/615/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5872
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5872/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5872/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5872/events
|
https://github.com/huggingface/datasets/pull/5872
| 1,713,174,662
|
PR_kwDODunzps5QrQ5o
| 5,872
|
Fix infer module for uppercase extensions
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007049 / 0.011353 (-0.004304) | 0.005034 / 0.011008 (-0.005974) | 0.097737 / 0.038508 (0.059229) | 0.033280 / 0.023109 (0.010170) | 0.301017 / 0.275898 (0.025119) | 0.336593 / 0.323480 (0.013113) | 0.005567 / 0.007986 (-0.002419) | 0.005384 / 0.004328 (0.001056) | 0.072980 / 0.004250 (0.068730) | 0.045030 / 0.037052 (0.007978) | 0.303280 / 0.258489 (0.044791) | 0.367528 / 0.293841 (0.073687) | 0.034131 / 0.128546 (-0.094415) | 0.012118 / 0.075646 (-0.063528) | 0.331677 / 0.419271 (-0.087594) | 0.049211 / 0.043533 (0.005678) | 0.297535 / 0.255139 (0.042396) | 0.318136 / 0.283200 (0.034936) | 0.101574 / 0.141683 (-0.040109) | 1.472769 / 1.452155 (0.020615) | 1.541724 / 1.492716 (0.049007) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.014646 / 0.018006 (-0.003360) | 0.439050 / 0.000490 (0.438560) | 0.008575 / 0.000200 (0.008375) | 0.000297 / 0.000054 (0.000242) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027591 / 0.037411 (-0.009820) | 0.111639 / 0.014526 (0.097113) | 0.117098 / 0.176557 (-0.059458) | 0.173281 / 0.737135 (-0.563855) | 0.123197 / 0.296338 (-0.173141) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397507 / 0.215209 (0.182298) | 3.971457 / 2.077655 (1.893803) | 1.781158 / 1.504120 (0.277038) | 1.590419 / 1.541195 (0.049224) | 1.716374 / 1.468490 
(0.247884) | 0.687150 / 4.584777 (-3.897627) | 3.691009 / 3.745712 (-0.054703) | 2.050900 / 5.269862 (-3.218961) | 1.304893 / 4.565676 (-3.260784) | 0.084507 / 0.424275 (-0.339768) | 0.012231 / 0.007607 (0.004624) | 0.493033 / 0.226044 (0.266988) | 4.929957 / 2.268929 (2.661028) | 2.209069 / 55.444624 (-53.235555) | 1.885992 / 6.876477 (-4.990485) | 2.007004 / 2.142072 (-0.135069) | 0.827265 / 4.805227 (-3.977963) | 0.168225 / 6.500664 (-6.332439) | 0.064988 / 0.075469 (-0.010481) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.182341 / 1.841788 (-0.659447) | 14.691983 / 8.074308 (6.617674) | 14.350720 / 10.191392 (4.159328) | 0.164307 / 0.680424 (-0.516117) | 0.017480 / 0.534201 (-0.516720) | 0.421843 / 0.579283 (-0.157441) | 0.417481 / 0.434364 (-0.016883) | 0.496587 / 0.540337 (-0.043751) | 0.581208 / 1.386936 (-0.805728) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007070 / 0.011353 (-0.004283) | 0.005083 / 0.011008 (-0.005926) | 0.075009 / 0.038508 (0.036500) | 0.032343 / 0.023109 (0.009234) | 0.366788 / 0.275898 (0.090890) | 0.392273 / 0.323480 (0.068794) | 0.005512 / 0.007986 (-0.002474) | 0.003999 / 0.004328 (-0.000329) | 0.073743 / 0.004250 (0.069492) | 0.046203 / 0.037052 (0.009151) | 0.367874 / 0.258489 (0.109385) | 0.409154 / 0.293841 (0.115313) | 0.035227 / 0.128546 (-0.093319) | 0.012223 / 0.075646 (-0.063424) | 0.087149 / 0.419271 (-0.332122) | 0.045648 / 0.043533 (0.002115) | 0.362414 / 0.255139 (0.107275) | 0.379970 / 0.283200 (0.096770) | 0.100631 / 0.141683 (-0.041052) | 1.439733 / 1.452155 (-0.012422) | 1.506266 / 1.492716 (0.013550) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227071 / 0.018006 (0.209065) | 0.451243 / 0.000490 (0.450753) | 0.000406 / 0.000200 (0.000206) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028952 / 0.037411 (-0.008459) | 0.111934 / 0.014526 (0.097408) | 0.124080 / 0.176557 (-0.052477) | 0.174022 / 0.737135 (-0.563113) | 0.126811 / 0.296338 (-0.169527) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436423 / 0.215209 (0.221214) | 4.331959 / 2.077655 (2.254304) | 2.111914 / 1.504120 (0.607794) | 1.921338 / 1.541195 (0.380143) | 1.994425 / 1.468490 (0.525935) | 0.699164 / 4.584777 (-3.885613) | 3.722143 / 3.745712 (-0.023569) | 3.516538 / 5.269862 (-1.753323) | 1.867245 / 4.565676 (-2.698431) | 0.085923 / 0.424275 (-0.338352) | 0.012059 / 0.007607 (0.004452) | 0.586147 / 0.226044 (0.360102) | 5.395823 / 2.268929 (3.126894) | 2.594430 / 55.444624 (-52.850194) | 2.275021 / 6.876477 (-4.601456) | 2.347810 / 2.142072 (0.205737) | 0.835118 / 4.805227 (-3.970109) | 0.167089 / 6.500664 (-6.333575) | 0.064893 / 0.075469 (-0.010576) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291423 / 1.841788 (-0.550365) | 14.992696 / 8.074308 (6.918388) | 13.307842 / 10.191392 (3.116450) | 0.163799 / 0.680424 (-0.516625) | 0.017315 / 0.534201 (-0.516886) | 0.461319 / 0.579283 (-0.117965) | 0.430474 / 0.434364 (-0.003889) | 0.568115 / 0.540337 (0.027777) | 0.647909 / 1.386936 (-0.739027) |\n\n</details>\n</details>\n\n\n"
] | 2023-05-17T05:56:45Z
| 2023-05-17T14:26:59Z
| 2023-05-17T14:19:18Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5872.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5872",
"merged_at": "2023-05-17T14:19:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5872.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5872"
}
|
Fix the `infer_module_for_data_files` and `infer_module_for_data_files_in_archives` functions when passed a data file name with an uppercase extension, e.g. `filename.TXT`.
Previously, `None` was returned as the inferred module.
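
A minimal sketch of the idea behind the fix (the helper and the mapping below are illustrative, not the actual `datasets` internals): normalize the extension to lowercase before looking it up.
```python
from pathlib import Path
from typing import Optional

_EXTENSION_TO_MODULE = {"txt": "text", "csv": "csv", "json": "json"}  # illustrative subset

def infer_module(filename: str) -> Optional[str]:
    extension = Path(filename).suffix[1:].lower()  # "filename.TXT" -> "txt"
    return _EXTENSION_TO_MODULE.get(extension)

assert infer_module("filename.TXT") == "text"  # previously returned None
```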
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5872/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5872/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/811
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/811/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/811/comments
|
https://api.github.com/repos/huggingface/datasets/issues/811/events
|
https://github.com/huggingface/datasets/issues/811
| 738,280,132
|
MDU6SXNzdWU3MzgyODAxMzI=
| 811
|
nlp viewer error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/30210529?v=4",
"events_url": "https://api.github.com/users/jc-hou/events{/privacy}",
"followers_url": "https://api.github.com/users/jc-hou/followers",
"following_url": "https://api.github.com/users/jc-hou/following{/other_user}",
"gists_url": "https://api.github.com/users/jc-hou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jc-hou",
"id": 30210529,
"login": "jc-hou",
"node_id": "MDQ6VXNlcjMwMjEwNTI5",
"organizations_url": "https://api.github.com/users/jc-hou/orgs",
"received_events_url": "https://api.github.com/users/jc-hou/received_events",
"repos_url": "https://api.github.com/users/jc-hou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jc-hou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jc-hou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jc-hou"
}
|
[
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] |
closed
| false
| null |
[] | null |
[
"and also for 'blog_authorship_corpus'\r\nhttps://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus\r\n\r\n",
"Is this the problem of my local computer or ??",
"Related to:\r\n- #673"
] | 2020-11-07T17:08:58Z
| 2022-02-15T10:51:44Z
| 2022-02-14T15:24:20Z
|
NONE
| null | null | null |
Hello,
when I select amazon_us_reviews in the nlp viewer, it shows an error.
https://huggingface.co/nlp/viewer/?dataset=amazon_us_reviews

|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/811/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/811/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/372
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/372/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/372/comments
|
https://api.github.com/repos/huggingface/datasets/issues/372/events
|
https://github.com/huggingface/datasets/pull/372
| 654,774,420
|
MDExOlB1bGxSZXF1ZXN0NDQ3NDMzNTA4
| 372
|
Make the json script more flexible
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-07-10T13:15:15Z
| 2020-07-10T14:52:07Z
| 2020-07-10T14:52:06Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/372.diff",
"html_url": "https://github.com/huggingface/datasets/pull/372",
"merged_at": "2020-07-10T14:52:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/372.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/372"
}
|
Fix https://github.com/huggingface/nlp/issues/359
Fix https://github.com/huggingface/nlp/issues/369
The JSON script can now accept JSON files containing a single dict with the records stored as a list under one field of the dict (previously it only accepted JSON files containing the records as rows of dicts).
In this case, use `field=XXX` to indicate the name of the field in the JSON structure that contains the records you want to load. The records can be a dict of lists or a list of dicts.
E.g. to load the SQuAD dataset JSON (without using the `squad`-specific dataset loading script), in which the data rows are in the `data` field of the JSON dict, you can do:
```python
from nlp import load_dataset
dataset = load_dataset('json', data_files='/PATH/TO/JSON', field='data')
```
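For illustration, a file with the following shape (the field names besides `data` are made up) loads with `field="data"`:
```python
import json
from nlp import load_dataset

# Write a small JSON file in the "single dict with a records field" shape:
content = {"version": "1.0", "data": [{"q": "who?", "a": "me"}, {"q": "when?", "a": "now"}]}
with open("example.json", "w") as f:
    json.dump(content, f)

dataset = load_dataset("json", data_files="example.json", field="data")
```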
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/372/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/372/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6304
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6304/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6304/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6304/events
|
https://github.com/huggingface/datasets/pull/6304
| 1,945,913,521
|
PR_kwDODunzps5c7-4q
| 6,304
|
Update README.md
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/74114936?v=4",
"events_url": "https://api.github.com/users/smty2018/events{/privacy}",
"followers_url": "https://api.github.com/users/smty2018/followers",
"following_url": "https://api.github.com/users/smty2018/following{/other_user}",
"gists_url": "https://api.github.com/users/smty2018/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/smty2018",
"id": 74114936,
"login": "smty2018",
"node_id": "MDQ6VXNlcjc0MTE0OTM2",
"organizations_url": "https://api.github.com/users/smty2018/orgs",
"received_events_url": "https://api.github.com/users/smty2018/received_events",
"repos_url": "https://api.github.com/users/smty2018/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/smty2018/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smty2018/subscriptions",
"type": "User",
"url": "https://api.github.com/users/smty2018"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006678 / 0.011353 (-0.004675) | 0.004013 / 0.011008 (-0.006995) | 0.083372 / 0.038508 (0.044864) | 0.070339 / 0.023109 (0.047230) | 0.339026 / 0.275898 (0.063128) | 0.370945 / 0.323480 (0.047465) | 0.004050 / 0.007986 (-0.003935) | 0.003283 / 0.004328 (-0.001046) | 0.064956 / 0.004250 (0.060705) | 0.055427 / 0.037052 (0.018374) | 0.341787 / 0.258489 (0.083297) | 0.385030 / 0.293841 (0.091189) | 0.031791 / 0.128546 (-0.096755) | 0.008511 / 0.075646 (-0.067135) | 0.286538 / 0.419271 (-0.132734) | 0.052893 / 0.043533 (0.009360) | 0.338522 / 0.255139 (0.083383) | 0.371821 / 0.283200 (0.088622) | 0.023731 / 0.141683 (-0.117951) | 1.485857 / 1.452155 (0.033702) | 1.515218 / 1.492716 (0.022502) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232798 / 0.018006 (0.214792) | 0.446783 / 0.000490 (0.446293) | 0.007395 / 0.000200 (0.007195) | 0.000385 / 0.000054 (0.000330) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028866 / 0.037411 (-0.008545) | 0.081653 / 0.014526 (0.067127) | 0.094457 / 0.176557 (-0.082099) | 0.151761 / 0.737135 (-0.585375) | 0.095579 / 0.296338 (-0.200760) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.379926 / 0.215209 (0.164717) | 3.801839 / 2.077655 (1.724184) | 1.830302 / 1.504120 (0.326182) | 1.686912 / 1.541195 (0.145717) | 1.803418 / 1.468490 
(0.334928) | 0.484431 / 4.584777 (-4.100346) | 3.592748 / 3.745712 (-0.152964) | 3.402578 / 5.269862 (-1.867284) | 2.043434 / 4.565676 (-2.522242) | 0.057274 / 0.424275 (-0.367001) | 0.007211 / 0.007607 (-0.000396) | 0.462611 / 0.226044 (0.236567) | 4.610703 / 2.268929 (2.341775) | 2.397668 / 55.444624 (-53.046956) | 2.149983 / 6.876477 (-4.726494) | 2.199100 / 2.142072 (0.057028) | 0.575883 / 4.805227 (-4.229344) | 0.133421 / 6.500664 (-6.367243) | 0.061168 / 0.075469 (-0.014301) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.246792 / 1.841788 (-0.594995) | 18.974385 / 8.074308 (10.900077) | 14.268859 / 10.191392 (4.077467) | 0.166340 / 0.680424 (-0.514084) | 0.018227 / 0.534201 (-0.515974) | 0.389646 / 0.579283 (-0.189637) | 0.418780 / 0.434364 (-0.015584) | 0.458063 / 0.540337 (-0.082275) | 0.635156 / 1.386936 (-0.751780) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006613 / 0.011353 (-0.004740) | 0.003977 / 0.011008 (-0.007031) | 0.064609 / 0.038508 (0.026101) | 0.070418 / 0.023109 (0.047308) | 0.395814 / 0.275898 (0.119916) | 0.424803 / 0.323480 (0.101323) | 0.005342 / 0.007986 (-0.002644) | 0.003252 / 0.004328 (-0.001076) | 0.065177 / 0.004250 (0.060927) | 0.055299 / 0.037052 (0.018247) | 0.403983 / 0.258489 (0.145494) | 0.438522 / 0.293841 (0.144681) | 0.032336 / 0.128546 (-0.096210) | 0.008524 / 0.075646 (-0.067122) | 0.071645 / 0.419271 (-0.347627) | 0.048137 / 0.043533 (0.004604) | 0.395170 / 0.255139 (0.140031) | 0.421727 / 0.283200 (0.138528) | 0.023028 / 0.141683 (-0.118655) | 1.500739 / 1.452155 (0.048584) | 1.568887 / 1.492716 (0.076170) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227542 / 0.018006 (0.209536) | 0.447882 / 0.000490 (0.447393) | 0.005416 / 0.000200 (0.005216) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032954 / 0.037411 (-0.004457) | 0.091994 / 0.014526 (0.077468) | 0.105957 / 0.176557 (-0.070600) | 0.158728 / 0.737135 (-0.578407) | 0.104734 / 0.296338 (-0.191605) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436275 / 0.215209 (0.221066) | 4.344864 / 2.077655 (2.267209) | 2.304949 / 1.504120 (0.800829) | 2.123963 / 1.541195 (0.582768) | 2.189099 / 1.468490 (0.720609) | 0.492662 / 4.584777 (-4.092115) | 3.633662 / 3.745712 (-0.112051) | 3.251338 / 5.269862 (-2.018524) | 2.061378 / 4.565676 (-2.504299) | 0.058100 / 0.424275 (-0.366175) | 0.007311 / 0.007607 (-0.000297) | 0.516227 / 0.226044 (0.290183) | 5.184228 / 2.268929 (2.915300) | 2.780343 / 55.444624 (-52.664281) | 2.423428 / 6.876477 (-4.453048) | 2.617371 / 2.142072 (0.475298) | 0.590455 / 4.805227 (-4.214772) | 0.131728 / 6.500664 (-6.368936) | 0.059994 / 0.075469 (-0.015475) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.354920 / 1.841788 (-0.486868) | 19.427822 / 8.074308 (11.353514) | 15.289037 / 10.191392 (5.097645) | 0.170437 / 0.680424 (-0.509987) | 0.020242 / 0.534201 (-0.513959) | 0.394921 / 0.579283 (-0.184362) | 0.426447 / 0.434364 (-0.007917) | 0.468321 / 0.540337 (-0.072017) | 0.671052 / 1.386936 (-0.715884) |\n\n</details>\n</details>\n\n\n"
] | 2023-10-16T19:10:39Z
| 2023-10-17T15:13:37Z
| 2023-10-17T15:04:52Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6304.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6304",
"merged_at": "2023-10-17T15:04:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6304.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6304"
}
|
Fixed typos in the README and added missing punctuation marks, e.g.:
Tensorflow --> TensorFlow
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6304/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6304/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1267
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1267/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1267/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1267/events
|
https://github.com/huggingface/datasets/pull/1267
| 758,826,568
|
MDExOlB1bGxSZXF1ZXN0NTMzOTMwNzU2
| 1,267
|
Has part
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2455711?v=4",
"events_url": "https://api.github.com/users/jeromeku/events{/privacy}",
"followers_url": "https://api.github.com/users/jeromeku/followers",
"following_url": "https://api.github.com/users/jeromeku/following{/other_user}",
"gists_url": "https://api.github.com/users/jeromeku/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jeromeku",
"id": 2455711,
"login": "jeromeku",
"node_id": "MDQ6VXNlcjI0NTU3MTE=",
"organizations_url": "https://api.github.com/users/jeromeku/orgs",
"received_events_url": "https://api.github.com/users/jeromeku/received_events",
"repos_url": "https://api.github.com/users/jeromeku/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jeromeku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeromeku/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jeromeku"
}
|
[] |
closed
| false
| null |
[] | null |
[
"merging since the CI is fixed on master"
] | 2020-12-07T20:32:03Z
| 2020-12-11T18:25:42Z
| 2020-12-11T18:25:42Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1267.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1267",
"merged_at": "2020-12-11T18:25:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1267.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1267"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1267/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1267/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/3860
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3860/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3860/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3860/events
|
https://github.com/huggingface/datasets/pull/3860
| 1,162,623,329
|
PR_kwDODunzps40GpzZ
| 3,860
|
Small doc fixes
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mishig25",
"id": 11827707,
"login": "mishig25",
"node_id": "MDQ6VXNlcjExODI3NzA3",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"repos_url": "https://api.github.com/users/mishig25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mishig25"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3860). All of your documentation changes will be reflected on that endpoint.",
"There are still some `.. code-block:: python` (e.g. see [this](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes#datasets.Dataset.align_labels_with_mapping)) directives in our codebase, so maybe we can remove those as well as part of this PR."
] | 2022-03-08T12:55:39Z
| 2022-03-08T17:37:13Z
| 2022-03-08T17:37:13Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3860.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3860",
"merged_at": "2022-03-08T17:37:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3860.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3860"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3860/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3860/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5583
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5583/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5583/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5583/events
|
https://github.com/huggingface/datasets/pull/5583
| 1,601,583,625
|
PR_kwDODunzps5K2mIz
| 5,583
|
Do no write index by default when exporting a dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009044 / 0.011353 (-0.002309) | 0.004244 / 0.011008 (-0.006765) | 0.106705 / 0.038508 (0.068197) | 0.029779 / 0.023109 (0.006670) | 0.289684 / 0.275898 (0.013786) | 0.347100 / 0.323480 (0.023620) | 0.007071 / 0.007986 (-0.000915) | 0.003734 / 0.004328 (-0.000595) | 0.077971 / 0.004250 (0.073720) | 0.035323 / 0.037052 (-0.001730) | 0.334520 / 0.258489 (0.076031) | 0.375804 / 0.293841 (0.081964) | 0.049211 / 0.128546 (-0.079335) | 0.016992 / 0.075646 (-0.058654) | 0.337208 / 0.419271 (-0.082064) | 0.053700 / 0.043533 (0.010167) | 0.295750 / 0.255139 (0.040611) | 0.330157 / 0.283200 (0.046958) | 0.097017 / 0.141683 (-0.044666) | 1.379353 / 1.452155 (-0.072802) | 1.402670 / 1.492716 (-0.090047) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.012685 / 0.018006 (-0.005321) | 0.474541 / 0.000490 (0.474051) | 0.006752 / 0.000200 (0.006552) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025735 / 0.037411 (-0.011676) | 0.092507 / 0.014526 (0.077982) | 0.100275 / 0.176557 (-0.076281) | 0.180359 / 0.737135 (-0.556777) | 0.104312 / 0.296338 (-0.192026) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456558 / 0.215209 (0.241349) | 4.786667 / 2.077655 (2.709012) | 1.873169 / 1.504120 (0.369050) | 1.640935 / 1.541195 (0.099741) | 1.614543 / 1.468490 
(0.146053) | 0.936144 / 4.584777 (-3.648633) | 4.699886 / 3.745712 (0.954174) | 2.398545 / 5.269862 (-2.871317) | 1.642808 / 4.565676 (-2.922868) | 0.124803 / 0.424275 (-0.299472) | 0.011848 / 0.007607 (0.004241) | 0.631684 / 0.226044 (0.405639) | 6.096052 / 2.268929 (3.827124) | 2.463052 / 55.444624 (-52.981572) | 1.928551 / 6.876477 (-4.947926) | 1.927790 / 2.142072 (-0.214283) | 1.098912 / 4.805227 (-3.706315) | 0.196343 / 6.500664 (-6.304321) | 0.063296 / 0.075469 (-0.012173) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255032 / 1.841788 (-0.586755) | 13.853623 / 8.074308 (5.779315) | 16.303280 / 10.191392 (6.111888) | 0.227287 / 0.680424 (-0.453137) | 0.037527 / 0.534201 (-0.496674) | 0.449345 / 0.579283 (-0.129938) | 0.522054 / 0.434364 (0.087690) | 0.552848 / 0.540337 (0.012511) | 0.642994 / 1.386936 (-0.743942) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008470 / 0.011353 (-0.002883) | 0.005167 / 0.011008 (-0.005841) | 0.077794 / 0.038508 (0.039286) | 0.029228 / 0.023109 (0.006119) | 0.340828 / 0.275898 (0.064930) | 0.400170 / 0.323480 (0.076691) | 0.005485 / 0.007986 (-0.002500) | 0.003854 / 0.004328 (-0.000475) | 0.077597 / 0.004250 (0.073346) | 0.036519 / 0.037052 (-0.000533) | 0.335522 / 0.258489 (0.077033) | 0.412622 / 0.293841 (0.118781) | 0.044587 / 0.128546 (-0.083959) | 0.016024 / 0.075646 (-0.059623) | 0.092312 / 0.419271 (-0.326960) | 0.055660 / 0.043533 (0.012127) | 0.343140 / 0.255139 (0.088001) | 0.386403 / 0.283200 (0.103203) | 0.098634 / 0.141683 (-0.043049) | 1.326126 / 1.452155 (-0.126029) | 1.430316 / 1.492716 (-0.062400) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222807 / 0.018006 (0.204801) | 0.473622 / 0.000490 (0.473132) | 0.000376 / 0.000200 (0.000176) | 0.000066 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024599 / 0.037411 (-0.012813) | 0.100743 / 0.014526 (0.086217) | 0.112086 / 0.176557 (-0.064471) | 0.198294 / 0.737135 (-0.538842) | 0.111210 / 0.296338 (-0.185129) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.494120 / 0.215209 (0.278911) | 5.117958 / 2.077655 (3.040303) | 2.305131 / 1.504120 (0.801011) | 2.015591 / 1.541195 (0.474396) | 2.027284 / 1.468490 (0.558794) | 1.014241 / 4.584777 (-3.570536) | 4.738836 / 3.745712 (0.993124) | 2.519718 / 5.269862 (-2.750143) | 1.706379 / 4.565676 (-2.859298) | 0.122452 / 0.424275 (-0.301824) | 0.011500 / 0.007607 (0.003893) | 0.632864 / 0.226044 (0.406820) | 6.295457 / 2.268929 (4.026529) | 2.824897 / 55.444624 (-52.619727) | 2.324359 / 6.876477 (-4.552117) | 2.281046 / 2.142072 (0.138974) | 1.173570 / 4.805227 (-3.631657) | 0.197195 / 6.500664 (-6.303469) | 0.064845 / 0.075469 (-0.010624) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.273224 / 1.841788 (-0.568563) | 14.531155 / 8.074308 (6.456847) | 15.892176 / 10.191392 (5.700784) | 0.208051 / 0.680424 (-0.472373) | 0.023119 / 0.534201 (-0.511082) | 0.422317 / 0.579283 (-0.156966) | 0.519946 / 0.434364 (0.085582) | 0.544517 / 0.540337 (0.004179) | 0.605955 / 1.386936 (-0.780981) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010806 / 0.011353 (-0.000547) | 0.005631 / 0.011008 (-0.005378) | 0.113166 / 0.038508 (0.074657) | 0.042980 / 0.023109 (0.019871) | 0.344856 / 0.275898 (0.068958) | 0.404417 / 0.323480 (0.080938) | 0.012222 / 0.007986 (0.004236) | 0.004470 / 0.004328 (0.000141) | 0.088072 / 0.004250 (0.083822) | 0.049815 / 0.037052 (0.012763) | 0.366532 / 0.258489 (0.108043) | 0.392558 / 0.293841 (0.098717) | 0.045411 / 0.128546 (-0.083135) | 0.014118 / 0.075646 (-0.061529) | 0.392894 / 0.419271 (-0.026378) | 0.067713 / 0.043533 (0.024181) | 0.353013 / 0.255139 (0.097874) | 0.378375 / 0.283200 (0.095175) | 0.123686 / 0.141683 (-0.017996) | 1.665272 / 1.452155 (0.213118) | 1.748383 / 1.492716 (0.255667) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011672 / 0.018006 (-0.006335) | 0.481667 / 0.000490 (0.481178) | 0.003644 / 0.000200 (0.003444) | 0.000092 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030436 / 0.037411 (-0.006976) | 0.122577 / 0.014526 (0.108052) | 0.135409 / 0.176557 (-0.041148) | 0.220385 / 0.737135 (-0.516750) | 0.143140 / 0.296338 (-0.153199) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471146 / 0.215209 (0.255937) | 4.645023 / 2.077655 (2.567368) | 2.126783 / 1.504120 (0.622663) | 1.907905 / 1.541195 (0.366710) | 1.969561 / 1.468490 
(0.501071) | 0.798670 / 4.584777 (-3.786107) | 4.394787 / 3.745712 (0.649075) | 2.353535 / 5.269862 (-2.916327) | 1.501013 / 4.565676 (-3.064664) | 0.097472 / 0.424275 (-0.326803) | 0.014015 / 0.007607 (0.006408) | 0.589365 / 0.226044 (0.363320) | 5.897331 / 2.268929 (3.628402) | 2.656198 / 55.444624 (-52.788427) | 2.256082 / 6.876477 (-4.620395) | 2.271122 / 2.142072 (0.129050) | 0.961566 / 4.805227 (-3.843661) | 0.188303 / 6.500664 (-6.312361) | 0.073258 / 0.075469 (-0.002211) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.445266 / 1.841788 (-0.396522) | 16.876710 / 8.074308 (8.802402) | 16.004287 / 10.191392 (5.812895) | 0.212252 / 0.680424 (-0.468172) | 0.033186 / 0.534201 (-0.501015) | 0.520564 / 0.579283 (-0.058719) | 0.516865 / 0.434364 (0.082501) | 0.638482 / 0.540337 (0.098144) | 0.761959 / 1.386936 (-0.624977) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008101 / 0.011353 (-0.003252) | 0.005512 / 0.011008 (-0.005497) | 0.086138 / 0.038508 (0.047630) | 0.038605 / 0.023109 (0.015496) | 0.413082 / 0.275898 (0.137184) | 0.444016 / 0.323480 (0.120536) | 0.006196 / 0.007986 (-0.001790) | 0.005736 / 0.004328 (0.001408) | 0.086938 / 0.004250 (0.082688) | 0.052307 / 0.037052 (0.015255) | 0.415206 / 0.258489 (0.156717) | 0.481510 / 0.293841 (0.187669) | 0.041469 / 0.128546 (-0.087077) | 0.013481 / 0.075646 (-0.062165) | 0.101528 / 0.419271 (-0.317744) | 0.056507 / 0.043533 (0.012974) | 0.418166 / 0.255139 (0.163027) | 0.443834 / 0.283200 (0.160634) | 0.116434 / 0.141683 (-0.025249) | 1.651223 / 1.452155 (0.199068) | 1.746429 / 1.492716 (0.253713) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242381 / 0.018006 (0.224375) | 0.478826 / 0.000490 (0.478337) | 0.000463 / 0.000200 (0.000264) | 0.000067 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031743 / 0.037411 (-0.005668) | 0.126141 / 0.014526 (0.111616) | 0.134539 / 0.176557 (-0.042018) | 0.216546 / 0.737135 (-0.520590) | 0.143513 / 0.296338 (-0.152825) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.486915 / 0.215209 (0.271706) | 4.833812 / 2.077655 (2.756158) | 2.317785 / 1.504120 (0.813666) | 2.114181 / 1.541195 (0.572986) | 2.153896 / 1.468490 (0.685406) | 0.797490 / 4.584777 (-3.787287) | 4.369950 / 3.745712 (0.624238) | 2.305492 / 5.269862 (-2.964370) | 1.488860 / 4.565676 (-3.076816) | 0.098071 / 0.424275 (-0.326204) | 0.014129 / 0.007607 (0.006522) | 0.611311 / 0.226044 (0.385266) | 6.087482 / 2.268929 (3.818554) | 2.837676 / 55.444624 (-52.606948) | 2.451819 / 6.876477 (-4.424657) | 2.456763 / 2.142072 (0.314690) | 0.957637 / 4.805227 (-3.847590) | 0.190974 / 6.500664 (-6.309690) | 0.074497 / 0.075469 (-0.000972) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.466214 / 1.841788 (-0.375574) | 17.063925 / 8.074308 (8.989617) | 14.630326 / 10.191392 (4.438934) | 0.170570 / 0.680424 (-0.509854) | 0.023794 / 0.534201 (-0.510407) | 0.509175 / 0.579283 (-0.070108) | 0.506485 / 0.434364 (0.072121) | 0.616965 / 0.540337 (0.076628) | 0.718176 / 1.386936 (-0.668760) |\n\n</details>\n</details>\n\n\n"
] | 2023-02-27T17:04:46Z
| 2023-02-28T13:52:15Z
| 2023-02-28T13:44:04Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5583.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5583",
"merged_at": "2023-02-28T13:44:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5583.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5583"
}
|
Ensures all the writers that use Pandas for conversion (JSON, CSV, SQL) do not export `index` by default (https://github.com/huggingface/datasets/pull/5490 only did this for CSV)
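For context, a minimal sketch of what suppressing the index means at the pandas level (assuming the writers ultimately delegate to `DataFrame.to_csv`/`to_json`/`to_sql`; writer internals are not shown):
```python
import pandas as pd

df = pd.DataFrame({"text": ["a", "b"], "label": [0, 1]})

# Without index=False, pandas prepends the synthetic integer row index
# as an extra unnamed column in the exported file.
df.to_csv("out.csv", index=False)

# orient="records" emits a list of row objects and never writes the index.
df.to_json("out.json", orient="records")

# For SQL, index=False skips creating an "index" column in the table:
# df.to_sql("my_table", con, index=False)  # con: a SQLAlchemy/DBAPI connection
```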
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5583/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5583/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4709
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4709/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4709/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4709/events
|
https://github.com/huggingface/datasets/issues/4709
| 1,308,633,093
|
I_kwDODunzps5OACgF
| 4,709
|
WMT21 & WMT22
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4",
"events_url": "https://api.github.com/users/Muennighoff/events{/privacy}",
"followers_url": "https://api.github.com/users/Muennighoff/followers",
"following_url": "https://api.github.com/users/Muennighoff/following{/other_user}",
"gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Muennighoff",
"id": 62820084,
"login": "Muennighoff",
"node_id": "MDQ6VXNlcjYyODIwMDg0",
"organizations_url": "https://api.github.com/users/Muennighoff/orgs",
"received_events_url": "https://api.github.com/users/Muennighoff/received_events",
"repos_url": "https://api.github.com/users/Muennighoff/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Muennighoff"
}
|
[
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
},
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/92247226?v=4",
"events_url": "https://api.github.com/users/Etelis/events{/privacy}",
"followers_url": "https://api.github.com/users/Etelis/followers",
"following_url": "https://api.github.com/users/Etelis/following{/other_user}",
"gists_url": "https://api.github.com/users/Etelis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Etelis",
"id": 92247226,
"login": "Etelis",
"node_id": "U_kgDOBX-Uug",
"organizations_url": "https://api.github.com/users/Etelis/orgs",
"received_events_url": "https://api.github.com/users/Etelis/received_events",
"repos_url": "https://api.github.com/users/Etelis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Etelis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Etelis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Etelis"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/92247226?v=4",
"events_url": "https://api.github.com/users/Etelis/events{/privacy}",
"followers_url": "https://api.github.com/users/Etelis/followers",
"following_url": "https://api.github.com/users/Etelis/following{/other_user}",
"gists_url": "https://api.github.com/users/Etelis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Etelis",
"id": 92247226,
"login": "Etelis",
"node_id": "U_kgDOBX-Uug",
"organizations_url": "https://api.github.com/users/Etelis/orgs",
"received_events_url": "https://api.github.com/users/Etelis/received_events",
"repos_url": "https://api.github.com/users/Etelis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Etelis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Etelis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Etelis"
}
] | null |
[
"Hi ! That would be awesome to have them indeed, thanks for opening this issue\r\n\r\nI just added you to the WMT org on the HF Hub if you're interested in adding those datasets.\r\n\r\nFeel free to create a dataset repository for each dataset and upload the data files there :) preferably in ZIP archives instead of TAR archives (the current WMT scripts don't support streaming TAR archives, so it would break the dataset preview). We've also had issues with the `statmt.org` host (data unavailable, slow download speed), that's why I think it's better if we re-host the files on the Hub.\r\n\r\n`wmt21` (and wmt22) can be added <s>in this GitHub repository I think</s> on the HF Hub under the `WMT` org (we'll move the previous ones to this org soon as well).\r\nTo add it, you can copy paste the code of the previous one (e.g. wmt19), and add the new data:\r\n- in wmt_utils.py, add the new data subsets. You need to provide the download URLs, as well as the target and source languages\r\n- in wmt21.py (renamed from wmt19.py), you can specify the subsets that WMT21 uses (i.e. the one you just added)\r\n- in wmt_utils.py, define the python function that must be used to parse the subsets you added. To do so, you must go in `_generate_examples` and chose the proper `sub_generator` based on the subset name. For example, the `paracrawl_v3` subset uses the `_parse_tmx` function:\r\n\r\nhttps://github.com/huggingface/datasets/blob/ede72d3f9796339701ec59899c7c31d2427046fb/datasets/wmt19/wmt_utils.py#L834-L835\r\n\r\nHopefully the data is in a format that is already supported and there's no need to write a new `_parse_*` function for the new subsets. Let me know if you have questions or if I can help :)",
"@Muennighoff , @lhoestq let me know if you want me to look into this. Happy to help bring WMT21 & WMT22 datasets into 🤗 ! ",
"Hi @srhrshr :) Sure, feel free to create a dataset repository on the Hub and start from the implementation of WMT19 if you want. Then we can move the dataset under the WMT org (we'll move the other ones there as well).\r\n\r\nLet me know if you have questions or if I can help",
"#self-assign",
"#self-assign",
"Hello @lhoestq ,\r\n\r\nWould it be possible for me to be granted in the WMT organization (on hf ofc) in order to facilitate dataset uploads? I've already initiated the joining process at this link: https://huggingface.co/wmt\r\n\r\nI appreciate your help with this. Thank you!",
"Hi ! Cool I just added you"
] | 2022-07-18T21:05:33Z
| 2023-06-20T09:02:11Z
| null |
CONTRIBUTOR
| null | null | null |
## Adding a Dataset
- **Name:** WMT21 & WMT22
- **Description:** We are going to have three tracks: two small tasks and a large task.
The small tracks evaluate translation between fairly related languages and English (all pairs). The large track uses 101 languages.
- **Paper:** /
- **Data:** https://statmt.org/wmt21/large-scale-multilingual-translation-task.html https://statmt.org/wmt22/large-scale-multilingual-translation-task.html
- **Motivation:** Many more languages than previous WMT versions - Could be very high impact
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md).
I could also tackle this. I saw the existing logic for the WMT datasets is a bit complex (the data is stored on the wmt account and retrieved in separate wmt datasets, afaict). How long do you think it would take me? @lhoestq
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4709/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4709/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/185
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/185/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/185/comments
|
https://api.github.com/repos/huggingface/datasets/issues/185/events
|
https://github.com/huggingface/datasets/pull/185
| 623,172,484
|
MDExOlB1bGxSZXF1ZXN0NDIxODkxNjY2
| 185
|
[Commands] In-detail instructions to create dummy data folder
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
|
[] |
closed
| false
| null |
[] | null |
[
"awesome !"
] | 2020-05-22T12:26:25Z
| 2020-05-22T14:06:35Z
| 2020-05-22T14:06:34Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/185.diff",
"html_url": "https://github.com/huggingface/datasets/pull/185",
"merged_at": "2020-05-22T14:06:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/185.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/185"
}
|
### Dummy data command
This PR adds a new command `python nlp-cli dummy_data <path_to_dataset_folder>` that gives detailed instructions on how to add the dummy data files.
It would be great if you could try it out by moving the current dummy_data folder of any dataset in `./datasets` with `mv datasets/<dataset_script>/dummy_data datasets/<dataset_name>/dummy_data_copy` and running the command `python nlp-cli dummy_data ./datasets/<dataset_name>` to see if you like the instructions.
### CONTRIBUTING.md
The CONTRIBUTING.md has also been made cleaner, including a new section on "How to add a dataset".
### Current PRs
It would be nice to check whether this command helps current PRs that add a dataset, *e.g.* #169. I'll comment on those PRs.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/185/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/185/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3626
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3626/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3626/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3626/events
|
https://github.com/huggingface/datasets/issues/3626
| 1,113,534,436
|
I_kwDODunzps5CXy_k
| 3,626
|
The Pile cannot connect to host
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[] | 2022-01-25T07:43:33Z
| 2022-02-14T08:40:58Z
| 2022-02-14T08:40:58Z
|
MEMBER
| null | null | null |
## Describe the bug
The Pile's maintainers had issues with their previous host server and have mirrored its content to another server.
The server URL should be updated accordingly.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3626/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3626/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4113
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4113/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4113/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4113/events
|
https://github.com/huggingface/datasets/issues/4113
| 1,194,843,532
|
I_kwDODunzps5HN92M
| 4,113
|
Multiprocessing with FileLock fails in python 3.9
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Closing this one because it must be used this way actually:\r\n```python\r\ndef main():\r\n with FileLock(\"tmp.lock\"):\r\n with Pool(2) as pool:\r\n pool.map(run, range(2))\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```"
] | 2022-04-06T16:27:09Z
| 2022-11-28T11:49:14Z
| 2022-11-28T11:49:14Z
|
MEMBER
| null | null | null |
On Python 3.9, this code hangs:
```python
from multiprocessing import Pool
from filelock import FileLock
def run(i):
print(f"got the lock in multi process [{i}]")
with FileLock("tmp.lock"):
with Pool(2) as pool:
pool.map(run, range(2))
```
This is because the subprocesses try to acquire the lock from the main process for some reason. This is not the case in older versions of python.
This can cause many issues in python 3.9. In particular, we use multiprocessing to fetch data files when you load a dataset (as long as there are >16 data files). Therefore `imagefolder` hangs, and I expect any dataset that needs to download >16 files to hang as well.
Let's see if we can fix this and have a CI that runs on 3.9.
cc @mariosasko @julien-c
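A runnable sketch of the pattern from the closing comment, which avoids the hang (the `__main__` guard keeps spawned subprocesses from re-executing the module-level lock acquisition):
```python
from multiprocessing import Pool

from filelock import FileLock


def run(i):
    # Workers only perform their task; the lock is held by the parent.
    print(f"running task [{i}]")


def main():
    with FileLock("tmp.lock"):
        with Pool(2) as pool:
            pool.map(run, range(2))


if __name__ == "__main__":
    main()
```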
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4113/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4113/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6388
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6388/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6388/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6388/events
|
https://github.com/huggingface/datasets/issues/6388
| 1,981,136,093
|
I_kwDODunzps52Fbzd
| 6,388
|
How to create a 3D medical image dataset?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/41177312?v=4",
"events_url": "https://api.github.com/users/QingYunA/events{/privacy}",
"followers_url": "https://api.github.com/users/QingYunA/followers",
"following_url": "https://api.github.com/users/QingYunA/following{/other_user}",
"gists_url": "https://api.github.com/users/QingYunA/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/QingYunA",
"id": 41177312,
"login": "QingYunA",
"node_id": "MDQ6VXNlcjQxMTc3MzEy",
"organizations_url": "https://api.github.com/users/QingYunA/orgs",
"received_events_url": "https://api.github.com/users/QingYunA/received_events",
"repos_url": "https://api.github.com/users/QingYunA/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/QingYunA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QingYunA/subscriptions",
"type": "User",
"url": "https://api.github.com/users/QingYunA"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2023-11-07T11:27:36Z
| 2023-11-07T11:28:53Z
| null |
NONE
| null | null | null |
### Feature request
I am new to Hugging Face. After looking through the `datasets` docs, I can't find how to create a dataset containing 3D medical images (files ending with '.mhd', '.dcm', '.nii').
### Motivation
Help us upload 3D medical image datasets to Hugging Face!
### Your contribution
I'll submit a PR if I find a way to add this feature
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6388/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6388/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/5332
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5332/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5332/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5332/events
|
https://github.com/huggingface/datasets/issues/5332
| 1,476,513,072
|
I_kwDODunzps5YAc0w
| 5,332
|
Passing numpy array to ClassLabel names causes ValueError
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1475568?v=4",
"events_url": "https://api.github.com/users/freddyheppell/events{/privacy}",
"followers_url": "https://api.github.com/users/freddyheppell/followers",
"following_url": "https://api.github.com/users/freddyheppell/following{/other_user}",
"gists_url": "https://api.github.com/users/freddyheppell/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/freddyheppell",
"id": 1475568,
"login": "freddyheppell",
"node_id": "MDQ6VXNlcjE0NzU1Njg=",
"organizations_url": "https://api.github.com/users/freddyheppell/orgs",
"received_events_url": "https://api.github.com/users/freddyheppell/received_events",
"repos_url": "https://api.github.com/users/freddyheppell/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/freddyheppell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/freddyheppell/subscriptions",
"type": "User",
"url": "https://api.github.com/users/freddyheppell"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Should `datasets` allow `ClassLabel` input parameter to be an `np.array` even though internally we need to cast it to a Python list? @lhoestq @mariosasko ",
"Hi! No, I don't think so. The `names` parameter is [annotated](https://github.com/huggingface/datasets/blob/582236640b9109988e5f7a16a8353696ffa09a16/src/datasets/features/features.py#L892) as `List[str]` (**NumPy arrays are not lists**), and considering that type checking is not a common practice in Python, I think we can leave the code as-is.",
"I appreciate it is the wrong type, and that type checking is not common, but I think there's a few circumstances that make it a good idea from a usability perspective.\r\n\r\nIt's quite a difficult error to debug because it comes from a utility function (so it's not immediately obvious which parameter caused it). What makes it even more difficult is the exception happens when the features instance is used to instantiate the dataset, **not** when when the wrong type is actually passed when the features is instantiated. When I was debugging the error, I didn't really consider it could be an issue with the features instance because it had instantiated fine. It's also not one of the more common exceptions caused by trying to use a non-list as a list.\r\n\r\nIt's also relatively easy to accidentally get a numpy array of class types (e.g. calling `unique()` on a pandas dataframe column). Additionally, passing in a `set` instead of the list (again, relatively easy because people may run `set(classes)` to generate uniques) causes an error when the features instance is used, albeit a slightly more obvious one.\r\n\r\nThe names list is already being processed and validated in the `__post_init__` method anyway, so it would not really be adding any complexity to check it is actually a list here too. I'm happy to contribute this change if you change your mind about whether it's worthwhile.",
"I agree that it's not easy to debug this issue, so perhaps we could add some basic type checking (e.g. `not isinstance(names, list)` -> error) to make debugging easier. Feel free to submit a PR.\r\n\r\n> Additionally, passing in a set instead of the list (again, relatively easy because people may run set(classes) to generate uniques) causes an error when the features instance is used, albeit a slightly more obvious one.\r\n\r\n`set` is an unordered structure (it's ordered in Python 3.6+, but this is CPython's implementation detail), and the order of ClassLabel `names` matters, so this doesn't require a fix.",
"What about checking for `Sequence` instead? I think users can pass a list or a tuple as well."
] | 2022-12-05T12:59:03Z
| 2022-12-22T16:32:50Z
| 2022-12-22T16:32:50Z
|
CONTRIBUTOR
| null | null | null |
### Describe the bug
If a NumPy array is passed to the `names` argument of `ClassLabel`, creating a dataset with those features raises a `ValueError`.
### Steps to reproduce the bug
https://colab.research.google.com/drive/1cV_es1PWZiEuus17n-2C-w0KEoEZ68IX
TLDR:
If I define my classes as:
```py
my_classes = np.array(['one', 'two', 'three'])
```
Then this errors:
```py
features = Features({'value': Value('string'), 'label': ClassLabel(names=my_classes)})
dataset = Dataset.from_list(my_data, features=features)
```
```
ValueError Traceback (most recent call last)
[<ipython-input-8-a8a9d53ec82f>](https://localhost:8080/#) in <module>
----> 1 dataset = Dataset.from_list(my_data, features=features)
11 frames
[/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in _asdict_inner(obj)
183 for f in fields(obj):
184 value = _asdict_inner(getattr(obj, f.name))
--> 185 if not f.init or value != f.default or f.metadata.get("include_in_asdict_even_if_is_default", False):
186 result[f.name] = value
187 return result
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
But this works:
```py
features2 = Features({'value': Value('string'), 'label': ClassLabel(names=list(my_classes))})
dataset2 = Dataset.from_list(my_data, features=features2)
```
### Expected behavior
If I provide a numpy array of class names, I would expect either an error that the names list is the wrong type, or for it to be cast internally.
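For illustration, a sketch of the basic guard discussed in the thread (hypothetical helper, not the exact fix that was merged):
```python
import numpy as np
from datasets import ClassLabel


def make_class_label(names):
    # Fail early with a clear message instead of the ambiguous-truth-value
    # error raised later when the features are used.
    if not isinstance(names, list):
        raise TypeError(
            f"ClassLabel names must be a list, got {type(names).__name__}; "
            "cast with list(names) first."
        )
    return ClassLabel(names=names)


my_classes = np.array(["one", "two", "three"])
label = make_class_label(list(my_classes))  # the explicit cast avoids the error
```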
### Environment info
- `datasets` version: 2.7.1
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.10
- Python version: 3.8.15
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
Additionally:
- Numpy version: 1.23.5
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5332/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5332/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4060
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4060/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4060/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4060/events
|
https://github.com/huggingface/datasets/pull/4060
| 1,186,281,033
|
PR_kwDODunzps41Tbmg
| 4,060
|
Deprecate canonical Multilingual Librispeech
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Yes, as discussed in #4006 we should update facebook/multilingual_librispeech indeed before we do a release. @anton-l could you help taking care of updating facebook/multilingual_librispeech ? We need to update the task template\r\n```python\r\ntask_templates=[AutomaticSpeechRecognition(audio_column=\"audio\", transcription_column=\"text\")],\r\n```\r\nand write that `datasets>=2.1` is necessary to load it in the dataset card.\r\n\r\nOnce the change is done we can merge this PR and do the release I think",
"@polinaeterna @lhoestq \r\nUpdated the script and the dataset card: https://huggingface.co/datasets/facebook/multilingual_librispeech ",
"@anton-l @lhoestq now previewer doesn't work for this datasets as it cannot recognize new `audio_column` argument:\r\n\r\n\r\nI'm not an expert in previewer things, where should I look into the corresponding code?",
"Yes, there are several datasets with the same error, eg https://github.com/huggingface/datasets-preview-backend/issues/188. I'm not sure what I should do to fix this? Upgrade datasets to master?\r\n",
"@anton-l ended up removing the task template in facebook/multilingual_librispeech to make it work for the current version of `datasets` and fix the viewer :) thanks !",
"@lhoestq can we merge now? ^^"
] | 2022-03-30T10:56:56Z
| 2022-04-01T12:54:05Z
| 2022-04-01T12:48:51Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4060.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4060",
"merged_at": "2022-04-01T12:48:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4060.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4060"
}
|
Deprecate canonical Multilingual Librispeech in favor of [the community one](https://huggingface.co/datasets/facebook/multilingual_librispeech) which supports streaming.
However, there is a problem regarding the new ASR template schema: since it changed, I guess all community datasets that use this template do not work with the new version of the library, including MLS. Should we somehow notify users about that, or is it possible to change this line ourselves? For MLS specifically, I cannot change the code directly as I'm not a member of the Facebook org.
Hm, and the code should be changed after the release, no?
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4060/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4060/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3964
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3964/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3964/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3964/events
|
https://github.com/huggingface/datasets/issues/3964
| 1,173,564,993
|
I_kwDODunzps5F8y5B
| 3,964
|
Add default Audio Loader
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
] | null |
[] | 2022-03-18T12:58:55Z
| 2022-08-22T14:20:46Z
| 2022-08-22T14:20:46Z
|
CONTRIBUTOR
| null | null | null |
**Is your feature request related to a problem? Please describe.**
Writing a custom dataset loading script might be a bit challenging for users.
**Describe the solution you'd like**
Add a default Audio loader (analogous to ImageFolder) for small datasets with a standard directory structure.
**Describe alternatives you've considered**
Create a custom loading script; that's what users are doing now.
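(The feature was eventually shipped as the `audiofolder` loader; a minimal usage sketch, assuming a class-per-directory layout:)
```python
from datasets import load_dataset

# Assumed layout (folder names become class labels, like ImageFolder):
#   data/train/dog/bark_1.wav
#   data/train/cat/meow_1.wav
ds = load_dataset("audiofolder", data_dir="data")
print(ds["train"][0])  # {'audio': {...}, 'label': 0}
```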
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3964/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3964/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4013
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4013/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4013/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4013/events
|
https://github.com/huggingface/datasets/issues/4013
| 1,180,427,174
|
I_kwDODunzps5GW-Om
| 4,013
|
Cannot preview "hazal/Turkish-Biomedical-corpus-trM"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42860397?v=4",
"events_url": "https://api.github.com/users/hazalturkmen/events{/privacy}",
"followers_url": "https://api.github.com/users/hazalturkmen/followers",
"following_url": "https://api.github.com/users/hazalturkmen/following{/other_user}",
"gists_url": "https://api.github.com/users/hazalturkmen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hazalturkmen",
"id": 42860397,
"login": "hazalturkmen",
"node_id": "MDQ6VXNlcjQyODYwMzk3",
"organizations_url": "https://api.github.com/users/hazalturkmen/orgs",
"received_events_url": "https://api.github.com/users/hazalturkmen/received_events",
"repos_url": "https://api.github.com/users/hazalturkmen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hazalturkmen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hazalturkmen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hazalturkmen"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"Hi @hazalturkmen, thanks for reporting.\r\n\r\nNote that your dataset repository does not contain any loading script; it only contains a data file named `tr_article_2`.\r\n\r\nWhen there is no loading script but only data files, the `datasets` library tries to infer how to load the data by looking at the data file extensions. However, your data file does not have any extension.\r\n\r\nNote that current supported data file extensions are: 'csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'.\r\n\r\nYou have more info on our docs: [How to share a dataset](https://huggingface.co/docs/datasets/share).",
"thanks for reply :)"
] | 2022-03-25T07:12:02Z
| 2022-04-04T08:05:01Z
| 2022-03-25T14:16:11Z
|
NONE
| null | null | null |
## Dataset viewer issue for 'hazal/Turkish-Biomedical-corpus-trM'
**Link:** https://huggingface.co/datasets/hazal/Turkish-Biomedical-corpus-trM
I cannot see the dataset preview.
```
Server Error
Status code: 400
Exception: HTTPError
Message: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/hazal/Turkish-Biomedical-corpus-trM?full=true
```
Am I the one who added this dataset? Yes.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4013/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4013/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4723
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4723/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4723/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4723/events
|
https://github.com/huggingface/datasets/pull/4723
| 1,310,970,604
|
PR_kwDODunzps47uoSj
| 4,723
|
Refactor conftest fixtures
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-20T12:15:22Z
| 2022-07-21T14:37:11Z
| 2022-07-21T14:24:18Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4723.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4723",
"merged_at": "2022-07-21T14:24:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4723.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4723"
}
|
Previously, fixture modules `hub_fixtures` and `s3_fixtures`:
- were both at the root test directory
- were imported using `import *`
- as a side effect, the modules `os` and `pytest` were imported from `s3_fixtures` into `conftest`
This PR:
- puts both fixture modules in a dedicated directory `fixtures`
- renames both to: `fixtures.hub` and `fixtures.s3`
- imports them into `conftest` as plugins, using `pytest_plugins` (see the sketch below): this avoids the `import *`
- additionally creates a new fixture module `fixtures.files` with all file-related fixtures
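A sketch of the plugin-style registration described above (module paths taken from the PR description; the fixture contents themselves are assumptions):
```python
# tests/conftest.py — fixture modules are registered as pytest plugins
# instead of star-imported, so no stray names (os, pytest) leak into conftest.
pytest_plugins = [
    "tests.fixtures.files",
    "tests.fixtures.hub",
    "tests.fixtures.s3",
]
```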
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4723/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4723/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/673
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/673/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/673/comments
|
https://api.github.com/repos/huggingface/datasets/issues/673/events
|
https://github.com/huggingface/datasets/issues/673
| 709,603,989
|
MDU6SXNzdWU3MDk2MDM5ODk=
| 673
|
blog_authorship_corpus crashed
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7553188?v=4",
"events_url": "https://api.github.com/users/Moshiii/events{/privacy}",
"followers_url": "https://api.github.com/users/Moshiii/followers",
"following_url": "https://api.github.com/users/Moshiii/following{/other_user}",
"gists_url": "https://api.github.com/users/Moshiii/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Moshiii",
"id": 7553188,
"login": "Moshiii",
"node_id": "MDQ6VXNlcjc1NTMxODg=",
"organizations_url": "https://api.github.com/users/Moshiii/orgs",
"received_events_url": "https://api.github.com/users/Moshiii/received_events",
"repos_url": "https://api.github.com/users/Moshiii/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Moshiii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moshiii/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Moshiii"
}
|
[
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting !\r\nWe'll free some memory"
] | 2020-09-26T20:15:28Z
| 2022-02-15T10:47:58Z
| 2022-02-15T10:47:58Z
|
NONE
| null | null | null |
This is just to report that when I pick blog_authorship_corpus in
https://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus
I get this:

|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/673/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/673/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3560
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3560/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3560/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3560/events
|
https://github.com/huggingface/datasets/pull/3560
| 1,098,280,652
|
PR_kwDODunzps4wwOMf
| 3,560
|
Run pyupgrade for Python 3.6+
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bryant1410",
"id": 3905501,
"login": "bryant1410",
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bryant1410"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! Thanks for the change :)\r\nCould it be possible to only run it for the code in `src/` ? We try to not change the code in the `datasets/` directory too often since it refreshes the users cache when they upgrade `datasets`.",
"> Hi ! Thanks for the change :)\r\n> Could it be possible to only run it for the code in `src/` ? We try to not change the code in the `datasets/` directory too often since it refreshes the users cache when they upgrade `datasets`.\r\n\r\nI reverted the changes in `datasets/` instead of changing only `src/`. Does it sound good?",
"I just resolved some conflicts with the master branch. If the CI is green we can merge :)"
] | 2022-01-10T19:20:53Z
| 2022-01-31T13:38:49Z
| 2022-01-31T09:37:34Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3560.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3560",
"merged_at": "2022-01-31T09:37:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3560.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3560"
}
|
Run the command:
```bash
pyupgrade $(find . -name "*.py" -type f) --py36-plus
```
This mainly avoids unnecessary list creations and also removes code that is redundant on Python 3.6+.
It was originally part of #3489.
Tip for reviewing faster: use the CLI (`git diff`) and scroll.
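For a sense of what changes at this level, a few typical `--py36-plus` rewrites (illustrative, not exhaustive):
```python
user = "ada"

# Before: name = "{}".format(user)
name = f"{user}"  # str.format calls become f-strings

# Before: class Config(object):
class Config:  # the explicit object base is dropped; it's implicit in Python 3
    pass

# Before: set([1, 2, 3])
nums = {1, 2, 3}  # wrapped list constructions become literals
```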
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3560/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3560/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3559
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3559/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3559/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3559/events
|
https://github.com/huggingface/datasets/pull/3559
| 1,098,178,222
|
PR_kwDODunzps4wv420
| 3,559
|
Fix `DuplicatedKeysError` and improve card in `tweet_qa`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-01-10T17:27:40Z
| 2022-01-12T15:13:58Z
| 2022-01-12T15:13:57Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3559.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3559",
"merged_at": "2022-01-12T15:13:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3559.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3559"
}
|
Fix #3555
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3559/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3559/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5615
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5615/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5615/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5615/events
|
https://github.com/huggingface/datasets/issues/5615
| 1,612,552,653
|
I_kwDODunzps5gHZnN
| 5,615
|
IterableDataset.add_column is unable to accept another IterableDataset as a parameter.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6466389?v=4",
"events_url": "https://api.github.com/users/zsaladin/events{/privacy}",
"followers_url": "https://api.github.com/users/zsaladin/followers",
"following_url": "https://api.github.com/users/zsaladin/following{/other_user}",
"gists_url": "https://api.github.com/users/zsaladin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zsaladin",
"id": 6466389,
"login": "zsaladin",
"node_id": "MDQ6VXNlcjY0NjYzODk=",
"organizations_url": "https://api.github.com/users/zsaladin/orgs",
"received_events_url": "https://api.github.com/users/zsaladin/received_events",
"repos_url": "https://api.github.com/users/zsaladin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zsaladin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zsaladin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zsaladin"
}
|
[
{
"color": "ffffff",
"default": true,
"description": "This will not be worked on",
"id": 1935892913,
"name": "wontfix",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix"
}
] |
closed
| false
| null |
[] | null |
[
"Hi! You can use `concatenate_datasets([ids1, ids2], axis=1)` to do this."
] | 2023-03-07T01:52:00Z
| 2023-03-09T15:24:05Z
| 2023-03-09T15:23:54Z
|
NONE
| null | null | null |
### Describe the bug
`IterableDataset.add_column` raises an exception when another `IterableDataset` is passed as a parameter.
The method seems to accept only eagerly evaluated values.
https://github.com/huggingface/datasets/blob/35b789e8f6826b6b5a6b48fcc2416c890a1f326a/src/datasets/iterable_dataset.py#L1388-L1391
I wrote the code below to work around it.
```py
def add_column(dataset: IterableDataset, name: str, add_dataset: IterableDataset, key: str) -> IterableDataset:
    # Consume the second dataset lazily, one example per mapped row.
    iter_add_dataset = iter(add_dataset)
    def add_column_fn(example):
        if name in example:
            raise ValueError(f"Error when adding {name}: column {name} is already in the dataset.")
        # Pull the next value of `key` from the other dataset.
        return {name: next(iter_add_dataset)[key]}
    return dataset.map(add_column_fn)
```
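A usage sketch (assuming the helper above and the two toy iterable datasets from the reproduction below; names are illustrative):
```py
# Hypothetical usage: take the values of "col2" from ids2 and attach
# them to ids1 under the same name, consuming both datasets lazily.
merged = add_column(ids1, "col2", ids2, key="col2")
for row in merged:
    print(row)  # {"col1": ..., "col2": ...}
```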
Is there other way to do it? Or is it intended?
### Steps to reproduce the bug
The code below raises `NotImplementedError`:
```py
from datasets import IterableDataset
def gen(num):
yield {f"col{num}": 1}
yield {f"col{num}": 2}
yield {f"col{num}": 3}
ids1 = IterableDataset.from_generator(gen, gen_kwargs={"num": 1})
ids2 = IterableDataset.from_generator(gen, gen_kwargs={"num": 2})
new_ids = ids1.add_column("new_col", ids2)
for row in new_ids:
print(row)
```
### Expected behavior
`IterableDataset.add_column` should be able to take an `IterableDataset` and other lazily evaluated values as a parameter, since `IterableDataset` is itself lazily evaluated.
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-3.10.0-1160.36.2.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.7
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5615/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5615/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1304
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1304/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1304/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1304/events
|
https://github.com/huggingface/datasets/pull/1304
| 759,440,841
|
MDExOlB1bGxSZXF1ZXN0NTM0NDQ2Nzcy
| 1,304
|
adding eitb_parcc
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patil-suraj",
"id": 27137566,
"login": "patil-suraj",
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patil-suraj"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-08T13:20:54Z
| 2020-12-09T18:02:54Z
| 2020-12-09T18:02:03Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1304.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1304",
"merged_at": "2020-12-09T18:02:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1304.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1304"
}
|
Adding EiTB-ParCC: Parallel Corpus of Comparable News
http://opus.nlpl.eu/EiTB-ParCC.php
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1304/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1304/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2326
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2326/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2326/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2326/events
|
https://github.com/huggingface/datasets/pull/2326
| 876,829,254
|
MDExOlB1bGxSZXF1ZXN0NjMwODk3MjI4
| 2,326
|
Enable auto-download for PAN-X / Wikiann domain in XTREME
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-05-05T20:58:38Z
| 2021-05-07T08:41:10Z
| 2021-05-07T08:41:10Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2326.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2326",
"merged_at": "2021-05-07T08:41:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2326.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2326"
}
|
This PR replaces the manual download of the `PAN-X.lang` domains with an auto-download from a Dropbox link provided by the Wikiann author. We also add the relevant dummy data for these domains.
While re-generating `dataset_infos.json` I ran into a `KeyError` in the `udpos.Arabic` domain, so I have included a fix for this as well.
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2326/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2326/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4166
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4166/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4166/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4166/events
|
https://github.com/huggingface/datasets/pull/4166
| 1,203,758,004
|
PR_kwDODunzps42M0dS
| 4,166
|
Fix exact match
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/emibaylor",
"id": 27527747,
"login": "emibaylor",
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/emibaylor"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-13T20:28:06Z
| 2022-05-03T12:23:31Z
| 2022-05-03T12:16:27Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4166.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4166",
"merged_at": "2022-05-03T12:16:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4166.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4166"
}
|
Clarify docs and add clarifying example to the exact_match metric
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4166/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4166/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6384
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6384/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6384/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6384/events
|
https://github.com/huggingface/datasets/issues/6384
| 1,979,117,069
|
I_kwDODunzps519u4N
| 6,384
|
Load the local dataset folder from other place
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/54439582?v=4",
"events_url": "https://api.github.com/users/OrangeSodahub/events{/privacy}",
"followers_url": "https://api.github.com/users/OrangeSodahub/followers",
"following_url": "https://api.github.com/users/OrangeSodahub/following{/other_user}",
"gists_url": "https://api.github.com/users/OrangeSodahub/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/OrangeSodahub",
"id": 54439582,
"login": "OrangeSodahub",
"node_id": "MDQ6VXNlcjU0NDM5NTgy",
"organizations_url": "https://api.github.com/users/OrangeSodahub/orgs",
"received_events_url": "https://api.github.com/users/OrangeSodahub/received_events",
"repos_url": "https://api.github.com/users/OrangeSodahub/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/OrangeSodahub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OrangeSodahub/subscriptions",
"type": "User",
"url": "https://api.github.com/users/OrangeSodahub"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Solved"
] | 2023-11-06T13:07:04Z
| 2023-11-19T05:42:06Z
| 2023-11-19T05:42:05Z
|
NONE
| null | null | null |
This is from https://github.com/huggingface/diffusers/issues/5573
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6384/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6384/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2176
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2176/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2176/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2176/events
|
https://github.com/huggingface/datasets/issues/2176
| 851,865,795
|
MDU6SXNzdWU4NTE4NjU3OTU=
| 2,176
|
Converting a Value to a ClassLabel
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7272031?v=4",
"events_url": "https://api.github.com/users/nelson-liu/events{/privacy}",
"followers_url": "https://api.github.com/users/nelson-liu/followers",
"following_url": "https://api.github.com/users/nelson-liu/following{/other_user}",
"gists_url": "https://api.github.com/users/nelson-liu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nelson-liu",
"id": 7272031,
"login": "nelson-liu",
"node_id": "MDQ6VXNlcjcyNzIwMzE=",
"organizations_url": "https://api.github.com/users/nelson-liu/orgs",
"received_events_url": "https://api.github.com/users/nelson-liu/received_events",
"repos_url": "https://api.github.com/users/nelson-liu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nelson-liu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nelson-liu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nelson-liu"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"Hi @nelson-liu!\r\nHere is what I do to convert a string to class label:\r\n\r\n```python\r\nfrom datasets import load_dataset, features\r\n\r\n\r\ndset = load_dataset(...)\r\ncol_name = \"the string column name\"\r\n\r\nclass_names = dset.unique(col_name)\r\nclass_feature = features.ClassLabel(names=sorted(class_names))\r\ndset = dset.map(lambda str_value: {col_name: class_feature.str2int(str_value)}, input_columns=col_name)\r\n\r\ndset = dset.cast(features.Features({\r\n ...\r\n col_name: class_feature\r\n})\r\n```\r\n",
"Hi! You can use `Dataset.class_encode_column` for this. And in the next release of `datasets` (this feature is only available on `master`), you'll also be able to use `cast` to do the conversion. \r\n\r\nAn example of conversion via `cast`: \r\n```python\r\nfrom datasets import Dataset, Features, ClassLabel\r\nd = Dataset.from_dict({\"a\": [\"no\", \"yes\", \"no\"]})\r\nd = d.cast(Features({\"a\": ClassLabel(names=[\"yes\", \"no\"])}))\r\n```"
] | 2021-04-06T22:54:16Z
| 2022-06-01T16:31:49Z
| 2022-06-01T16:31:49Z
|
NONE
| null | null | null |
Hi!
In the docs for `cast`, it's noted that `For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the Dataset.`
Would it be possible to have an example that demonstrates such a string <-> ClassLabel conversion using `map`? Thanks!
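For context, here is roughly what I have in mind (a sketch; I am not sure it is the idiomatic way):
```python
from datasets import ClassLabel, Dataset, Features

ds = Dataset.from_dict({"label": ["neg", "pos", "neg"]})
feature = ClassLabel(names=["neg", "pos"])
# Map string values to integer ids, then cast the column type.
ds = ds.map(lambda ex: {"label": feature.str2int(ex["label"])})
ds = ds.cast(Features({"label": feature}))
```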
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2176/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2176/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2362
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2362/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2362/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2362/events
|
https://github.com/huggingface/datasets/pull/2362
| 892,100,749
|
MDExOlB1bGxSZXF1ZXN0NjQ0ODYzOTQw
| 2,362
|
Fix web_nlg metadata
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/julien-c",
"id": 326577,
"login": "julien-c",
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"repos_url": "https://api.github.com/users/julien-c/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"type": "User",
"url": "https://api.github.com/users/julien-c"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! `release_v2.1` and the others are dataset configuration names.\r\n\r\nThe configuration names are used to show the right code snippet in the UI to load the dataset.\r\nFor example if the parsing of the web_nlg tags worked correctly we would have:\r\n\r\n\r\nTherefore I don't think it's a good idea to rename the configurations from `release_v2.1` to `release_v2_1` as the code snippet would be wrong in this case.\r\n\r\nMoreover we can't really disallow dots in configuration names and rename the configurations since it would be a big breaking change. It's commonly used, especially with multilingual datasets. For example `load_dataset(\"indic_glue\", \"sna.bn\")`.\r\n\r\nIs this something that can be fixed on the moonlanding side instead ?",
"> Is this something that can be fixed on the moonlanding side instead ?\r\n\r\nNot really unless we change database:)\r\n\r\nWe'll maybe try to find another workaround, but super low-prio given that it's the only dataset that has those dotted keys in the YAML metadata",
"Ok, should we close this PR then ?"
] | 2021-05-14T17:15:07Z
| 2021-05-17T13:44:17Z
| 2021-05-17T13:42:28Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2362.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2362",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2362.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2362"
}
|
Our metadata storage system does not support `.` inside keys. cc @Pierrci
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2362/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2362/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2760
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2760/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2760/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2760/events
|
https://github.com/huggingface/datasets/issues/2760
| 961,372,667
|
MDU6SXNzdWU5NjEzNzI2Njc=
| 2,760
|
Add Nuswide dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/19774925?v=4",
"events_url": "https://api.github.com/users/shivangibithel/events{/privacy}",
"followers_url": "https://api.github.com/users/shivangibithel/followers",
"following_url": "https://api.github.com/users/shivangibithel/following{/other_user}",
"gists_url": "https://api.github.com/users/shivangibithel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shivangibithel",
"id": 19774925,
"login": "shivangibithel",
"node_id": "MDQ6VXNlcjE5Nzc0OTI1",
"organizations_url": "https://api.github.com/users/shivangibithel/orgs",
"received_events_url": "https://api.github.com/users/shivangibithel/received_events",
"repos_url": "https://api.github.com/users/shivangibithel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shivangibithel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shivangibithel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shivangibithel"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] |
open
| false
| null |
[] | null |
[] | 2021-08-05T03:00:41Z
| 2021-12-08T12:06:23Z
| null |
NONE
| null | null | null |
## Adding a Dataset
- **Name:** *NUSWIDE*
- **Description:** *[A Real-World Web Image Dataset from National University of Singapore](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/NUS-WIDE.html)*
- **Paper:** *[here](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/nuswide-civr2009.pdf)*
- **Data:** *[here](https://github.com/wenting-zhao/nuswide)*
- **Motivation:** *This dataset is a benchmark in the Text Retrieval task.*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2760/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2760/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/432
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/432/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/432/comments
|
https://api.github.com/repos/huggingface/datasets/issues/432/events
|
https://github.com/huggingface/datasets/pull/432
| 665,234,340
|
MDExOlB1bGxSZXF1ZXN0NDU2MzQxNDk3
| 432
|
Fix handling of config files while loading datasets from multiple processes
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/99543?v=4",
"events_url": "https://api.github.com/users/orsharir/events{/privacy}",
"followers_url": "https://api.github.com/users/orsharir/followers",
"following_url": "https://api.github.com/users/orsharir/following{/other_user}",
"gists_url": "https://api.github.com/users/orsharir/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/orsharir",
"id": 99543,
"login": "orsharir",
"node_id": "MDQ6VXNlcjk5NTQz",
"organizations_url": "https://api.github.com/users/orsharir/orgs",
"received_events_url": "https://api.github.com/users/orsharir/received_events",
"repos_url": "https://api.github.com/users/orsharir/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/orsharir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orsharir/subscriptions",
"type": "User",
"url": "https://api.github.com/users/orsharir"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Ok for this but I think we may want to use the general `filelock` method we are using at other places in the library instead of filecmp (in particular `filelock` take care of being an atomic operation which is safer for concurrent processes)",
"Ok I see.\r\nWhy not use filelock in this case then ?",
"I think we should 🙂",
"Thanks for approving my patch.\n\nI agree that if copying is needed then some locking mechanism should be put in place. But, I don't think a file should be needlessly copied without a check. So I guess the flow should be, lock => copy if needed => unlock, and add locks wherever else that file is being accessed.\n\nI'll also add that my personal experience with filelock on a different project hasn't been that great, and on some occasions a process somehow got through the lock -- I've never gotten to the bottom of that but it tainted my view of that module. Perhaps it's been fixed (or I just miss used it), but thought you should know to take steps to test it."
] | 2020-07-24T15:10:57Z
| 2020-08-01T17:11:42Z
| 2020-07-30T08:25:28Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/432.diff",
"html_url": "https://github.com/huggingface/datasets/pull/432",
"merged_at": "2020-07-30T08:25:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/432.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/432"
}
|
When loading shards in several processes, each process, upon loading the dataset, overwrites dataset_infos.json in <package path>/datasets/<dataset name>/<hash>/dataset_infos.json. It does so every time, even when the target file already exists and is identical. Because multiple processes rewrite the same file in parallel, this creates a race condition: a process that tries to load the file often hits a JSON decoding exception because the file is only partially written.
This pull request partially addresses this by checking whether the files are already identical before copying the downloaded copy over the cached destination, as sketched below. There is still a race condition, but it is now less likely to occur if the library user takes some basic precautions, e.g., downloading all datasets to the cache before spawning multiple processes.
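A minimal sketch of the guard (hypothetical names; the actual patch lives in the library's file-handling code):
```python
import filecmp
import os
import shutil

def copy_if_changed(src: str, dst: str) -> None:
    # Skip the write entirely when an identical file is already cached, so
    # concurrent readers never observe a half-written dataset_infos.json.
    if not (os.path.exists(dst) and filecmp.cmp(src, dst, shallow=False)):
        shutil.copyfile(src, dst)
```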
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/432/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/432/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3084
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3084/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3084/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3084/events
|
https://github.com/huggingface/datasets/issues/3084
| 1,026,428,992
|
I_kwDODunzps49LhBA
| 3,084
|
VisibleDeprecationWarning when using `set_format("numpy")`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rocketknight1",
"id": 12866554,
"login": "Rocketknight1",
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rocketknight1"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rocketknight1",
"id": 12866554,
"login": "Rocketknight1",
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rocketknight1"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rocketknight1",
"id": 12866554,
"login": "Rocketknight1",
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rocketknight1"
},
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null |
[
"I just opened a PR and I verified that the code you provided doesn't show any deprecation warning :)"
] | 2021-10-14T13:53:01Z
| 2021-10-22T16:04:14Z
| 2021-10-22T16:04:14Z
|
MEMBER
| null | null | null |
Code to reproduce:
```python
from datasets import load_dataset
dataset = load_dataset("glue", "mnli")
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-cased')
def tokenize_function(dataset):
return tokenizer(dataset['premise'])
tokenized_datasets = dataset.map(tokenize_function, batched=True, remove_columns=dataset['train'].features)
tokenized_datasets.set_format("numpy")
tokenized_datasets['train'][5:8]
```
Outputs:
```
python3.9/site-packages/datasets/formatting/formatting.py:167: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
return np.array(array, copy=False, **self.np_array_kwargs)
```
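For reference, the warning comes from NumPy's handling of ragged nested sequences; a minimal sketch of the trigger and the documented fix (exact behavior depends on the NumPy version):
```python
import numpy as np

rows = [[1, 2], [1, 2, 3]]  # tokenized examples of different lengths
# np.array(rows) warns on older NumPy and raises on NumPy >= 1.24;
# an explicit dtype=object builds the ragged array without complaint.
safe = np.array(rows, dtype=object)
```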
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3084/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3084/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4485
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4485/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4485/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4485/events
|
https://github.com/huggingface/datasets/pull/4485
| 1,269,463,054
|
PR_kwDODunzps45kD7A
| 4,485
|
Fix cast to null
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-13T13:44:32Z
| 2022-06-14T13:43:54Z
| 2022-06-14T13:34:14Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4485.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4485",
"merged_at": "2022-06-14T13:34:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4485.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4485"
}
|
It currently fails with `ArrowNotImplementedError` instead of `TypeError` when one tries to cast an integer to the null type.
Because of this, type inference breaks when one replaces null values with integers in `map` (it first tries to cast to the previous type before inferring the new type).
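For context, a minimal sketch of the underlying Arrow behavior (assuming pyarrow; the exact error message may differ across versions):
```python
import pyarrow as pa

arr = pa.array([1, 2, 3])
# Arrow has no cast kernel from int64 to null, so this raises
# ArrowNotImplementedError rather than a Python TypeError.
arr.cast(pa.null())
```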
Fix https://github.com/huggingface/datasets/issues/4483
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4485/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4485/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4262
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4262/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4262/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4262/events
|
https://github.com/huggingface/datasets/pull/4262
| 1,222,130,749
|
PR_kwDODunzps43IOye
| 4,262
|
Add YAML tags to Dataset Card rotten tomatoes
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10004251?v=4",
"events_url": "https://api.github.com/users/mo6zes/events{/privacy}",
"followers_url": "https://api.github.com/users/mo6zes/followers",
"following_url": "https://api.github.com/users/mo6zes/following{/other_user}",
"gists_url": "https://api.github.com/users/mo6zes/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mo6zes",
"id": 10004251,
"login": "mo6zes",
"node_id": "MDQ6VXNlcjEwMDA0MjUx",
"organizations_url": "https://api.github.com/users/mo6zes/orgs",
"received_events_url": "https://api.github.com/users/mo6zes/received_events",
"repos_url": "https://api.github.com/users/mo6zes/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mo6zes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mo6zes/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mo6zes"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-01T11:59:08Z
| 2022-05-03T14:27:33Z
| 2022-05-03T14:20:35Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4262.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4262",
"merged_at": "2022-05-03T14:20:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4262.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4262"
}
|
The dataset card for the rotten tomatoes / MR movie review dataset had some missing YAML tags. Hopefully, this also improves the visibility of this dataset now that paperswithcode and huggingface link to each other.
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4262/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4262/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3578
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3578/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3578/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3578/events
|
https://github.com/huggingface/datasets/issues/3578
| 1,103,403,287
|
I_kwDODunzps5BxJkX
| 3,578
|
label information get lost after parquet serialization
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/56633664?v=4",
"events_url": "https://api.github.com/users/Tudyx/events{/privacy}",
"followers_url": "https://api.github.com/users/Tudyx/followers",
"following_url": "https://api.github.com/users/Tudyx/following{/other_user}",
"gists_url": "https://api.github.com/users/Tudyx/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Tudyx",
"id": 56633664,
"login": "Tudyx",
"node_id": "MDQ6VXNlcjU2NjMzNjY0",
"organizations_url": "https://api.github.com/users/Tudyx/orgs",
"received_events_url": "https://api.github.com/users/Tudyx/received_events",
"repos_url": "https://api.github.com/users/Tudyx/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Tudyx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tudyx/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Tudyx"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Hi ! We did a release of `datasets` today that may fix this issue. Can you try updating `datasets` and trying again ?\r\n\r\nEDIT: the issue is still there actually\r\n\r\nI think we can fix that by storing the Features in the parquet schema metadata, and then reload them when loading the parquet file",
"This info is stored in the Parquet schema metadata as of https://github.com/huggingface/datasets/pull/5516"
] | 2022-01-14T10:10:38Z
| 2023-07-25T15:44:53Z
| 2023-07-25T15:44:53Z
|
NONE
| null | null | null |
## Describe the bug
In the *dataset_info.json* file, information about the label gets lost after dataset serialization.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# normal save
dataset = load_dataset('glue', 'sst2', split='train')
dataset.save_to_disk("normal_save")
# save after parquet serialization
dataset.to_parquet("glue-sst2-train.parquet")
dataset = load_dataset("parquet", data_files='glue-sst2-train.parquet')
dataset.save_to_disk("save_after_parquet")
```
## Expected results
I expected the label information in the *dataset_info.json* file to be kept even after parquet serialization.
## Actual results
With the normal serialization I got
```json
"label": {
"num_classes": 2,
"names": [
"negative",
"positive"
],
"names_file": null,
"id": null,
"_type": "ClassLabel"
},
```
And after parquet serialization I got
```json
"label": {
"dtype": "int64",
"id": null,
"_type": "Value"
},
```
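The feature can be re-attached manually after reloading from parquet (a sketch, assuming the label names are known):
```python
from datasets import ClassLabel, load_dataset

ds = load_dataset("parquet", data_files="glue-sst2-train.parquet", split="train")
# Restore the ClassLabel feature that was dropped during serialization.
ds = ds.cast_column("label", ClassLabel(names=["negative", "positive"]))
```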
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.0
- Platform: ubuntu 20.04
- Python version: 3.8.10
- PyArrow version: 6.0.1
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3578/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3578/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6316
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6316/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6316/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6316/events
|
https://github.com/huggingface/datasets/pull/6316
| 1,951,819,869
|
PR_kwDODunzps5dQGpg
| 6,316
|
Fix loading Hub datasets with CSV metadata file
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008896 / 0.011353 (-0.002456) | 0.005811 / 0.011008 (-0.005197) | 0.108582 / 0.038508 (0.070074) | 0.096509 / 0.023109 (0.073399) | 0.481725 / 0.275898 (0.205827) | 0.534743 / 0.323480 (0.211263) | 0.005517 / 0.007986 (-0.002468) | 0.006479 / 0.004328 (0.002151) | 0.081313 / 0.004250 (0.077062) | 0.063578 / 0.037052 (0.026525) | 0.493977 / 0.258489 (0.235488) | 0.551897 / 0.293841 (0.258056) | 0.051835 / 0.128546 (-0.076711) | 0.014105 / 0.075646 (-0.061541) | 0.385866 / 0.419271 (-0.033405) | 0.069131 / 0.043533 (0.025598) | 0.484780 / 0.255139 (0.229641) | 0.493221 / 0.283200 (0.210021) | 0.039560 / 0.141683 (-0.102123) | 1.782331 / 1.452155 (0.330176) | 1.899193 / 1.492716 (0.406477) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.329978 / 0.018006 (0.311972) | 0.600839 / 0.000490 (0.600349) | 0.013187 / 0.000200 (0.012987) | 0.000499 / 0.000054 (0.000444) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031835 / 0.037411 (-0.005576) | 0.103740 / 0.014526 (0.089214) | 0.115875 / 0.176557 (-0.060681) | 0.189880 / 0.737135 (-0.547255) | 0.132614 / 0.296338 (-0.163725) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.596255 / 0.215209 (0.381046) | 5.967993 / 2.077655 (3.890339) | 2.612675 / 1.504120 (1.108555) | 2.251461 / 1.541195 (0.710266) | 2.308585 / 1.468490 
(0.840095) | 0.816516 / 4.584777 (-3.768261) | 5.241791 / 3.745712 (1.496079) | 4.680745 / 5.269862 (-0.589117) | 2.997370 / 4.565676 (-1.568307) | 0.098632 / 0.424275 (-0.325643) | 0.010912 / 0.007607 (0.003305) | 0.659092 / 0.226044 (0.433047) | 6.825562 / 2.268929 (4.556634) | 3.323844 / 55.444624 (-52.120780) | 2.796203 / 6.876477 (-4.080274) | 2.946994 / 2.142072 (0.804922) | 1.002814 / 4.805227 (-3.802413) | 0.202613 / 6.500664 (-6.298051) | 0.072011 / 0.075469 (-0.003459) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.613873 / 1.841788 (-0.227914) | 24.500990 / 8.074308 (16.426682) | 21.941599 / 10.191392 (11.750207) | 0.214450 / 0.680424 (-0.465974) | 0.031227 / 0.534201 (-0.502974) | 0.498297 / 0.579283 (-0.080986) | 0.597460 / 0.434364 (0.163096) | 0.558152 / 0.540337 (0.017815) | 0.789693 / 1.386936 (-0.597243) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011299 / 0.011353 (-0.000053) | 0.005103 / 0.011008 (-0.005905) | 0.083161 / 0.038508 (0.044653) | 0.094201 / 0.023109 (0.071092) | 0.560457 / 0.275898 (0.284559) | 0.590459 / 0.323480 (0.266980) | 0.007059 / 0.007986 (-0.000926) | 0.004418 / 0.004328 (0.000090) | 0.081343 / 0.004250 (0.077093) | 0.067069 / 0.037052 (0.030016) | 0.538137 / 0.258489 (0.279648) | 0.600416 / 0.293841 (0.306575) | 0.049046 / 0.128546 (-0.079500) | 0.014299 / 0.075646 (-0.061347) | 0.093631 / 0.419271 (-0.325641) | 0.062536 / 0.043533 (0.019003) | 0.557238 / 0.255139 (0.302099) | 0.571050 / 0.283200 (0.287850) | 0.035881 / 0.141683 (-0.105802) | 1.918487 / 1.452155 (0.466332) | 2.013979 / 1.492716 (0.521263) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.400995 / 0.018006 (0.382989) | 0.634898 / 0.000490 (0.634408) | 0.041809 / 0.000200 (0.041609) | 0.000279 / 0.000054 (0.000224) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034160 / 0.037411 (-0.003251) | 0.109996 / 0.014526 (0.095470) | 0.124335 / 0.176557 (-0.052222) | 0.188100 / 0.737135 (-0.549035) | 0.135897 / 0.296338 (-0.160442) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.639751 / 0.215209 (0.424542) | 6.403312 / 2.077655 (4.325657) | 3.146453 / 1.504120 (1.642333) | 2.840358 / 1.541195 (1.299164) | 2.908667 / 1.468490 (1.440177) | 0.818767 / 4.584777 (-3.766010) | 5.416939 / 3.745712 (1.671227) | 4.853498 / 5.269862 (-0.416364) | 3.023526 / 4.565676 (-1.542150) | 0.110850 / 0.424275 (-0.313425) | 0.013103 / 0.007607 (0.005496) | 0.799720 / 0.226044 (0.573676) | 7.837704 / 2.268929 (5.568775) | 4.016526 / 55.444624 (-51.428099) | 3.338965 / 6.876477 (-3.537512) | 3.715721 / 2.142072 (1.573648) | 1.088340 / 4.805227 (-3.716887) | 0.213610 / 6.500664 (-6.287054) | 0.079244 / 0.075469 (0.003775) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.833175 / 1.841788 (-0.008612) | 25.307218 / 8.074308 (17.232910) | 23.716075 / 10.191392 (13.524683) | 0.259114 / 0.680424 (-0.421310) | 0.035171 / 0.534201 (-0.499029) | 0.530128 / 0.579283 (-0.049155) | 0.651484 / 0.434364 (0.217120) | 0.589414 / 0.540337 (0.049077) | 0.862691 / 1.386936 (-0.524245) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"Me too, I thought the same... quite surprised... :open_mouth: ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006929 / 0.011353 (-0.004423) | 0.004345 / 0.011008 (-0.006663) | 0.085522 / 0.038508 (0.047014) | 0.083380 / 0.023109 (0.060271) | 0.310332 / 0.275898 (0.034434) | 0.350525 / 0.323480 (0.027045) | 0.004367 / 0.007986 (-0.003618) | 0.005503 / 0.004328 (0.001175) | 0.066311 / 0.004250 (0.062061) | 0.059545 / 0.037052 (0.022492) | 0.314090 / 0.258489 (0.055601) | 0.366661 / 0.293841 (0.072821) | 0.031581 / 0.128546 (-0.096965) | 0.008852 / 0.075646 (-0.066794) | 0.289312 / 0.419271 (-0.129960) | 0.052960 / 0.043533 (0.009427) | 0.308134 / 0.255139 (0.052995) | 0.330342 / 0.283200 (0.047142) | 0.026157 / 0.141683 (-0.115526) | 1.488463 / 1.452155 (0.036308) | 1.561441 / 1.492716 (0.068725) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.327735 / 0.018006 (0.309729) | 0.568162 / 0.000490 (0.567672) | 0.012097 / 0.000200 (0.011897) | 0.000438 / 0.000054 (0.000383) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029503 / 0.037411 (-0.007909) | 0.084327 / 0.014526 (0.069801) | 0.102065 / 0.176557 (-0.074492) | 0.157392 / 0.737135 (-0.579744) | 0.101428 / 0.296338 (-0.194910) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.386767 / 0.215209 (0.171558) | 3.870757 / 2.077655 (1.793102) | 1.870048 / 1.504120 (0.365928) | 1.678221 / 1.541195 (0.137026) | 1.799423 / 1.468490 
(0.330933) | 0.477718 / 4.584777 (-4.107059) | 3.618351 / 3.745712 (-0.127361) | 3.577921 / 5.269862 (-1.691941) | 2.146217 / 4.565676 (-2.419459) | 0.056290 / 0.424275 (-0.367985) | 0.007378 / 0.007607 (-0.000229) | 0.460678 / 0.226044 (0.234633) | 4.606243 / 2.268929 (2.337314) | 2.303460 / 55.444624 (-53.141164) | 1.982662 / 6.876477 (-4.893814) | 2.103891 / 2.142072 (-0.038182) | 0.570700 / 4.805227 (-4.234527) | 0.131747 / 6.500664 (-6.368918) | 0.060915 / 0.075469 (-0.014554) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.286364 / 1.841788 (-0.555424) | 20.106330 / 8.074308 (12.032022) | 14.780833 / 10.191392 (4.589441) | 0.164301 / 0.680424 (-0.516123) | 0.018730 / 0.534201 (-0.515471) | 0.398530 / 0.579283 (-0.180754) | 0.418084 / 0.434364 (-0.016280) | 0.468735 / 0.540337 (-0.071602) | 0.690122 / 1.386936 (-0.696814) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007262 / 0.011353 (-0.004091) | 0.004228 / 0.011008 (-0.006780) | 0.065866 / 0.038508 (0.027358) | 0.096151 / 0.023109 (0.073042) | 0.409352 / 0.275898 (0.133454) | 0.441234 / 0.323480 (0.117754) | 0.005946 / 0.007986 (-0.002039) | 0.003630 / 0.004328 (-0.000698) | 0.066271 / 0.004250 (0.062020) | 0.061567 / 0.037052 (0.024515) | 0.409097 / 0.258489 (0.150608) | 0.447675 / 0.293841 (0.153834) | 0.032804 / 0.128546 (-0.095743) | 0.008793 / 0.075646 (-0.066853) | 0.070790 / 0.419271 (-0.348482) | 0.048650 / 0.043533 (0.005117) | 0.411021 / 0.255139 (0.155882) | 0.421398 / 0.283200 (0.138198) | 0.025305 / 0.141683 (-0.116378) | 1.494826 / 1.452155 (0.042671) | 1.580441 / 1.492716 (0.087724) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.321871 / 0.018006 (0.303865) | 0.526471 / 0.000490 (0.525982) | 0.006913 / 0.000200 (0.006713) | 0.000108 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034889 / 0.037411 (-0.002522) | 0.096096 / 0.014526 (0.081570) | 0.111920 / 0.176557 (-0.064636) | 0.166103 / 0.737135 (-0.571032) | 0.111162 / 0.296338 (-0.185176) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428037 / 0.215209 (0.212828) | 4.294150 / 2.077655 (2.216495) | 2.270331 / 1.504120 (0.766211) | 2.108235 / 1.541195 (0.567041) | 2.242560 / 1.468490 (0.774070) | 0.489941 / 4.584777 (-4.094836) | 3.688111 / 3.745712 (-0.057601) | 3.450180 / 5.269862 (-1.819681) | 2.175106 / 4.565676 (-2.390570) | 0.057657 / 0.424275 (-0.366619) | 0.007478 / 0.007607 (-0.000130) | 0.505242 / 0.226044 (0.279198) | 5.047817 / 2.268929 (2.778888) | 2.724125 / 55.444624 (-52.720500) | 2.419765 / 6.876477 (-4.456711) | 2.723231 / 2.142072 (0.581159) | 0.602382 / 4.805227 (-4.202846) | 0.132362 / 6.500664 (-6.368302) | 0.060600 / 0.075469 (-0.014869) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.363356 / 1.841788 (-0.478431) | 21.446474 / 8.074308 (13.372165) | 15.074732 / 10.191392 (4.883340) | 0.191837 / 0.680424 (-0.488587) | 0.020565 / 0.534201 (-0.513636) | 0.396692 / 0.579283 (-0.182591) | 0.432390 / 0.434364 (-0.001974) | 0.491747 / 0.540337 (-0.048591) | 0.699203 / 1.386936 (-0.687733) |\n\n</details>\n</details>\n\n\n"
] | 2023-10-19T10:21:34Z
| 2023-10-20T06:23:21Z
| 2023-10-20T06:14:09Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6316.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6316",
"merged_at": "2023-10-20T06:14:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6316.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6316"
}
|
Currently, the reading of the metadata file infers the file extension (`.jsonl` or `.csv`) from the passed filename. However, files downloaded from the Hub don't have a file extension. For example:
- the original file: `hf://datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5916a4-16977085077831/metadata.jsonl`
- corresponds to the downloaded path: `/tmp/pytest-of-username/pytest-46/cache/datasets/downloads/9f5374dbb470f711f6b89d66a5eec1f19cc96324b26bcbebe29138bda6cb20e6`, which does not have an extension
In the case where the metadata file does not have an extension, the reader assumes it is a JSONL file, hence the reported error when trying to read a CSV file as a JSONL one: `ArrowInvalid: JSON parse error: Invalid value. in row 0`
This behavior was introduced by:
- #4837
This PR extracts the metadata file extension from the original filename (instead of the downloaded one) and passes it as a parameter to the `read_metadata` function.
Fix #6315.
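For illustration, a minimal sketch of the idea behind the fix; the helper name `infer_metadata_format` and the JSONL fallback are assumptions for this sketch, not the actual implementation:
```python
import os

def infer_metadata_format(original_filename: str) -> str:
    # Use the *original* filename (e.g. "hf://datasets/user/repo/metadata.jsonl"),
    # not the extension-less downloaded cache path, to pick the reader.
    ext = os.path.splitext(original_filename)[1].lstrip(".").lower()
    return ext or "jsonl"  # previously, a missing extension was always read as JSON Lines

assert infer_metadata_format("hf://datasets/user/repo/metadata.csv") == "csv"
assert infer_metadata_format("/tmp/downloads/9f5374dbb470f7") == "jsonl"
```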
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6316/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6316/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2705
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2705/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2705/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2705/events
|
https://github.com/huggingface/datasets/issues/2705
| 950,488,583
|
MDU6SXNzdWU5NTA0ODg1ODM=
| 2,705
|
404 not found error on loading WIKIANN dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/39296659?v=4",
"events_url": "https://api.github.com/users/ronbutan/events{/privacy}",
"followers_url": "https://api.github.com/users/ronbutan/followers",
"following_url": "https://api.github.com/users/ronbutan/following{/other_user}",
"gists_url": "https://api.github.com/users/ronbutan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ronbutan",
"id": 39296659,
"login": "ronbutan",
"node_id": "MDQ6VXNlcjM5Mjk2NjU5",
"organizations_url": "https://api.github.com/users/ronbutan/orgs",
"received_events_url": "https://api.github.com/users/ronbutan/received_events",
"repos_url": "https://api.github.com/users/ronbutan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ronbutan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ronbutan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ronbutan"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Hi @ronbutan, thanks for reporting.\r\n\r\nYou are right: we have recently found that the link to the original PAN-X dataset (also called WikiANN), hosted at Dropbox, is no longer working.\r\n\r\nWe have opened an issue in the GitHub repository of the original dataset (afshinrahimi/mmner#4) and we have also contacted the author by email to ask if they are planning to fix this issue. See the details here: https://github.com/huggingface/datasets/issues/2691#issuecomment-885463027\r\n\r\nI close this issue because it is the same as in #2691. Feel free to subscribe to that other issue to be informed about any updates."
] | 2021-07-22T09:55:50Z
| 2021-07-23T08:07:32Z
| 2021-07-23T08:07:32Z
|
NONE
| null | null | null |
## Describe the bug
Unable to retrieve the WikiANN English dataset
## Steps to reproduce the bug
```python
from datasets import list_datasets, load_dataset, list_metrics, load_metric
WIKIANN = load_dataset("wikiann","en")
```
## Expected results
The Colab notebook should display a successful download status
## Actual results
FileNotFoundError: Couldn't find file at https://www.dropbox.com/s/12h3qqog6q4bjve/panx_dataset.tar?dl=1
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.10.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.11
- PyArrow version: 3.0.0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2705/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2705/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5729
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5729/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5729/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5729/events
|
https://github.com/huggingface/datasets/pull/5729
| 1,661,929,923
|
PR_kwDODunzps5N_pvI
| 5,729
|
Fix nondeterministic sharded data split order
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The error in the CI was unrelated to this PR. I have merged main branch once that has been fixed.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006954 / 0.011353 (-0.004399) | 0.004947 / 0.011008 (-0.006061) | 0.086564 / 0.038508 (0.048056) | 0.031167 / 0.023109 (0.008058) | 0.262285 / 0.275898 (-0.013613) | 0.295753 / 0.323480 (-0.027727) | 0.005389 / 0.007986 (-0.002596) | 0.004130 / 0.004328 (-0.000198) | 0.065127 / 0.004250 (0.060877) | 0.042511 / 0.037052 (0.005458) | 0.263497 / 0.258489 (0.005008) | 0.307456 / 0.293841 (0.013615) | 0.031338 / 0.128546 (-0.097209) | 0.011023 / 0.075646 (-0.064623) | 0.295625 / 0.419271 (-0.123647) | 0.045813 / 0.043533 (0.002280) | 0.259369 / 0.255139 (0.004230) | 0.279325 / 0.283200 (-0.003875) | 0.099748 / 0.141683 (-0.041934) | 1.252572 / 1.452155 (-0.199583) | 1.347069 / 1.492716 (-0.145647) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249726 / 0.018006 (0.231720) | 0.556882 / 0.000490 (0.556392) | 0.008237 / 0.000200 (0.008037) | 0.000294 / 0.000054 (0.000239) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026879 / 0.037411 (-0.010533) | 0.105141 / 0.014526 (0.090615) | 0.115473 / 0.176557 (-0.061084) | 0.172989 / 0.737135 (-0.564147) | 0.120433 / 0.296338 (-0.175906) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400022 / 0.215209 (0.184812) | 3.965402 / 2.077655 (1.887747) | 1.805257 / 1.504120 (0.301138) | 1.610136 / 1.541195 (0.068941) | 1.661162 / 
1.468490 (0.192672) | 0.695311 / 4.584777 (-3.889466) | 3.753757 / 3.745712 (0.008045) | 2.060609 / 5.269862 (-3.209253) | 1.333251 / 4.565676 (-3.232426) | 0.085790 / 0.424275 (-0.338485) | 0.012256 / 0.007607 (0.004649) | 0.502133 / 0.226044 (0.276088) | 5.040979 / 2.268929 (2.772051) | 2.310919 / 55.444624 (-53.133705) | 2.010534 / 6.876477 (-4.865943) | 2.132961 / 2.142072 (-0.009111) | 0.837636 / 4.805227 (-3.967592) | 0.169838 / 6.500664 (-6.330826) | 0.065003 / 0.075469 (-0.010466) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.218674 / 1.841788 (-0.623114) | 14.696076 / 8.074308 (6.621768) | 14.559492 / 10.191392 (4.368100) | 0.167761 / 0.680424 (-0.512663) | 0.017747 / 0.534201 (-0.516454) | 0.421624 / 0.579283 (-0.157659) | 0.414086 / 0.434364 (-0.020278) | 0.501398 / 0.540337 (-0.038940) | 0.596099 / 1.386936 (-0.790837) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007230 / 0.011353 (-0.004123) | 0.005345 / 0.011008 (-0.005664) | 0.073739 / 0.038508 (0.035231) | 0.033440 / 0.023109 (0.010330) | 0.339790 / 0.275898 (0.063892) | 0.367857 / 0.323480 (0.044377) | 0.005927 / 0.007986 (-0.002058) | 0.004279 / 0.004328 (-0.000049) | 0.074247 / 0.004250 (0.069996) | 0.048971 / 0.037052 (0.011918) | 0.340235 / 0.258489 (0.081746) | 0.380521 / 0.293841 (0.086680) | 0.035322 / 0.128546 (-0.093225) | 0.012416 / 0.075646 (-0.063230) | 0.086060 / 0.419271 (-0.333212) | 0.049331 / 0.043533 (0.005799) | 0.342871 / 0.255139 (0.087732) | 0.355673 / 0.283200 (0.072473) | 0.111976 / 0.141683 (-0.029707) | 1.462530 / 1.452155 (0.010375) | 1.550336 / 1.492716 (0.057620) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266560 / 0.018006 (0.248554) | 0.550886 / 0.000490 (0.550396) | 0.001069 / 0.000200 (0.000869) | 0.000085 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028701 / 0.037411 (-0.008711) | 0.110535 / 0.014526 (0.096010) | 0.122846 / 0.176557 (-0.053711) | 0.176395 / 0.737135 (-0.560740) | 0.128653 / 0.296338 (-0.167685) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431693 / 0.215209 (0.216484) | 4.283691 / 2.077655 (2.206036) | 2.013967 / 1.504120 (0.509847) | 1.823914 / 1.541195 (0.282719) | 1.872055 / 1.468490 (0.403565) | 0.703318 / 4.584777 (-3.881459) | 3.783412 / 3.745712 (0.037699) | 2.950147 / 5.269862 (-2.319715) | 1.826159 / 4.565676 (-2.739518) | 0.086897 / 0.424275 (-0.337379) | 0.012512 / 0.007607 (0.004905) | 0.526730 / 0.226044 (0.300685) | 5.263871 / 2.268929 (2.994943) | 2.552163 / 55.444624 (-52.892462) | 2.276216 / 6.876477 (-4.600261) | 2.419934 / 2.142072 (0.277862) | 0.848235 / 4.805227 (-3.956993) | 0.170405 / 6.500664 (-6.330259) | 0.064979 / 0.075469 (-0.010491) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.276780 / 1.841788 (-0.565008) | 15.100829 / 8.074308 (7.026521) | 15.117531 / 10.191392 (4.926139) | 0.147129 / 0.680424 (-0.533295) | 0.017806 / 0.534201 (-0.516395) | 0.422975 / 0.579283 (-0.156308) | 0.430286 / 0.434364 (-0.004078) | 0.501405 / 0.540337 (-0.038932) | 0.596810 / 1.386936 (-0.790126) |\n\n</details>\n</details>\n\n\n"
] | 2023-04-11T07:34:20Z
| 2023-04-26T15:12:25Z
| 2023-04-26T15:05:12Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5729.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5729",
"merged_at": "2023-04-26T15:05:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5729.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5729"
}
|
This PR makes the order of the split names deterministic. Before, it was nondeterministic because we were iterating over the elements of a `set`.
Fix #5728.
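For illustration, a minimal sketch of the class of fix described here (the variable names are hypothetical):
```python
split_names = {"train", "validation", "test"}

# Nondeterministic: Python gives no ordering guarantee when iterating a set,
# so the resulting split order could differ between runs.
# for split in split_names: ...

# Deterministic: fix an explicit order before iterating.
for split in sorted(split_names):
    print(split)
```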
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5729/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5729/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4945
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4945/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4945/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4945/events
|
https://github.com/huggingface/datasets/issues/4945
| 1,364,691,096
|
I_kwDODunzps5RV4iY
| 4,945
|
Push to hub can push splits that do not respect the regex
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LysandreJik",
"id": 30755778,
"login": "LysandreJik",
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LysandreJik"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[] | 2022-09-07T13:45:17Z
| 2022-09-13T10:16:35Z
| 2022-09-13T10:16:35Z
|
MEMBER
| null | null | null |
## Describe the bug
The `push_to_hub` method can push splits that do not respect the regex check that is used for downloads. As a result, splits may be pushed but can never be re-used, which can be painful if the split was produced by runtime preprocessing.
## Steps to reproduce the bug
```python
>>> from datasets import Dataset, DatasetDict, load_dataset
>>> d = Dataset.from_dict({'x': [1,2,3], 'y': [1,2,3]})
>>> di = DatasetDict()
>>> di['identifier-with-column'] = d
>>> di.push_to_hub('open-source-metrics/test')
Pushing split identifier-with-column to the Hub.
Pushing dataset shards to the dataset hub: 100%|██████████| 1/1 [00:04<00:00, 4.40s/it]
```
Loading it afterwards:
```python
>>> load_dataset('open-source-metrics/test')
Downloading: 100%|██████████| 610/610 [00:00<00:00, 432kB/s]
Using custom data configuration open-source-metrics--test-28b63ec7cde80488
Downloading and preparing dataset None/None (download: 950 bytes, generated: 48 bytes, post-processed: Unknown size, total: 998 bytes) to /home/lysandre/.cache/huggingface/datasets/open-source-metrics___parquet/open-source-metrics--test-28b63ec7cde80488/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]
Downloading data: 100%|██████████| 950/950 [00:00<00:00, 1.01MB/s]
Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.48s/it]
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 2291.97it/s]
Traceback (most recent call last):
File "/home/lysandre/.pyenv/versions/3.10.6/lib/python3.10/code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/load.py", line 1746, in load_dataset
builder_instance.download_and_prepare(
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/builder.py", line 771, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 48, in _split_generators
splits.append(datasets.SplitGenerator(name=split_name, gen_kwargs={"files": files}))
File "<string>", line 5, in __init__
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/splits.py", line 599, in __post_init__
NamedSplit(self.name) # check that it's a valid split name
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/splits.py", line 346, in __init__
raise ValueError(f"Split name should match '{_split_re}' but got '{split_name}'.")
ValueError: Split name should match '^\w+(\.\w+)*$' but got 'identifier-with-column'.
```
## Expected results
I would expect `push_to_hub` to stop me in my tracks if I try to upload a split that will not work afterwards.
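For illustration, a hedged sketch of the kind of client-side check being asked for (`check_split_name` is a hypothetical helper, not the library's API); it reuses the regex from the traceback above:
```python
import re

_SPLIT_RE = r"^\w+(\.\w+)*$"  # the pattern enforced on the download side

def check_split_name(name: str) -> None:
    if re.match(_SPLIT_RE, name) is None:
        raise ValueError(f"Split name should match '{_SPLIT_RE}' but got '{name}'.")

check_split_name("train")                   # passes
check_split_name("identifier-with-column")  # raises before anything is uploaded
```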
## Actual results
See above
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.15.64-1-lts-x86_64-with-glibc2.36
- Python version: 3.10.6
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4945/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4945/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/682
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/682/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/682/comments
|
https://api.github.com/repos/huggingface/datasets/issues/682/events
|
https://github.com/huggingface/datasets/pull/682
| 710,325,399
|
MDExOlB1bGxSZXF1ZXN0NDk0MTkzMzEw
| 682
|
Update navbar chapter titles color
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-09-28T14:35:17Z
| 2020-09-28T17:30:13Z
| 2020-09-28T17:30:12Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/682.diff",
"html_url": "https://github.com/huggingface/datasets/pull/682",
"merged_at": "2020-09-28T17:30:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/682.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/682"
}
|
This is for consistency with the color change that was done in `transformers` at https://github.com/huggingface/transformers/pull/7423.
It makes the background color of the chapter titles in the docs navbar darker, to differentiate them from the inner sections.
See the changes [here](https://691-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html).
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/682/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/682/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6054
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6054/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6054/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6054/events
|
https://github.com/huggingface/datasets/issues/6054
| 1,813,271,304
|
I_kwDODunzps5sFFMI
| 6,054
|
Multi-processed `Dataset.map` slows down a lot when `import torch`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47121592?v=4",
"events_url": "https://api.github.com/users/ShinoharaHare/events{/privacy}",
"followers_url": "https://api.github.com/users/ShinoharaHare/followers",
"following_url": "https://api.github.com/users/ShinoharaHare/following{/other_user}",
"gists_url": "https://api.github.com/users/ShinoharaHare/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ShinoharaHare",
"id": 47121592,
"login": "ShinoharaHare",
"node_id": "MDQ6VXNlcjQ3MTIxNTky",
"organizations_url": "https://api.github.com/users/ShinoharaHare/orgs",
"received_events_url": "https://api.github.com/users/ShinoharaHare/received_events",
"repos_url": "https://api.github.com/users/ShinoharaHare/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ShinoharaHare/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShinoharaHare/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ShinoharaHare"
}
|
[
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] |
closed
| false
| null |
[] | null |
[
"A duplicate of https://github.com/huggingface/datasets/issues/5929"
] | 2023-07-20T06:36:14Z
| 2023-07-21T15:19:37Z
| 2023-07-21T15:19:37Z
|
NONE
| null | null | null |
### Describe the bug
When using `Dataset.map` with `num_proc > 1`, the throughput drops a lot if I add `import torch` at the start of the script, even though I don't use it.
I'm not sure whether this is specific to `torch` or whether any other "large" package causes the same result; `import lightning` also slows it down, by the way.
Below are the progress bars of `Dataset.map`; the only difference between the two runs is the presence of `import torch`, yet the speed differs by a factor of 6-7.
- without `import torch` 
- with `import torch` 
### Steps to reproduce the bug
Below is the code I used, but I don't think the dataset and the mapping function have much to do with the phenomenon.
```python3
from datasets import load_from_disk, disable_caching
from transformers import AutoTokenizer

# import torch
# import lightning


def rearrange_datapoints(
    batch,
    tokenizer,
    sequence_length,
):
    datapoints = []

    input_ids = []
    for x in batch['input_ids']:
        input_ids += x

        while len(input_ids) >= sequence_length:
            datapoint = input_ids[:sequence_length]
            datapoints.append(datapoint)
            input_ids[:sequence_length] = []

    if input_ids:
        paddings = [-1] * (sequence_length - len(input_ids))
        datapoint = paddings + input_ids if tokenizer.padding_side == 'left' else input_ids + paddings
        datapoints.append(datapoint)

    batch['input_ids'] = datapoints
    return batch


if __name__ == '__main__':
    disable_caching()

    tokenizer = AutoTokenizer.from_pretrained('...', use_fast=False)
    dataset = load_from_disk('...')

    dataset = dataset.map(
        rearrange_datapoints,
        fn_kwargs=dict(
            tokenizer=tokenizer,
            sequence_length=2048,
        ),
        batched=True,
        num_proc=8,
    )
```
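Not from the original report, but a quick hedged sketch one could use to check whether the slowdown correlates with the cost of importing `torch` itself (assumes `torch` is installed):
```python
import time

t0 = time.perf_counter()
import torch  # noqa: F401  -- the import under suspicion
t1 = time.perf_counter()
print(f"`import torch` took {t1 - t0:.2f}s")
```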
### Expected behavior
The speed of the multi-processed `Dataset.map` should be the same with and without `import torch`.
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6054/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6054/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5805
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5805/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5805/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5805/events
|
https://github.com/huggingface/datasets/issues/5805
| 1,688,558,577
|
I_kwDODunzps5kpVvx
| 5,805
|
Improve `Create a dataset` tutorial
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
|
[
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] |
open
| false
| null |
[] | null |
[
"I can work on this. The link to the tutorial seems to be broken though @polinaeterna. ",
"@isunitha98selvan would be great, thank you! which link are you talking about? I think it should work: https://huggingface.co/docs/datasets/create_dataset"
] | 2023-04-28T13:26:22Z
| 2023-06-23T14:58:44Z
| null |
CONTRIBUTOR
| null | null | null |
Our [tutorial on how to create a dataset](https://huggingface.co/docs/datasets/create_dataset) is a bit misleading.
1. The **Folder-based builders** section says that we have two folder-based builders as standard builders, but we also have similar builders (which create a dataset from a directory with data in the required format) for `csv`, `json/jsonl`, `parquet` and `txt` files (a short example is shown below). These loaders are documented in a separate [guide for loading](https://huggingface.co/docs/datasets/loading#local-and-remote-files), but they are worth mentioning briefly in this introductory tutorial, both because they are more common and for consistency. It would also be helpful to link to the full guide.
2. The **From local files** section lists methods for creating a dataset from in-memory data, which are also described in the [loading guide](https://huggingface.co/docs/datasets/loading#inmemory-data).
Maybe we should actually rethink and restructure this tutorial.
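For context, this is the kind of format-based loading referenced in point 1 (standard `datasets` usage; the file path is a placeholder):
```python
from datasets import load_dataset

# Loads a dataset directly from local CSV files, analogous to the
# folder-based builders already covered in the tutorial.
dataset = load_dataset("csv", data_files={"train": "path/to/train.csv"})
```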
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5805/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5805/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6496
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6496/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6496/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6496/events
|
https://github.com/huggingface/datasets/issues/6496
| 2,041,589,386
|
I_kwDODunzps55sC6K
| 6,496
|
Error when writing a dataset to HF Hub: A commit has happened since. Please refresh and try again.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35808396?v=4",
"events_url": "https://api.github.com/users/GeorgesLorre/events{/privacy}",
"followers_url": "https://api.github.com/users/GeorgesLorre/followers",
"following_url": "https://api.github.com/users/GeorgesLorre/following{/other_user}",
"gists_url": "https://api.github.com/users/GeorgesLorre/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/GeorgesLorre",
"id": 35808396,
"login": "GeorgesLorre",
"node_id": "MDQ6VXNlcjM1ODA4Mzk2",
"organizations_url": "https://api.github.com/users/GeorgesLorre/orgs",
"received_events_url": "https://api.github.com/users/GeorgesLorre/received_events",
"repos_url": "https://api.github.com/users/GeorgesLorre/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/GeorgesLorre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GeorgesLorre/subscriptions",
"type": "User",
"url": "https://api.github.com/users/GeorgesLorre"
}
|
[] |
open
| false
| null |
[] | null |
[
"I transferred from datasets-server, since the issue is more about `datasets` and the integration with `huggingface_hub`."
] | 2023-12-14T11:24:54Z
| 2023-12-14T12:22:21Z
| null |
NONE
| null | null | null |
**Describe the bug**
I get a `412 Client Error: Precondition Failed` when trying to write a dataset to the HF Hub.
```
huggingface_hub.utils._errors.HfHubHTTPError: 412 Client Error: Precondition Failed for url: https://huggingface.co/api/datasets/GLorr/test-dask/commit/main (Request ID: Root=1-657ae26f-3bd92bf861bb254b2cc0826c;50a09ab7-9347-406a-ba49-69f98abee9cc)
A commit has happened since. Please refresh and try again.
```
**Steps to reproduce the bug**
This is a minimal reproducer:
```python
import dask.dataframe as dd
import pandas as pd
import random
import os
import huggingface_hub
import datasets
huggingface_hub.login(token=os.getenv("HF_TOKEN"))
data = {"number": [random.randint(0,10) for _ in range(1000)]}
df = pd.DataFrame.from_dict(data)
dataframe = dd.from_pandas(df, npartitions=1)
dataframe = dataframe.repartition(npartitions=3)
schema = datasets.Features({"number": datasets.Value("int64")}).arrow_schema
repo_id = "GLorr/test-dask"
repo_path = f"hf://datasets/{repo_id}"
huggingface_hub.create_repo(repo_id=repo_id, repo_type="dataset", exist_ok=True)
dd.to_parquet(dataframe, path=f"{repo_path}/data", schema=schema)
```
**Expected behavior**
I would expect the write to the Hub to succeed without any problem.
**Environment info**
```
datasets==2.15.0
huggingface-hub==0.19.4
```
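For what it's worth, a hedged sketch of one way to sidestep the concurrent-commit race: stage the parquet shards locally first, then push them all in a single commit via `huggingface_hub` (real `huggingface_hub` API; the local paths are placeholders):
```python
from huggingface_hub import HfApi, CommitOperationAdd

api = HfApi()
operations = [
    CommitOperationAdd(
        path_in_repo=f"data/part.{i}.parquet",
        path_or_fileobj=f"/tmp/out/part.{i}.parquet",  # shards written locally by dask
    )
    for i in range(3)
]
# A single commit with all shards avoids the "A commit has happened since" error
# triggered when parallel partition writes each try to commit to the same branch.
api.create_commit(
    repo_id="GLorr/test-dask",
    repo_type="dataset",
    operations=operations,
    commit_message="Upload all parquet shards in one commit",
)
```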
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6496/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6496/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6045
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6045/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6045/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6045/events
|
https://github.com/huggingface/datasets/pull/6045
| 1,808,072,270
|
PR_kwDODunzps5Vr-r1
| 6,045
|
Check if column names match in Parquet loader only when config `features` are specified
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006557 / 0.011353 (-0.004796) | 0.004096 / 0.011008 (-0.006913) | 0.083577 / 0.038508 (0.045069) | 0.072092 / 0.023109 (0.048983) | 0.319192 / 0.275898 (0.043294) | 0.351845 / 0.323480 (0.028365) | 0.005475 / 0.007986 (-0.002511) | 0.003419 / 0.004328 (-0.000910) | 0.064562 / 0.004250 (0.060311) | 0.057930 / 0.037052 (0.020878) | 0.326085 / 0.258489 (0.067596) | 0.368316 / 0.293841 (0.074475) | 0.030502 / 0.128546 (-0.098044) | 0.008504 / 0.075646 (-0.067142) | 0.287217 / 0.419271 (-0.132054) | 0.052337 / 0.043533 (0.008804) | 0.319011 / 0.255139 (0.063872) | 0.352711 / 0.283200 (0.069511) | 0.023278 / 0.141683 (-0.118405) | 1.482578 / 1.452155 (0.030423) | 1.553391 / 1.492716 (0.060675) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199628 / 0.018006 (0.181622) | 0.464571 / 0.000490 (0.464081) | 0.003512 / 0.000200 (0.003312) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029109 / 0.037411 (-0.008302) | 0.082203 / 0.014526 (0.067677) | 0.096223 / 0.176557 (-0.080333) | 0.155598 / 0.737135 (-0.581537) | 0.097738 / 0.296338 (-0.198600) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.386135 / 0.215209 (0.170926) | 3.837157 / 2.077655 (1.759502) | 1.836869 / 1.504120 (0.332750) | 1.680592 / 1.541195 (0.139398) | 1.769456 / 1.468490 
(0.300966) | 0.493150 / 4.584777 (-4.091627) | 3.589797 / 3.745712 (-0.155915) | 3.330000 / 5.269862 (-1.939861) | 2.059856 / 4.565676 (-2.505821) | 0.057951 / 0.424275 (-0.366324) | 0.007340 / 0.007607 (-0.000267) | 0.463203 / 0.226044 (0.237159) | 4.631514 / 2.268929 (2.362585) | 2.329887 / 55.444624 (-53.114738) | 2.008815 / 6.876477 (-4.867662) | 2.199067 / 2.142072 (0.056995) | 0.591417 / 4.805227 (-4.213810) | 0.137154 / 6.500664 (-6.363510) | 0.061326 / 0.075469 (-0.014143) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.269676 / 1.841788 (-0.572111) | 19.375167 / 8.074308 (11.300858) | 13.945419 / 10.191392 (3.754027) | 0.146482 / 0.680424 (-0.533942) | 0.018257 / 0.534201 (-0.515944) | 0.391684 / 0.579283 (-0.187599) | 0.411454 / 0.434364 (-0.022910) | 0.466260 / 0.540337 (-0.074077) | 0.655571 / 1.386936 (-0.731365) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006619 / 0.011353 (-0.004734) | 0.004102 / 0.011008 (-0.006907) | 0.064848 / 0.038508 (0.026340) | 0.074822 / 0.023109 (0.051713) | 0.366535 / 0.275898 (0.090637) | 0.395873 / 0.323480 (0.072394) | 0.005315 / 0.007986 (-0.002670) | 0.003270 / 0.004328 (-0.001059) | 0.064829 / 0.004250 (0.060578) | 0.056094 / 0.037052 (0.019042) | 0.370355 / 0.258489 (0.111866) | 0.406837 / 0.293841 (0.112996) | 0.031634 / 0.128546 (-0.096912) | 0.008569 / 0.075646 (-0.067077) | 0.071126 / 0.419271 (-0.348145) | 0.048629 / 0.043533 (0.005096) | 0.365175 / 0.255139 (0.110036) | 0.385234 / 0.283200 (0.102034) | 0.023295 / 0.141683 (-0.118388) | 1.466907 / 1.452155 (0.014752) | 1.523118 / 1.492716 (0.030401) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227872 / 0.018006 (0.209866) | 0.451573 / 0.000490 (0.451083) | 0.000379 / 0.000200 (0.000179) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029496 / 0.037411 (-0.007915) | 0.086614 / 0.014526 (0.072088) | 0.098165 / 0.176557 (-0.078392) | 0.152218 / 0.737135 (-0.584917) | 0.101215 / 0.296338 (-0.195123) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407519 / 0.215209 (0.192310) | 4.074704 / 2.077655 (1.997049) | 2.113185 / 1.504120 (0.609065) | 1.947461 / 1.541195 (0.406266) | 1.998521 / 1.468490 (0.530031) | 0.487463 / 4.584777 (-4.097313) | 3.465423 / 3.745712 (-0.280289) | 3.376498 / 5.269862 (-1.893363) | 2.001533 / 4.565676 (-2.564144) | 0.057052 / 0.424275 (-0.367223) | 0.007325 / 0.007607 (-0.000283) | 0.485648 / 0.226044 (0.259604) | 4.860191 / 2.268929 (2.591262) | 2.550340 / 55.444624 (-52.894284) | 2.231136 / 6.876477 (-4.645341) | 2.262539 / 2.142072 (0.120467) | 0.591422 / 4.805227 (-4.213805) | 0.132875 / 6.500664 (-6.367789) | 0.062154 / 0.075469 (-0.013315) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.321834 / 1.841788 (-0.519954) | 19.734750 / 8.074308 (11.660442) | 14.681049 / 10.191392 (4.489657) | 0.148894 / 0.680424 (-0.531530) | 0.018414 / 0.534201 (-0.515787) | 0.393377 / 0.579283 (-0.185906) | 0.402795 / 0.434364 (-0.031569) | 0.478624 / 0.540337 (-0.061714) | 0.656767 / 1.386936 (-0.730169) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007012 / 0.011353 (-0.004341) | 0.004120 / 0.011008 (-0.006888) | 0.083720 / 0.038508 (0.045212) | 0.083105 / 0.023109 (0.059996) | 0.323803 / 0.275898 (0.047905) | 0.340345 / 0.323480 (0.016865) | 0.005872 / 0.007986 (-0.002113) | 0.003528 / 0.004328 (-0.000801) | 0.065185 / 0.004250 (0.060935) | 0.063092 / 0.037052 (0.026040) | 0.314900 / 0.258489 (0.056411) | 0.349251 / 0.293841 (0.055410) | 0.031612 / 0.128546 (-0.096934) | 0.008541 / 0.075646 (-0.067105) | 0.289865 / 0.419271 (-0.129407) | 0.055264 / 0.043533 (0.011731) | 0.309152 / 0.255139 (0.054013) | 0.332625 / 0.283200 (0.049425) | 0.024306 / 0.141683 (-0.117377) | 1.489191 / 1.452155 (0.037037) | 1.562447 / 1.492716 (0.069731) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236681 / 0.018006 (0.218675) | 0.567767 / 0.000490 (0.567277) | 0.003022 / 0.000200 (0.002822) | 0.000218 / 0.000054 (0.000164) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028698 / 0.037411 (-0.008714) | 0.081681 / 0.014526 (0.067155) | 0.099109 / 0.176557 (-0.077447) | 0.154381 / 0.737135 (-0.582754) | 0.098691 / 0.296338 (-0.197648) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397985 / 0.215209 (0.182776) | 3.962499 / 2.077655 (1.884844) | 1.936158 / 1.504120 (0.432038) | 1.762339 / 1.541195 (0.221144) | 1.837451 / 1.468490 
(0.368961) | 0.485655 / 4.584777 (-4.099122) | 3.538341 / 3.745712 (-0.207371) | 5.110095 / 5.269862 (-0.159767) | 3.066152 / 4.565676 (-1.499524) | 0.057505 / 0.424275 (-0.366770) | 0.007334 / 0.007607 (-0.000273) | 0.475622 / 0.226044 (0.249578) | 4.754091 / 2.268929 (2.485162) | 2.431379 / 55.444624 (-53.013246) | 2.106178 / 6.876477 (-4.770298) | 2.364305 / 2.142072 (0.222232) | 0.614038 / 4.805227 (-4.191190) | 0.148530 / 6.500664 (-6.352134) | 0.061033 / 0.075469 (-0.014436) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.242345 / 1.841788 (-0.599443) | 19.017266 / 8.074308 (10.942958) | 13.477782 / 10.191392 (3.286390) | 0.158513 / 0.680424 (-0.521911) | 0.018757 / 0.534201 (-0.515444) | 0.393773 / 0.579283 (-0.185510) | 0.416933 / 0.434364 (-0.017431) | 0.460012 / 0.540337 (-0.080326) | 0.637010 / 1.386936 (-0.749926) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006689 / 0.011353 (-0.004664) | 0.004168 / 0.011008 (-0.006840) | 0.065009 / 0.038508 (0.026501) | 0.073766 / 0.023109 (0.050657) | 0.369585 / 0.275898 (0.093687) | 0.407945 / 0.323480 (0.084465) | 0.005583 / 0.007986 (-0.002403) | 0.003494 / 0.004328 (-0.000835) | 0.065032 / 0.004250 (0.060782) | 0.057166 / 0.037052 (0.020114) | 0.370656 / 0.258489 (0.112166) | 0.428381 / 0.293841 (0.134540) | 0.031653 / 0.128546 (-0.096893) | 0.008731 / 0.075646 (-0.066915) | 0.071624 / 0.419271 (-0.347648) | 0.049364 / 0.043533 (0.005832) | 0.361824 / 0.255139 (0.106685) | 0.387615 / 0.283200 (0.104415) | 0.023228 / 0.141683 (-0.118455) | 1.476204 / 1.452155 (0.024049) | 1.553522 / 1.492716 (0.060806) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266955 / 0.018006 (0.248948) | 0.556566 / 0.000490 (0.556076) | 0.000399 / 0.000200 (0.000199) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033104 / 0.037411 (-0.004307) | 0.088067 / 0.014526 (0.073541) | 0.103333 / 0.176557 (-0.073224) | 0.157061 / 0.737135 (-0.580074) | 0.105007 / 0.296338 (-0.191331) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420826 / 0.215209 (0.205617) | 4.201656 / 2.077655 (2.124001) | 2.208336 / 1.504120 (0.704216) | 2.043780 / 1.541195 (0.502585) | 2.156215 / 1.468490 (0.687725) | 0.490485 / 4.584777 (-4.094292) | 3.611446 / 3.745712 (-0.134267) | 5.293140 / 5.269862 (0.023279) | 2.739778 / 4.565676 (-1.825899) | 0.058175 / 0.424275 (-0.366100) | 0.007633 / 0.007607 (0.000026) | 0.500773 / 0.226044 (0.274729) | 5.000900 / 2.268929 (2.731971) | 2.721200 / 55.444624 (-52.723424) | 2.349381 / 6.876477 (-4.527095) | 2.386261 / 2.142072 (0.244188) | 0.583174 / 4.805227 (-4.222053) | 0.134558 / 6.500664 (-6.366106) | 0.062157 / 0.075469 (-0.013312) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.351087 / 1.841788 (-0.490701) | 20.305703 / 8.074308 (12.231395) | 14.548518 / 10.191392 (4.357126) | 0.173720 / 0.680424 (-0.506704) | 0.018100 / 0.534201 (-0.516101) | 0.395187 / 0.579283 (-0.184097) | 0.414619 / 0.434364 (-0.019745) | 0.462515 / 0.540337 (-0.077823) | 0.617822 / 1.386936 (-0.769114) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006909 / 0.011353 (-0.004444) | 0.003954 / 0.011008 (-0.007054) | 0.084329 / 0.038508 (0.045821) | 0.074919 / 0.023109 (0.051809) | 0.319350 / 0.275898 (0.043451) | 0.347264 / 0.323480 (0.023785) | 0.005326 / 0.007986 (-0.002660) | 0.003323 / 0.004328 (-0.001006) | 0.064286 / 0.004250 (0.060036) | 0.054748 / 0.037052 (0.017696) | 0.324784 / 0.258489 (0.066295) | 0.361445 / 0.293841 (0.067605) | 0.031239 / 0.128546 (-0.097308) | 0.008361 / 0.075646 (-0.067286) | 0.287482 / 0.419271 (-0.131789) | 0.052093 / 0.043533 (0.008560) | 0.321454 / 0.255139 (0.066315) | 0.337999 / 0.283200 (0.054800) | 0.025807 / 0.141683 (-0.115876) | 1.501838 / 1.452155 (0.049683) | 1.574484 / 1.492716 (0.081767) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193220 / 0.018006 (0.175214) | 0.448105 / 0.000490 (0.447615) | 0.002949 / 0.000200 (0.002749) | 0.000071 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028517 / 0.037411 (-0.008894) | 0.087281 / 0.014526 (0.072755) | 0.098295 / 0.176557 (-0.078262) | 0.156972 / 0.737135 (-0.580163) | 0.101250 / 0.296338 (-0.195088) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.383734 / 0.215209 (0.168525) | 3.821293 / 2.077655 (1.743638) | 1.866487 / 1.504120 (0.362367) | 1.722195 / 1.541195 (0.181000) | 1.843762 / 1.468490 
(0.375272) | 0.484813 / 4.584777 (-4.099964) | 3.535381 / 3.745712 (-0.210331) | 5.502338 / 5.269862 (0.232477) | 3.256078 / 4.565676 (-1.309599) | 0.057312 / 0.424275 (-0.366963) | 0.007305 / 0.007607 (-0.000302) | 0.461523 / 0.226044 (0.235479) | 4.611828 / 2.268929 (2.342899) | 2.337180 / 55.444624 (-53.107445) | 2.040956 / 6.876477 (-4.835521) | 2.241233 / 2.142072 (0.099160) | 0.583727 / 4.805227 (-4.221500) | 0.132427 / 6.500664 (-6.368237) | 0.060306 / 0.075469 (-0.015163) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.282223 / 1.841788 (-0.559565) | 19.439745 / 8.074308 (11.365437) | 13.627657 / 10.191392 (3.436265) | 0.158975 / 0.680424 (-0.521449) | 0.018599 / 0.534201 (-0.515601) | 0.391136 / 0.579283 (-0.188147) | 0.410947 / 0.434364 (-0.023417) | 0.453889 / 0.540337 (-0.086448) | 0.620928 / 1.386936 (-0.766008) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006428 / 0.011353 (-0.004925) | 0.003980 / 0.011008 (-0.007028) | 0.065006 / 0.038508 (0.026498) | 0.076541 / 0.023109 (0.053432) | 0.358518 / 0.275898 (0.082620) | 0.394397 / 0.323480 (0.070917) | 0.005845 / 0.007986 (-0.002140) | 0.003258 / 0.004328 (-0.001071) | 0.064436 / 0.004250 (0.060186) | 0.056691 / 0.037052 (0.019639) | 0.367369 / 0.258489 (0.108880) | 0.420345 / 0.293841 (0.126504) | 0.031047 / 0.128546 (-0.097499) | 0.008430 / 0.075646 (-0.067216) | 0.071280 / 0.419271 (-0.347991) | 0.048872 / 0.043533 (0.005339) | 0.360073 / 0.255139 (0.104934) | 0.384150 / 0.283200 (0.100951) | 0.023189 / 0.141683 (-0.118494) | 1.500251 / 1.452155 (0.048096) | 1.545910 / 1.492716 (0.053194) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224861 / 0.018006 (0.206855) | 0.439901 / 0.000490 (0.439411) | 0.000372 / 0.000200 (0.000172) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029914 / 0.037411 (-0.007497) | 0.086916 / 0.014526 (0.072390) | 0.099527 / 0.176557 (-0.077029) | 0.153031 / 0.737135 (-0.584104) | 0.100008 / 0.296338 (-0.196330) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420305 / 0.215209 (0.205096) | 4.198224 / 2.077655 (2.120569) | 2.223807 / 1.504120 (0.719687) | 2.058475 / 1.541195 (0.517280) | 2.140405 / 1.468490 (0.671915) | 0.481224 / 4.584777 (-4.103553) | 3.593767 / 3.745712 (-0.151945) | 5.536710 / 5.269862 (0.266849) | 3.162048 / 4.565676 (-1.403629) | 0.056662 / 0.424275 (-0.367614) | 0.007301 / 0.007607 (-0.000306) | 0.507494 / 0.226044 (0.281450) | 5.047824 / 2.268929 (2.778896) | 2.715167 / 55.444624 (-52.729458) | 2.334916 / 6.876477 (-4.541560) | 2.406615 / 2.142072 (0.264543) | 0.572761 / 4.805227 (-4.232466) | 0.131248 / 6.500664 (-6.369416) | 0.062401 / 0.075469 (-0.013068) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.375896 / 1.841788 (-0.465892) | 19.836638 / 8.074308 (11.762329) | 14.246645 / 10.191392 (4.055253) | 0.164975 / 0.680424 (-0.515449) | 0.018293 / 0.534201 (-0.515908) | 0.394196 / 0.579283 (-0.185087) | 0.405895 / 0.434364 (-0.028469) | 0.459221 / 0.540337 (-0.081116) | 0.609898 / 1.386936 (-0.777038) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008463 / 0.011353 (-0.002890) | 0.004754 / 0.011008 (-0.006254) | 0.103574 / 0.038508 (0.065066) | 0.083541 / 0.023109 (0.060432) | 0.402498 / 0.275898 (0.126600) | 0.434944 / 0.323480 (0.111465) | 0.005766 / 0.007986 (-0.002219) | 0.003823 / 0.004328 (-0.000505) | 0.078433 / 0.004250 (0.074183) | 0.056948 / 0.037052 (0.019895) | 0.392539 / 0.258489 (0.134050) | 0.447226 / 0.293841 (0.153385) | 0.045845 / 0.128546 (-0.082701) | 0.014043 / 0.075646 (-0.061603) | 0.355768 / 0.419271 (-0.063503) | 0.065492 / 0.043533 (0.021960) | 0.408047 / 0.255139 (0.152908) | 0.468313 / 0.283200 (0.185113) | 0.033779 / 0.141683 (-0.107904) | 1.772198 / 1.452155 (0.320043) | 1.889127 / 1.492716 (0.396411) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207107 / 0.018006 (0.189101) | 0.533261 / 0.000490 (0.532771) | 0.000864 / 0.000200 (0.000664) | 0.000105 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032139 / 0.037411 (-0.005272) | 0.102002 / 0.014526 (0.087476) | 0.108780 / 0.176557 (-0.067777) | 0.202857 / 0.737135 (-0.534278) | 0.110378 / 0.296338 (-0.185960) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.582814 / 0.215209 (0.367605) | 5.870683 / 2.077655 (3.793028) | 2.510290 / 1.504120 (1.006171) | 2.146337 / 1.541195 (0.605142) | 2.239278 / 1.468490 
(0.770788) | 0.861205 / 4.584777 (-3.723572) | 5.177394 / 3.745712 (1.431682) | 8.550713 / 5.269862 (3.280852) | 4.867715 / 4.565676 (0.302038) | 0.096665 / 0.424275 (-0.327610) | 0.008702 / 0.007607 (0.001095) | 0.748908 / 0.226044 (0.522863) | 7.302815 / 2.268929 (5.033887) | 3.205045 / 55.444624 (-52.239580) | 2.743914 / 6.876477 (-4.132562) | 2.831240 / 2.142072 (0.689167) | 1.103912 / 4.805227 (-3.701315) | 0.246075 / 6.500664 (-6.254589) | 0.092092 / 0.075469 (0.016623) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.591331 / 1.841788 (-0.250457) | 23.085848 / 8.074308 (15.011540) | 22.887963 / 10.191392 (12.696571) | 0.212735 / 0.680424 (-0.467689) | 0.027400 / 0.534201 (-0.506801) | 0.493822 / 0.579283 (-0.085461) | 0.574485 / 0.434364 (0.140121) | 0.574873 / 0.540337 (0.034536) | 0.826178 / 1.386936 (-0.560758) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009155 / 0.011353 (-0.002198) | 0.004976 / 0.011008 (-0.006032) | 0.079308 / 0.038508 (0.040799) | 0.093959 / 0.023109 (0.070850) | 0.449110 / 0.275898 (0.173212) | 0.493356 / 0.323480 (0.169876) | 0.006317 / 0.007986 (-0.001669) | 0.004179 / 0.004328 (-0.000150) | 0.076991 / 0.004250 (0.072740) | 0.061977 / 0.037052 (0.024924) | 0.493823 / 0.258489 (0.235333) | 0.491609 / 0.293841 (0.197768) | 0.049552 / 0.128546 (-0.078994) | 0.015174 / 0.075646 (-0.060472) | 0.090431 / 0.419271 (-0.328841) | 0.061597 / 0.043533 (0.018064) | 0.467672 / 0.255139 (0.212533) | 0.490542 / 0.283200 (0.207342) | 0.035048 / 0.141683 (-0.106635) | 1.807939 / 1.452155 (0.355784) | 1.854859 / 1.492716 (0.362142) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236672 / 0.018006 (0.218666) | 0.542236 / 0.000490 (0.541746) | 0.016334 / 0.000200 (0.016134) | 0.000220 / 0.000054 (0.000165) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032051 / 0.037411 (-0.005360) | 0.115352 / 0.014526 (0.100826) | 0.125115 / 0.176557 (-0.051441) | 0.173670 / 0.737135 (-0.563466) | 0.117832 / 0.296338 (-0.178507) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.631513 / 0.215209 (0.416304) | 6.371688 / 2.077655 (4.294033) | 2.867240 / 1.504120 (1.363120) | 2.454907 / 1.541195 (0.913713) | 2.518860 / 1.468490 (1.050370) | 0.879973 / 4.584777 (-3.704804) | 5.170263 / 3.745712 (1.424551) | 7.986429 / 5.269862 (2.716567) | 4.828095 / 4.565676 (0.262418) | 0.097808 / 0.424275 (-0.326468) | 0.010541 / 0.007607 (0.002934) | 0.745601 / 0.226044 (0.519557) | 7.631683 / 2.268929 (5.362755) | 3.524255 / 55.444624 (-51.920369) | 2.866199 / 6.876477 (-4.010278) | 2.982483 / 2.142072 (0.840410) | 1.148957 / 4.805227 (-3.656270) | 0.217067 / 6.500664 (-6.283598) | 0.074357 / 0.075469 (-0.001112) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.714917 / 1.841788 (-0.126871) | 24.151348 / 8.074308 (16.077040) | 21.993604 / 10.191392 (11.802212) | 0.234883 / 0.680424 (-0.445541) | 0.028182 / 0.534201 (-0.506019) | 0.474050 / 0.579283 (-0.105233) | 0.557012 / 0.434364 (0.122648) | 0.537823 / 0.540337 (-0.002514) | 0.741488 / 1.386936 (-0.645448) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007640 / 0.011353 (-0.003713) | 0.004776 / 0.011008 (-0.006232) | 0.101582 / 0.038508 (0.063074) | 0.085113 / 0.023109 (0.062003) | 0.376000 / 0.275898 (0.100102) | 0.421117 / 0.323480 (0.097637) | 0.006095 / 0.007986 (-0.001891) | 0.003884 / 0.004328 (-0.000445) | 0.077263 / 0.004250 (0.073013) | 0.065262 / 0.037052 (0.028210) | 0.384041 / 0.258489 (0.125552) | 0.442229 / 0.293841 (0.148388) | 0.035706 / 0.128546 (-0.092840) | 0.009996 / 0.075646 (-0.065651) | 0.344925 / 0.419271 (-0.074346) | 0.062358 / 0.043533 (0.018825) | 0.371738 / 0.255139 (0.116599) | 0.407093 / 0.283200 (0.123894) | 0.026996 / 0.141683 (-0.114687) | 1.762705 / 1.452155 (0.310550) | 1.846777 / 1.492716 (0.354061) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219660 / 0.018006 (0.201653) | 0.521795 / 0.000490 (0.521305) | 0.005344 / 0.000200 (0.005145) | 0.000098 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036027 / 0.037411 (-0.001385) | 0.100309 / 0.014526 (0.085784) | 0.113041 / 0.176557 (-0.063515) | 0.190037 / 0.737135 (-0.547099) | 0.114552 / 0.296338 (-0.181786) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.466364 / 0.215209 (0.251154) | 4.638745 / 2.077655 (2.561090) | 2.317875 / 1.504120 (0.813755) | 2.099241 / 1.541195 (0.558046) | 2.149827 / 1.468490 
(0.681337) | 0.578913 / 4.584777 (-4.005864) | 4.281866 / 3.745712 (0.536154) | 3.778453 / 5.269862 (-1.491408) | 2.411704 / 4.565676 (-2.153972) | 0.068556 / 0.424275 (-0.355719) | 0.008779 / 0.007607 (0.001172) | 0.553165 / 0.226044 (0.327121) | 5.524520 / 2.268929 (3.255591) | 2.848444 / 55.444624 (-52.596181) | 2.468591 / 6.876477 (-4.407885) | 2.652117 / 2.142072 (0.510045) | 0.694124 / 4.805227 (-4.111103) | 0.157087 / 6.500664 (-6.343577) | 0.070706 / 0.075469 (-0.004763) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.492031 / 1.841788 (-0.349757) | 23.086596 / 8.074308 (15.012288) | 16.791351 / 10.191392 (6.599959) | 0.203932 / 0.680424 (-0.476492) | 0.021736 / 0.534201 (-0.512464) | 0.468344 / 0.579283 (-0.110939) | 0.493790 / 0.434364 (0.059426) | 0.563226 / 0.540337 (0.022889) | 0.780384 / 1.386936 (-0.606553) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007980 / 0.011353 (-0.003373) | 0.004696 / 0.011008 (-0.006312) | 0.076712 / 0.038508 (0.038204) | 0.095915 / 0.023109 (0.072805) | 0.433615 / 0.275898 (0.157717) | 0.482477 / 0.323480 (0.158997) | 0.007029 / 0.007986 (-0.000957) | 0.003842 / 0.004328 (-0.000487) | 0.076331 / 0.004250 (0.072081) | 0.069755 / 0.037052 (0.032703) | 0.458914 / 0.258489 (0.200425) | 0.486155 / 0.293841 (0.192314) | 0.036966 / 0.128546 (-0.091580) | 0.010082 / 0.075646 (-0.065564) | 0.083886 / 0.419271 (-0.335385) | 0.059329 / 0.043533 (0.015796) | 0.453782 / 0.255139 (0.198643) | 0.459508 / 0.283200 (0.176308) | 0.028400 / 0.141683 (-0.113283) | 1.796406 / 1.452155 (0.344251) | 1.881161 / 1.492716 (0.388445) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235053 / 0.018006 (0.217047) | 0.501907 / 0.000490 (0.501417) | 0.005211 / 0.000200 (0.005011) | 0.000101 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037752 / 0.037411 (0.000341) | 0.107299 / 0.014526 (0.092773) | 0.120307 / 0.176557 (-0.056250) | 0.187542 / 0.737135 (-0.549593) | 0.121805 / 0.296338 (-0.174533) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.490039 / 0.215209 (0.274830) | 4.919169 / 2.077655 (2.841515) | 2.520610 / 1.504120 (1.016490) | 2.324473 / 1.541195 (0.783279) | 2.421195 / 1.468490 (0.952705) | 0.576314 / 4.584777 (-4.008463) | 4.304752 / 3.745712 (0.559040) | 3.881151 / 5.269862 (-1.388710) | 2.409777 / 4.565676 (-2.155900) | 0.067400 / 0.424275 (-0.356875) | 0.009235 / 0.007607 (0.001627) | 0.586601 / 0.226044 (0.360556) | 5.850080 / 2.268929 (3.581152) | 3.064859 / 55.444624 (-52.379766) | 2.701734 / 6.876477 (-4.174743) | 2.926190 / 2.142072 (0.784117) | 0.698511 / 4.805227 (-4.106716) | 0.158273 / 6.500664 (-6.342392) | 0.074530 / 0.075469 (-0.000939) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.607113 / 1.841788 (-0.234674) | 23.499279 / 8.074308 (15.424971) | 17.049509 / 10.191392 (6.858117) | 0.175689 / 0.680424 (-0.504735) | 0.021762 / 0.534201 (-0.512439) | 0.491450 / 0.579283 (-0.087833) | 0.487557 / 0.434364 (0.053193) | 0.570104 / 0.540337 (0.029766) | 0.761527 / 1.386936 (-0.625409) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008725 / 0.011353 (-0.002628) | 0.005156 / 0.011008 (-0.005852) | 0.095147 / 0.038508 (0.056639) | 0.084916 / 0.023109 (0.061807) | 0.390769 / 0.275898 (0.114871) | 0.434716 / 0.323480 (0.111237) | 0.005982 / 0.007986 (-0.002004) | 0.004323 / 0.004328 (-0.000006) | 0.074712 / 0.004250 (0.070461) | 0.058889 / 0.037052 (0.021837) | 0.403997 / 0.258489 (0.145508) | 0.443361 / 0.293841 (0.149520) | 0.045908 / 0.128546 (-0.082639) | 0.013562 / 0.075646 (-0.062085) | 0.330683 / 0.419271 (-0.088588) | 0.064821 / 0.043533 (0.021288) | 0.407202 / 0.255139 (0.152063) | 0.409930 / 0.283200 (0.126730) | 0.032693 / 0.141683 (-0.108990) | 1.630181 / 1.452155 (0.178026) | 1.729680 / 1.492716 (0.236963) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261240 / 0.018006 (0.243234) | 0.581850 / 0.000490 (0.581360) | 0.002997 / 0.000200 (0.002797) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029279 / 0.037411 (-0.008133) | 0.085004 / 0.014526 (0.070478) | 0.127782 / 0.176557 (-0.048774) | 0.168852 / 0.737135 (-0.568283) | 0.098697 / 0.296338 (-0.197641) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.546417 / 0.215209 (0.331208) | 5.602186 / 2.077655 (3.524531) | 2.597049 / 1.504120 (1.092930) | 2.384880 / 1.541195 (0.843685) | 2.444516 / 1.468490 
(0.976026) | 0.796562 / 4.584777 (-3.788214) | 5.239440 / 3.745712 (1.493727) | 7.087768 / 5.269862 (1.817906) | 4.308476 / 4.565676 (-0.257200) | 0.091215 / 0.424275 (-0.333060) | 0.007942 / 0.007607 (0.000335) | 0.690059 / 0.226044 (0.464015) | 6.727809 / 2.268929 (4.458880) | 3.294522 / 55.444624 (-52.150103) | 2.604088 / 6.876477 (-4.272389) | 2.786970 / 2.142072 (0.644898) | 0.918817 / 4.805227 (-3.886410) | 0.191451 / 6.500664 (-6.309213) | 0.069557 / 0.075469 (-0.005912) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.486377 / 1.841788 (-0.355411) | 22.363470 / 8.074308 (14.289162) | 19.963684 / 10.191392 (9.772292) | 0.204161 / 0.680424 (-0.476263) | 0.034570 / 0.534201 (-0.499631) | 0.467937 / 0.579283 (-0.111346) | 0.564870 / 0.434364 (0.130506) | 0.511133 / 0.540337 (-0.029204) | 0.777084 / 1.386936 (-0.609852) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008612 / 0.011353 (-0.002741) | 0.004993 / 0.011008 (-0.006015) | 0.080769 / 0.038508 (0.042261) | 0.075923 / 0.023109 (0.052814) | 0.442271 / 0.275898 (0.166373) | 0.495625 / 0.323480 (0.172146) | 0.006467 / 0.007986 (-0.001518) | 0.004001 / 0.004328 (-0.000328) | 0.077309 / 0.004250 (0.073059) | 0.063466 / 0.037052 (0.026414) | 0.452460 / 0.258489 (0.193971) | 0.494063 / 0.293841 (0.200223) | 0.045751 / 0.128546 (-0.082796) | 0.013402 / 0.075646 (-0.062245) | 0.085760 / 0.419271 (-0.333511) | 0.056532 / 0.043533 (0.012999) | 0.440596 / 0.255139 (0.185457) | 0.459540 / 0.283200 (0.176340) | 0.035897 / 0.141683 (-0.105786) | 1.728264 / 1.452155 (0.276109) | 1.808142 / 1.492716 (0.315426) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.285094 / 0.018006 (0.267088) | 0.598440 / 0.000490 (0.597950) | 0.003476 / 0.000200 (0.003276) | 0.000103 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035106 / 0.037411 (-0.002305) | 0.091724 / 0.014526 (0.077198) | 0.122803 / 0.176557 (-0.053754) | 0.182114 / 0.737135 (-0.555022) | 0.116196 / 0.296338 (-0.180143) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.585420 / 0.215209 (0.370211) | 5.790370 / 2.077655 (3.712715) | 2.833247 / 1.504120 (1.329127) | 2.627949 / 1.541195 (1.086755) | 2.643050 / 1.468490 (1.174560) | 0.792036 / 4.584777 (-3.792741) | 5.145084 / 3.745712 (1.399372) | 4.423679 / 5.269862 (-0.846182) | 2.802778 / 4.565676 (-1.762898) | 0.093983 / 0.424275 (-0.330292) | 0.009260 / 0.007607 (0.001652) | 0.720302 / 0.226044 (0.494258) | 7.116959 / 2.268929 (4.848031) | 3.574782 / 55.444624 (-51.869843) | 3.009330 / 6.876477 (-3.867147) | 3.126488 / 2.142072 (0.984415) | 0.949144 / 4.805227 (-3.856083) | 0.195143 / 6.500664 (-6.305521) | 0.072490 / 0.075469 (-0.002979) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.626368 / 1.841788 (-0.215419) | 23.683021 / 8.074308 (15.608713) | 20.085297 / 10.191392 (9.893905) | 0.267057 / 0.680424 (-0.413367) | 0.028306 / 0.534201 (-0.505894) | 0.478448 / 0.579283 (-0.100835) | 0.597619 / 0.434364 (0.163256) | 0.544737 / 0.540337 (0.004399) | 0.761805 / 1.386936 (-0.625131) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009359 / 0.011353 (-0.001994) | 0.004848 / 0.011008 (-0.006160) | 0.099471 / 0.038508 (0.060963) | 0.079483 / 0.023109 (0.056373) | 0.375281 / 0.275898 (0.099383) | 0.415566 / 0.323480 (0.092086) | 0.006317 / 0.007986 (-0.001669) | 0.005145 / 0.004328 (0.000817) | 0.080345 / 0.004250 (0.076094) | 0.064540 / 0.037052 (0.027487) | 0.385897 / 0.258489 (0.127408) | 0.432576 / 0.293841 (0.138735) | 0.055109 / 0.128546 (-0.073437) | 0.014166 / 0.075646 (-0.061480) | 0.350870 / 0.419271 (-0.068402) | 0.087483 / 0.043533 (0.043950) | 0.402288 / 0.255139 (0.147149) | 0.391997 / 0.283200 (0.108798) | 0.045233 / 0.141683 (-0.096450) | 1.795002 / 1.452155 (0.342847) | 1.839063 / 1.492716 (0.346347) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220851 / 0.018006 (0.202845) | 0.513391 / 0.000490 (0.512901) | 0.003740 / 0.000200 (0.003540) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035287 / 0.037411 (-0.002124) | 0.090670 / 0.014526 (0.076144) | 0.115651 / 0.176557 (-0.060905) | 0.180469 / 0.737135 (-0.556667) | 0.106955 / 0.296338 (-0.189384) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.632381 / 0.215209 (0.417172) | 6.185151 / 2.077655 (4.107497) | 2.548263 / 1.504120 (1.044143) | 2.194931 / 1.541195 (0.653737) | 2.368685 / 1.468490 
(0.900194) | 0.956467 / 4.584777 (-3.628310) | 5.280904 / 3.745712 (1.535192) | 4.783057 / 5.269862 (-0.486805) | 3.218493 / 4.565676 (-1.347184) | 0.103545 / 0.424275 (-0.320730) | 0.008424 / 0.007607 (0.000817) | 0.736303 / 0.226044 (0.510259) | 7.354305 / 2.268929 (5.085376) | 3.280670 / 55.444624 (-52.163954) | 2.478628 / 6.876477 (-4.397848) | 2.623290 / 2.142072 (0.481217) | 1.033064 / 4.805227 (-3.772163) | 0.206496 / 6.500664 (-6.294168) | 0.066449 / 0.075469 (-0.009020) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.508756 / 1.841788 (-0.333031) | 21.866012 / 8.074308 (13.791704) | 21.887761 / 10.191392 (11.696369) | 0.231415 / 0.680424 (-0.449008) | 0.028917 / 0.534201 (-0.505284) | 0.468761 / 0.579283 (-0.110522) | 0.568236 / 0.434364 (0.133872) | 0.550156 / 0.540337 (0.009818) | 0.783197 / 1.386936 (-0.603739) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009413 / 0.011353 (-0.001939) | 0.004951 / 0.011008 (-0.006058) | 0.071402 / 0.038508 (0.032893) | 0.068455 / 0.023109 (0.045346) | 0.425216 / 0.275898 (0.149318) | 0.431928 / 0.323480 (0.108448) | 0.006477 / 0.007986 (-0.001509) | 0.003891 / 0.004328 (-0.000437) | 0.076898 / 0.004250 (0.072647) | 0.057522 / 0.037052 (0.020470) | 0.449585 / 0.258489 (0.191096) | 0.431356 / 0.293841 (0.137515) | 0.049728 / 0.128546 (-0.078818) | 0.014456 / 0.075646 (-0.061190) | 0.084618 / 0.419271 (-0.334653) | 0.064482 / 0.043533 (0.020949) | 0.456377 / 0.255139 (0.201238) | 0.433949 / 0.283200 (0.150749) | 0.036577 / 0.141683 (-0.105106) | 1.819742 / 1.452155 (0.367588) | 1.694691 / 1.492716 (0.201975) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224610 / 0.018006 (0.206604) | 0.494586 / 0.000490 (0.494096) | 0.004506 / 0.000200 (0.004307) | 0.000119 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033172 / 0.037411 (-0.004239) | 0.100562 / 0.014526 (0.086036) | 0.116499 / 0.176557 (-0.060058) | 0.153717 / 0.737135 (-0.583418) | 0.140047 / 0.296338 (-0.156291) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.635922 / 0.215209 (0.420713) | 6.359792 / 2.077655 (4.282137) | 2.689083 / 1.504120 (1.184963) | 2.330574 / 1.541195 (0.789380) | 2.583535 / 1.468490 (1.115044) | 0.902737 / 4.584777 (-3.682040) | 5.136586 / 3.745712 (1.390874) | 4.570824 / 5.269862 (-0.699037) | 3.029953 / 4.565676 (-1.535724) | 0.103961 / 0.424275 (-0.320314) | 0.007908 / 0.007607 (0.000301) | 0.723290 / 0.226044 (0.497246) | 7.678599 / 2.268929 (5.409671) | 3.342522 / 55.444624 (-52.102102) | 2.774659 / 6.876477 (-4.101817) | 2.966496 / 2.142072 (0.824423) | 1.025395 / 4.805227 (-3.779832) | 0.222246 / 6.500664 (-6.278418) | 0.072455 / 0.075469 (-0.003014) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.603637 / 1.841788 (-0.238151) | 21.387722 / 8.074308 (13.313414) | 22.855221 / 10.191392 (12.663829) | 0.222147 / 0.680424 (-0.458277) | 0.030763 / 0.534201 (-0.503438) | 0.472586 / 0.579283 (-0.106697) | 0.560161 / 0.434364 (0.125797) | 0.551941 / 0.540337 (0.011604) | 0.711254 / 1.386936 (-0.675682) |\n\n</details>\n</details>\n\n\n"
] | 2023-07-17T15:50:15Z
| 2023-07-24T14:45:56Z
| 2023-07-24T14:35:03Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6045.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6045",
"merged_at": "2023-07-24T14:35:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6045.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6045"
}
|
Fix #6039
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6045/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6045/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4699
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4699/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4699/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4699/events
|
https://github.com/huggingface/datasets/pull/4699
| 1,307,555,592
|
PR_kwDODunzps47jA6Z
| 4,699
|
Fix Authentication Error while streaming
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/37480967?v=4",
"events_url": "https://api.github.com/users/hkjeon13/events{/privacy}",
"followers_url": "https://api.github.com/users/hkjeon13/followers",
"following_url": "https://api.github.com/users/hkjeon13/following{/other_user}",
"gists_url": "https://api.github.com/users/hkjeon13/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hkjeon13",
"id": 37480967,
"login": "hkjeon13",
"node_id": "MDQ6VXNlcjM3NDgwOTY3",
"organizations_url": "https://api.github.com/users/hkjeon13/orgs",
"received_events_url": "https://api.github.com/users/hkjeon13/received_events",
"repos_url": "https://api.github.com/users/hkjeon13/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hkjeon13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hkjeon13/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hkjeon13"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi, thanks for working on this, but the fix for this has already been merged in https://github.com/huggingface/datasets/pull/4608."
] | 2022-07-18T08:03:41Z
| 2022-07-20T13:10:44Z
| 2022-07-20T13:10:43Z
|
NONE
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4699.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4699",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4699.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4699"
}
|
I fixed a few errors that occur while streaming a private dataset from the Hugging Face Hub.
```
from datasets import load_dataset
dataset = load_dataset(<repo_id>, use_auth_token=<private_token>, streaming=True)
for d in dataset['train']:
    print(d)
    break  # only check the first example
```
This code is an example of streaming a private dataset.
With `datasets==2.2.2` it works well, but with `datasets>2.2.2` it raises an error like this:
```
/usr/local/lib/python3.7/dist-packages/aiohttp/client_reqrep.py in raise_for_status(self)
1007 status=self.status,
1008 message=self.reason,
→ 1009 headers=self.headers,
1010 )
1011
ClientResponseError: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/.../train-00000-of-00001-168b451062c67c34.parquet')
```
(this example uses a dataset with a `parquet` extension)
It seems that the `xisfile` module in `download/streaming_download_manager.py` couldn't recognize files hosted on "https://huggingface.co/~",
so I added three lines.
With this change, the error no longer occurs (but the fix is ad hoc).
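For reference, a minimal sketch of the kind of check described above (the helper name `_is_hf_hub_url` is hypothetical, not the actual lines added in this PR):
```python
from urllib.parse import urlparse

def _is_hf_hub_url(url: str) -> bool:
    # Hypothetical helper: detect files served from the Hugging Face Hub,
    # which need the auth token attached when streaming private datasets.
    return urlparse(url).netloc == "huggingface.co"

print(_is_hf_hub_url("https://huggingface.co/datasets/foo/train.parquet"))  # True
```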
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4699/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4699/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/776
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/776/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/776/comments
|
https://api.github.com/repos/huggingface/datasets/issues/776/events
|
https://github.com/huggingface/datasets/pull/776
| 732,343,550
|
MDExOlB1bGxSZXF1ZXN0NTEyMjk5NzQx
| 776
|
Allow custom split names in text dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Awesome! This will make the behaviour much more intuitive for some non-standard code.\r\n\r\nThanks!"
] | 2020-10-29T14:04:06Z
| 2020-10-30T13:46:45Z
| 2020-10-30T13:23:52Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/776.diff",
"html_url": "https://github.com/huggingface/datasets/pull/776",
"merged_at": "2020-10-30T13:23:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/776.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/776"
}
|
The `text` dataset used to return only splits like train, test and validation. Other splits were ignored.
Now any split name is allowed.
I did the same for `json`, `pandas` and `csv`.
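For example, custom split names can now be passed through `data_files` (the file names below are placeholders):
```python
from datasets import load_dataset

# Any split name is accepted now, not just train/test/validation.
dataset = load_dataset(
    "text",
    data_files={"train": "train.txt", "challenge": "challenge.txt"},
)
print(dataset["challenge"])
```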
Fix #735
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/776/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/776/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3232
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3232/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3232/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3232/events
|
https://github.com/huggingface/datasets/issues/3232
| 1,047,361,573
|
I_kwDODunzps4-bXgl
| 3,232
|
The Xsum dataset seems not able to download.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/37999885?v=4",
"events_url": "https://api.github.com/users/FYYFU/events{/privacy}",
"followers_url": "https://api.github.com/users/FYYFU/followers",
"following_url": "https://api.github.com/users/FYYFU/following{/other_user}",
"gists_url": "https://api.github.com/users/FYYFU/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/FYYFU",
"id": 37999885,
"login": "FYYFU",
"node_id": "MDQ6VXNlcjM3OTk5ODg1",
"organizations_url": "https://api.github.com/users/FYYFU/orgs",
"received_events_url": "https://api.github.com/users/FYYFU/received_events",
"repos_url": "https://api.github.com/users/FYYFU/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/FYYFU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FYYFU/subscriptions",
"type": "User",
"url": "https://api.github.com/users/FYYFU"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Hi ! On my side the URL is working fine, could you try again ?",
"> Hi ! On my side the URL is working fine, could you try again ?\r\n\r\nI try it again and cannot download the file (might because of my location). Could you please provide another download link(such as google drive)? :>",
"I don't know other download links - this is the one provided by the authors of the dataset. Maybe you can try downloading from another location ? There are several solutions: a VPN, a remote VM or Google Colab for example.",
"> I don't know other download links - this is the one provided by the authors of the dataset. Maybe you can try downloading from another location ? There are several solutions: a VPN, a remote VM or Google Colab for example.\r\n\r\n:> ok. Thanks for your reply."
] | 2021-11-08T11:58:54Z
| 2021-11-09T15:07:16Z
| 2021-11-09T15:07:16Z
|
NONE
| null | null | null |
## Describe the bug
The download link for the Xsum dataset provided in the repository is [Link](http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz). It does not seem to be downloadable.
## Steps to reproduce the bug
```python
load_dataset('xsum')
```
## Actual results
``` python
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz
```
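A quick way to check whether the URL is reachable from your own network (a minimal sketch using `requests`):
```python
import requests

URL = "http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz"

# A HEAD request checks reachability without downloading the archive.
response = requests.head(URL, allow_redirects=True, timeout=10)
print(response.status_code)
```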
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3232/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3232/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/788
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/788/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/788/comments
|
https://api.github.com/repos/huggingface/datasets/issues/788/events
|
https://github.com/huggingface/datasets/issues/788
| 734,136,124
|
MDU6SXNzdWU3MzQxMzYxMjQ=
| 788
|
failed to reuse cache
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31768052?v=4",
"events_url": "https://api.github.com/users/WangHexie/events{/privacy}",
"followers_url": "https://api.github.com/users/WangHexie/followers",
"following_url": "https://api.github.com/users/WangHexie/following{/other_user}",
"gists_url": "https://api.github.com/users/WangHexie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/WangHexie",
"id": 31768052,
"login": "WangHexie",
"node_id": "MDQ6VXNlcjMxNzY4MDUy",
"organizations_url": "https://api.github.com/users/WangHexie/orgs",
"received_events_url": "https://api.github.com/users/WangHexie/received_events",
"repos_url": "https://api.github.com/users/WangHexie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/WangHexie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WangHexie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/WangHexie"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-11-02T02:42:36Z
| 2020-11-02T12:26:15Z
| 2020-11-02T12:26:15Z
|
NONE
| null | null | null |
I wrapped `load_dataset` in a method of a class and cached the data in a directory. But when I import the class and use the method, the data still has to be downloaded again. The message (Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to ******) logged to the terminal shows the path points to the correct cache directory, but the files still have to be downloaded again.
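A minimal sketch of what such a wrapper might look like (the class name, cache path, and dataset are hypothetical; the thread does not include the original code). Passing an explicit `cache_dir` to `load_dataset` ties the cache to a fixed location, which can help rule out the import path as the cause:
```python
# Hypothetical reconstruction of the wrapper described above.
from datasets import load_dataset

class CachedLoader:
    def __init__(self, cache_dir="/data/hf_cache"):  # hypothetical path
        self.cache_dir = cache_dir

    def get(self, name="cnn_dailymail", config="3.0.0"):
        # load_dataset only reuses existing files when the resolved
        # cache_dir matches the one used for the previous download.
        return load_dataset(name, config, cache_dir=self.cache_dir)
```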
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/788/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/788/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4603
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4603/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4603/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4603/events
|
https://github.com/huggingface/datasets/issues/4603
| 1,289,963,331
|
I_kwDODunzps5M40dD
| 4,603
|
CI fails recurrently and randomly on Windows
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[] | 2022-06-30T10:59:58Z
| 2022-06-30T13:22:25Z
| 2022-06-30T13:22:25Z
|
MEMBER
| null | null | null |
As reported by @lhoestq, the Windows CI is currently flaky: some dependencies like `aiobotocore`, `multiprocess` and `seqeval` sometimes fail to install.
In particular, it seems that building their wheels fails. Here is an example of the logs:
```
Building wheel for seqeval (setup.py): started
Running command 'C:\tools\miniconda3\envs\py37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"'; __file__='"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6'
No parent package detected, impossible to derive `name`
running bdist_wheel
running build
running build_py
package init file 'seqeval\__init__.py' not found (or not a regular file)
package init file 'seqeval\metrics\__init__.py' not found (or not a regular file)
C:\tools\miniconda3\envs\py37\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
setuptools.SetuptoolsDeprecationWarning,
installing to build\bdist.win-amd64\wheel
running install
running install_lib
warning: install_lib: 'build\lib' does not exist -- no Python modules to install
running install_egg_info
running egg_info
creating UNKNOWN.egg-info
writing UNKNOWN.egg-info\PKG-INFO
writing dependency_links to UNKNOWN.egg-info\dependency_links.txt
writing top-level names to UNKNOWN.egg-info\top_level.txt
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
reading manifest file 'UNKNOWN.egg-info\SOURCES.txt'
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
Copying UNKNOWN.egg-info to build\bdist.win-amd64\wheel\.\UNKNOWN-0.0.0-py3.7.egg-info
running install_scripts
creating build\bdist.win-amd64\wheel\UNKNOWN-0.0.0.dist-info\WHEEL
creating 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6\UNKNOWN-0.0.0-py3-none-any.whl' and adding 'build\bdist.win-amd64\wheel' to it
adding 'UNKNOWN-0.0.0.dist-info/METADATA'
adding 'UNKNOWN-0.0.0.dist-info/WHEEL'
adding 'UNKNOWN-0.0.0.dist-info/top_level.txt'
adding 'UNKNOWN-0.0.0.dist-info/RECORD'
removing build\bdist.win-amd64\wheel
Building wheel for seqeval (setup.py): finished with status 'done'
Created wheel for seqeval: filename=UNKNOWN-0.0.0-py3-none-any.whl size=963 sha256=67eb93a6e1ff4796c5882a13f9fa25bb0d3d103796e2525f9cecf3b2ef26d4b1
Stored in directory: c:\users\circleci\appdata\local\pip\cache\wheels\05\96\ee\7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7
WARNING: Built wheel for seqeval is invalid: Wheel has unexpected file name: expected 'seqeval', got 'UNKNOWN'
```
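A workaround sometimes applied for this kind of "UNKNOWN"-named wheel failure (an assumption, not the fix adopted in this issue) is to upgrade the build tooling before installing the affected packages, since stale pip/setuptools can fail to read the package metadata during the legacy `setup.py bdist_wheel` path:
```python
# Sketch of a CI pre-install step (hypothetical; not taken from the
# actual CI config). Upgrading pip/setuptools/wheel first often avoids
# the UNKNOWN-named wheel produced by the legacy build path.
import subprocess
import sys

def pip_install(*packages: str) -> None:
    # Use the current interpreter's pip to stay inside the CI env.
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "--upgrade", *packages]
    )

pip_install("pip", "setuptools", "wheel")
pip_install("aiobotocore", "multiprocess", "seqeval")
```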
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4603/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4603/timeline
| null |
completed
| false
|