url stringlengths 61 61 | repository_url stringclasses 1 value | labels_url stringlengths 75 75 | comments_url stringlengths 70 70 | events_url stringlengths 68 68 | html_url stringlengths 49 51 | id int64 758M 1.95B | node_id stringlengths 18 32 | number int64 1.2k 6.31k | title stringlengths 1 290 | user dict | labels listlengths 0 3 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0 4 | milestone dict | comments listlengths 0 30 | created_at timestamp[ns, tz=UTC] | updated_at timestamp[ns, tz=UTC] | closed_at timestamp[ns, tz=UTC] | author_association stringclasses 3 values | active_lock_reason float64 | draft float64 0 1 ⌀ | pull_request dict | body stringlengths 0 36.2k ⌀ | reactions dict | timeline_url stringlengths 70 70 | performed_via_github_app float64 | state_reason stringclasses 3 values | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3394 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3394/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3394/comments | https://api.github.com/repos/huggingface/datasets/issues/3394/events | https://github.com/huggingface/datasets/issues/3394 | 1,073,396,308 | I_kwDODunzps4_-rpU | 3,394 | Preserve all feature types when saving a dataset on the Hub with `push_to_hub` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | [
"According to this [comment in the forum](https://discuss.huggingface.co/t/save-datasetdict-to-huggingface-hub/12075/8?u=lhoestq), using `push_to_hub` on a dataset with `ClassLabel` can also make the feature simply disappear when it's reloaded !",
"Maybe we can also fix https://github.com/huggingface/datasets/iss... | 2021-12-07T14:08:30Z | 2021-12-21T17:00:09Z | 2021-12-21T17:00:09Z | CONTRIBUTOR | null | null | null | Currently, if one of the dataset features is of type `ClassLabel`, saving the dataset with `push_to_hub` and reloading the dataset with `load_dataset` will return the feature of type `Value`. To fix this, we should do something similar to `save_to_disk` (which correctly preserves the types) and not only push the parquet files in `push_to_hub`, but also the dataset `info` (stored in a JSON file). | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3394/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3394/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5298 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5298/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5298/comments | https://api.github.com/repos/huggingface/datasets/issues/5298/events | https://github.com/huggingface/datasets/issues/5298 | 1,464,681,871 | I_kwDODunzps5XTUWP | 5,298 | Bug in xopen with Windows pathnames | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 2022-11-25T15:21:32Z | 2022-11-29T08:21:25Z | 2022-11-29T08:21:25Z | MEMBER | null | null | null | Currently, `xopen` function has a bug with local Windows pathnames:
From its implementation:
```python
def xopen(file: str, mode="r", *args, **kwargs):
file = _as_posix(PurePath(file))
main_hop, *rest_hops = file.split("::")
if is_local_path(main_hop):
return open(file, mode, *args, **kwargs)
```
On a Windows machine, if we pass the argument:
```python
xopen("C:\\Users\\USERNAME\\filename.txt")
```
it returns
```python
open("C:/Users/USERNAME/filename.txt")
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5298/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5298/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2361 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2361/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2361/comments | https://api.github.com/repos/huggingface/datasets/issues/2361/events | https://github.com/huggingface/datasets/pull/2361 | 891,982,808 | MDExOlB1bGxSZXF1ZXN0NjQ0NzYzNTU4 | 2,361 | Preserve dtype for numpy/torch/tf/jax arrays | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq, \r\nIt turns out that pyarrow `ListArray` are not recognized as list-like when we get output from `numpy_to_pyarrow_listarray`. This might cause tests to fail. If possible can we convert that `ListArray` output to list inorder for tests to pass? Under the hood it'll maintain the dtype as that of numpy... | 2021-05-14T14:45:23Z | 2021-08-17T08:30:04Z | 2021-08-17T08:30:04Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2361.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2361",
"merged_at": "2021-08-17T08:30:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2361.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2361"
} | Fixes #625. This lets the user preserve the dtype of a numpy array when converting it to a pyarrow array; previously the dtype was lost in the numpy array -> list -> pyarrow array conversion. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2361/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2361/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2679 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2679/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2679/comments | https://api.github.com/repos/huggingface/datasets/issues/2679/events | https://github.com/huggingface/datasets/issues/2679 | 948,506,638 | MDU6SXNzdWU5NDg1MDY2Mzg= | 2,679 | Cannot load the blog_authorship_corpus due to codec errors | {
"avatar_url": "https://avatars.githubusercontent.com/u/38069449?v=4",
"events_url": "https://api.github.com/users/izaskr/events{/privacy}",
"followers_url": "https://api.github.com/users/izaskr/followers",
"following_url": "https://api.github.com/users/izaskr/following{/other_user}",
"gists_url": "https://api.github.com/users/izaskr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/izaskr",
"id": 38069449,
"login": "izaskr",
"node_id": "MDQ6VXNlcjM4MDY5NDQ5",
"organizations_url": "https://api.github.com/users/izaskr/orgs",
"received_events_url": "https://api.github.com/users/izaskr/received_events",
"repos_url": "https://api.github.com/users/izaskr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/izaskr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/izaskr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/izaskr"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Hi @izaskr, thanks for reporting.\r\n\r\nHowever the traceback you joined does not correspond to the codec error message: it is about other error `NonMatchingSplitsSizesError`. Maybe you missed some important part of your traceback...\r\n\r\nI'm going to have a look at the dataset anyway...",
"Hi @izaskr, thanks... | 2021-07-20T10:13:20Z | 2021-07-21T17:02:21Z | 2021-07-21T13:11:58Z | NONE | null | null | null | ## Describe the bug
A codec error is raised while loading the blog_authorship_corpus.
## Steps to reproduce the bug
```
from datasets import load_dataset
raw_datasets = load_dataset("blog_authorship_corpus")
```
## Expected results
Loading the dataset without errors.
## Actual results
An error similar to the one below was raised for (what seems like) every XML file.
/home/izaskr/.cache/huggingface/datasets/downloads/extracted/7cf52524f6517e168604b41c6719292e8f97abbe8f731e638b13423f4212359a/blogs/788358.male.24.Arts.Libra.xml cannot be loaded. Error message: 'utf-8' codec can't decode byte 0xe7 in position 7551: invalid continuation byte
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/load.py", line 856, in load_dataset
builder_instance.download_and_prepare(
File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/builder.py", line 583, in download_and_prepare
self._download_and_prepare(
File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/builder.py", line 671, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation', num_bytes=32553710, num_examples=28521, dataset_name='blog_authorship_corpus')}]
## Environment info
- `datasets` version: 1.9.0
- Platform: Linux-4.15.0-132-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyArrow version: 4.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2679/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2679/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5686 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5686/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5686/comments | https://api.github.com/repos/huggingface/datasets/issues/5686/events | https://github.com/huggingface/datasets/pull/5686 | 1,646,308,228 | PR_kwDODunzps5NMXdu | 5,686 | set dev version | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5686). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | 2023-03-29T18:24:13Z | 2023-03-29T18:33:49Z | 2023-03-29T18:24:22Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5686.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5686",
"merged_at": "2023-03-29T18:24:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5686.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5686"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5686/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5686/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1674 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1674/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1674/comments | https://api.github.com/repos/huggingface/datasets/issues/1674/events | https://github.com/huggingface/datasets/issues/1674 | 777,321,840 | MDU6SXNzdWU3NzczMjE4NDA= | 1,674 | dutch_social can't be loaded | {
"avatar_url": "https://avatars.githubusercontent.com/u/10134844?v=4",
"events_url": "https://api.github.com/users/koenvandenberge/events{/privacy}",
"followers_url": "https://api.github.com/users/koenvandenberge/followers",
"following_url": "https://api.github.com/users/koenvandenberge/following{/other_user}",
"gists_url": "https://api.github.com/users/koenvandenberge/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/koenvandenberge",
"id": 10134844,
"login": "koenvandenberge",
"node_id": "MDQ6VXNlcjEwMTM0ODQ0",
"organizations_url": "https://api.github.com/users/koenvandenberge/orgs",
"received_events_url": "https://api.github.com/users/koenvandenberge/received_events",
"repos_url": "https://api.github.com/users/koenvandenberge/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/koenvandenberge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/koenvandenberge/subscriptions",
"type": "User",
"url": "https://api.github.com/users/koenvandenberge"
} | [] | closed | false | null | [] | null | [
"exactly the same issue in some other datasets.\r\nDid you find any solution??\r\n",
"Hi @koenvandenberge and @alighofrani95!\r\nThe datasets you're experiencing issues with were most likely added recently to the `datasets` library, meaning they have not been released yet. They will be released with the v2 of the... | 2021-01-01T17:37:08Z | 2022-10-05T13:03:26Z | 2022-10-05T13:03:26Z | NONE | null | null | null | Hi all,
I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social).
However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links.
```
(base) Koens-MacBook-Pro:~ koenvandenberge$ python
Python 3.7.4 (default, Aug 13 2019, 15:17:50)
[Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
dataset = load_dataset(
'dutch_social')
>>> dataset = load_dataset(
... 'dutch_social')
Traceback (most recent call last):
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/dutch_social/dutch_social.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 278, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/dutch_social/dutch_social.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset
path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 282, in prepare_module
combined_path, github_file_path, file_path
FileNotFoundError: Couldn't find file locally at dutch_social/dutch_social.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/dutch_social/dutch_social.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/dutch_social/dutch_social.py
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1674/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1674/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3626 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3626/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3626/comments | https://api.github.com/repos/huggingface/datasets/issues/3626/events | https://github.com/huggingface/datasets/issues/3626 | 1,113,534,436 | I_kwDODunzps5CXy_k | 3,626 | The Pile cannot connect to host | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 2022-01-25T07:43:33Z | 2022-02-14T08:40:58Z | 2022-02-14T08:40:58Z | MEMBER | null | null | null | ## Describe the bug
The maintainers of the Pile had issues with their previous host server and have mirrored its content to another server.
The dataset URLs should be updated to point to the new server.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3626/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3626/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3280 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3280/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3280/comments | https://api.github.com/repos/huggingface/datasets/issues/3280/events | https://github.com/huggingface/datasets/pull/3280 | 1,054,766,828 | PR_kwDODunzps4ulgye | 3,280 | Fix bookcorpusopen RAM usage | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2021-11-16T11:27:52Z | 2021-11-17T15:53:28Z | 2021-11-16T13:34:30Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3280.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3280",
"merged_at": "2021-11-16T13:34:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3280.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3280"
} | Each document is a full book, so the default arrow writer batch size of 10,000 is too big, and it can fill up RAM quickly before flushing the first batch to disk. I changed its batch size to 256 so that it uses at most ~100MB of memory.
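For context, the usual way to lower that flush threshold in a generator-based builder is the `DEFAULT_WRITER_BATCH_SIZE` class attribute — a minimal sketch (the class skeleton below is illustrative, not the PR's actual diff):
```python
import datasets

class BookCorpusOpen(datasets.GeneratorBasedBuilder):
    # Flush to disk every 256 examples instead of the default 10,000,
    # since each example here is an entire book.
    DEFAULT_WRITER_BATCH_SIZE = 256
```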
Fix #3167. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3280/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3280/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6302 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6302/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6302/comments | https://api.github.com/repos/huggingface/datasets/issues/6302/events | https://github.com/huggingface/datasets/issues/6302 | 1,942,096,078 | I_kwDODunzps5zwgjO | 6,302 | ArrowWriter/ParquetWriter `write` method does not increase `_num_bytes` and hence datasets not sharding at `max_shard_size` | {
"avatar_url": "https://avatars.githubusercontent.com/u/2855550?v=4",
"events_url": "https://api.github.com/users/Rassibassi/events{/privacy}",
"followers_url": "https://api.github.com/users/Rassibassi/followers",
"following_url": "https://api.github.com/users/Rassibassi/following{/other_user}",
"gists_url": "https://api.github.com/users/Rassibassi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rassibassi",
"id": 2855550,
"login": "Rassibassi",
"node_id": "MDQ6VXNlcjI4NTU1NTA=",
"organizations_url": "https://api.github.com/users/Rassibassi/orgs",
"received_events_url": "https://api.github.com/users/Rassibassi/received_events",
"repos_url": "https://api.github.com/users/Rassibassi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rassibassi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rassibassi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rassibassi"
} | [] | closed | false | null | [] | null | [
"`writer._num_bytes` is updated every `writer_batch_size`-th call to the `write` method (default `writer_batch_size` is 1000 (examples)). You should be able to see the update by passing a smaller `writer_batch_size` to the `load_dataset_builder`.\r\n\r\nWe could improve this by supporting the string `writer_batch_s... | 2023-10-13T14:43:36Z | 2023-10-17T06:52:12Z | 2023-10-17T06:52:11Z | NONE | null | null | null | ### Describe the bug
The example from [1] does not work when limiting shards with `max_shard_size`.
Try the following example with low `max_shard_size`, such as:
```python
builder.download_and_prepare(output_dir, storage_options=storage_options, file_format="parquet", max_shard_size="10MB")
```
The reason for this is that, in line [2] `writer._num_bytes > max_shard_size` is never true, because the `write` method of `ArrowWriter` [3] does not increase `self._num_bytes`.
As a result, the respective Arrow/Parquet shards are only written to file based on the `writer_batch_size` or `config.DEFAULT_MAX_BATCH_SIZE`, but not based on `max_shard_size`.
[1] https://huggingface.co/docs/datasets/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage
[2] https://github.com/huggingface/datasets/blob/3e8d420808718c9a1453a2e7ee3484ca12c9c70d/src/datasets/builder.py#L1677
[3] https://github.com/huggingface/datasets/blob/3e8d420808718c9a1453a2e7ee3484ca12c9c70d/src/datasets/arrow_writer.py#L459
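(Sketch of the workaround mentioned in the discussion: with a smaller `writer_batch_size`, `_num_bytes` is refreshed more often, so the `max_shard_size` check has a chance to trigger. The dataset name and output directory below are placeholders.)
```python
import datasets

# Smaller writer_batch_size -> _num_bytes is updated more frequently.
builder = datasets.load_dataset_builder("some_dataset", writer_batch_size=100)
builder.download_and_prepare(
    "output_dir",            # placeholder local/remote path
    file_format="parquet",
    max_shard_size="10MB",
)
```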
### Steps to reproduce the bug
Get example from: https://huggingface.co/docs/datasets/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage
Call `builder.download_and_prepare` with low `max_shard_size` such as `10MB`, e.g.:
```python
builder.download_and_prepare(output_dir, storage_options=storage_options, file_format="parquet", max_shard_size="10MB")
```
### Expected behavior
Shards should be written based on `max_shard_size` instead of batch size.
### Environment info
```
>>> import datasets
>>> datasets.__version__
'2.14.6.dev0'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6302/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6302/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5558 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5558/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5558/comments | https://api.github.com/repos/huggingface/datasets/issues/5558/events | https://github.com/huggingface/datasets/pull/5558 | 1,593,655,815 | PR_kwDODunzps5KcF5E | 5,558 | Remove instructions for `ffmpeg` system package installation on Colab | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-21T15:13:36Z | 2023-03-01T13:46:04Z | 2023-02-23T13:50:27Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5558.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5558",
"merged_at": "2023-02-23T13:50:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5558.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5558"
} | Colab now has Ubuntu 20.04, which already ships the required (>4) version of `ffmpeg`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5558/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5558/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4218 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4218/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4218/comments | https://api.github.com/repos/huggingface/datasets/issues/4218/events | https://github.com/huggingface/datasets/pull/4218 | 1,214,748,226 | PR_kwDODunzps42vTA0 | 4,218 | Make code for image downloading from image urls cacheable | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-25T16:17:59Z | 2022-04-26T17:00:24Z | 2022-04-26T13:38:26Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4218.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4218",
"merged_at": "2022-04-26T13:38:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4218.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4218"
} | Fix #4199 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4218/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4218/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1213 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1213/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1213/comments | https://api.github.com/repos/huggingface/datasets/issues/1213/events | https://github.com/huggingface/datasets/pull/1213 | 757,983,884 | MDExOlB1bGxSZXF1ZXN0NTMzMjM4NzEz | 1,213 | add taskmaster3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patil-suraj",
"id": 27137566,
"login": "patil-suraj",
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patil-suraj"
} | [] | closed | false | null | [] | null | [
"(you were unlucky, my rule of thumb for reducing the dummy data is to check whether they're above 50KB and you're at 52KB ^^')",
"> (you were unlucky, my rule of thumb for reducing the dummy data is to check whether they're above 50KB and you're at 52KB ^^')\r\n\r\nOops :(\r\n\r\nThanks for the suggestion, will ... | 2020-12-06T17:56:03Z | 2020-12-09T11:05:10Z | 2020-12-09T11:00:29Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1213.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1213",
"merged_at": "2020-12-09T11:00:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1213.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1213"
} | Adding Taskmaster-3 dataset
https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020.
The dataset structure is almost the same as in the original dataset, with these two changes:
1. In the original dataset, each `apis` entry has an `args` field, which is a `dict` with variable keys representing the names and values of the args. Here that is converted to a `list` of `dict`s with keys `arg_name` and `arg_value` (a small conversion sketch follows this list). For example:
```python
args = {"name.movie": "Mulan", "name.theater": ": "Mountain AMC 16"}
```
becomes
```python
[
{
"arg_name": "name.movie",
"arg_value": "Mulan"
},
{
"arg_name": "name.theater",
"arg_value": "Mountain AMC 16"
}
]
```
2. Each `apis` entry has a `response`, which is also a `dict` with variable keys representing the response name/type and its value. As above, it is converted to a `list` of `dict`s with keys `response_name` and `response_value`.
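A small, hypothetical helper illustrating the `args` conversion from point 1 (the actual loading script may implement it differently):
```python
def flatten_args(args: dict) -> list:
    # {"name.movie": "Mulan"} -> [{"arg_name": "name.movie", "arg_value": "Mulan"}]
    return [{"arg_name": name, "arg_value": value} for name, value in args.items()]

print(flatten_args({"name.movie": "Mulan", "name.theater": "Mountain AMC 16"}))
```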
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1213/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1213/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2016 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2016/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2016/comments | https://api.github.com/repos/huggingface/datasets/issues/2016/events | https://github.com/huggingface/datasets/pull/2016 | 825,965,493 | MDExOlB1bGxSZXF1ZXN0NTg4MDA5NjEz | 2,016 | Not all languages have 2 digit codes. | {
"avatar_url": "https://avatars.githubusercontent.com/u/13891775?v=4",
"events_url": "https://api.github.com/users/asiddhant/events{/privacy}",
"followers_url": "https://api.github.com/users/asiddhant/followers",
"following_url": "https://api.github.com/users/asiddhant/following{/other_user}",
"gists_url": "https://api.github.com/users/asiddhant/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/asiddhant",
"id": 13891775,
"login": "asiddhant",
"node_id": "MDQ6VXNlcjEzODkxNzc1",
"organizations_url": "https://api.github.com/users/asiddhant/orgs",
"received_events_url": "https://api.github.com/users/asiddhant/received_events",
"repos_url": "https://api.github.com/users/asiddhant/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/asiddhant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asiddhant/subscriptions",
"type": "User",
"url": "https://api.github.com/users/asiddhant"
} | [] | closed | false | null | [] | null | [] | 2021-03-09T13:53:39Z | 2021-03-11T18:01:03Z | 2021-03-11T18:01:03Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2016.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2016",
"merged_at": "2021-03-11T18:01:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2016.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2016"
} | . | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2016/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2016/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6296 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6296/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6296/comments | https://api.github.com/repos/huggingface/datasets/issues/6296/events | https://github.com/huggingface/datasets/pull/6296 | 1,938,453,845 | PR_kwDODunzps5cjUs1 | 6,296 | Move `exceptions.py` to `utils/exceptions.py` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | open | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-10-11T18:28:00Z | 2023-10-17T13:25:33Z | null | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6296.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6296",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6296.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6296"
} | I didn't notice the path while reviewing the PR yesterday :( | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6296/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6296/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2236 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2236/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2236/comments | https://api.github.com/repos/huggingface/datasets/issues/2236/events | https://github.com/huggingface/datasets/issues/2236 | 861,388,145 | MDU6SXNzdWU4NjEzODgxNDU= | 2,236 | Request to add StrategyQA dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sarahwie",
"id": 8027676,
"login": "sarahwie",
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sarahwie"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | [] | null | [] | 2021-04-19T14:46:26Z | 2021-04-19T14:46:26Z | null | NONE | null | null | null | ## Request to add StrategyQA dataset
- **Name:** StrategyQA
- **Description:** open-domain QA [(project page)](https://allenai.org/data/strategyqa)
- **Paper:** [url](https://arxiv.org/pdf/2101.02235.pdf)
- **Data:** [here](https://allenai.org/data/strategyqa)
- **Motivation:** uniquely-formulated dataset that also includes a question-decomposition breakdown and associated Wikipedia annotations for each step. Good for multi-hop reasoning modeling.
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2236/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2236/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6300 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6300/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6300/comments | https://api.github.com/repos/huggingface/datasets/issues/6300/events | https://github.com/huggingface/datasets/pull/6300 | 1,940,153,432 | PR_kwDODunzps5cpIoG | 6,300 | Unpin `jax` maximum version | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-10-12T14:42:40Z | 2023-10-12T16:37:55Z | 2023-10-12T16:28:57Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6300.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6300",
"merged_at": "2023-10-12T16:28:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6300.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6300"
} | fix #6299
fix #6202 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6300/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6300/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3794 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3794/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3794/comments | https://api.github.com/repos/huggingface/datasets/issues/3794/events | https://github.com/huggingface/datasets/pull/3794 | 1,153,185,343 | PR_kwDODunzps4zniT4 | 3,794 | Add Mahalanobis distance metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/17574157?v=4",
"events_url": "https://api.github.com/users/JoaoLages/events{/privacy}",
"followers_url": "https://api.github.com/users/JoaoLages/followers",
"following_url": "https://api.github.com/users/JoaoLages/following{/other_user}",
"gists_url": "https://api.github.com/users/JoaoLages/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JoaoLages",
"id": 17574157,
"login": "JoaoLages",
"node_id": "MDQ6VXNlcjE3NTc0MTU3",
"organizations_url": "https://api.github.com/users/JoaoLages/orgs",
"received_events_url": "https://api.github.com/users/JoaoLages/received_events",
"repos_url": "https://api.github.com/users/JoaoLages/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JoaoLages/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoaoLages/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JoaoLages"
} | [] | closed | false | null | [] | null | [] | 2022-02-27T10:56:31Z | 2022-03-02T14:46:15Z | 2022-03-02T14:46:15Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3794.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3794",
"merged_at": "2022-03-02T14:46:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3794.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3794"
} | Mahalanobis distance is a very useful metric to measure the distance from one datapoint X to a distribution P.
In this PR I implement the metric in a simple way with the help of numpy only.
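For reference, the quantity computed is d(x, P) = sqrt((x - mu)^T Sigma^{-1} (x - mu)), where mu and Sigma are the mean and covariance of P. A minimal NumPy sketch (not necessarily the exact code in this PR):
```python
import numpy as np

def mahalanobis(x, X):
    """Mahalanobis distance from each row of x to the distribution estimated from samples X."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    delta = np.atleast_2d(x) - mu
    # Quadratic form delta_i^T Sigma^{-1} delta_i for each point i
    return np.sqrt(np.einsum("ij,jk,ik->i", delta, cov_inv, delta))

X = np.random.default_rng(0).normal(size=(100, 3))
print(mahalanobis(X[:2], X))
```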
Similar to the [MAUVE implementation](https://github.com/huggingface/datasets/blob/master/metrics/mauve/mauve.py), we can make this metric accept texts as input and encode them with a featurizer model, if that is desirable. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3794/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3794/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5209 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5209/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5209/comments | https://api.github.com/repos/huggingface/datasets/issues/5209/events | https://github.com/huggingface/datasets/issues/5209 | 1,438,367,678 | I_kwDODunzps5Vu7-- | 5,209 | Implement ability to define splits in metadata section of dataset card | {
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/merveenoyan",
"id": 53175384,
"login": "merveenoyan",
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/merveenoyan"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"@merveenoyan Do you want different files to be splits or configurations?\r\n\r\nFrom [what you specified in `Readme.md`](https://huggingface.co/datasets/inria-soda/tabular-benchmark/commit/fb4575853772c62a20203bdd6cc0202f5db4ce4e) I hypothesize that you want to have 4 **configs** corresponding to directories: `\"c... | 2022-11-07T13:27:16Z | 2023-07-21T14:36:02Z | 2023-07-21T14:36:01Z | CONTRIBUTOR | null | null | null | ### Feature request
If you go here: https://huggingface.co/datasets/inria-soda/tabular-benchmark/tree/main you will see a bunch of folders that contain various CSV files. I’d like the dataset viewer to show these files instead of only one dataset like it currently does (and also to let people load them as splits instead of loading through `data_files`).
e.g. GLUE has various splits in the viewer, but it’s overkill to ask people to implement a loading script, so it would be better to let them define these in the README file instead.
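For context, a minimal sketch of the current `data_files` workaround (the CSV glob below is illustrative; the actual file layout in the repo may differ):
```python
from datasets import load_dataset

# Illustrative pattern: load the CSVs of one sub-folder as a "train" split.
ds = load_dataset(
    "inria-soda/tabular-benchmark",
    data_files={"train": "clf_num/*.csv"},
)
```
Declaring this kind of mapping once in the README metadata would remove the need for every user to pass `data_files` themselves.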
Also pinging @polinaeterna @lhoestq @adrinjalali
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5209/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5209/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3433 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3433/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3433/comments | https://api.github.com/repos/huggingface/datasets/issues/3433/events | https://github.com/huggingface/datasets/issues/3433 | 1,080,910,724 | I_kwDODunzps5AbWOE | 3,433 | Add Multilingual Spoken Words dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",... | closed | false | null | [] | null | [] | 2021-12-15T11:14:44Z | 2022-02-22T10:03:53Z | 2022-02-22T10:03:53Z | MEMBER | null | null | null | ## Adding a Dataset
- **Name:** Multilingual Spoken Words
- **Description:** Multilingual Spoken Words Corpus is a large and growing audio dataset of spoken words in 50 languages for academic research and commercial applications in keyword spotting and spoken term search, licensed under CC-BY 4.0. The dataset contains more than 340,000 keywords, totaling 23.4 million 1-second spoken examples (over 6,000 hours).
Read more: https://mlcommons.org/en/news/spoken-words-blog/
- **Paper:** https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/fe131d7f5a6b38b23cc967316c13dae2-Paper-round2.pdf
- **Data:** https://mlcommons.org/en/multilingual-spoken-words/
- **Motivation:**
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3433/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3433/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1486 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1486/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1486/comments | https://api.github.com/repos/huggingface/datasets/issues/1486/events | https://github.com/huggingface/datasets/pull/1486 | 762,790,102 | MDExOlB1bGxSZXF1ZXN0NTM3MzAxODY2 | 1,486 | hate speech 18 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/75574105?v=4",
"events_url": "https://api.github.com/users/czabo/events{/privacy}",
"followers_url": "https://api.github.com/users/czabo/followers",
"following_url": "https://api.github.com/users/czabo/following{/other_user}",
"gists_url": "https://api.github.com/users/czabo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/czabo",
"id": 75574105,
"login": "czabo",
"node_id": "MDQ6VXNlcjc1NTc0MTA1",
"organizations_url": "https://api.github.com/users/czabo/orgs",
"received_events_url": "https://api.github.com/users/czabo/received_events",
"repos_url": "https://api.github.com/users/czabo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/czabo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/czabo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/czabo"
} | [] | closed | false | null | [] | null | [
"The error `tests/test_file_utils.py::TempSeedTest::test_tensorflow` just appeared because of tensorflow's update.\r\nOnce it's fixed on master we'll be free to merge this one",
"It's fixed on master now :) \r\n\r\nmerging this once"
] | 2020-12-11T19:22:14Z | 2020-12-14T19:43:18Z | 2020-12-14T19:43:18Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1486.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1486",
"merged_at": "2020-12-14T19:43:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1486.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1486"
} | This is again a PR instead of #1339, because something went wrong there. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1486/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1486/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1660 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1660/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1660/comments | https://api.github.com/repos/huggingface/datasets/issues/1660/events | https://github.com/huggingface/datasets/pull/1660 | 775,831,423 | MDExOlB1bGxSZXF1ZXN0NTQ2NDM2MDg1 | 1,660 | add dataset info | {
"avatar_url": "https://avatars.githubusercontent.com/u/24206326?v=4",
"events_url": "https://api.github.com/users/harshalmittal4/events{/privacy}",
"followers_url": "https://api.github.com/users/harshalmittal4/followers",
"following_url": "https://api.github.com/users/harshalmittal4/following{/other_user}",
"gists_url": "https://api.github.com/users/harshalmittal4/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/harshalmittal4",
"id": 24206326,
"login": "harshalmittal4",
"node_id": "MDQ6VXNlcjI0MjA2MzI2",
"organizations_url": "https://api.github.com/users/harshalmittal4/orgs",
"received_events_url": "https://api.github.com/users/harshalmittal4/received_events",
"repos_url": "https://api.github.com/users/harshalmittal4/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/harshalmittal4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harshalmittal4/subscriptions",
"type": "User",
"url": "https://api.github.com/users/harshalmittal4"
} | [] | closed | false | null | [] | null | [] | 2020-12-29T10:58:19Z | 2020-12-30T17:04:30Z | 2020-12-30T17:04:30Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1660.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1660",
"merged_at": "2020-12-30T17:04:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1660.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1660"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1660/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1660/timeline | null | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/3358 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3358/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3358/comments | https://api.github.com/repos/huggingface/datasets/issues/3358/events | https://github.com/huggingface/datasets/issues/3358 | 1,068,623,216 | I_kwDODunzps4_seVw | 3,358 | add new field, and get errors | {
"avatar_url": "https://avatars.githubusercontent.com/u/38966558?v=4",
"events_url": "https://api.github.com/users/PatricYan/events{/privacy}",
"followers_url": "https://api.github.com/users/PatricYan/followers",
"following_url": "https://api.github.com/users/PatricYan/following{/other_user}",
"gists_url": "https://api.github.com/users/PatricYan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PatricYan",
"id": 38966558,
"login": "PatricYan",
"node_id": "MDQ6VXNlcjM4OTY2NTU4",
"organizations_url": "https://api.github.com/users/PatricYan/orgs",
"received_events_url": "https://api.github.com/users/PatricYan/received_events",
"repos_url": "https://api.github.com/users/PatricYan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PatricYan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PatricYan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PatricYan"
} | [] | closed | false | null | [] | null | [
"Hi, \r\n\r\ncould you please post this question on our [Forum](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests? ",
"> Hi,\r\n> \r\n> could you please post this question on our [Forum](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests?\r\n\r\nok."
] | 2021-12-01T16:35:38Z | 2021-12-02T02:26:22Z | 2021-12-02T02:26:22Z | NONE | null | null | null | after adding new field **tokenized_examples["example_id"]**, and get errors below,
I think it is because the data is converted to tensors, and **tokenized_examples["example_id"]** is a list of strings.
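A possible workaround (just a sketch using the standard `datasets` API, and assuming `example_id` is only needed later for post-processing) is to keep the string column out of the data that gets converted to tensors:
```python
# Drop the string column from the copy that goes through tensor conversion.
train_dataset_for_model = train_dataset.remove_columns(["example_id"])
```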
**all fields**
```
***************** train_dataset 1: Dataset({
features: ['attention_mask', 'end_positions', 'example_id', 'input_ids', 'start_positions', 'token_type_ids'],
num_rows: 87714
})
```
**Errors**
```
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 705, in convert_to_tensors
tensor = as_tensor(value)
ValueError: too many dimensions 'str'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3358/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3358/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4656 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4656/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4656/comments | https://api.github.com/repos/huggingface/datasets/issues/4656/events | https://github.com/huggingface/datasets/issues/4656 | 1,296,740,266 | I_kwDODunzps5NSq-q | 4,656 | Add Amazon-QA Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/omarespejel",
"id": 4755430,
"login": "omarespejel",
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/omarespejel"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/Amazon-QA)."
] | 2022-07-07T03:15:11Z | 2022-07-14T02:20:12Z | 2022-07-14T02:20:12Z | NONE | null | null | null | ## Adding a Dataset
- **Name:** *Amazon-QA*
- **Description:** *The dataset is in .jsonl format, where each line in the file is a JSON string that corresponds to a question, the existing answers to the question, and the extracted review snippets relevant to the question.* (See the loading sketch after this list.)
- **Paper:** *https://github.com/amazonqa/amazonqa/tree/master/paper*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/amazon-qa.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
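Since the data file is a plain gzipped JSON-Lines file, a minimal loading sketch (using the generic `json` builder rather than a dedicated loading script) could look like this:
```python
from datasets import load_dataset

url = "https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/amazon-qa.jsonl.gz"
ds = load_dataset("json", data_files=url, split="train")
```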
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4656/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4656/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1956 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1956/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1956/comments | https://api.github.com/repos/huggingface/datasets/issues/1956/events | https://github.com/huggingface/datasets/issues/1956 | 818,013,741 | MDU6SXNzdWU4MTgwMTM3NDE= | 1,956 | [distributed env] potentially unsafe parallel execution | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | null | [] | null | [
"You can pass the same `experiment_id` for all the metrics of the same group, and use another `experiment_id` for the other groups.\r\nMaybe we can add an environment variable that sets the default value for `experiment_id` ? What do you think ?",
"Ah, you're absolutely correct, @lhoestq - it's exactly the equiva... | 2021-02-27T20:38:45Z | 2021-03-01T17:24:42Z | 2021-03-01T17:24:42Z | CONTRIBUTOR | null | null | null | ```
metric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank)
```
presumes that there is only one set of parallel processes running - and will intermittently fail if you have multiple sets running as they will surely overwrite each other. Similar to https://github.com/huggingface/datasets/issues/1942 (but for a different reason).
That's why dist environments use an identifier unique to each group so that each group is dealt with separately.
e.g. the env-way of pytorch dist syncing is done with a unique per-set `MASTER_ADDRESS+MASTER_PORT`.
So ideally this interface should ask for a shared secret to do the right thing.
I'm not reporting an immediate need, but am only flagging that this will hit someone down the road.
This problem can be remedied by adding a new optional `shared_secret` option, which can then be used to differentiate different groups of processes, and this secret should be part of the file lock name and the experiment.
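For illustration, a minimal sketch of how such a per-group identifier could be threaded through (the existing `experiment_id` argument mentioned in the discussion can already play this role; the id string here is made up):
```python
from datasets import load_metric

# num_process and rank come from the launcher, as in the snippet above.
# "mrpc-group-a" is an arbitrary id that every process in this group agrees on.
metric = load_metric(
    'glue', 'mrpc',
    num_process=num_process,
    process_id=rank,
    experiment_id="mrpc-group-a",  # keeps this group's lock and cache files separate
)
```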
Thank you | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1956/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1956/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3167/comments | https://api.github.com/repos/huggingface/datasets/issues/3167/events | https://github.com/huggingface/datasets/issues/3167 | 1,036,488,992 | I_kwDODunzps49x5Eg | 3,167 | bookcorpusopen no longer works | {
"avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4",
"events_url": "https://api.github.com/users/lucadiliello/events{/privacy}",
"followers_url": "https://api.github.com/users/lucadiliello/followers",
"following_url": "https://api.github.com/users/lucadiliello/following{/other_user}",
"gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lucadiliello",
"id": 23355969,
"login": "lucadiliello",
"node_id": "MDQ6VXNlcjIzMzU1OTY5",
"organizations_url": "https://api.github.com/users/lucadiliello/orgs",
"received_events_url": "https://api.github.com/users/lucadiliello/received_events",
"repos_url": "https://api.github.com/users/lucadiliello/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lucadiliello"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | [
"Hi ! Thanks for reporting :) I think #3280 should fix this",
"I tried with the latest changes from #3280 on google colab and it worked fine :)\r\nWe'll do a new release soon, in the meantime you can use the updated version with:\r\n```python\r\nload_dataset(\"bookcorpusopen\", revision=\"master\")\r\n```",
"Fi... | 2021-10-26T16:06:15Z | 2021-11-17T15:53:46Z | 2021-11-17T15:53:46Z | CONTRIBUTOR | null | null | null | ## Describe the bug
When using the latest version of datasets (1.14.0), I cannot use the `bookcorpusopen` dataset. The process always blocks around `9924 examples [00:06, 1439.61 examples/s]` when preparing the dataset. I also noticed that after half an hour the process is automatically killed because of the RAM usage (the machine has 1TB of RAM...).
This did not happen with 1.4.1.
I also tried `rm -rf ~/.cache/huggingface`, but it did not help.
Changing the Python version between 3.7, 3.8 and 3.9 did not help either.
## Steps to reproduce the bug
```python
import datasets
d = datasets.load_dataset('bookcorpusopen')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.14.0
- Platform: Linux-5.4.0-1054-aws-x86_64-with-glibc2.27
- Python version: 3.9.7
- PyArrow version: 4.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3167/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3167/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5769 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5769/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5769/comments | https://api.github.com/repos/huggingface/datasets/issues/5769/events | https://github.com/huggingface/datasets/issues/5769 | 1,673,441,182 | I_kwDODunzps5jvq-e | 5,769 | Tiktoken tokenizers are not pickable | {
"avatar_url": "https://avatars.githubusercontent.com/u/22663468?v=4",
"events_url": "https://api.github.com/users/markovalexander/events{/privacy}",
"followers_url": "https://api.github.com/users/markovalexander/followers",
"following_url": "https://api.github.com/users/markovalexander/following{/other_user}",
"gists_url": "https://api.github.com/users/markovalexander/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/markovalexander",
"id": 22663468,
"login": "markovalexander",
"node_id": "MDQ6VXNlcjIyNjYzNDY4",
"organizations_url": "https://api.github.com/users/markovalexander/orgs",
"received_events_url": "https://api.github.com/users/markovalexander/received_events",
"repos_url": "https://api.github.com/users/markovalexander/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/markovalexander/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/markovalexander/subscriptions",
"type": "User",
"url": "https://api.github.com/users/markovalexander"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting, @markovalexander.\r\n\r\nUnfortunately, I'm not able to reproduce the issue: the `tiktoken` tokenizer can be used within `Dataset.map`, both in my local machine and in a Colab notebook: https://colab.research.google.com/drive/1DhJroZgk0sNFJ2Mrz-jYgrmh9jblXaCG?usp=sharing\r\n\r\nAre you sure y... | 2023-04-18T16:07:40Z | 2023-05-04T18:55:57Z | 2023-05-04T18:55:57Z | NONE | null | null | null | ### Describe the bug
Since the tiktoken tokenizer is not picklable, it is not possible to use it inside `dataset.map()` with multiprocessing enabled. However, you [made](https://github.com/huggingface/datasets/issues/5536) tiktoken's tokenizers picklable in `datasets==2.10.0` for caching. For some reason, this logic does not work in dataset processing and raises `TypeError: cannot pickle 'builtins.CoreBPE' object`.
### Steps to reproduce the bug
```
from datasets import load_dataset
import tiktoken
dataset = load_dataset("stas/openwebtext-10k")
enc = tiktoken.get_encoding("gpt2")
def process(example):
    ids = enc.encode(example['text'])
    ids.append(enc.eot_token)
    out = {'ids': ids, 'len': len(ids)}
    return out

tokenized = dataset.map(
    process,
    remove_columns=['text'],
    desc="tokenizing the OWT splits",
    num_proc=2,
)
```
### Expected behavior
starts processing dataset
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.0-1021-oracle-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.13.4
- PyArrow version: 9.0.0
- Pandas version: 2.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5769/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5769/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2359 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2359/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2359/comments | https://api.github.com/repos/huggingface/datasets/issues/2359/events | https://github.com/huggingface/datasets/issues/2359 | 891,946,017 | MDU6SXNzdWU4OTE5NDYwMTc= | 2,359 | Allow model labels to be passed during task preparation | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [] | closed | false | null | [] | null | [
"We now have the `align_labels_with_mapping` method in the API for this purpose."
] | 2021-05-14T13:58:28Z | 2022-10-05T17:37:22Z | 2022-10-05T17:37:22Z | MEMBER | null | null | null | Models have a config with label2id. And we have the same for datasets with the ClassLabel feature type. At one point either the model or the dataset must sync with the other. It would be great to do that on the dataset side.
For example, for sentiment classification on Amazon reviews you could have these labels:
- "1 star", "2 stars", "3 stars", "4 stars", "5 stars"
- "1", "2", "3", "4", "5"
Some models may use the first set, while other models use the second set.
Here in the `TextClassification` class, the user can only specify one set of labels, while many models could actually be compatible but have different sets of labels. Should we allow users to pass a list of compatible label sets?
Then in terms of API, users could use `dataset.prepare_for_task("text-classification", labels=model.labels)` or something like that.
The label set could also be the same but not in the same order. For NLI for example, some models use `["neutral", "entailment", "contradiction"]` and some others use `["neutral", "contradiction", "entailment"]`, so we should take care of updating the order of the labels in the dataset to match the labels order of the model.
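For reference, a minimal sketch of what this could look like on the dataset side, using the `align_labels_with_mapping` method mentioned in the closing comment (the mapping below is illustrative and would normally come from the model config):
```python
# Illustrative NLI mapping; in practice this would be model.config.label2id.
label2id = {"neutral": 0, "contradiction": 1, "entailment": 2}
dataset = dataset.align_labels_with_mapping(label2id, label_column="label")
```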
Let me know what you think ! This can be done in a future PR
_Originally posted by @lhoestq in https://github.com/huggingface/datasets/pull/2255#discussion_r632412792_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2359/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2359/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3867 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3867/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3867/comments | https://api.github.com/repos/huggingface/datasets/issues/3867/events | https://github.com/huggingface/datasets/pull/3867 | 1,162,896,605 | PR_kwDODunzps40Hjrk | 3,867 | Update for the rename doc-builder -> hf-doc-utils | {
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger"
} | [] | closed | false | null | [] | null | [
"why utils? it's a builder no?",
"~~@julien-c there was a vote 🙂 https://huggingface.slack.com/archives/C021H1P1HKR/p1646405136644739~~\r\n\r\noh I see you already commeented in the thread as well",
"Thanks ! It looks all good to me (provided `hf-doc-utils` is the name we keep in the end). I'm fine with this n... | 2022-03-08T16:58:25Z | 2023-09-24T09:54:44Z | 2022-03-08T17:30:45Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3867.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3867",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3867.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3867"
} | This PR adapts the job to the upcoming change of name of `doc-builder`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3867/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3867/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4895 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4895/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4895/comments | https://api.github.com/repos/huggingface/datasets/issues/4895/events | https://github.com/huggingface/datasets/issues/4895 | 1,350,798,527 | I_kwDODunzps5Qg4y_ | 4,895 | load_dataset method returns Unknown split "validation" even if this dir exists | {
"avatar_url": "https://avatars.githubusercontent.com/u/13418507?v=4",
"events_url": "https://api.github.com/users/SamSamhuns/events{/privacy}",
"followers_url": "https://api.github.com/users/SamSamhuns/followers",
"following_url": "https://api.github.com/users/SamSamhuns/following{/other_user}",
"gists_url": "https://api.github.com/users/SamSamhuns/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SamSamhuns",
"id": 13418507,
"login": "SamSamhuns",
"node_id": "MDQ6VXNlcjEzNDE4NTA3",
"organizations_url": "https://api.github.com/users/SamSamhuns/orgs",
"received_events_url": "https://api.github.com/users/SamSamhuns/received_events",
"repos_url": "https://api.github.com/users/SamSamhuns/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SamSamhuns/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamSamhuns/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SamSamhuns"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"I don't know the main problem but it looks like, it is ignoring the last directory in your case. So, create a directory called 'zzz' in the same folder as train, validation and test. if it doesn't work, create a directory called \"aaa\". It worked for me.\r\n",
"@SamSamhuns could you please try to load it with t... | 2022-08-25T12:11:00Z | 2022-10-06T17:49:28Z | 2022-09-29T08:07:50Z | NONE | null | null | null | ## Describe the bug
The `datasets.load_dataset` returns a `ValueError: Unknown split "validation". Should be one of ['train', 'test'].` when running `load_dataset(local_data_dir_path, split="validation")` even if the `validation` sub-directory exists in the local data path.
The data directories are as follows and attached to this issue:
```
test_data1
|_ train
|_ 1012.png
|_ metadata.jsonl
...
|_ test
...
|_ validation
|_ 234.png
|_ metadata.jsonl
...
test_data2
|_ train
|_ train_1012.png
|_ metadata.jsonl
...
|_ test
...
|_ validation
|_ val_234.png
|_ metadata.jsonl
...
```
They contain the same image files and `metadata.jsonl`, but the images in `test_data2` have the split names prepended, i.e.
`train_1012.png, val_234.png`, while the images in `test_data1` do not have the split names prepended to the image names, i.e. `1012.png, 234.png`.
I actually saw in another issue that `val` was not recognized as a split name, but here I would expect the files to take the split from the parent directory name, i.e. `val` should become part of the validation split?
## Steps to reproduce the bug
```python
import datasets
datasets.logging.set_verbosity_error()
from datasets import load_dataset, get_dataset_split_names
# the following only finds train, validation and test splits correctly
path = "./test_data1"
print("######################", get_dataset_split_names(path), "######################")
dataset_list = []
for spt in ["train", "test", "validation"]:
dataset = load_dataset(path, split=spt)
dataset_list.append(dataset)
# the following only finds train and test splits
path = "./test_data2"
print("######################", get_dataset_split_names(path), "######################")
dataset_list = []
for spt in ["train", "test", "validation"]:
dataset = load_dataset(path, split=spt)
dataset_list.append(dataset)
```
## Expected results
```
###################### ['train', 'test', 'validation'] ######################
###################### ['train', 'test', 'validation'] ######################
```
## Actual results
```
Traceback (most recent call last):
File "test_data_loader.py", line 11, in <module>
dataset = load_dataset(path, split=spt)
File "/home/venv/lib/python3.8/site-packages/datasets/load.py", line 1758, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 893, in as_dataset
datasets = map_nested(
File "/home/venv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 385, in map_nested
return function(data_struct)
File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 924, in _build_single_dataset
ds = self._as_dataset(
File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 993, in _as_dataset
dataset_kwargs = ArrowReader(self._cache_dir, self.info).read(
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 211, in read
files = self.get_file_instructions(name, instructions, split_infos)
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 184, in get_file_instructions
file_instructions = make_file_instructions(
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 107, in make_file_instructions
absolute_instructions = instruction.to_absolute(name2len)
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 616, in to_absolute
return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 616, in <listcomp>
return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 433, in _rel_to_abs_instr
raise ValueError(f'Unknown split "{split}". Should be one of {list(name2len)}.')
ValueError: Unknown split "validation". Should be one of ['train', 'test'].
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Linux Ubuntu 18.04
- Python version: 3.8.12
- PyArrow version: 9.0.0
Data files
[test_data1.zip](https://github.com/huggingface/datasets/files/9424463/test_data1.zip)
[test_data2.zip](https://github.com/huggingface/datasets/files/9424468/test_data2.zip)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4895/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4895/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1903 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1903/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1903/comments | https://api.github.com/repos/huggingface/datasets/issues/1903/events | https://github.com/huggingface/datasets/pull/1903 | 811,145,531 | MDExOlB1bGxSZXF1ZXN0NTc1NzIwOTk2 | 1,903 | Initial commit for the addition of TIMIT dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/16264631?v=4",
"events_url": "https://api.github.com/users/vrindaprabhu/events{/privacy}",
"followers_url": "https://api.github.com/users/vrindaprabhu/followers",
"following_url": "https://api.github.com/users/vrindaprabhu/following{/other_user}",
"gists_url": "https://api.github.com/users/vrindaprabhu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vrindaprabhu",
"id": 16264631,
"login": "vrindaprabhu",
"node_id": "MDQ6VXNlcjE2MjY0NjMx",
"organizations_url": "https://api.github.com/users/vrindaprabhu/orgs",
"received_events_url": "https://api.github.com/users/vrindaprabhu/received_events",
"repos_url": "https://api.github.com/users/vrindaprabhu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vrindaprabhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vrindaprabhu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vrindaprabhu"
} | [] | closed | false | null | [] | null | [
"@patrickvonplaten could you please review and help me close this PR?",
"@lhoestq Thank you so much for your comments and for patiently reviewing the code. Have _hopefully_ included all the suggested changes. Let me know if any more changes are required.\r\n\r\nSorry the code had lots of silly errors from my sid... | 2021-02-18T14:23:12Z | 2021-03-01T09:39:12Z | 2021-03-01T09:39:12Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1903.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1903",
"merged_at": "2021-03-01T09:39:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1903.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1903"
} | The points below need to be addressed:
- Creation of dummy dataset is failing
- Need to check on the data representation
- License is not creative commons. Copyright: Portions © 1993 Trustees of the University of Pennsylvania
Also the links (_except the download_) point to the ami corpus! ;-)
@patrickvonplaten Requesting your comments, will be happy to address them! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1903/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1903/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1234 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1234/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1234/comments | https://api.github.com/repos/huggingface/datasets/issues/1234/events | https://github.com/huggingface/datasets/pull/1234 | 758,229,304 | MDExOlB1bGxSZXF1ZXN0NTMzNDM0ODkz | 1,234 | Added ade_corpus_v2, with 3 configs for relation extraction and classification task | {
"avatar_url": "https://avatars.githubusercontent.com/u/28673745?v=4",
"events_url": "https://api.github.com/users/Nilanshrajput/events{/privacy}",
"followers_url": "https://api.github.com/users/Nilanshrajput/followers",
"following_url": "https://api.github.com/users/Nilanshrajput/following{/other_user}",
"gists_url": "https://api.github.com/users/Nilanshrajput/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Nilanshrajput",
"id": 28673745,
"login": "Nilanshrajput",
"node_id": "MDQ6VXNlcjI4NjczNzQ1",
"organizations_url": "https://api.github.com/users/Nilanshrajput/orgs",
"received_events_url": "https://api.github.com/users/Nilanshrajput/received_events",
"repos_url": "https://api.github.com/users/Nilanshrajput/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Nilanshrajput/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nilanshrajput/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Nilanshrajput"
} | [] | closed | false | null | [] | null | [
"@lhoestq I have added the tags they are in separate files for 3 different configs",
"@lhoestq thanks for the review I added your suggested changes.",
"merging since the CI is fixed on master"
] | 2020-12-07T07:05:14Z | 2020-12-14T17:49:14Z | 2020-12-14T17:49:14Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1234.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1234",
"merged_at": "2020-12-14T17:49:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1234.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1234"
} | Adverse Drug Reaction Data: ADE-Corpus-V2 dataset added configs for different tasks with given data | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1234/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1234/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4135/comments | https://api.github.com/repos/huggingface/datasets/issues/4135/events | https://github.com/huggingface/datasets/pull/4135 | 1,198,307,610 | PR_kwDODunzps416-Rn | 4,135 | Support streaming xtreme dataset for PAN-X config | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-09T06:19:48Z | 2022-05-06T08:39:40Z | 2022-04-11T06:54:14Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4135.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4135",
"merged_at": "2022-04-11T06:54:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4135.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4135"
} | Support streaming xtreme dataset for PAN-X config. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4135/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4135/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2917 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2917/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2917/comments | https://api.github.com/repos/huggingface/datasets/issues/2917/events | https://github.com/huggingface/datasets/issues/2917 | 997,041,658 | I_kwDODunzps47baX6 | 2,917 | windows download abnormal | {
"avatar_url": "https://avatars.githubusercontent.com/u/52347799?v=4",
"events_url": "https://api.github.com/users/wei1826676931/events{/privacy}",
"followers_url": "https://api.github.com/users/wei1826676931/followers",
"following_url": "https://api.github.com/users/wei1826676931/following{/other_user}",
"gists_url": "https://api.github.com/users/wei1826676931/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wei1826676931",
"id": 52347799,
"login": "wei1826676931",
"node_id": "MDQ6VXNlcjUyMzQ3Nzk5",
"organizations_url": "https://api.github.com/users/wei1826676931/orgs",
"received_events_url": "https://api.github.com/users/wei1826676931/received_events",
"repos_url": "https://api.github.com/users/wei1826676931/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wei1826676931/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wei1826676931/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wei1826676931"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi ! Is there some kind of proxy that is configured in your browser that gives you access to internet ? If it's the case it could explain why it doesn't work in the code, since the proxy wouldn't be used",
"It is indeed an agency problem, thank you very, very much",
"Let me know if you have other questions :)\... | 2021-09-15T12:45:35Z | 2021-09-16T17:17:48Z | 2021-09-16T17:17:48Z | NONE | null | null | null | ## Describe the bug
The script clearly exists (it is accessible from the browser), but the script download fails on Windows. I then tried again on Linux and it downloaded normally. Why?
## Steps to reproduce the bug
```python3.7 + windows

# Sample code to reproduce the bug
```
## Expected results
It can be downloaded normally.
## Actual results
It can't.
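Since the discussion in this thread points to a browser-configured proxy, here is a sketch of passing the proxy explicitly to the download step (the proxy address and dataset name are placeholders):
```python
from datasets import load_dataset, DownloadConfig

# Placeholder proxy address: use whatever the browser is configured with.
dl_config = DownloadConfig(proxies={"https": "http://127.0.0.1:7890"})
dataset = load_dataset("some_dataset_name", download_config=dl_config)
```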
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:1.11.0
- Platform:windows
- Python version:3.7
- PyArrow version:
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2917/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2917/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3500 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3500/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3500/comments | https://api.github.com/repos/huggingface/datasets/issues/3500/events | https://github.com/huggingface/datasets/pull/3500 | 1,090,406,133 | PR_kwDODunzps4wXLTB | 3,500 | Docs: Add VCTK dataset description | {
"avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4",
"events_url": "https://api.github.com/users/jaketae/events{/privacy}",
"followers_url": "https://api.github.com/users/jaketae/followers",
"following_url": "https://api.github.com/users/jaketae/following{/other_user}",
"gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jaketae",
"id": 25360440,
"login": "jaketae",
"node_id": "MDQ6VXNlcjI1MzYwNDQw",
"organizations_url": "https://api.github.com/users/jaketae/orgs",
"received_events_url": "https://api.github.com/users/jaketae/received_events",
"repos_url": "https://api.github.com/users/jaketae/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaketae/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jaketae"
} | [] | closed | false | null | [] | null | [] | 2021-12-29T10:02:05Z | 2022-01-04T10:46:02Z | 2022-01-04T10:25:09Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3500.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3500",
"merged_at": "2022-01-04T10:25:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3500.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3500"
} | This PR is a very minor followup to #1837, with only docs changes (single comment string). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3500/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3500/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5428 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5428/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5428/comments | https://api.github.com/repos/huggingface/datasets/issues/5428/events | https://github.com/huggingface/datasets/issues/5428 | 1,535,166,139 | I_kwDODunzps5bgMa7 | 5,428 | Load/Save FAISS index using fsspec | {
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Dref360",
"id": 8976546,
"login": "Dref360",
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"repos_url": "https://api.github.com/users/Dref360/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Dref360"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Hi! Sure, feel free to submit a PR. Maybe if we want to be consistent with the existing API, it would be cleaner to directly add support for `fsspec` paths in `Dataset.load_faiss_index`/`Dataset.save_faiss_index` in the same manner as it was done in `Dataset.load_from_disk`/`Dataset.save_to_disk`.",
"That's a gr... | 2023-01-16T16:08:12Z | 2023-03-27T15:18:22Z | 2023-03-27T15:18:22Z | CONTRIBUTOR | null | null | null | ### Feature request
From what I understand, `faiss` already supports this: [link](https://github.com/facebookresearch/faiss/wiki/Index-IO,-cloning-and-hyper-parameter-tuning#generic-io-support)
I would like to use a stream as input to `Dataset.load_faiss_index` and `Dataset.save_faiss_index`.
### Motivation
In my case, I'm saving faiss indexes in cloud storage and using `fsspec` to load them. It would be ideal if I could pass the stream directly instead of copying the file locally (or mounting the bucket) and then loading the index.
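For illustration, a rough sketch of the kind of round-trip this would enable (the remote path is hypothetical, and `faiss.deserialize_index` is one of the generic IO entry points linked above):
```python
import fsspec
import numpy as np
import faiss

# Hypothetical remote location; any fsspec-supported URL would look the same.
with fsspec.open("s3://my-bucket/my_index.faiss", "rb") as f:
    buf = f.read()

index = faiss.deserialize_index(np.frombuffer(buf, dtype=np.uint8))
# The deserialized index could then be attached with
# Dataset.add_faiss_index(..., custom_index=index).
```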
### Your contribution
I can submit the PR | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5428/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5428/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6132/comments | https://api.github.com/repos/huggingface/datasets/issues/6132/events | https://github.com/huggingface/datasets/issues/6132 | 1,843,491,020 | I_kwDODunzps5t4XDM | 6,132 | to_iterable_dataset is missing in document | {
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/npuichigo",
"id": 11533479,
"login": "npuichigo",
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/npuichigo"
} | [] | closed | false | null | [] | null | [
"Fixed with PR"
] | 2023-08-09T15:15:03Z | 2023-08-16T04:43:36Z | 2023-08-16T04:43:29Z | CONTRIBUTOR | null | null | null | ### Describe the bug
to_iterable_dataset is missing in document
### Steps to reproduce the bug
to_iterable_dataset is missing in document
### Expected behavior
document enhancement
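For reference, a minimal usage sketch of the method the documentation should cover (assuming a `datasets` version where `Dataset.to_iterable_dataset` is available):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"]})

# Convert the map-style dataset into an IterableDataset; num_shards is optional
# and mainly useful for shuffling and multi-worker data loading.
iterable_ds = ds.to_iterable_dataset(num_shards=2)

for example in iterable_ds:
    print(example)
```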
### Environment info
unrelated | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6132/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6132/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1521 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1521/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1521/comments | https://api.github.com/repos/huggingface/datasets/issues/1521/events | https://github.com/huggingface/datasets/pull/1521 | 764,320,841 | MDExOlB1bGxSZXF1ZXN0NTM4NDQzOTgz | 1,521 | Atomic | {
"avatar_url": "https://avatars.githubusercontent.com/u/8900094?v=4",
"events_url": "https://api.github.com/users/ontocord/events{/privacy}",
"followers_url": "https://api.github.com/users/ontocord/followers",
"following_url": "https://api.github.com/users/ontocord/following{/other_user}",
"gists_url": "https://api.github.com/users/ontocord/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ontocord",
"id": 8900094,
"login": "ontocord",
"node_id": "MDQ6VXNlcjg5MDAwOTQ=",
"organizations_url": "https://api.github.com/users/ontocord/orgs",
"received_events_url": "https://api.github.com/users/ontocord/received_events",
"repos_url": "https://api.github.com/users/ontocord/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ontocord/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ontocord/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ontocord"
} | [] | closed | false | null | [] | null | [
"I had to create a new PR to fix git errors. See: https://github.com/huggingface/datasets/pull/1525\r\n\r\nI'm closing this PR. "
] | 2020-12-12T20:18:08Z | 2020-12-12T22:56:48Z | 2020-12-12T22:56:48Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1521.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1521",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1521.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1521"
} | This is the ATOMIC common sense dataset. More info can be found here:
* README.md still to be created. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1521/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1521/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5991 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5991/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5991/comments | https://api.github.com/repos/huggingface/datasets/issues/5991/events | https://github.com/huggingface/datasets/issues/5991 | 1,774,456,518 | I_kwDODunzps5pxA7G | 5,991 | `map` with any joblib backend | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2023-06-26T10:33:42Z | 2023-06-26T10:33:42Z | null | MEMBER | null | null | null | We recently enabled the (experimental) parallel backend switch for data download and extraction but not for `map` yet.
Right now we're using our `iflatmap_unordered` implementation for multiprocessing that uses a shared Queue to gather progress updates from the subprocesses and show a progress bar in the main process.
If we had a Queue implementation that worked on any joblib backend by leveraging a filesystem shared among the workers, we could have `iflatmap_unordered` for joblib and therefore a `map` with any joblib backend, with a progress bar!
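A very rough sketch of that idea, using only the standard library (all names are made up for illustration): each worker appends its progress increments to its own file in a directory visible to every worker, and the main process periodically sums them.
```python
import glob
import os


class FileProgressQueue:
    """Toy progress "queue" backed by a directory shared among workers (illustration only)."""

    def __init__(self, shared_dir: str):
        self.shared_dir = shared_dir
        os.makedirs(shared_dir, exist_ok=True)

    def put(self, n: int = 1) -> None:
        # Called from a worker: append an increment to its own file, keyed by PID.
        # A real implementation would also include the host name for multi-node backends.
        path = os.path.join(self.shared_dir, f"{os.getpid()}.progress")
        with open(path, "a") as f:
            f.write(f"{n}\n")

    def total(self) -> int:
        # Called from the main process: sum every increment written so far.
        count = 0
        for path in glob.glob(os.path.join(self.shared_dir, "*.progress")):
            with open(path) as f:
                count += sum(int(line) for line in f if line.strip())
        return count
```
The main process would then poll `total()` about once per second to refresh the progress bar, which matches the low update frequency mentioned below.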
Note that the Queue doesn't need to be that optimized though since we can choose a small frequency for progress updates (like 1 update per second). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5991/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5991/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2326 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2326/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2326/comments | https://api.github.com/repos/huggingface/datasets/issues/2326/events | https://github.com/huggingface/datasets/pull/2326 | 876,829,254 | MDExOlB1bGxSZXF1ZXN0NjMwODk3MjI4 | 2,326 | Enable auto-download for PAN-X / Wikiann domain in XTREME | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [] | closed | false | null | [] | null | [] | 2021-05-05T20:58:38Z | 2021-05-07T08:41:10Z | 2021-05-07T08:41:10Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2326.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2326",
"merged_at": "2021-05-07T08:41:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2326.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2326"
} | This PR replaces the manual download of the `PAN-X.lang` domains with an auto-download from a Dropbox link provided by the Wikiann author. We also add the relevant dummy data for these domains.
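With this change, the domain can be loaded without any manual download step, along the lines of (a sketch; `PAN-X.de` is just one example config):
```python
from datasets import load_dataset

# The PAN-X / Wikiann data is now fetched automatically instead of requiring
# a manually downloaded archive.
panx_de = load_dataset("xtreme", "PAN-X.de")
print(panx_de["train"][0])
```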
While re-generating `dataset_infos.json` I ran into a `KeyError` in the `udpos.Arabic` domain, so I have included a fix for this as well. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2326/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2326/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2407/comments | https://api.github.com/repos/huggingface/datasets/issues/2407/events | https://github.com/huggingface/datasets/issues/2407 | 903,111,755 | MDU6SXNzdWU5MDMxMTE3NTU= | 2,407 | .map() function got an unexpected keyword argument 'cache_file_name' | {
"avatar_url": "https://avatars.githubusercontent.com/u/7390482?v=4",
"events_url": "https://api.github.com/users/cindyxinyiwang/events{/privacy}",
"followers_url": "https://api.github.com/users/cindyxinyiwang/followers",
"following_url": "https://api.github.com/users/cindyxinyiwang/following{/other_user}",
"gists_url": "https://api.github.com/users/cindyxinyiwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cindyxinyiwang",
"id": 7390482,
"login": "cindyxinyiwang",
"node_id": "MDQ6VXNlcjczOTA0ODI=",
"organizations_url": "https://api.github.com/users/cindyxinyiwang/orgs",
"received_events_url": "https://api.github.com/users/cindyxinyiwang/received_events",
"repos_url": "https://api.github.com/users/cindyxinyiwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cindyxinyiwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cindyxinyiwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cindyxinyiwang"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi @cindyxinyiwang,\r\nDid you try adding `.arrow` after `cache_file_name` argument? Here I think they're expecting something like that only for a cache file:\r\nhttps://github.com/huggingface/datasets/blob/e08362256fb157c0b3038437fc0d7a0bbb50de5c/src/datasets/arrow_dataset.py#L1556-L1558",
"Hi ! `cache_file_nam... | 2021-05-27T01:54:26Z | 2021-05-27T13:46:40Z | 2021-05-27T13:46:40Z | NONE | null | null | null | ## Describe the bug
I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map() function, it throws an error that ".map() function got an unexpected keyword argument 'cache_file_name'".
I believe I'm using the latest `datasets`, 1.6.2. It also seems like the documentation and the actual code indicate there is an argument 'cache_file_name' for the .map() function.
Here is the code I use
## Steps to reproduce the bug
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokenize_function(examples):
return tokenizer(examples[text_column_name])
logger.info("Mapping dataset to tokenized dataset.")
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
num_proc=preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=True,
cache_file_name="my_tokenized_file"
)
```
## Actual results
tokenized_datasets = datasets.map(
TypeError: map() got an unexpected keyword argument 'cache_file_name'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:1.6.2
- Platform:Linux-4.18.0-193.28.1.el8_2.x86_64-x86_64-with-glibc2.10
- Python version:3.8.5
- PyArrow version:3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2407/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2407/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3568 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3568/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3568/comments | https://api.github.com/repos/huggingface/datasets/issues/3568/events | https://github.com/huggingface/datasets/issues/3568 | 1,100,380,631 | I_kwDODunzps5BlnnX | 3,568 | Downloading Hugging Face Medical Dialog Dataset NonMatchingSplitsSizesError | {
"avatar_url": "https://avatars.githubusercontent.com/u/49265757?v=4",
"events_url": "https://api.github.com/users/fabianslife/events{/privacy}",
"followers_url": "https://api.github.com/users/fabianslife/followers",
"following_url": "https://api.github.com/users/fabianslife/following{/other_user}",
"gists_url": "https://api.github.com/users/fabianslife/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fabianslife",
"id": 49265757,
"login": "fabianslife",
"node_id": "MDQ6VXNlcjQ5MjY1NzU3",
"organizations_url": "https://api.github.com/users/fabianslife/orgs",
"received_events_url": "https://api.github.com/users/fabianslife/received_events",
"repos_url": "https://api.github.com/users/fabianslife/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fabianslife/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fabianslife/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fabianslife"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Hi @fabianslife, thanks for reporting.\r\n\r\nI think you were using an old version of `datasets` because this bug was already fixed in version `1.13.0` (13 Oct 2021):\r\n- Fix: 55fd140a63b8f03a0e72985647e498f1fc799d3f\r\n- PR: #3046\r\n- Issue: #2969 \r\n\r\nPlease, feel free to update the library: `pip install -... | 2022-01-12T14:03:44Z | 2022-02-14T09:32:34Z | 2022-02-14T09:32:34Z | NONE | null | null | null | I wanted to download the Nedical Dialog Dataset from huggingface, using this github link:
https://github.com/huggingface/datasets/tree/master/datasets/medical_dialog
After downloading the raw datasets from google drive, i unpacked everything and put it in the same folder as the medical_dialog.py which is:
```
import copy
import os
import re
import datasets
_CITATION = """\
@article{chen2020meddiag,
title={MedDialog: a large-scale medical dialogue dataset},
author={Chen, Shu and Ju, Zeqian and Dong, Xiangyu and Fang, Hongchao and Wang, Sicheng and Yang, Yue and Zeng, Jiaqi and Zhang, Ruisi and Zhang, Ruoyu and Zhou, Meng and Zhu, Penghui and Xie, Pengtao},
journal={arXiv preprint arXiv:2004.03329},
year={2020}
}
"""
_DESCRIPTION = """\
The MedDialog dataset (English) contains conversations (in English) between doctors and patients.\
It has 0.26 million dialogues. The data is continuously growing and more dialogues will be added. \
The raw dialogues are from healthcaremagic.com and icliniq.com.\
All copyrights of the data belong to healthcaremagic.com and icliniq.com.
"""
_HOMEPAGE = "https://github.com/UCSD-AI4H/Medical-Dialogue-System"
_LICENSE = ""
class MedicalDialog(datasets.GeneratorBasedBuilder):
VERSION = datasets.Version("1.0.0")
BUILDER_CONFIGS = [
datasets.BuilderConfig(name="en", description="The dataset of medical dialogs in English.", version=VERSION),
datasets.BuilderConfig(name="zh", description="The dataset of medical dialogs in Chinese.", version=VERSION),
]
@property
def manual_download_instructions(self):
return """\
\n For English:\nYou need to go to https://drive.google.com/drive/folders/1g29ssimdZ6JzTST6Y8g6h-ogUNReBtJD?usp=sharing,\
and manually download the dataset from Google Drive. Once it is completed,
a file named Medical-Dialogue-Dataset-English-<timestamp-info>.zip will appear in your Downloads folder(
or whichever folder your browser chooses to save files to). Unzip the folder to obtain
a folder named "Medical-Dialogue-Dataset-English" several text files.
Now, you can specify the path to this folder for the data_dir argument in the
datasets.load_dataset(...) option.
The <path/to/folder> can e.g. be "/Downloads/Medical-Dialogue-Dataset-English".
The data can then be loaded using the below command:\
datasets.load_dataset("medical_dialog", name="en", data_dir="/Downloads/Medical-Dialogue-Dataset-English")`.
\n For Chinese:\nFollow the above process. Change the 'name' to 'zh'.The download link is https://drive.google.com/drive/folders/1r09_i8nJ9c1nliXVGXwSqRYqklcHd9e2
**NOTE**
- A caution while downloading from drive. It is better to download single files since creating a zip might not include files <500 MB. This has been observed mutiple times.
- After downloading the files and adding them to the appropriate folder, the path of the folder can be given as input tu the data_dir path.
"""
datasets.load_dataset("medical_dialog", name="en", data_dir="Medical-Dialogue-Dataset-English")
def _info(self):
if self.config.name == "zh":
features = datasets.Features(
{
"file_name": datasets.Value("string"),
"dialogue_id": datasets.Value("int32"),
"dialogue_url": datasets.Value("string"),
"dialogue_turns": datasets.Sequence(
{
"speaker": datasets.ClassLabel(names=["病人", "医生"]),
"utterance": datasets.Value("string"),
}
),
}
)
if self.config.name == "en":
features = datasets.Features(
{
"file_name": datasets.Value("string"),
"dialogue_id": datasets.Value("int32"),
"dialogue_url": datasets.Value("string"),
"dialogue_turns": datasets.Sequence(
{
"speaker": datasets.ClassLabel(names=["Patient", "Doctor"]),
"utterance": datasets.Value("string"),
}
),
}
)
return datasets.DatasetInfo(
# This is the description that will appear on the datasets page.
description=_DESCRIPTION,
features=features,
supervised_keys=None,
# Homepage of the dataset for documentation
homepage=_HOMEPAGE,
# License for the dataset if available
license=_LICENSE,
# Citation for the dataset
citation=_CITATION,
)
def _split_generators(self, dl_manager):
"""Returns SplitGenerators."""
path_to_manual_file = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
if not os.path.exists(path_to_manual_file):
raise FileNotFoundError(
f"{path_to_manual_file} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('medical_dialog', data_dir=...)`. Manual download instructions: {self.manual_download_instructions})"
)
filepaths = [
os.path.join(path_to_manual_file, txt_file_name)
for txt_file_name in sorted(os.listdir(path_to_manual_file))
if txt_file_name.endswith("txt")
]
return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": filepaths})]
def _generate_examples(self, filepaths):
"""Yields examples. Iterates over each file and give the creates the corresponding features.
NOTE:
- The code makes some assumption on the structure of the raw .txt file.
- There are some checks to separate different id's. Hopefully, should not cause further issues later when more txt files are added.
"""
data_lang = self.config.name
id_ = -1
for filepath in filepaths:
with open(filepath, encoding="utf-8") as f_in:
# Parameters to just "sectionize" the raw data
last_part = ""
last_dialog = {}
last_list = []
last_user = ""
check_list = []
# These flags are present to have a single function address both chinese and english data
# English data is a little hahazard (i.e. the sentences spans multiple different lines),
# Chinese is compact with one line for doctor and patient.
conv_flag = False
des_flag = False
while True:
line = f_in.readline()
if not line:
break
# Extracting the dialog id
if line[:2] == "id": # Hardcode alert!
# Handling ID references that may come in the description
# These were observed in the Chinese dataset and were not
# followed by numbers
try:
dialogue_id = int(re.findall(r"\d+", line)[0])
except IndexError:
continue
# Extracting the url
if line[:4] == "http": # Hardcode alert!
dialogue_url = line.rstrip()
# Extracting the patient info from description.
if line[:11] == "Description": # Hardcode alert!
last_part = "description"
last_dialog = {}
last_list = []
last_user = ""
last_conv = {"speaker": "", "utterance": ""}
while True:
line = f_in.readline()
if (not line) or (line in ["\n", "\n\r"]):
break
else:
if data_lang == "zh": # Condition in chinese
if line[:5] == "病情描述:": # Hardcode alert!
last_user = "病人"
sen = f_in.readline().rstrip()
des_flag = True
if data_lang == "en":
last_user = "Patient"
sen = line.rstrip()
des_flag = True
if des_flag:
if sen == "":
continue
if sen in check_list:
last_conv["speaker"] = ""
last_conv["utterance"] = ""
else:
last_conv["speaker"] = last_user
last_conv["utterance"] = sen
check_list.append(sen)
des_flag = False
break
# Extracting the conversation info from dialogue.
elif line[:8] == "Dialogue": # Hardcode alert!
if last_part == "description" and len(last_conv["utterance"]) > 0:
last_part = "dialogue"
if data_lang == "zh":
last_user = "病人"
if data_lang == "en":
last_user = "Patient"
while True:
line = f_in.readline()
if (not line) or (line in ["\n", "\n\r"]):
conv_flag = False
last_user = ""
last_list.append(copy.deepcopy(last_conv))
# To ensure close of conversation, only even number of sentences
# are extracted
last_turn = len(last_list)
if int(last_turn / 2) > 0:
temp = int(last_turn / 2)
id_ += 1
last_dialog["file_name"] = filepath
last_dialog["dialogue_id"] = dialogue_id
last_dialog["dialogue_url"] = dialogue_url
last_dialog["dialogue_turns"] = last_list[: temp * 2]
yield id_, last_dialog
break
if data_lang == "zh":
if line[:3] == "病人:" or line[:3] == "医生:": # Hardcode alert!
user = line[:2] # Hardcode alert!
line = f_in.readline()
conv_flag = True
# The elif block is to ensure that multi-line sentences are captured.
# This has been observed only in english.
if data_lang == "en":
if line.strip() == "Patient:" or line.strip() == "Doctor:": # Hardcode alert!
user = line.replace(":", "").rstrip()
line = f_in.readline()
conv_flag = True
elif line[:2] != "id": # Hardcode alert!
conv_flag = True
# Continues till the next ID is parsed
if conv_flag:
sen = line.rstrip()
if sen == "":
continue
if user == last_user:
last_conv["utterance"] = last_conv["utterance"] + sen
else:
last_user = user
last_list.append(copy.deepcopy(last_conv))
last_conv["utterance"] = sen
last_conv["speaker"] = user
```
running this code gives me the error:
```
File "C:\Users\Fabia\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\utils\info_utils.py", line 74, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='medical_dialog'), 'recorded': SplitInfo(name='train', num_bytes=292801173, num_examples=229674, dataset_name='medical_dialog')}]
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3568/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3568/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1567 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1567/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1567/comments | https://api.github.com/repos/huggingface/datasets/issues/1567/events | https://github.com/huggingface/datasets/pull/1567 | 766,382,609 | MDExOlB1bGxSZXF1ZXN0NTM5NDE3NzI5 | 1,567 | [wording] Update Readme.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [] | closed | false | null | [] | null | [] | 2020-12-14T12:34:52Z | 2020-12-15T12:54:07Z | 2020-12-15T12:54:06Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1567.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1567",
"merged_at": "2020-12-15T12:54:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1567.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1567"
} | Make the features of the library clearer. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1567/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1567/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4929 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4929/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4929/comments | https://api.github.com/repos/huggingface/datasets/issues/4929/events | https://github.com/huggingface/datasets/pull/4929 | 1,361,508,366 | PR_kwDODunzps4-WK2w | 4,929 | Fixes a typo in loading documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/7144772?v=4",
"events_url": "https://api.github.com/users/sighingnow/events{/privacy}",
"followers_url": "https://api.github.com/users/sighingnow/followers",
"following_url": "https://api.github.com/users/sighingnow/following{/other_user}",
"gists_url": "https://api.github.com/users/sighingnow/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sighingnow",
"id": 7144772,
"login": "sighingnow",
"node_id": "MDQ6VXNlcjcxNDQ3NzI=",
"organizations_url": "https://api.github.com/users/sighingnow/orgs",
"received_events_url": "https://api.github.com/users/sighingnow/received_events",
"repos_url": "https://api.github.com/users/sighingnow/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sighingnow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sighingnow/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sighingnow"
} | [] | closed | false | null | [] | null | [] | 2022-09-05T07:18:54Z | 2022-09-06T02:11:03Z | 2022-09-05T13:06:38Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4929.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4929",
"merged_at": "2022-09-05T13:06:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4929.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4929"
} | As shown in the [documentation page](https://huggingface.co/docs/datasets/loading), the `"tr"in` here should be `"train"`.

| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4929/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4929/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2217 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2217/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2217/comments | https://api.github.com/repos/huggingface/datasets/issues/2217/events | https://github.com/huggingface/datasets/pull/2217 | 857,011,314 | MDExOlB1bGxSZXF1ZXN0NjE0NTAxNjIz | 2,217 | Revert breaking change in cache_files property | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2021-04-13T14:20:04Z | 2021-04-14T14:24:24Z | 2021-04-14T14:24:23Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2217.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2217",
"merged_at": "2021-04-14T14:24:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2217.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2217"
} | #2025 changed the format of `Dataset.cache_files`.
Before it was formatted like
```python
[{"filename": "path/to/file.arrow", "start": 0, "end": 1337}]
```
and it was changed to
```python
["path/to/file.arrow"]
```
since there's no start/end offsets available anymore.
To make this less breaking, I'm setting the format back to a list of dicts:
```python
[{"filename": "path/to/file.arrow"}]
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2217/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2217/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4696 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4696/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4696/comments | https://api.github.com/repos/huggingface/datasets/issues/4696/events | https://github.com/huggingface/datasets/issues/4696 | 1,307,183,099 | I_kwDODunzps5N6gf7 | 4,696 | Cannot load LinCE dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/167943?v=4",
"events_url": "https://api.github.com/users/finiteautomata/events{/privacy}",
"followers_url": "https://api.github.com/users/finiteautomata/followers",
"following_url": "https://api.github.com/users/finiteautomata/following{/other_user}",
"gists_url": "https://api.github.com/users/finiteautomata/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/finiteautomata",
"id": 167943,
"login": "finiteautomata",
"node_id": "MDQ6VXNlcjE2Nzk0Mw==",
"organizations_url": "https://api.github.com/users/finiteautomata/orgs",
"received_events_url": "https://api.github.com/users/finiteautomata/received_events",
"repos_url": "https://api.github.com/users/finiteautomata/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/finiteautomata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/finiteautomata/subscriptions",
"type": "User",
"url": "https://api.github.com/users/finiteautomata"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Hi @finiteautomata, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce your issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ...: dataset = load_dataset(\"lince\", \"ner_spaeng\")\r\nDownloading builder script: 20.8kB [00:00, 9.09MB/s] ... | 2022-07-17T19:01:54Z | 2022-07-18T09:20:40Z | 2022-07-18T07:24:22Z | NONE | null | null | null | ## Describe the bug
Cannot load LinCE dataset due to a connection error
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("lince", "ner_spaeng")
```
A notebook with this code and corresponding error can be found at https://colab.research.google.com/drive/1pgX3bNB9amuUwAVfPFm-XuMV5fEg-cD2
## Expected results
It should load the dataset
## Actual results
```python
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
<ipython-input-2-fc551ddcebef> in <module>()
1 from datasets import load_dataset
2
----> 3 dataset = load_dataset("lince", "ner_spaeng")
10 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1682 ignore_verifications=ignore_verifications,
1683 try_from_hf_gcs=try_from_hf_gcs,
-> 1684 use_auth_token=use_auth_token,
1685 )
1686
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
703 if not downloaded_from_gcs:
704 self._download_and_prepare(
--> 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
706 )
707 # Sync info
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
1219
1220 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1221 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
1222
1223 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
769 split_dict = SplitDict(dataset_name=self.name)
770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
772
773 # Checksums verification
/root/.cache/huggingface/modules/datasets_modules/datasets/lince/10d41747f55f0849fa84ac579ea1acfa7df49aa2015b60426bc459c111b3d589/lince.py in _split_generators(self, dl_manager)
481 def _split_generators(self, dl_manager):
482 """Returns SplitGenerators."""
--> 483 lince_dir = dl_manager.download_and_extract(f"{_LINCE_URL}/{self.config.name}.zip")
484 data_dir = os.path.join(lince_dir, self.config.data_dir)
485 return [
/usr/local/lib/python3.7/dist-packages/datasets/download/download_manager.py in download_and_extract(self, url_or_urls)
429 extracted_path(s): `str`, extracted paths of given URL(s).
430 """
--> 431 return self.extract(self.download(url_or_urls))
432
433 def get_recorded_sizes_checksums(self):
/usr/local/lib/python3.7/dist-packages/datasets/download/download_manager.py in download(self, url_or_urls)
313 num_proc=download_config.num_proc,
314 disable_tqdm=not is_progress_bar_enabled(),
--> 315 desc="Downloading data files",
316 )
317 duration = datetime.now() - start_time
/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)
346 # Singleton
347 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 348 return function(data_struct)
349
350 disable_tqdm = disable_tqdm or not logging.is_progress_bar_enabled()
/usr/local/lib/python3.7/dist-packages/datasets/download/download_manager.py in _download(self, url_or_filename, download_config)
333 # append the relative path to the base_path
334 url_or_filename = url_or_path_join(self._base_path, url_or_filename)
--> 335 return cached_path(url_or_filename, download_config=download_config)
336
337 def iter_archive(self, path_or_buf: Union[str, io.BufferedReader]):
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
195 use_auth_token=download_config.use_auth_token,
196 ignore_url_params=download_config.ignore_url_params,
--> 197 download_desc=download_config.download_desc,
198 )
199 elif os.path.exists(url_or_filename):
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc)
531 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
532 if head_error is not None:
--> 533 raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})")
534 elif response is not None:
535 raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})")
ConnectionError: Couldn't reach https://ritual.uh.edu/lince/libaccess/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9/ner_spaeng.zip (ConnectTimeout(MaxRetryError("HTTPSConnectionPool(host='ritual.uh.edu', port=443): Max retries exceeded with url: /lince/libaccess/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9/ner_spaeng.zip (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7feb1c45a690>, 'Connection to ritual.uh.edu timed out. (connect timeout=100)'))")))
```
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4696/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4696/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3359 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3359/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3359/comments | https://api.github.com/repos/huggingface/datasets/issues/3359/events | https://github.com/huggingface/datasets/pull/3359 | 1,068,638,213 | PR_kwDODunzps4vQtI0 | 3,359 | Add The Pile Free Law subset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"@albertvillanova Is there a specific reason you’re adding the Pile under “the” instead of under “pile”? That does not appear to be consistent with other datasets.",
"Hi @StellaAthena,\r\n\r\nI asked myself the same question, but at the end I decided to be consistent with previously added Pile subsets:\r\n- #2817... | 2021-12-01T16:46:04Z | 2021-12-06T10:12:17Z | 2021-12-01T17:30:44Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3359.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3359",
"merged_at": "2021-12-01T17:30:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3359.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3359"
} | Add:
- Free Law subset of The Pile: "free_law" config
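Once merged, the new config should be loadable along these lines (a sketch, assuming the canonical `the_pile` script name):
```python
from datasets import load_dataset

# Streaming avoids downloading the whole Free Law subset at once.
free_law = load_dataset("the_pile", "free_law", split="train", streaming=True)
print(next(iter(free_law)))
```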
Close bigscience-workshop/data_tooling#75.
CC: @StellaAthena | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3359/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3359/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3110 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3110/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3110/comments | https://api.github.com/repos/huggingface/datasets/issues/3110/events | https://github.com/huggingface/datasets/pull/3110 | 1,030,558,484 | PR_kwDODunzps4tZakS | 3,110 | Stream TAR-based dataset using iter_archive | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"I'm creating a new branch `stream-tar-audio` just for the audio datasets since they need https://github.com/huggingface/datasets/pull/3129 to be merged first",
"The CI fails are only related to missing sections or tags in the dataset cards - which is unrelated to this PR"
] | 2021-10-19T17:16:24Z | 2021-11-05T17:48:49Z | 2021-11-05T17:48:48Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3110.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3110",
"merged_at": "2021-11-05T17:48:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3110.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3110"
} | I converted all the datasets based on TAR archives to use iter_archive instead, so that they can be streamed.
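For context, the converted loading scripts follow roughly this pattern (a sketch with a made-up URL and feature names, not taken from any specific dataset):
```python
import datasets

_URL = "https://example.com/data.tar.gz"  # made-up URL for illustration


class MyTarDataset(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # download() keeps the TAR archive as-is (no extraction), which is what
        # makes streaming possible; iter_archive() then iterates over its members.
        archive_path = dl_manager.download(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive_path)},
            )
        ]

    def _generate_examples(self, files):
        # iter_archive yields (path_inside_archive, file_object) pairs.
        for key, (path, f) in enumerate(files):
            if path.endswith(".txt"):
                yield key, {"text": f.read().decode("utf-8")}
```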
It means that around 80 datasets become streamable :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3110/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3110/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4941 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4941/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4941/comments | https://api.github.com/repos/huggingface/datasets/issues/4941/events | https://github.com/huggingface/datasets/pull/4941 | 1,363,622,861 | PR_kwDODunzps4-dQ9F | 4,941 | Add Papers with Code ID to scifact dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-06T17:46:37Z | 2022-09-06T18:28:17Z | 2022-09-06T18:26:01Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4941.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4941",
"merged_at": "2022-09-06T18:26:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4941.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4941"
} | This PR:
- adds Papers with Code ID
- forces sync between GitHub and Hub, which previously failed due to Hub validation error of the license tag: https://github.com/huggingface/datasets/runs/8200223631?check_suite_focus=true | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4941/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4941/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4281 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4281/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4281/comments | https://api.github.com/repos/huggingface/datasets/issues/4281/events | https://github.com/huggingface/datasets/pull/4281 | 1,225,556,939 | PR_kwDODunzps43TNBm | 4,281 | Remove a copy-paste sentence in dataset cards | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests have nothing to do with this PR."
] | 2022-05-04T15:41:55Z | 2022-05-06T08:38:03Z | 2022-05-04T18:33:16Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4281.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4281",
"merged_at": "2022-05-04T18:33:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4281.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4281"
} | Remove the following copy-paste sentence from dataset cards:
```
We show detailed information for up to 5 configurations of the dataset.
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4281/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4281/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4729 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4729/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4729/comments | https://api.github.com/repos/huggingface/datasets/issues/4729/events | https://github.com/huggingface/datasets/pull/4729 | 1,313,374,015 | PR_kwDODunzps473GmR | 4,729 | Refactor Hub tests | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-21T14:43:13Z | 2022-07-22T15:09:49Z | 2022-07-22T14:56:29Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4729.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4729",
"merged_at": "2022-07-22T14:56:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4729.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4729"
} | This PR refactors `test_upstream_hub` by removing unittests and using the following pytest Hub fixtures:
- `ci_hub_config`
- `set_ci_hub_access_token`: to replace setUp/tearDown
- `temporary_repo` context manager: to replace `try... finally` (a sketch of this pattern is shown below)
- `cleanup_repo`: to delete repo accidentally created if one of the tests fails
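A rough sketch of what such a `temporary_repo` fixture can look like (names and details are illustrative, not the exact implementation in this PR):
```python
from contextlib import contextmanager

import pytest
from huggingface_hub import HfApi


@pytest.fixture
def temporary_repo():
    api = HfApi()  # a CI setup would point this at a staging endpoint and token

    @contextmanager
    def _temporary_repo(repo_id):
        api.create_repo(repo_id, repo_type="dataset")
        try:
            yield repo_id
        finally:
            # The repo is deleted even if the test body raises, replacing the
            # per-test try...finally boilerplate.
            api.delete_repo(repo_id, repo_type="dataset")

    return _temporary_repo
```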
This is preliminary work to manage unit/integration tests separately. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4729/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4729/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3815 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3815/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3815/comments | https://api.github.com/repos/huggingface/datasets/issues/3815/events | https://github.com/huggingface/datasets/pull/3815 | 1,158,589,512 | PR_kwDODunzps4z5oq- | 3,815 | Fix iter_archive getting reset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2022-03-03T15:58:52Z | 2022-03-03T18:06:37Z | 2022-03-03T18:06:13Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3815.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3815",
"merged_at": "2022-03-03T18:06:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3815.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3815"
} | The `DownloadManager.iter_archive` method currently returns an iterator - which is **empty** once you have iterated over it. This means you can't pass the same archive iterator to several splits.
To fix that, I changed the output of `DownloadManager.iter_archive` to be an iterable that you can iterate over several times, instead of a one-time-use iterator.
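The underlying pattern is to wrap the generator in an object whose `__iter__` re-creates it, so each split gets a fresh pass over the archive (a generic sketch of the idea, not the exact code of this PR):
```python
import tarfile


class ArchiveIterable:
    """Re-iterable view over a TAR archive (illustration of the pattern only)."""

    def __init__(self, archive_path):
        self.archive_path = archive_path

    def __iter__(self):
        # A new tarfile object is opened on every iteration, so passing the same
        # ArchiveIterable to several splits does not exhaust it.
        with tarfile.open(self.archive_path) as tar:
            for member in tar:
                if member.isfile():
                    yield member.name, tar.extractfile(member)
```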
The `StreamingDownloadManager.iter_archive` already returns an appropriate iterable, and the code added in this PR is inspired by the one in `streaming_download_manager.py` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3815/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3815/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3046 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3046/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3046/comments | https://api.github.com/repos/huggingface/datasets/issues/3046/events | https://github.com/huggingface/datasets/pull/3046 | 1,021,021,368 | PR_kwDODunzps4s8MjS | 3,046 | Fix MedDialog metadata JSON | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2021-10-08T12:04:40Z | 2021-10-11T07:46:43Z | 2021-10-11T07:46:42Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3046.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3046",
"merged_at": "2021-10-11T07:46:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3046.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3046"
} | Fix #2969. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3046/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3046/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4562 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4562/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4562/comments | https://api.github.com/repos/huggingface/datasets/issues/4562/events | https://github.com/huggingface/datasets/issues/4562 | 1,283,779,557 | I_kwDODunzps5MhOvl | 4,562 | Dataset Viewer issue for allocine | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"I removed my assignment as @huggingface/datasets should be able to answer better than me\r\n",
"Let me have a look...",
"Thanks for the quick fix @albertvillanova ",
"Note that the underlying issue is that datasets containing TAR files are not streamable out of the box: they need being iterated with `dl_mana... | 2022-06-24T13:50:38Z | 2022-06-27T06:39:32Z | 2022-06-24T16:44:41Z | MEMBER | null | null | null | ### Link
https://huggingface.co/datasets/allocine
### Description
Not sure if this is a problem with `bz2` compression, but I thought these datasets could be streamed:
```
Status code: 400
Exception: AttributeError
Message: 'TarContainedFile' object has no attribute 'readable'
```
### Owner
No | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4562/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4562/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4549 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4549/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4549/comments | https://api.github.com/repos/huggingface/datasets/issues/4549/events | https://github.com/huggingface/datasets/issues/4549 | 1,282,312,975 | I_kwDODunzps5MbosP | 4,549 | FileNotFoundError when passing a data_file inside a directory starting with double underscores | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
... | null | [
"I have consistently experienced this bug on GitHub actions when bumping to `2.3.2`",
"We're working on a fix ;)"
] | 2022-06-23T12:19:24Z | 2022-06-30T14:38:18Z | 2022-06-30T14:38:18Z | MEMBER | null | null | null | Bug experienced in the `accelerate` CI: https://github.com/huggingface/accelerate/runs/7016055148?check_suite_focus=true
This is related to https://github.com/huggingface/datasets/pull/4505 and the changes from https://github.com/huggingface/datasets/pull/4412 | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4549/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4549/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2106 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2106/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2106/comments | https://api.github.com/repos/huggingface/datasets/issues/2106/events | https://github.com/huggingface/datasets/issues/2106 | 839,084,264 | MDU6SXNzdWU4MzkwODQyNjQ= | 2,106 | WMT19 Dataset for Kazakh-English is not formatted correctly | {
"avatar_url": "https://avatars.githubusercontent.com/u/22580542?v=4",
"events_url": "https://api.github.com/users/trina731/events{/privacy}",
"followers_url": "https://api.github.com/users/trina731/followers",
"following_url": "https://api.github.com/users/trina731/following{/other_user}",
"gists_url": "https://api.github.com/users/trina731/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/trina731",
"id": 22580542,
"login": "trina731",
"node_id": "MDQ6VXNlcjIyNTgwNTQy",
"organizations_url": "https://api.github.com/users/trina731/orgs",
"received_events_url": "https://api.github.com/users/trina731/received_events",
"repos_url": "https://api.github.com/users/trina731/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/trina731/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trina731/subscriptions",
"type": "User",
"url": "https://api.github.com/users/trina731"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | open | false | null | [] | null | [
"Hi ! Thanks for reporting\r\n\r\nBy looking at the raw `news-commentary-v14.en-kk.tsv` file, it looks like there are at least 17 lines with this issue.\r\nMoreover these issues are not always the same:\r\n- L97 is only `kk` text and must be appended at the end of the `kk` text of the **next** line\r\n- L2897 is on... | 2021-03-23T20:14:47Z | 2021-03-25T21:36:20Z | null | NONE | null | null | null | In addition to the bug of languages being switched from Issue @415, there are incorrect translations in the dataset because the English-Kazakh translations have a one off formatting error.
The News Commentary v14 parallel data set for kk-en from http://www.statmt.org/wmt19/translation-task.html has a bug here:
> Line 94. The Swiss National Bank, for its part, has been battling with the deflationary effects of the franc’s dramatic appreciation over the past few years. Швейцарияның Ұлттық банкі өз тарапынан, соңғы бірнеше жыл ішінде франк құнының қатты өсуінің дефляциялық әсерімен күресіп келеді.
>
> Line 95. Дефляциялық күштер 2008 жылы терең және ұзаққа созылған жаһандық дағдарысқа байланысты орын алған ірі экономикалық және қаржылық орын алмасулардың арқасында босатылды. Жеке қарыз қаражаты үлесінің қысқаруы орталық банктің рефляцияға жұмсалған күш-жігеріне тұрақты соққан қарсы желдей болды.
>
> Line 96. The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008. Private deleveraging became a steady headwind to central bank efforts to reflate. 2009 жылы, алдыңғы қатарлы экономикалардың шамамен үштен бірі бағаның төмендеуін көрсетті, бұл соғыстан кейінгі жоғары деңгей болды.
As you can see, line 95 has only the Kazakh translation, which should be part of line 96. This causes all of the following English-Kazakh translation pairs to be off by one, rendering ALL of those translations incorrect. This issue was not fixed when the dataset was imported to Huggingface. By running this code
```
import datasets
from datasets import load_dataset
dataset = load_dataset('wmt19', 'kk-en')
for key in dataset['train']['translation']:
    if 'The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008.' in key['kk']:
        print(key['en'])
        print(key['kk'])
        break
```
we get:
> 2009 жылы, алдыңғы қатарлы экономикалардың шамамен үштен бірі бағаның төмендеуін көрсетті, бұл соғыстан кейінгі жоғары деңгей болды.
> The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008. Private deleveraging became a steady headwind to central bank efforts to reflate.
which shows that the issue still persists in the Huggingface dataset. The Kazakh sentence matches up to the next English sentence in the dataset instead of the current one.
Please let me know if you have any ideas to fix this one-off error in the dataset or if this can be fixed by Huggingface. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2106/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2106/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6282/comments | https://api.github.com/repos/huggingface/datasets/issues/6282/events | https://github.com/huggingface/datasets/pull/6282 | 1,928,473,630 | PR_kwDODunzps5cBT5p | 6,282 | Drop data_files duplicates | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | open | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-10-05T14:43:08Z | 2023-10-06T13:02:04Z | null | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6282.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6282",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6282.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6282"
} | I just added drop_duplicates=True to `.from_patterns`. I used a dict to deduplicate and preserve the order
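A minimal sketch of the dict trick (illustrative only, with made-up file names):
```python
# dict keys are unique and keep insertion order (Python 3.7+),
# so duplicates are dropped without reordering the remaining files
data_files = ["train-0000.parquet", "train-0001.parquet", "train-0000.parquet"]
deduplicated = list(dict.fromkeys(data_files))
print(deduplicated)  # ['train-0000.parquet', 'train-0001.parquet']
```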
close https://github.com/huggingface/datasets/issues/6259
close https://github.com/huggingface/datasets/issues/6272
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6282/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6282/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2122/comments | https://api.github.com/repos/huggingface/datasets/issues/2122/events | https://github.com/huggingface/datasets/pull/2122 | 842,194,588 | MDExOlB1bGxSZXF1ZXN0NjAxODE3MjI0 | 2,122 | Fast table queries with interpolation search | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2021-03-26T18:09:20Z | 2021-08-04T18:11:59Z | 2021-04-06T14:33:01Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2122.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2122",
"merged_at": "2021-04-06T14:33:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2122.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2122"
} | ## Intro
This should fix issue #1803
Currently querying examples in a dataset is O(n) because of the underlying pyarrow ChunkedArrays implementation.
To fix this I implemented interpolation search, which is pretty effective since datasets usually satisfy the condition of evenly distributed chunks (the default chunk size is fixed).
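For illustration, here is a minimal sketch of interpolation search over cumulative chunk offsets (names and details are assumptions for this example, not the actual `datasets.table` code):
```python
def interpolation_search(offsets, i):
    """Return j such that offsets[j] <= i < offsets[j + 1], given sorted cumulative offsets."""
    low, high = 0, len(offsets) - 2
    while low <= high:
        # guess the chunk index assuming evenly distributed chunk sizes
        span = offsets[high + 1] - offsets[low] or 1
        guess = min(max(low + (i - offsets[low]) * (high - low) // span, low), high)
        if offsets[guess] <= i < offsets[guess + 1]:
            return guess
        elif i < offsets[guess]:
            high = guess - 1
        else:
            low = guess + 1
    raise IndexError(f"index {i} is out of range")

offsets = [0, 1000, 2000, 3000, 4000]  # cumulative offsets of 4 chunks with 1000 rows each
assert interpolation_search(offsets, 2500) == 2  # row 2500 lives in the third chunk
```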
## Benchmark
Here is a [benchmark](https://pastebin.com/utEXUqsR) I did on bookcorpus (74M rows):
for the current implementation
```python
>>> python speed.py
Loaded dataset 'bookcorpus', len=74004228, nbytes=4835358766
========================= Querying unshuffled bookcorpus =========================
Avg access time key=1 : 0.018ms
Avg access time key=74004227 : 0.215ms
Avg access time key=range(74003204, 74004228) : 1.416ms
Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 92.532ms
========================== Querying shuffled bookcorpus ==========================
Avg access time key=1 : 0.187ms
Avg access time key=74004227 : 6.642ms
Avg access time key=range(74003204, 74004228) : 90.941ms
Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 3448.456ms
```
for the new one using interpolation search:
```python
>>> python speed.py
Loaded dataset 'bookcorpus', len=74004228, nbytes=4835358766
========================= Querying unshuffled bookcorpus =========================
Avg access time key=1 : 0.076ms
Avg access time key=74004227 : 0.056ms
Avg access time key=range(74003204, 74004228) : 1.807ms
Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 24.028ms
========================== Querying shuffled bookcorpus ==========================
Avg access time key=1 : 0.061ms
Avg access time key=74004227 : 0.058ms
Avg access time key=range(74003204, 74004228) : 22.166ms
Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 42.757ms
```
The RandIter class is just an iterable of 1024 random indices from 0 to 74004228.
Here is also a plot showing the speed improvement depending on the dataset size:

## Implementation details:
- `datasets.table.Table` objects implement interpolation search for the `slice` method
- The interpolation search requires to store the offsets of all the chunks of a table. The offsets are stored when the `Table` is initialized.
- `datasets.table.Table.slice` returns a `datasets.table.Table` using interpolation search
- `datasets.table.Table.fast_slice` returns a `pyarrow.Table` object using interpolation search. This is useful to get a part of a dataset if we don't need the indexing structure for future computations. For example it's used when querying an example as a dictionary.
- Now a `Dataset` object is always backed by a `datasets.table.Table` object. If one passes a `pyarrow.Table` to initialize a `Dataset`, then it's converted to a `datasets.table.Table`
## Checklist:
- [x] implement interpolation search
- [x] use `datasets.table.Table` in `Dataset` objects
- [x] update current tests
- [x] add tests for interpolation search
- [x] comments and docstring
- [x] add the benchmark to the CI
Fix #1803. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 5,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2122/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2122/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5205 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5205/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5205/comments | https://api.github.com/repos/huggingface/datasets/issues/5205/events | https://github.com/huggingface/datasets/pull/5205 | 1,437,221,987 | PR_kwDODunzps5CRO33 | 5,205 | Add missing `DownloadConfig.use_auth_token` value | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-05T23:36:36Z | 2022-11-08T08:13:00Z | 2022-11-07T16:20:24Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5205.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5205",
"merged_at": "2022-11-07T16:20:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5205.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5205"
} | This PR solves https://github.com/huggingface/datasets/issues/5204
Now the `token` is propagated so that `DownloadConfig.use_auth_token` value is set before trying to download private files from existing datasets in the Hub. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5205/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5205/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6193/comments | https://api.github.com/repos/huggingface/datasets/issues/6193/events | https://github.com/huggingface/datasets/issues/6193 | 1,872,285,153 | I_kwDODunzps5vmM3h | 6,193 | Dataset loading script method does not work with .pyc file | {
"avatar_url": "https://avatars.githubusercontent.com/u/43389071?v=4",
"events_url": "https://api.github.com/users/riteshkumarumassedu/events{/privacy}",
"followers_url": "https://api.github.com/users/riteshkumarumassedu/followers",
"following_url": "https://api.github.com/users/riteshkumarumassedu/following{/other_user}",
"gists_url": "https://api.github.com/users/riteshkumarumassedu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/riteshkumarumassedu",
"id": 43389071,
"login": "riteshkumarumassedu",
"node_id": "MDQ6VXNlcjQzMzg5MDcx",
"organizations_url": "https://api.github.com/users/riteshkumarumassedu/orgs",
"received_events_url": "https://api.github.com/users/riteshkumarumassedu/received_events",
"repos_url": "https://api.github.com/users/riteshkumarumassedu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/riteshkumarumassedu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riteshkumarumassedu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/riteshkumarumassedu"
} | [] | open | false | null | [] | null | [
"Before dynamically loading `.py` scripts with `importlib.import_module`, we also parse their contents to check imports, which is tricky to implement for binary `.pyc` files (requires parsing bytecode), so I don't think this is something we want to support (unless more users request it ofc) as this use case is a bi... | 2023-08-29T19:35:06Z | 2023-08-31T19:47:29Z | null | NONE | null | null | null | ### Describe the bug
The huggingface dataset library specifically looks for a ‘.py’ file while loading the dataset using the loading-script approach, and it does not work with a ‘.pyc’ file.
While deploying in production, it becomes an issue when we are restricted to using only .pyc files. Is there any workaround for this?
### Steps to reproduce the bug
1. Create a dataset loading script to read the custom data.
2. Compile the code to make sure that the .pyc file is created.
3. Delete the loading script and re-run the code. Usually, Python should make use of compiled .pyc files. However, in this case, the dataset library errors out with the message that it's unable to find the data loader loading script.
### Expected behavior
The code should make use of the .pyc file and run without any error.
### Environment info
NA | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6193/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6193/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5253 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5253/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5253/comments | https://api.github.com/repos/huggingface/datasets/issues/5253/events | https://github.com/huggingface/datasets/pull/5253 | 1,452,588,206 | PR_kwDODunzps5DE2io | 5,253 | typo | {
"avatar_url": "https://avatars.githubusercontent.com/u/7569098?v=4",
"events_url": "https://api.github.com/users/WrRan/events{/privacy}",
"followers_url": "https://api.github.com/users/WrRan/followers",
"following_url": "https://api.github.com/users/WrRan/following{/other_user}",
"gists_url": "https://api.github.com/users/WrRan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/WrRan",
"id": 7569098,
"login": "WrRan",
"node_id": "MDQ6VXNlcjc1NjkwOTg=",
"organizations_url": "https://api.github.com/users/WrRan/orgs",
"received_events_url": "https://api.github.com/users/WrRan/received_events",
"repos_url": "https://api.github.com/users/WrRan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/WrRan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WrRan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/WrRan"
} | [] | closed | false | null | [] | null | [] | 2022-11-17T02:22:58Z | 2022-11-18T10:53:11Z | 2022-11-18T10:53:10Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5253.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5253",
"merged_at": "2022-11-18T10:53:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5253.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5253"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5253/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5253/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1721 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1721/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1721/comments | https://api.github.com/repos/huggingface/datasets/issues/1721/events | https://github.com/huggingface/datasets/pull/1721 | 783,828,428 | MDExOlB1bGxSZXF1ZXN0NTUzMTIyODQ5 | 1,721 | [Scientific papers] Mirror datasets zip | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | [
"> Nice !\r\n> \r\n> Could you try to reduce the size of the dummy_data.zip files ? they're quite big (300KB)\r\n\r\nYes, I think it might make sense to enhance the tool a tiny bit to prevent this automatically",
"That's the lightest I can make it...it's long-range summarization so a single sample has ~11000 toke... | 2021-01-12T01:15:40Z | 2021-01-12T11:49:15Z | 2021-01-12T11:41:47Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1721.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1721",
"merged_at": "2021-01-12T11:41:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1721.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1721"
} | Datasets were uploaded to https://s3.amazonaws.com/datasets.huggingface.co/scientific_papers/1.1.1/arxiv-dataset.zip and https://s3.amazonaws.com/datasets.huggingface.co/scientific_papers/1.1.1/pubmed-dataset.zip respectively, to escape the Google Drive quota and enable faster downloads. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1721/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1721/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1789 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1789/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1789/comments | https://api.github.com/repos/huggingface/datasets/issues/1789/events | https://github.com/huggingface/datasets/pull/1789 | 796,229,721 | MDExOlB1bGxSZXF1ZXN0NTYzNDQyMTc2 | 1,789 | [BUG FIX] typo in the import path for metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | [] | 2021-01-28T18:01:37Z | 2021-01-28T18:13:56Z | 2021-01-28T18:13:56Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1789.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1789",
"merged_at": "2021-01-28T18:13:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1789.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1789"
} | This tiny PR fixes a typo introduced in https://github.com/huggingface/datasets/pull/1726 which prevents loading new metrics | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1789/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1789/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2076 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2076/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2076/comments | https://api.github.com/repos/huggingface/datasets/issues/2076/events | https://github.com/huggingface/datasets/issues/2076 | 834,445,296 | MDU6SXNzdWU4MzQ0NDUyOTY= | 2,076 | Issue: Dataset download error | {
"avatar_url": "https://avatars.githubusercontent.com/u/20436061?v=4",
"events_url": "https://api.github.com/users/XuhuiZhou/events{/privacy}",
"followers_url": "https://api.github.com/users/XuhuiZhou/followers",
"following_url": "https://api.github.com/users/XuhuiZhou/following{/other_user}",
"gists_url": "https://api.github.com/users/XuhuiZhou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/XuhuiZhou",
"id": 20436061,
"login": "XuhuiZhou",
"node_id": "MDQ6VXNlcjIwNDM2MDYx",
"organizations_url": "https://api.github.com/users/XuhuiZhou/orgs",
"received_events_url": "https://api.github.com/users/XuhuiZhou/received_events",
"repos_url": "https://api.github.com/users/XuhuiZhou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/XuhuiZhou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XuhuiZhou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/XuhuiZhou"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | open | false | null | [] | null | [
"Hi @XuhuiZhou, thanks for reporting this issue. \r\n\r\nIndeed, the old links are no longer valid (404 Not Found error), and the script must be updated with the new links to Google Drive.",
"It would be nice to update the urls indeed !\r\n\r\nTo do this, you just need to replace the urls in `iwslt2017.py` and th... | 2021-03-18T06:36:06Z | 2021-03-22T11:52:31Z | null | NONE | null | null | null | The download link in `iwslt2017.py` file does not seem to work anymore.
For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz`
Would be nice if we could modify the script and use the new download link? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2076/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2076/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5406/comments | https://api.github.com/repos/huggingface/datasets/issues/5406/events | https://github.com/huggingface/datasets/issues/5406 | 1,519,140,544 | I_kwDODunzps5ajD7A | 5,406 | [2.6.1][2.7.0] Upgrade `datasets` to fix `TypeError: can only concatenate str (not "int") to str` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | open | false | null | [] | null | [
"I still get this error on 2.9.0\r\n<img width=\"1925\" alt=\"image\" src=\"https://user-images.githubusercontent.com/7208470/215597359-2f253c76-c472-4612-8099-d3a74d16eb29.png\">\r\n",
"Hi ! I just tested locally and or colab and it works fine for 2.9 on `sst2`.\r\n\r\nAlso the code that is shown in your stack t... | 2023-01-04T15:10:04Z | 2023-06-21T18:45:38Z | null | MEMBER | null | null | null | `datasets` 2.6.1 and 2.7.0 started to stop supporting datasets like IMDB, ConLL or MNIST datasets.
When loading a dataset using 2.6.1 or 2.7.0, you may this error when loading certain datasets:
```python
TypeError: can only concatenate str (not "int") to str
```
This is because we started to update the metadata of those datasets to a format that is not supported in 2.6.1 and 2.7.0
This change is required or those datasets won't be supported by the Hugging Face Hub.
Therefore if you encounter this error or if you're using `datasets` 2.6.1 or 2.7.0, we encourage you to update to a newer version.
For example, versions 2.6.2 and 2.7.1 patch this issue.
```python
pip install -U datasets
```
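If you are not sure which version you have installed, a quick check:
```python
import datasets
print(datasets.__version__)  # 2.6.1 and 2.7.0 are the affected versions
```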
All the datasets affected are the ones with a ClassLabel feature type and YAML "dataset_info" metadata. More info [here](https://github.com/huggingface/datasets/issues/5275).
We apologize for the inconvenience. | {
"+1": 11,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 11,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5406/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5406/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2034 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2034/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2034/comments | https://api.github.com/repos/huggingface/datasets/issues/2034/events | https://github.com/huggingface/datasets/pull/2034 | 829,381,388 | MDExOlB1bGxSZXF1ZXN0NTkxMDU2MTEw | 2,034 | Fix typo | {
"avatar_url": "https://avatars.githubusercontent.com/u/3413464?v=4",
"events_url": "https://api.github.com/users/pcyin/events{/privacy}",
"followers_url": "https://api.github.com/users/pcyin/followers",
"following_url": "https://api.github.com/users/pcyin/following{/other_user}",
"gists_url": "https://api.github.com/users/pcyin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pcyin",
"id": 3413464,
"login": "pcyin",
"node_id": "MDQ6VXNlcjM0MTM0NjQ=",
"organizations_url": "https://api.github.com/users/pcyin/orgs",
"received_events_url": "https://api.github.com/users/pcyin/received_events",
"repos_url": "https://api.github.com/users/pcyin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pcyin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pcyin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pcyin"
} | [] | closed | false | null | [] | null | [] | 2021-03-11T17:46:13Z | 2021-03-11T18:06:25Z | 2021-03-11T18:06:25Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2034.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2034",
"merged_at": "2021-03-11T18:06:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2034.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2034"
} | Change `ENV_XDG_CACHE_HOME ` to `XDG_CACHE_HOME ` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2034/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2034/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5091 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5091/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5091/comments | https://api.github.com/repos/huggingface/datasets/issues/5091/events | https://github.com/huggingface/datasets/pull/5091 | 1,401,112,552 | PR_kwDODunzps5AZCm9 | 5,091 | Allow connection objects in `from_sql` + small doc improvement | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-07T12:39:44Z | 2022-10-09T13:19:15Z | 2022-10-09T13:16:57Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5091.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5091",
"merged_at": "2022-10-09T13:16:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5091.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5091"
} | Allow connection objects in `from_sql` (emit a warning that they are cachable) and add a tip that explains the format of the con parameter when provided as a URI string.
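For reference, a hedged usage sketch (the database file and table below are made up; caching behavior may differ between the two forms, as noted above):
```python
import sqlite3
from datasets import Dataset

# `con` passed as a URI string
ds = Dataset.from_sql("states", "sqlite:///us_states.db")

# `con` passed as a connection object (what this PR enables)
con = sqlite3.connect("us_states.db")
ds = Dataset.from_sql("SELECT * FROM states", con)
```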
PS: ~~This PR contains a parameter link, so https://github.com/huggingface/doc-builder/pull/311 needs to be merged before it's "ready for review".~~ Done! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5091/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5091/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3074 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3074/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3074/comments | https://api.github.com/repos/huggingface/datasets/issues/3074/events | https://github.com/huggingface/datasets/pull/3074 | 1,025,940,085 | PR_kwDODunzps4tLbe- | 3,074 | add XCSR dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42788901?v=4",
"events_url": "https://api.github.com/users/yangxqiao/events{/privacy}",
"followers_url": "https://api.github.com/users/yangxqiao/followers",
"following_url": "https://api.github.com/users/yangxqiao/following{/other_user}",
"gists_url": "https://api.github.com/users/yangxqiao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yangxqiao",
"id": 42788901,
"login": "yangxqiao",
"node_id": "MDQ6VXNlcjQyNzg4OTAx",
"organizations_url": "https://api.github.com/users/yangxqiao/orgs",
"received_events_url": "https://api.github.com/users/yangxqiao/received_events",
"repos_url": "https://api.github.com/users/yangxqiao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yangxqiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangxqiao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yangxqiao"
} | [] | closed | false | null | [] | null | [
"> Hi ! Thanks for adding this dataset :)\r\n> \r\n> Do you know how the translations were done ? Maybe we can mention that in the dataset card.\r\n> \r\n> The rest looks all good to me :) good job with the dataset script and the dataset card !\r\n> \r\n> Just one thing: we try to have dummy_data.zip files that are... | 2021-10-14T04:39:59Z | 2021-11-08T13:52:36Z | 2021-11-08T13:52:36Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3074.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3074",
"merged_at": "2021-11-08T13:52:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3074.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3074"
} | Hi,
I wanted to add the [XCSR ](https://inklab.usc.edu//XCSR/xcsr_datasets) dataset to huggingface! :)
I followed the instructions for adding a new dataset to huggingface and have all the required files ready now! It would be super helpful if you could take a look and review them. Thanks in advance for your time and help. Look forward to hearing from you and can't wait to add XCSR to huggingface :D | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3074/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3074/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5483 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5483/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5483/comments | https://api.github.com/repos/huggingface/datasets/issues/5483/events | https://github.com/huggingface/datasets/issues/5483 | 1,560,894,690 | I_kwDODunzps5dCVzi | 5,483 | Unable to upload dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yuvalkirstain",
"id": 57996478,
"login": "yuvalkirstain",
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yuvalkirstain"
} | [] | closed | false | null | [] | null | [
"Seems to work now, perhaps it was something internal with our university's network."
] | 2023-01-28T15:18:26Z | 2023-01-29T08:09:49Z | 2023-01-29T08:09:49Z | NONE | null | null | null | ### Describe the bug
Uploading a simple dataset ends with an exception
### Steps to reproduce the bug
I created a new conda env with python 3.10, pip installed datasets and:
```python
>>> from datasets import load_dataset, load_from_disk, Dataset
>>> d = Dataset.from_dict({"text": ["hello"] * 2})
>>> d.push_to_hub("ttt111")
/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_hf_folder.py:92: UserWarning: A token has been found in `/a/home/cc/students/cs/kirstain/.huggingface/token`. This is the old path where tokens were stored. The new location is `/home/olab/kirstain/.cache/huggingface/token` which is configurable using `HF_HOME` environment variable. Your token has been copied to this new location. You can now safely delete the old token file manually or use `huggingface-cli logout`.
warnings.warn(
Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 279.94ba/s]
Upload 1 LFS files: 0%| | 0/1 [00:02<?, ?it/s]
Pushing dataset shards to the dataset hub: 0%| | 0/1 [00:04<?, ?it/s]
Traceback (most recent call last):
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 264, in hf_raise_for_status
response.raise_for_status()
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://s3.us-east-1.amazonaws.com/lfs.huggingface.co/repos/cf/0c/cf0c5ab8a3f729e5f57a8b79a36ecea64a31126f13218591c27ed9a1c7bd9b41/ece885a4bb6bbc8c1bb51b45542b805283d74590f72cd4c45d3ba76628570386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230128%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230128T151640Z&X-Amz-Expires=900&X-Amz-Signature=89e78e9a9d70add7ed93d453334f4f93c6f29d889d46750a1f2da04af73978db&X-Amz-SignedHeaders=host&x-amz-storage-class=INTELLIGENT_TIERING&x-id=PutObject
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 334, in _inner_upload_lfs_object
return _upload_lfs_object(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 391, in _upload_lfs_object
lfs_upload(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/lfs.py", line 273, in lfs_upload
_upload_single_part(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/lfs.py", line 305, in _upload_single_part
hf_raise_for_status(upload_res)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 318, in hf_raise_for_status
raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 403 Client Error: Forbidden for url: https://s3.us-east-1.amazonaws.com/lfs.huggingface.co/repos/cf/0c/cf0c5ab8a3f729e5f57a8b79a36ecea64a31126f13218591c27ed9a1c7bd9b41/ece885a4bb6bbc8c1bb51b45542b805283d74590f72cd4c45d3ba76628570386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230128%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230128T151640Z&X-Amz-Expires=900&X-Amz-Signature=89e78e9a9d70add7ed93d453334f4f93c6f29d889d46750a1f2da04af73978db&X-Amz-SignedHeaders=host&x-amz-storage-class=INTELLIGENT_TIERING&x-id=PutObject
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 4909, in push_to_hub
repo_id, split, uploaded_size, dataset_nbytes, repo_files, deleted_size = self._push_parquet_shards_to_hub(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 4804, in _push_parquet_shards_to_hub
_retry(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 281, in _retry
return func(*func_args, **func_kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2537, in upload_file
commit_info = self.create_commit(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2346, in create_commit
upload_lfs_files(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 346, in upload_lfs_files
thread_map(
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/tqdm/contrib/concurrent.py", line 94, in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/tqdm/contrib/concurrent.py", line 76, in _executor_map
return list(tqdm_class(ex.map(fn, *iterables, **map_args), **kwargs))
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator
yield _result_or_cancel(fs.pop())
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel
return fut.result(timeout)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 338, in _inner_upload_lfs_object
raise RuntimeError(
RuntimeError: Error while uploading 'data/train-00000-of-00001-6df93048e66df326.parquet' to the Hub.
```
### Expected behavior
The dataset should be uploaded without any exceptions
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-4.15.0-65-generic-x86_64-with-glibc2.27
- Python version: 3.10.9
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5483/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5483/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5219 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5219/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5219/comments | https://api.github.com/repos/huggingface/datasets/issues/5219/events | https://github.com/huggingface/datasets/issues/5219 | 1,441,255,910 | I_kwDODunzps5V59Hm | 5,219 | Delta Tables usage using Datasets Library | {
"avatar_url": "https://avatars.githubusercontent.com/u/23002137?v=4",
"events_url": "https://api.github.com/users/reichenbch/events{/privacy}",
"followers_url": "https://api.github.com/users/reichenbch/followers",
"following_url": "https://api.github.com/users/reichenbch/following{/other_user}",
"gists_url": "https://api.github.com/users/reichenbch/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/reichenbch",
"id": 23002137,
"login": "reichenbch",
"node_id": "MDQ6VXNlcjIzMDAyMTM3",
"organizations_url": "https://api.github.com/users/reichenbch/orgs",
"received_events_url": "https://api.github.com/users/reichenbch/received_events",
"repos_url": "https://api.github.com/users/reichenbch/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/reichenbch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/reichenbch/subscriptions",
"type": "User",
"url": "https://api.github.com/users/reichenbch"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi ! Interesting :) Can you provide concrete examples of cases where it can be useful ?",
"Few example blogs and posts that might help on this - \r\n\r\n1. https://hevodata.com/learn/databricks-delta-tables/\r\n2. https://docs.databricks.com/delta/index.html\r\n\r\nBasically, we are looking at utility of Dataset... | 2022-11-09T02:43:56Z | 2023-03-02T19:29:12Z | null | NONE | null | null | null | ### Feature request
Add compatibility between the Datasets library and the Delta format, elevating the library's utility from a machine-learning scope to a data-engineering scope as well.
### Motivation
The datasets library can already ingest csv, json, parquet and other file formats, but it would be great if it could also work with Delta Tables (the Delta format), which bring additional features such as time travel, layout optimization and better query performance that aid data engineering.
This would grow the Datasets library from a machine-learning utility into a data-engineering utility as well, and expand its horizons thereafter. I use the Datasets library in all my use cases, and as my role expands so does the work; compatibility with the Datasets library is something I don't want to lose.
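For context, here is a minimal sketch of what is already possible today by going through pandas with the `deltalake` package (the table path is hypothetical); native support in `datasets` would avoid this detour:
```python
from deltalake import DeltaTable
from datasets import Dataset

# read a Delta table into pandas, then wrap it as a datasets.Dataset
df = DeltaTable("path/to/delta_table").to_pandas()
ds = Dataset.from_pandas(df)
```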
### Your contribution
I would love to work on this feature, even if it has to be picked up from scratch, including design paradigms and patterns.
I have a basic understanding of Delta Live Tables and would quickly brush up on it for this feature. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5219/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5219/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2201/comments | https://api.github.com/repos/huggingface/datasets/issues/2201/events | https://github.com/huggingface/datasets/pull/2201 | 854,499,563 | MDExOlB1bGxSZXF1ZXN0NjEyNDM1NTE3 | 2,201 | Fix ArrowWriter overwriting features in ArrowBasedBuilder | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2021-04-09T12:56:19Z | 2021-04-12T13:32:17Z | 2021-04-12T13:32:16Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2201.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2201",
"merged_at": "2021-04-12T13:32:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2201.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2201"
} | This should fix the issues with CSV loading experienced in #2153 and #2200.
The CSV builder is an ArrowBasedBuilder that had an issue with its ArrowWriter used to write the arrow file from the csv data.
The writer wasn't initialized with the features passed by the user. Therefore the writer was inferring the features from the arrow data, discarding the features passed by the user.
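For illustration (the file name and column names here are assumed), features passed like this were being discarded and re-inferred from the data:
```python
from datasets import load_dataset, Features, Value

features = Features({"text": Value("string"), "label": Value("int64")})
ds = load_dataset("csv", data_files="data.csv", features=features)
# before this fix, ds["train"].features could end up re-inferred from the Arrow data
# instead of matching the `features` passed above
```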
I fixed that and updated the tests. | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2201/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2201/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6241 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6241/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6241/comments | https://api.github.com/repos/huggingface/datasets/issues/6241/events | https://github.com/huggingface/datasets/pull/6241 | 1,896,429,694 | PR_kwDODunzps5aVfl- | 6,241 | Remove unused global variables in `audio.py` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-09-14T12:06:32Z | 2023-09-15T15:57:10Z | 2023-09-15T15:46:07Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6241.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6241",
"merged_at": "2023-09-15T15:46:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6241.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6241"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6241/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6241/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4149/comments | https://api.github.com/repos/huggingface/datasets/issues/4149/events | https://github.com/huggingface/datasets/issues/4149 | 1,201,389,221 | I_kwDODunzps5Hm76l | 4,149 | load_dataset for winoground returning decoding error | {
"avatar_url": "https://avatars.githubusercontent.com/u/4686956?v=4",
"events_url": "https://api.github.com/users/odellus/events{/privacy}",
"followers_url": "https://api.github.com/users/odellus/followers",
"following_url": "https://api.github.com/users/odellus/following{/other_user}",
"gists_url": "https://api.github.com/users/odellus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/odellus",
"id": 4686956,
"login": "odellus",
"node_id": "MDQ6VXNlcjQ2ODY5NTY=",
"organizations_url": "https://api.github.com/users/odellus/orgs",
"received_events_url": "https://api.github.com/users/odellus/received_events",
"repos_url": "https://api.github.com/users/odellus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/odellus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/odellus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/odellus"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"I thought I had fixed it with this after some helpful hints from @severo\r\n```python\r\nimport datasets \r\ntoken = 'hf_XXXXX'\r\ndataset = datasets.load_dataset(\r\n 'facebook/winoground', \r\n name='facebook--winoground', \r\n split='train', \r\n streaming=True,\r\n use_auth_token=token,\r\n)\r\n... | 2022-04-12T08:16:16Z | 2022-05-04T23:40:38Z | 2022-05-04T23:40:38Z | CONTRIBUTOR | null | null | null | ## Describe the bug
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/winoground', use_auth_token=token)
```
## Expected results
I downloaded images.zip and examples.jsonl manually. I was expecting some trouble decoding the JSON, so I didn't use the jsonlines library, but I was still able to get a complete set of 400 examples by doing
```python
import json
with open('examples.jsonl', 'r') as f:
examples = f.read().split('\n')
# Thinking this would error if the JSON is not utf-8 encoded
json_data = [json.loads(x) for x in examples]
print(json_data[-1])
```
and I see
```python
{'caption_0': 'someone is overdoing it',
'caption_1': 'someone is doing it over',
'collapsed_tag': 'Relation',
'id': 399,
'image_0': 'ex_399_img_0',
'image_1': 'ex_399_img_1',
'num_main_preds': 1,
'secondary_tag': 'Morpheme-Level',
'tag': 'Scope, Preposition'}
```
so I'm not sure what's going on here honestly. The file `examples.jsonl` doesn't have non-UTF-8 encoded text.
## Actual results
During the split operation after downloading, datasets encounters an error in the JSON ([trace](https://gist.github.com/odellus/e55d390ca203386bf551f38e0c63a46b) abbreviated for brevity).
```
datasets/packaged_modules/json/json.py:144 in Json._generate_tables(self, files)
...
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
```
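For reference, the streaming-based workaround discussed in the comments looks roughly like this:
```python
import datasets

token = "hf_XXXXX"  # HF access token
dataset = datasets.load_dataset(
    "facebook/winoground",
    name="facebook--winoground",
    split="train",
    streaming=True,
    use_auth_token=token,
)
```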
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4
- Platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 7.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4149/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4149/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2263 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2263/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2263/comments | https://api.github.com/repos/huggingface/datasets/issues/2263/events | https://github.com/huggingface/datasets/pull/2263 | 867,420,912 | MDExOlB1bGxSZXF1ZXN0NjIzMDk0NTcy | 2,263 | test data added, dataset_infos updated | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
} | [] | closed | false | null | [] | null | [] | 2021-04-26T08:27:18Z | 2021-04-29T09:30:21Z | 2021-04-29T09:30:20Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2263.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2263",
"merged_at": "2021-04-29T09:30:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2263.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2263"
} | Fixes #2262. Thanks for pointing out the issue with the dataset, @jinmang2! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2263/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2263/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5145/comments | https://api.github.com/repos/huggingface/datasets/issues/5145/events | https://github.com/huggingface/datasets/issues/5145 | 1,418,005,452 | I_kwDODunzps5UhQvM | 5,145 | Dataset order is not deterministic with ZIP archives and `iter_files` | {
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fxmarty",
"id": 9808326,
"login": "fxmarty",
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fxmarty"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Thanks for reporting ! The issue doesn't come from shuffling, but from `beans` row order not being deterministic:\r\n\r\nhttps://huggingface.co/datasets/beans/blob/main/beans.py uses `dl_manager.iter_files` on ZIP archives and the file order doesn't seen to be deterministic and changes across machines",
"Thank y... | 2022-10-21T09:00:03Z | 2022-10-27T09:51:49Z | 2022-10-27T09:51:10Z | CONTRIBUTOR | null | null | null | ### Describe the bug
For the `beans` dataset (I did not try others), the order of samples is not the same on different machines. I tested on my local laptop, a GitHub Actions machine, and an EC2 instance: the three yield different orders.
### Steps to reproduce the bug
In a clean docker container or conda environment with datasets==2.6.1, run
```python
from datasets import load_dataset
from pprint import pprint
data = load_dataset("beans", split="validation")
pprint(data["image_file_path"])
```
### Expected behavior
The order of the images is the same on all machines.
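As a side note (not part of the original report), one way to get a machine-independent order is to sort by file path:
```python
from datasets import load_dataset

data = load_dataset("beans", split="validation")
data = data.sort("image_file_path")  # deterministic regardless of how archive files are enumerated
```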
### Environment info
On the EC2 instance:
```
- `datasets` version: 2.6.1
- Platform: Linux-4.14.291-218.527.amzn2.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.7.10
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
- Numpy version: not checked
```
On my local laptop:
```
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-50-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
- Numpy version: 1.23.1
```
On github actions:
```
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-1022-azure-x86_64-with-glibc2.2.5
- Python version: 3.8.14
- PyArrow version: 9.0.0
- Pandas version: 1.5.1
- Numpy version: 1.23.4
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5145/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5145/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3143 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3143/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3143/comments | https://api.github.com/repos/huggingface/datasets/issues/3143/events | https://github.com/huggingface/datasets/issues/3143 | 1,033,569,655 | I_kwDODunzps49mwV3 | 3,143 | Provide a way to check if the features (in info) match with the data of a split | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "E5583E",
"default": fals... | open | false | null | [] | null | [
"Related: #3144 "
] | 2021-10-22T13:13:36Z | 2021-10-22T13:17:56Z | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
I understand that currently the loaded data does not always have the type described in the info features.
**Describe the solution you'd like**
Provide a way to check whether the rows have the types described by the info features.
**Describe alternatives you've considered**
Always check it, and raise an error when loading the data if the data's type doesn't match the features.
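A rough sketch of an approximate check that is possible today (the dataset name is hypothetical, and this is not a dedicated API): try to re-encode a few rows with the declared features and surface any mismatch.
```python
from datasets import load_dataset

ds = load_dataset("some_dataset", split="train")
declared = ds.info.features
for i, example in enumerate(ds.select(range(10))):
    try:
        declared.encode_example(example)  # raises if the row can't be encoded with the declared types
    except Exception as err:
        print(f"row {i} does not match the declared features: {err}")
```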
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3143/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3143/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5368 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5368/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5368/comments | https://api.github.com/repos/huggingface/datasets/issues/5368/events | https://github.com/huggingface/datasets/pull/5368 | 1,500,322,973 | PR_kwDODunzps5FpZyx | 5,368 | Align remove columns behavior and input dict mutation in `map` with previous behavior | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-16T14:28:47Z | 2022-12-16T16:28:08Z | 2022-12-16T16:25:12Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5368.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5368",
"merged_at": "2022-12-16T16:25:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5368.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5368"
} | Align the `remove_columns` behavior and input dict mutation in `map` with the behavior before https://github.com/huggingface/datasets/pull/5252. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5368/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5368/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3406/comments | https://api.github.com/repos/huggingface/datasets/issues/3406/events | https://github.com/huggingface/datasets/pull/3406 | 1,074,366,050 | PR_kwDODunzps4vjV21 | 3,406 | Fix module inference for archive with a directory | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2021-12-08T12:39:12Z | 2021-12-08T13:03:30Z | 2021-12-08T13:03:29Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3406.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3406",
"merged_at": "2021-12-08T13:03:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3406.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3406"
} | Fix module inference for an archive file that contains files within a directory.
Fix #3405. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3406/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3406/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5830 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5830/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5830/comments | https://api.github.com/repos/huggingface/datasets/issues/5830/events | https://github.com/huggingface/datasets/pull/5830 | 1,701,451,399 | PR_kwDODunzps5QEFEi | 5,830 | Debug windows #2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/6477701?v=4",
"events_url": "https://api.github.com/users/HyukjinKwon/events{/privacy}",
"followers_url": "https://api.github.com/users/HyukjinKwon/followers",
"following_url": "https://api.github.com/users/HyukjinKwon/following{/other_user}",
"gists_url": "https://api.github.com/users/HyukjinKwon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HyukjinKwon",
"id": 6477701,
"login": "HyukjinKwon",
"node_id": "MDQ6VXNlcjY0Nzc3MDE=",
"organizations_url": "https://api.github.com/users/HyukjinKwon/orgs",
"received_events_url": "https://api.github.com/users/HyukjinKwon/received_events",
"repos_url": "https://api.github.com/users/HyukjinKwon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HyukjinKwon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HyukjinKwon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HyukjinKwon"
} | [] | closed | false | null | [] | null | [] | 2023-05-09T06:40:34Z | 2023-05-09T06:40:47Z | 2023-05-09T06:40:47Z | NONE | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5830.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5830",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5830.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5830"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5830/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5830/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3826 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3826/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3826/comments | https://api.github.com/repos/huggingface/datasets/issues/3826/events | https://github.com/huggingface/datasets/pull/3826 | 1,159,851,110 | PR_kwDODunzps4z90JU | 3,826 | Add IterableDataset.filter | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3826). All of your documentation changes will be reflected on that endpoint.",
"Indeed ! If `batch_size` is `None` or `<=0` then the full dataset should be passed. It's been mentioned in the docs for a while but never actually ... | 2022-03-04T16:57:23Z | 2022-03-09T17:23:13Z | 2022-03-09T17:23:11Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3826.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3826",
"merged_at": "2022-03-09T17:23:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3826.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3826"
} | _Needs https://github.com/huggingface/datasets/pull/3801 to be merged first_
I added `IterableDataset.filter` with an API that is a subset of `Dataset.filter`:
```python
def filter(self, function, batched=False, batch_size=1000, with_indices=False, input_columns=None):
```
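A minimal usage sketch (the dataset name is hypothetical):
```python
from datasets import load_dataset

ds = load_dataset("some_dataset", split="train", streaming=True)  # returns an IterableDataset
ds = ds.filter(lambda example: example["label"] == 0)
print(next(iter(ds)))
```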
TODO:
- [x] tests
- [x] docs
related to https://github.com/huggingface/datasets/issues/3444 and https://github.com/huggingface/datasets/issues/3753 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3826/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3826/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5129/comments | https://api.github.com/repos/huggingface/datasets/issues/5129/events | https://github.com/huggingface/datasets/issues/5129 | 1,413,031,664 | I_kwDODunzps5UOSbw | 5,129 | unexpected `cast` or `class_encode_column` result after `rename_column` | {
"avatar_url": "https://avatars.githubusercontent.com/u/35144675?v=4",
"events_url": "https://api.github.com/users/quaeast/events{/privacy}",
"followers_url": "https://api.github.com/users/quaeast/followers",
"following_url": "https://api.github.com/users/quaeast/following{/other_user}",
"gists_url": "https://api.github.com/users/quaeast/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/quaeast",
"id": 35144675,
"login": "quaeast",
"node_id": "MDQ6VXNlcjM1MTQ0Njc1",
"organizations_url": "https://api.github.com/users/quaeast/orgs",
"received_events_url": "https://api.github.com/users/quaeast/received_events",
"repos_url": "https://api.github.com/users/quaeast/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/quaeast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quaeast/subscriptions",
"type": "User",
"url": "https://api.github.com/users/quaeast"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi! Unfortunately, I can't reproduce this issue locally (in Python 3.7/3.10) or in Colab. I would assume this is due to a bug we fixed in the latest release, but your version is up-to-date, so I'm not sure if there is something we can do to help...",
"Hi, 方子东. I tried running the code with exact the same configu... | 2022-10-18T11:15:24Z | 2022-10-19T03:02:26Z | 2022-10-19T03:02:26Z | NONE | null | null | null | ## Describe the bug
When invoking `cast` or `class_encode_column` on a column renamed by `rename_column`, it converts all the values in this column into a single value. I also ran this script with version 2.5.2, where this bug does not appear, so I switched to the older version.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("amazon_reviews_multi", "en")
data = dataset['train']
data = data.remove_columns(
[
"review_id",
"product_id",
"reviewer_id",
"review_title",
"language",
"product_category",
]
)
data = data.rename_column("review_body", "text")
data1 = data.class_encode_column("stars")
print(set(data1.data.columns[0]))
# output: {<pyarrow.Int64Scalar: 4>, <pyarrow.Int64Scalar: 2>, <pyarrow.Int64Scalar: 3>, <pyarrow.Int64Scalar: 0>, <pyarrow.Int64Scalar: 1>}
data = data.rename_column("stars", "label")
print(set(data.data.columns[0]))
# output: {<pyarrow.Int32Scalar: 5>, <pyarrow.Int32Scalar: 4>, <pyarrow.Int32Scalar: 1>, <pyarrow.Int32Scalar: 3>, <pyarrow.Int32Scalar: 2>}
data2 = data.class_encode_column("label")
print(set(data2.data.columns[0]))
# output: {<pyarrow.Int64Scalar: 0>}
```
## Expected results
the last print should be:
{<pyarrow.Int64Scalar: 4>, <pyarrow.Int64Scalar: 2>, <pyarrow.Int64Scalar: 3>, <pyarrow.Int64Scalar: 0>, <pyarrow.Int64Scalar: 1>}
## Actual results
but it output:
{<pyarrow.Int64Scalar: 0>}
## Environment info
- `datasets` version: 2.6.1
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.10.6
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5129/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5129/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4982 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4982/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4982/comments | https://api.github.com/repos/huggingface/datasets/issues/4982/events | https://github.com/huggingface/datasets/issues/4982 | 1,375,604,693 | I_kwDODunzps5R_g_V | 4,982 | Create dataset_infos.json with VALIDATION and TEST splits | {
"avatar_url": "https://avatars.githubusercontent.com/u/26695348?v=4",
"events_url": "https://api.github.com/users/skalinin/events{/privacy}",
"followers_url": "https://api.github.com/users/skalinin/followers",
"following_url": "https://api.github.com/users/skalinin/following{/other_user}",
"gists_url": "https://api.github.com/users/skalinin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/skalinin",
"id": 26695348,
"login": "skalinin",
"node_id": "MDQ6VXNlcjI2Njk1MzQ4",
"organizations_url": "https://api.github.com/users/skalinin/orgs",
"received_events_url": "https://api.github.com/users/skalinin/received_events",
"repos_url": "https://api.github.com/users/skalinin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/skalinin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skalinin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/skalinin"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"@mariosasko could you help me with this issue? we've started the discussion from [here](https://github.com/huggingface/datasets/issues/4895#issuecomment-1248227130)",
"Hi again! Can you please pass the directory name containing the dataset script instead of the script name to `datasets-cli test`?",
"Yes, it wo... | 2022-09-16T08:21:19Z | 2022-09-28T07:59:39Z | 2022-09-28T07:59:39Z | NONE | null | null | null | The problem is described in that [issue](https://github.com/huggingface/datasets/issues/4895#issuecomment-1247975569).
> When I try to create data_infos.json using datasets-cli test Peter.py --save_infos --all_configs I get an error:
> ValueError: Unknown split "test". Should be one of ['train'].
>
> The data_infos.json is created perfectly fine when I use only one split - datasets.Split.TRAIN
>
> You can find the code here: https://huggingface.co/datasets/sberbank-ai/Peter/tree/add_splits (add_splits branch)
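For reference, a minimal sketch (URL keys and generator kwargs are assumed here) of the kind of `_split_generators` the dataset script needs so that `datasets-cli test --save_infos` records all three splits:
```python
import datasets

def _split_generators(self, dl_manager):
    data_files = dl_manager.download_and_extract(_URLS)  # assumes an `_URLS` dict with three entries
    return [
        datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"data_dir": data_files["train"]}),
        datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"data_dir": data_files["validation"]}),
        datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"data_dir": data_files["test"]}),
    ]
```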
I tried to clear the cache folder, then I got another error. I ran:
```
git clone https://huggingface.co/datasets/sberbank-ai/Peter
cd Peter
git checkout add_splits # switch to a add_splits branch
rm dataset_infos.json # remove local dataset_infos.json
rm -r ~/.cache/huggingface # remove cached dataset_infos.json
datasets-cli test Peter.py --save_infos --all_configs # trying to create new dataset_infos.json
```
The error message:
```
Using custom data configuration default
Testing builder 'default' (1/1)
Downloading and preparing dataset peter/default to /Users/kalinin/.cache/huggingface/datasets/peter/default/0.0.0/ef579519e140d6a40df2555996f26165f04c47557d7373709c8d7e7b4fd7465d...
Downloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 5160.63it/s]
Extracting data files: 0%| | 0/4 [00:00<?, ?it/s]Traceback (most recent call last):
File "/usr/local/bin/datasets-cli", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.9/site-packages/datasets/commands/datasets_cli.py", line 39, in main
service.run()
File "/usr/local/lib/python3.9/site-packages/datasets/commands/test.py", line 137, in run
builder.download_and_prepare(
File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 1227, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 771, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/Users/kalinin/.cache/huggingface/modules/datasets_modules/datasets/Peter/ef579519e140d6a40df2555996f26165f04c47557d7373709c8d7e7b4fd7465d/Peter.py", line 23, in _split_generators
data_files = dl_manager.download_and_extract(_URLS)
File "/usr/local/lib/python3.9/site-packages/datasets/download/download_manager.py", line 431, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/usr/local/lib/python3.9/site-packages/datasets/download/download_manager.py", line 403, in extract
extracted_paths = map_nested(
File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 393, in map_nested
mapped = [
File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 394, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 330, in _single_map_nested
return function(data_struct)
File "/usr/local/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 213, in cached_path
output_path = ExtractManager(cache_dir=download_config.cache_dir).extract(
File "/usr/local/lib/python3.9/site-packages/datasets/utils/extract.py", line 46, in extract
self.extractor.extract(input_path, output_path, extractor_format)
File "/usr/local/lib/python3.9/site-packages/datasets/utils/extract.py", line 263, in extract
with FileLock(lock_path):
File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 399, in __init__
max_filename_length = os.statvfs(os.path.dirname(lock_file)).f_namemax
FileNotFoundError: [Errno 2] No such file or directory: ''
Exception ignored in: <function BaseFileLock.__del__ at 0x11caeec10>
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 328, in __del__
self.release(force=True)
File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 303, in release
with self._thread_lock:
AttributeError: 'UnixFileLock' object has no attribute '_thread_lock'
Extracting data files: 0%| | 0/4 [00:00<?, ?it/s]
```
Can you help me please?
## Environment info
- `datasets` version: 2.4.0
- Platform: macOS-12.5.1-x86_64-i386-64bit
- Python version: 3.9.5
- PyArrow version: 9.0.0
- Pandas version: 1.2.4
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4982/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4982/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3797 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3797/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3797/comments | https://api.github.com/repos/huggingface/datasets/issues/3797/events | https://github.com/huggingface/datasets/pull/3797 | 1,154,383,063 | PR_kwDODunzps4zrgAD | 3,797 | Reddit dataset card contribution | {
"avatar_url": "https://avatars.githubusercontent.com/u/56791604?v=4",
"events_url": "https://api.github.com/users/anna-kay/events{/privacy}",
"followers_url": "https://api.github.com/users/anna-kay/followers",
"following_url": "https://api.github.com/users/anna-kay/following{/other_user}",
"gists_url": "https://api.github.com/users/anna-kay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/anna-kay",
"id": 56791604,
"login": "anna-kay",
"node_id": "MDQ6VXNlcjU2NzkxNjA0",
"organizations_url": "https://api.github.com/users/anna-kay/orgs",
"received_events_url": "https://api.github.com/users/anna-kay/received_events",
"repos_url": "https://api.github.com/users/anna-kay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/anna-kay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anna-kay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/anna-kay"
} | [] | closed | false | null | [] | null | [] | 2022-02-28T17:53:18Z | 2023-03-09T22:08:58Z | 2022-03-01T12:58:57Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3797.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3797",
"merged_at": "2022-03-01T12:58:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3797.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3797"
} | Description tags for webis-tldr-17 added. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3797/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3797/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6237 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6237/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6237/comments | https://api.github.com/repos/huggingface/datasets/issues/6237/events | https://github.com/huggingface/datasets/issues/6237 | 1,893,822,321 | I_kwDODunzps5w4W9x | 6,237 | Tokenization with multiple workers is too slow | {
"avatar_url": "https://avatars.githubusercontent.com/u/25720695?v=4",
"events_url": "https://api.github.com/users/macabdul9/events{/privacy}",
"followers_url": "https://api.github.com/users/macabdul9/followers",
"following_url": "https://api.github.com/users/macabdul9/following{/other_user}",
"gists_url": "https://api.github.com/users/macabdul9/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/macabdul9",
"id": 25720695,
"login": "macabdul9",
"node_id": "MDQ6VXNlcjI1NzIwNjk1",
"organizations_url": "https://api.github.com/users/macabdul9/orgs",
"received_events_url": "https://api.github.com/users/macabdul9/received_events",
"repos_url": "https://api.github.com/users/macabdul9/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/macabdul9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/macabdul9/subscriptions",
"type": "User",
"url": "https://api.github.com/users/macabdul9"
} | [] | closed | false | null | [] | null | [
"[This](https://huggingface.co/docs/datasets/nlp_process#map) is the most performant way to tokenize a dataset (`batched=True, num_proc=None, return_tensors=\"np\"`) \r\n\r\nIf`tokenizer.is_fast` returns `True`, `num_proc` must be `None/1` to benefit from the fast tokenizers' parallelism (the fast tokenizers are im... | 2023-09-13T06:18:34Z | 2023-09-19T21:54:58Z | 2023-09-19T21:54:58Z | NONE | null | null | null | I am trying to tokenize a few million documents with multiple workers but the tokenization process is taking forever.
Code snippet:
```
raw_datasets.map(
encode_function,
batched=False,
num_proc=args.preprocessing_num_workers,
load_from_cache_file=not args.overwrite_cache,
remove_columns=[name for name in raw_datasets["train"].column_names if name not in ["input_ids", "labels", "attention_mask"]],
desc="Tokenizing data",
)
```
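For reference, a sketch of a configuration that is usually much faster with a fast tokenizer: batched mapping with a single process, letting the tokenizer parallelize internally. The checkpoint name and `text` column are placeholders, and `raw_datasets` refers to the object from the snippet above.
```python
from transformers import AutoTokenizer

model_name = "huggyllama/llama-7b"  # placeholder checkpoint, not from the original report
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)

def encode_function(examples):
    return tokenizer(examples["text"], truncation=True)  # assumes a `text` column

tokenized = raw_datasets.map(
    encode_function,
    batched=True,  # batched mapping lets the fast tokenizer parallelize internally
    remove_columns=raw_datasets["train"].column_names,
    desc="Tokenizing data",
)
```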
Details:
```
transformers==4.28.0.dev0
datasets==4.28.0.dev0
preprocessing_num_workers==48
```
tokenizer == decapoda-research/llama-7b-hf
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6237/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6237/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5828 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5828/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5828/comments | https://api.github.com/repos/huggingface/datasets/issues/5828/events | https://github.com/huggingface/datasets/issues/5828 | 1,699,235,739 | I_kwDODunzps5lSEeb | 5,828 | Stream data concatenation issue | {
"avatar_url": "https://avatars.githubusercontent.com/u/48817796?v=4",
"events_url": "https://api.github.com/users/krishnapriya-18/events{/privacy}",
"followers_url": "https://api.github.com/users/krishnapriya-18/followers",
"following_url": "https://api.github.com/users/krishnapriya-18/following{/other_user}",
"gists_url": "https://api.github.com/users/krishnapriya-18/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/krishnapriya-18",
"id": 48817796,
"login": "krishnapriya-18",
"node_id": "MDQ6VXNlcjQ4ODE3Nzk2",
"organizations_url": "https://api.github.com/users/krishnapriya-18/orgs",
"received_events_url": "https://api.github.com/users/krishnapriya-18/received_events",
"repos_url": "https://api.github.com/users/krishnapriya-18/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/krishnapriya-18/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krishnapriya-18/subscriptions",
"type": "User",
"url": "https://api.github.com/users/krishnapriya-18"
} | [] | closed | false | null | [] | null | [
"Hi! \r\n\r\nYou can call `map` as follows to avoid the error:\r\n```python\r\naugmented_dataset_cln = dataset_cln['train'].map(augment_dataset, features=dataset_cln['train'].features)\r\n```",
"Thanks it is solved",
"Hi! \r\nI have run into the same problem with you. Could you please let me know how you solve ... | 2023-05-07T21:02:54Z | 2023-06-29T20:07:56Z | 2023-05-10T05:05:47Z | NONE | null | null | null | ### Describe the bug
I am not able to concatenate the augmented version of the streaming data with the original stream. I am using the latest version of `datasets`.
ValueError: The features can't be aligned because the key audio of features {'audio_id': Value(dtype='string',
id=None), 'audio': {'array': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'path':
Value(dtype='null', id=None), 'sampling_rate': Value(dtype='int64', id=None)}, 'transcript': Value(dtype='string',
id=None)} has unexpected type - {'array': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None),
'path': Value(dtype='null', id=None), 'sampling_rate': Value(dtype='int64', id=None)} (expected either
Audio(sampling_rate=16000, mono=True, decode=True, id=None) or Value("null").
### Steps to reproduce the bug
from datasets import load_dataset, Audio, interleave_datasets

dataset = load_dataset("tobiolatunji/afrispeech-200", "all", streaming=True).shuffle(seed=42)
dataset_cln = dataset.remove_columns(['speaker_id', 'path', 'age_group', 'gender', 'accent', 'domain', 'country', 'duration'])
dataset_cln = dataset_cln.cast_column("audio", Audio(sampling_rate=16000))
from audiomentations import AddGaussianNoise,Compose,Gain,OneOf,PitchShift,PolarityInversion,TimeStretch
augmentation = Compose([
AddGaussianNoise(min_amplitude=0.005, max_amplitude=0.015, p=0.2)
])
def augment_dataset(batch):
audio = batch["audio"]
audio["array"] = augmentation(audio["array"], sample_rate=audio["sampling_rate"])
return batch
augmented_dataset_cln = dataset_cln['train'].map(augment_dataset)
dataset_cln['train'] = interleave_datasets([dataset_cln['train'], augmented_dataset_cln])
dataset_cln['train'] = dataset_cln['train'].shuffle(seed=42)
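The fix suggested in the comments is to pass the expected features explicitly to `map`, so the augmented stream keeps the `Audio` feature type:
```python
augmented_dataset_cln = dataset_cln["train"].map(
    augment_dataset, features=dataset_cln["train"].features
)
```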
### Expected behavior
I should be able to merge them, since the sampling rate is the same.
### Environment info
import datasets
import transformers
import accelerate
print(datasets.__version__)
print(transformers.__version__)
print(torch.__version__)
print(evaluate.__version__)
print(accelerate.__version__)
2.12.0
4.28.1
2.0.0
0.4.0
0.18.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5828/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5828/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3063 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3063/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3063/comments | https://api.github.com/repos/huggingface/datasets/issues/3063/events | https://github.com/huggingface/datasets/issues/3063 | 1,023,588,297 | I_kwDODunzps49ArfJ | 3,063 | Windows CI is unable to test streaming properly because of SSL issues | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | closed | false | null | [] | null | [
"I think this problem is already fixed:\r\n```python\r\nIn [4]: import fsspec\r\n ...:\r\n ...: url = \"https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattributes\"\r\n ...:\r\n ...: fsspec.open(url).open()\r\nOut[4]: <File-like object HTTP... | 2021-10-12T09:33:40Z | 2022-08-24T14:59:29Z | 2022-08-24T14:59:29Z | MEMBER | null | null | null | In https://github.com/huggingface/datasets/pull/3041 the windows tests were skipped because of SSL issues with moon-staging.huggingface.co:443
The issue appears only on Windows with asyncio; on Linux it works. It also works with requests, as well as with the production environment huggingface.co.
To reproduce on Windows:
```python
import fsspec
# use any URL to a file in a dataset repo
url = "https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattributes"
fsspec.open(url).open()
```
raises
```python
FileNotFoundError: https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattributes
```
because of
```python
aiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host moon-staging.huggingface.co:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1131)')]
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3063/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3063/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5576 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5576/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5576/comments | https://api.github.com/repos/huggingface/datasets/issues/5576/events | https://github.com/huggingface/datasets/issues/5576 | 1,598,582,744 | I_kwDODunzps5fSG_Y | 5,576 | I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers. | {
"avatar_url": "https://avatars.githubusercontent.com/u/5126316?v=4",
"events_url": "https://api.github.com/users/wjfwzzc/events{/privacy}",
"followers_url": "https://api.github.com/users/wjfwzzc/followers",
"following_url": "https://api.github.com/users/wjfwzzc/following{/other_user}",
"gists_url": "https://api.github.com/users/wjfwzzc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wjfwzzc",
"id": 5126316,
"login": "wjfwzzc",
"node_id": "MDQ6VXNlcjUxMjYzMTY=",
"organizations_url": "https://api.github.com/users/wjfwzzc/orgs",
"received_events_url": "https://api.github.com/users/wjfwzzc/received_events",
"repos_url": "https://api.github.com/users/wjfwzzc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wjfwzzc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wjfwzzc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wjfwzzc"
} | [] | closed | false | null | [] | null | [
"Duplicated issue."
] | 2023-02-24T12:57:49Z | 2023-02-24T12:58:31Z | 2023-02-24T12:58:18Z | NONE | null | null | null | I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers.
I worked around this by downloading `the_pile_openwebtext2.py` and editing it to use local files and to drop the Reddit scores column (not needed for my purposes).
_Originally posted by @tc-wolf in https://github.com/huggingface/datasets/issues/3053#issuecomment-1281392422_
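For illustration, a hedged sketch of the kind of script edit described above, widening the declared feature type so the values fit; the surrounding schema is omitted and the exact layout of `the_pile_openwebtext2.py` may differ:
```python
import datasets

# Hypothetical excerpt: only the problematic column is shown.
features = datasets.Features(
    {
        # ... the script's other columns go here ...
        # was: datasets.Sequence(datasets.Value("int8")), which overflows for values > 127
        "reddit_scores": datasets.Sequence(datasets.Value("int32")),
    }
)
```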
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5576/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5576/timeline | null | not_planned | false |
https://api.github.com/repos/huggingface/datasets/issues/3686 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3686/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3686/comments | https://api.github.com/repos/huggingface/datasets/issues/3686/events | https://github.com/huggingface/datasets/issues/3686 | 1,127,137,290 | I_kwDODunzps5DLsAK | 3,686 | `Translation` features cannot be `flatten`ed | {
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SBrandeis",
"id": 33657802,
"login": "SBrandeis",
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SBrandeis"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
... | null | [
"Thanks for reporting, @SBrandeis! Some additional feature types that don't behave as expected when flattened: `Audio`, `Image` and `TranslationVariableLanguages`"
] | 2022-02-08T11:33:48Z | 2022-03-18T17:28:13Z | 2022-03-18T17:28:13Z | CONTRIBUTOR | null | null | null | ## Describe the bug
[`Dataset.flatten`](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L1265) fails for columns with the [`Translation`](https://github.com/huggingface/datasets/blob/3edbeb0ec6519b79f1119adc251a1a6b379a2c12/src/datasets/features/translation.py#L8) feature.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("europa_ecdc_tm", "en2fr", split="train[:10]")
print(dataset.features)
# {'translation': Translation(languages=['en', 'fr'], id=None)}
print(dataset[0])
# {'translation': {'en': 'Vaccination against hepatitis C is not yet available.', 'fr': 'Aucune vaccination contre l’hépatite C n’est encore disponible.'}}
dataset.flatten()
```
## Expected results
`dataset.flatten` should flatten the `Translation` column as if it were a dict of `Value("string")`
```python
dataset[0]
# {'translation.en': 'Vaccination against hepatitis C is not yet available.', 'translation.fr': 'Aucune vaccination contre l’hépatite C n’est encore disponible.' }
dataset.features
# {'translation.en': Value("string"), 'translation.fr': Value("string")}
```
## Actual results
```python
In [31]: dset.flatten()
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-31-bb88eb5276ee> in <module>
----> 1 dset.flatten()
[...]\site-packages\datasets\fingerprint.py in wrapper(*args, **kwargs)
411 # Call actual function
412
--> 413 out = func(self, *args, **kwargs)
414
415 # Update fingerprint of in-place transforms + update in-place history of transforms
[...]\site-packages\datasets\arrow_dataset.py in flatten(self, new_fingerprint, max_depth)
1294 break
1295 dataset.info.features = self.features.flatten(max_depth=max_depth)
-> 1296 dataset._data = update_metadata_with_features(dataset._data, dataset.features)
1297 logger.info(f'Flattened dataset from depth {depth} to depth {1 if depth + 1 < max_depth else "unknown"}.')
1298 dataset._fingerprint = new_fingerprint
[...]\site-packages\datasets\arrow_dataset.py in update_metadata_with_features(table, features)
534 def update_metadata_with_features(table: Table, features: Features):
535 """To be used in dataset transforms that modify the features of the dataset, in order to update the features stored in the metadata of its schema."""
--> 536 features = Features({col_name: features[col_name] for col_name in table.column_names})
537 if table.schema.metadata is None or b"huggingface" not in table.schema.metadata:
538 pa_metadata = ArrowWriter._build_metadata(DatasetInfo(features=features))
[...]\site-packages\datasets\arrow_dataset.py in <dictcomp>(.0)
534 def update_metadata_with_features(table: Table, features: Features):
535 """To be used in dataset transforms that modify the features of the dataset, in order to update the features stored in the metadata of its schema."""
--> 536 features = Features({col_name: features[col_name] for col_name in table.column_names})
537 if table.schema.metadata is None or b"huggingface" not in table.schema.metadata:
538 pa_metadata = ArrowWriter._build_metadata(DatasetInfo(features=features))
KeyError: 'translation.en'
```
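Until this is fixed, a hedged manual workaround sketch (mine, not part of the report) is to flatten the `Translation` column with `map`, assuming the `en`/`fr` pair from the example above:
```python
# Hypothetical manual flattening that bypasses Dataset.flatten()
def split_translation(example):
    return {f"translation.{lang}": example["translation"][lang] for lang in ["en", "fr"]}

flat_dataset = dataset.map(split_translation, remove_columns=["translation"])
# flat_dataset now has plain string columns "translation.en" and "translation.fr"
```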
## Environment info
- `datasets` version: 1.18.3
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.10
- PyArrow version: 3.0.0
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3686/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3686/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3944 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3944/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3944/comments | https://api.github.com/repos/huggingface/datasets/issues/3944/events | https://github.com/huggingface/datasets/pull/3944 | 1,171,209,510 | PR_kwDODunzps40iu4n | 3,944 | Create README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sashavor",
"id": 14205986,
"login": "sashavor",
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"repos_url": "https://api.github.com/users/sashavor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sashavor"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-16T15:46:26Z | 2022-03-17T17:50:54Z | 2022-03-17T17:47:05Z | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3944.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3944",
"merged_at": "2022-03-17T17:47:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3944.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3944"
} | Proposing COMET metric card | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3944/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3944/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1742 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1742/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1742/comments | https://api.github.com/repos/huggingface/datasets/issues/1742/events | https://github.com/huggingface/datasets/pull/1742 | 787,623,640 | MDExOlB1bGxSZXF1ZXN0NTU2MjgyMDYw | 1,742 | Add GLUE Compat (compatible with transformers<3.5.0) | {
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JetRunner",
"id": 22514219,
"login": "JetRunner",
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JetRunner"
} | [] | closed | false | null | [] | null | [
"Maybe it would be simpler to just overwrite the order of the label classes of the `glue` dataset ?\r\n```python\r\nmnli = load_dataset(\"glue\", \"mnli\", label_classes=[\"contradiction\", \"entailment\", \"neutral\"])\r\n```",
"Sounds good. Will close the issue if that works."
] | 2021-01-17T05:54:25Z | 2023-09-24T09:52:12Z | 2021-03-29T12:43:30Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1742.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1742",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1742.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1742"
} | Link to our discussion on Slack (HF internal)
https://huggingface.slack.com/archives/C014N4749J9/p1609668119337400
The next step is to add a compatible option in the new `run_glue.py`.
I duplicated `glue` and made the following changes:
1. Change the name to `glue_compat`.
2. Change the label assignments for MNLI and AX. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1742/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1742/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3894/comments | https://api.github.com/repos/huggingface/datasets/issues/3894/events | https://github.com/huggingface/datasets/pull/3894 | 1,166,611,270 | PR_kwDODunzps40TzXW | 3,894 | [docs] make dummy data creation optional | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3894). All of your documentation changes will be reflected on that endpoint.",
"The dev doc build rendering doesn't seem to be updated with my last commit for some reason",
"Merging it anyway since I'd like to share this page... | 2022-03-11T16:21:34Z | 2022-03-11T17:27:56Z | 2022-03-11T17:27:55Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3894.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3894",
"merged_at": "2022-03-11T17:27:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3894.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3894"
} | Related to #3507 : dummy data for datasets created on the Hugging Face Hub are optional.
We can discuss later whether to make them optional for datasets in this repository as well. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3894/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3894/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4693 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4693/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4693/comments | https://api.github.com/repos/huggingface/datasets/issues/4693/events | https://github.com/huggingface/datasets/pull/4693 | 1,306,788,322 | PR_kwDODunzps47go-F | 4,693 | update `samsum` script | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"We are closing PRs to dataset scripts because we are moving them to the Hub.\r\n\r\nThanks anyway.\r\n\r\n"
] | 2022-07-16T11:53:05Z | 2022-09-23T11:40:11Z | 2022-09-23T11:37:57Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4693.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4693",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4693.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4693"
} | update `samsum` script after #4672 was merged (citation is also updated) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4693/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4693/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5335 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5335/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5335/comments | https://api.github.com/repos/huggingface/datasets/issues/5335/events | https://github.com/huggingface/datasets/pull/5335 | 1,478,890,788 | PR_kwDODunzps5EeHdA | 5,335 | Update tasks.json | {
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayakpaul",
"id": 22957388,
"login": "sayakpaul",
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayakpaul"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I think the only place where we need to add it is here https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts\r\n\r\nAnd I think we can remove tasks.json completely from this repo",
"Isn't tasks.json used ... | 2022-12-06T11:37:57Z | 2023-09-24T10:06:42Z | 2022-12-07T12:46:03Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5335.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5335",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5335.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5335"
} | Context:
* https://github.com/huggingface/datasets/issues/5255#issuecomment-1339107195
Cc: @osanseviero | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5335/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5335/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2179/comments | https://api.github.com/repos/huggingface/datasets/issues/2179/events | https://github.com/huggingface/datasets/issues/2179 | 852,237,957 | MDU6SXNzdWU4NTIyMzc5NTc= | 2,179 | Load small datasets in-memory instead of using memory map | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": fals... | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 2021-04-07T09:58:16Z | 2021-04-20T10:04:04Z | 2021-04-20T10:04:03Z | MEMBER | null | null | null | Currently all datasets are loaded using memory mapping by default in `load_dataset`.
However, this might not be necessary for small datasets. If a dataset is small enough, it can be loaded in memory, and:
- its memory footprint would be small, so this is acceptable
- in-memory computations/queries would be faster
- the on-disk caching would be disabled, making computations even faster (no I/O bound because of the disk)
- but running the same computation a second time would recompute everything, since there would be no cached results on disk. This is probably fine since computations would be fast anyway, and users should still be able to provide a cache filename if needed.
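To make the proposal concrete, here is a hedged sketch of what opting in could look like from the user side; the parameter and config names are illustrative assumptions, not a settled API:
```python
import datasets
from datasets import load_dataset

# Assumed opt-in flag for illustration: load a small dataset fully in memory
squad = load_dataset("squad", split="train", keep_in_memory=True)

# Assumed config knob for illustration: datasets below this size (in bytes) would be
# loaded in memory by default, larger ones would keep using memory mapping
datasets.config.IN_MEMORY_MAX_SIZE = 250 * 2**20  # 250 MiB
```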
Therefore, maybe the default behavior of `load_dataset` should be to load small datasets in-memory and big datasets using memory mapping. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2179/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2179/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2368 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2368/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2368/comments | https://api.github.com/repos/huggingface/datasets/issues/2368/events | https://github.com/huggingface/datasets/pull/2368 | 893,411,076 | MDExOlB1bGxSZXF1ZXN0NjQ1OTI5NzM0 | 2,368 | Allow "other-X" in licenses | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | [] | 2021-05-17T14:47:54Z | 2021-05-17T16:36:27Z | 2021-05-17T16:36:27Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2368.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2368",
"merged_at": "2021-05-17T16:36:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2368.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2368"
} | This PR allows "other-X" licenses during metadata validation.
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2368/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2368/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1427 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1427/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1427/comments | https://api.github.com/repos/huggingface/datasets/issues/1427/events | https://github.com/huggingface/datasets/pull/1427 | 760,736,703 | MDExOlB1bGxSZXF1ZXN0NTM1NTE4MzAx | 1,427 | Hebrew project BenYehuda | {
"avatar_url": "https://avatars.githubusercontent.com/u/10088963?v=4",
"events_url": "https://api.github.com/users/imvladikon/events{/privacy}",
"followers_url": "https://api.github.com/users/imvladikon/followers",
"following_url": "https://api.github.com/users/imvladikon/following{/other_user}",
"gists_url": "https://api.github.com/users/imvladikon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/imvladikon",
"id": 10088963,
"login": "imvladikon",
"node_id": "MDQ6VXNlcjEwMDg4OTYz",
"organizations_url": "https://api.github.com/users/imvladikon/orgs",
"received_events_url": "https://api.github.com/users/imvladikon/received_events",
"repos_url": "https://api.github.com/users/imvladikon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/imvladikon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imvladikon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/imvladikon"
} | [] | closed | false | null | [] | null | [
"merging since the CI is fixed on master"
] | 2020-12-09T22:59:17Z | 2020-12-11T17:39:23Z | 2020-12-11T17:39:23Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1427.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1427",
"merged_at": "2020-12-11T17:39:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1427.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1427"
} | Added Hebrew corpus from https://github.com/projectbenyehuda/public_domain_dump | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1427/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1427/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3248 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3248/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3248/comments | https://api.github.com/repos/huggingface/datasets/issues/3248/events | https://github.com/huggingface/datasets/pull/3248 | 1,050,171,082 | PR_kwDODunzps4uXZzU | 3,248 | Stream from Google Drive and other hosts | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"I just tried some datasets and noticed that `spider` is not working for some reason (the compression type is not recognized), resulting in FileNotFoundError. I can take a look tomorrow",
"I'm fixing the remaining files based on TAR archives",
"THANKS A LOT"
] | 2021-11-10T18:32:32Z | 2021-11-30T16:03:43Z | 2021-11-12T17:18:11Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3248.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3248",
"merged_at": "2021-11-12T17:18:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3248.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3248"
} | Streaming from Google Drive is a bit more challenging than the other hosts we've been supporting:
- the download URL must be updated to add the confirm token obtained by a HEAD request
- it requires using cookies to keep the connection alive
- the URL doesn't give any information about whether the file is compressed or not
Therefore I did two things:
- I added a step for URL and headers/cookies preparation in the StreamingDownloadManager
- I added automatic compression type inference by reading the [magic number](https://en.wikipedia.org/wiki/List_of_file_signatures)
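As a rough illustration of the magic-number idea (a simplified sketch, not the code added in this PR):
```python
from typing import Optional

# Signatures taken from https://en.wikipedia.org/wiki/List_of_file_signatures
MAGIC_NUMBER_TO_COMPRESSION = {
    b"\x1f\x8b": "gzip",
    b"\x42\x5a\x68": "bz2",
    b"\xfd\x37\x7a\x58\x5a\x00": "xz",
    b"\x50\x4b\x03\x04": "zip",
}

def infer_compression(file_obj) -> Optional[str]:
    """Peek at the first bytes of a file-like object to guess its compression."""
    header = file_obj.read(8)
    file_obj.seek(0)
    for magic_number, compression in MAGIC_NUMBER_TO_COMPRESSION.items():
        if header.startswith(magic_number):
            return compression
    return None
```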
This makes it possible to do fancy things like:
```python
from datasets.utils.streaming_download_manager import StreamingDownloadManager, xopen, xjoin, xglob
# zip file containing a train.tsv file
url = "https://drive.google.com/uc?export=download&id=1k92sUfpHxKq8PXWRr7Y5aNHXwOCNUmqh"
extracted = StreamingDownloadManager().download_and_extract(url)
for inner_file in xglob(xjoin(extracted, "*.tsv")):
with xopen(inner_file) as f:
# streaming starts here
for line in f:
print(line)
```
This should make around 80 datasets streamable. It concerns those hosted on Google Drive but also any dataset for which the URL doesn't give any information about compression. Here is the full list:
```
amazon_polarity, ami, arabic_billion_words, ascent_kb, asset, big_patent, billsum, capes, cmrc2018, cnn_dailymail,
code_x_glue_cc_code_completion_token, code_x_glue_cc_code_refinement, code_x_glue_cc_code_to_code_trans,
code_x_glue_tt_text_to_text, conll2002, craigslist_bargains, dbpedia_14, docred, ehealth_kd, emo, euronews, germeval_14,
gigaword, grail_qa, great_code, has_part, head_qa, health_fact, hope_edi, id_newspapers_2018,
igbo_english_machine_translation, irc_disentangle, jfleg, jnlpba, journalists_questions, kor_ner, linnaeus, med_hop, mrqa,
mt_eng_vietnamese, multi_news, norwegian_ner, offcombr, offenseval_dravidian, para_pat, peoples_daily_ner, pn_summary,
poleval2019_mt, pubmed_qa, qangaroo, reddit_tifu, refresd, ro_sts_parallel, russian_super_glue, samsum, sberquad, scielo,
search_qa, species_800, spider, squad_adversarial, tamilmixsentiment, tashkeela, ted_talks_iwslt, trec, turk, turkish_ner,
twi_text_c3, universal_morphologies, web_of_science, weibo_ner, wiki_bio, wiki_hop, wiki_lingua, wiki_summary, wili_2018,
wisesight1000, wnut_17, yahoo_answers_topics, yelp_review_full, yoruba_text_c3
```
Some of them may not work if the host doesn't support HTTP range requests, for example.
Fix https://github.com/huggingface/datasets/issues/2742
Fix https://github.com/huggingface/datasets/issues/3188 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3248/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3248/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3004 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3004/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3004/comments | https://api.github.com/repos/huggingface/datasets/issues/3004/events | https://github.com/huggingface/datasets/pull/3004 | 1,014,336,617 | PR_kwDODunzps4smfPF | 3,004 | LexGLUE: A Benchmark Dataset for Legal Language Understanding in English. | {
"avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4",
"events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}",
"followers_url": "https://api.github.com/users/iliaschalkidis/followers",
"following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}",
"gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iliaschalkidis",
"id": 1626984,
"login": "iliaschalkidis",
"node_id": "MDQ6VXNlcjE2MjY5ODQ=",
"organizations_url": "https://api.github.com/users/iliaschalkidis/orgs",
"received_events_url": "https://api.github.com/users/iliaschalkidis/received_events",
"repos_url": "https://api.github.com/users/iliaschalkidis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iliaschalkidis"
} | [] | closed | false | null | [] | null | [
"Please wait until Tuesday. Arxiv pre-print is pending. 🤗 ",
"Hi @lhoestq, I updated the README with the Arxiv publication info and now the tests are not passing.\r\n\r\nIt seems that the error is completely irrelevant to my code:\r\n\r\n```\r\n Attempting uninstall: ruamel.yaml\r\n Found existing installatio... | 2021-10-03T10:03:25Z | 2021-10-13T13:37:02Z | 2021-10-13T13:37:01Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3004.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3004",
"merged_at": "2021-10-13T13:37:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3004.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3004"
} | Inspired by the recent widespread use of the GLUE multi-task benchmark NLP dataset (Wang et al., 2018), the subsequent more difficult SuperGLUE (Wang et al., 2019), other previous multi-task NLP benchmarks (Conneau and Kiela, 2018; McCann et al., 2018), and similar initiatives in other domains (Peng et al., 2019), we introduce the Legal General Language Understanding Evaluation (LexGLUE) benchmark, a benchmark dataset to evaluate the performance of NLP methods in legal tasks. LexGLUE is based on seven existing legal NLP datasets, selected using criteria largely from SuperGLUE.
As in GLUE and SuperGLUE (Wang et al., 2019b,a), one of our goals is to push towards generic (or ‘foundation’) models that can cope with multiple NLP tasks, in our case legal NLP tasks, possibly with limited task-specific fine-tuning. Another goal is to provide a convenient and informative entry point for NLP researchers and practitioners wishing to explore or develop methods for legal NLP. Having these goals in mind, the datasets we include in LexGLUE and the tasks they address have been simplified in several ways to make it easier for newcomers and generic models to address all tasks.
LexGLUE benchmark is accompanied by experimental infrastructure that relies on Hugging Face Transformers library and resides at: https://github.com/coastalcph/lex-glue. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3004/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3004/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6072/comments | https://api.github.com/repos/huggingface/datasets/issues/6072/events | https://github.com/huggingface/datasets/pull/6072 | 1,822,123,560 | PR_kwDODunzps5WbWFN | 6,072 | Fix fsspec storage_options from load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-07-26T10:44:23Z | 2023-07-27T12:51:51Z | 2023-07-27T12:42:57Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6072.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6072",
"merged_at": "2023-07-27T12:42:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6072.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6072"
} | close https://github.com/huggingface/datasets/issues/6071 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6072/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6072/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2850 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2850/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2850/comments | https://api.github.com/repos/huggingface/datasets/issues/2850/events | https://github.com/huggingface/datasets/issues/2850 | 982,654,644 | MDU6SXNzdWU5ODI2NTQ2NDQ= | 2,850 | Wound segmentation datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/osanseviero",
"id": 7246357,
"login": "osanseviero",
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"type": "User",
"url": "https://api.github.com/users/osanseviero"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",... | open | false | null | [] | null | [] | 2021-08-30T10:44:32Z | 2021-12-08T12:02:00Z | null | MEMBER | null | null | null | ## Adding a Dataset
- **Name:** Wound segmentation datasets
- **Description:** annotated wound image dataset
- **Paper:** https://www.nature.com/articles/s41598-020-78799-w
- **Data:** https://github.com/uwm-bigdata/wound-segmentation
- **Motivation:** Interesting simple image dataset, useful for segmentation, with visibility due to http://www.miccai.org/special-interest-groups/challenges/ and https://fusc.grand-challenge.org/
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2850/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2850/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3304 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3304/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3304/comments | https://api.github.com/repos/huggingface/datasets/issues/3304/events | https://github.com/huggingface/datasets/issues/3304 | 1,059,130,494 | I_kwDODunzps4_IQx- | 3,304 | Dataset object has no attribute `to_tf_dataset` | {
"avatar_url": "https://avatars.githubusercontent.com/u/59993678?v=4",
"events_url": "https://api.github.com/users/RajkumarGalaxy/events{/privacy}",
"followers_url": "https://api.github.com/users/RajkumarGalaxy/followers",
"following_url": "https://api.github.com/users/RajkumarGalaxy/following{/other_user}",
"gists_url": "https://api.github.com/users/RajkumarGalaxy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/RajkumarGalaxy",
"id": 59993678,
"login": "RajkumarGalaxy",
"node_id": "MDQ6VXNlcjU5OTkzNjc4",
"organizations_url": "https://api.github.com/users/RajkumarGalaxy/orgs",
"received_events_url": "https://api.github.com/users/RajkumarGalaxy/received_events",
"repos_url": "https://api.github.com/users/RajkumarGalaxy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/RajkumarGalaxy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RajkumarGalaxy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/RajkumarGalaxy"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"The issue is due to the older version of transformers and datasets. It has been resolved by upgrading their versions.\r\n\r\n```\r\n# upgrade transformers and datasets to latest versions\r\n!pip install --upgrade transformers\r\n!pip install --upgrade datasets\r\n```\r\n\r\nRegards!"
] | 2021-11-20T12:03:59Z | 2021-11-21T07:07:25Z | 2021-11-21T07:07:25Z | NONE | null | null | null | I am following HuggingFace Course. I am at Fine-tuning a model.
Link: https://huggingface.co/course/chapter3/2?fw=tf
I use a tokenize function and `map`, as mentioned in the course, to process the data.
```python
# define a tokenize function
def Tokenize_function(example):
    return tokenizer(example['sentence'], truncation=True)

# tokenize entire data
tokenized_data = raw_data.map(Tokenize_function, batched=True)
```
I get a `Dataset` object at this point. When I try converting it to a TF dataset object, as mentioned in the course, it throws the following error.
```python
# convert to TF dataset
train_data = tokenized_data["train"].to_tf_dataset(
    columns = ['attention_mask', 'input_ids', 'token_type_ids'],
    label_cols = ['label'],
    shuffle = True,
    collate_fn = data_collator,
    batch_size = 8
)
```
Output:
```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
/tmp/ipykernel_42/103099799.py in <module>
      1 # convert to TF dataset
----> 2 train_data = tokenized_data["train"].to_tf_dataset(
      3     columns = ['attention_mask', 'input_ids', 'token_type_ids'],
      4     label_cols = ['label'],
      5     shuffle = True,

AttributeError: 'Dataset' object has no attribute 'to_tf_dataset'
```
When I look at `dir(tokenized_data["train"])`, there is no method or attribute named `to_tf_dataset`.
Why do I get this error, and how can I resolve it?
Please help me. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3304/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3304/timeline | null | completed | false |