| Column | Type | Values |
|---|---|---|
| url | string | lengths 58–61 |
| repository_url | string | 1 class |
| labels_url | string | lengths 72–75 |
| comments_url | string | lengths 67–70 |
| events_url | string | lengths 65–68 |
| html_url | string | lengths 46–51 |
| id | int64 | 600M–2.05B |
| node_id | string | lengths 18–32 |
| number | int64 | 2–6.51k |
| title | string | lengths 1–290 |
| user | dict | |
| labels | list | lengths 0–4 |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0–4 |
| milestone | dict | |
| comments | list | lengths 0–30 |
| created_at | timestamp[ns, tz=UTC] | |
| updated_at | timestamp[ns, tz=UTC] | |
| closed_at | timestamp[ns, tz=UTC] | |
| author_association | string | 3 classes |
| active_lock_reason | float64 | |
| draft | float64 | 0, 1, or ⌀ |
| pull_request | dict | |
| body | string | lengths 0–228k, or ⌀ |
| reactions | dict | |
| timeline_url | string | lengths 67–70 |
| performed_via_github_app | float64 | |
| state_reason | string | 3 classes |
| is_pull_request | bool | 2 classes |
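The records below can be loaded and filtered programmatically. A minimal sketch, assuming the dump is published on the Hub (the repo id `user/github-issues` and the `train` split are placeholders for the actual repository):

```python
from datasets import load_dataset

# Placeholder repo id and split name: substitute the actual dataset repository.
ds = load_dataset("user/github-issues", split="train")

# Use the schema above to filter, e.g. keep only open pull requests.
open_prs = ds.filter(lambda row: row["is_pull_request"] and row["state"] == "open")
print(open_prs[0]["title"], open_prs[0]["html_url"])
```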
https://api.github.com/repos/huggingface/datasets/issues/3566
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3566/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3566/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3566/events
|
https://github.com/huggingface/datasets/pull/3566
| 1,100,155,902
|
PR_kwDODunzps4w2Tcc
| 3,566
|
Add initial electricity time series dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kashif",
"id": 8100,
"login": "kashif",
"node_id": "MDQ6VXNlcjgxMDA=",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"repos_url": "https://api.github.com/users/kashif/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kashif"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@kashif Some commits on the PR branch are not authored by you, so could you please open a new PR and not use rebase this time :)? You can copy and paste the dataset dir to the new branch. \r\n\r\n",
"making a new PR"
] | 2022-01-12T10:21:32Z
| 2022-02-15T13:31:48Z
| 2022-02-15T13:31:48Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3566.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3566",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3566.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3566"
}
|
Here is an initial prototype time series dataset
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3566/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3566/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/929
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/929/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/929/comments
|
https://api.github.com/repos/huggingface/datasets/issues/929/events
|
https://github.com/huggingface/datasets/pull/929
| 753,737,794
|
MDExOlB1bGxSZXF1ZXN0NTI5NzU4NTU3
| 929
|
Add weibo NER dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abhishekkrthakur",
"id": 1183441,
"login": "abhishekkrthakur",
"node_id": "MDQ6VXNlcjExODM0NDE=",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abhishekkrthakur"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-11-30T19:22:47Z
| 2020-12-03T13:36:55Z
| 2020-12-03T13:36:54Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/929.diff",
"html_url": "https://github.com/huggingface/datasets/pull/929",
"merged_at": "2020-12-03T13:36:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/929.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/929"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/929/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/929/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/5222
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5222/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5222/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5222/events
|
https://github.com/huggingface/datasets/issues/5222
| 1,442,412,507
|
I_kwDODunzps5V-Xfb
| 5,222
|
HuggingFace website is incorrectly reporting that my datasets are pickled
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10626398?v=4",
"events_url": "https://api.github.com/users/ProGamerGov/events{/privacy}",
"followers_url": "https://api.github.com/users/ProGamerGov/followers",
"following_url": "https://api.github.com/users/ProGamerGov/following{/other_user}",
"gists_url": "https://api.github.com/users/ProGamerGov/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ProGamerGov",
"id": 10626398,
"login": "ProGamerGov",
"node_id": "MDQ6VXNlcjEwNjI2Mzk4",
"organizations_url": "https://api.github.com/users/ProGamerGov/orgs",
"received_events_url": "https://api.github.com/users/ProGamerGov/received_events",
"repos_url": "https://api.github.com/users/ProGamerGov/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ProGamerGov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ProGamerGov/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ProGamerGov"
}
|
[] |
closed
| false
| null |
[] | null |
[
"cc @McPatate maybe you know what's happening ?",
"Yes I think I know what is happening. We check in zips for pickles, and the UI must display the pickle jar when a scan has an associated list of imports, even when empty.\r\n~I'll fix ASAP !~",
"> I'll fix ASAP !\r\n\r\nActually I'd rather leave it like that for now, as it indicates that we checked for pickles and nothing dangerous appeared :)",
"Closing the issue with the typical \"feature not a bug\" "
] | 2022-11-09T16:41:16Z
| 2022-11-09T18:10:46Z
| 2022-11-09T18:06:57Z
|
NONE
| null | null | null |
### Describe the bug
HuggingFace is incorrectly reporting that my datasets are pickled. They are not pickled; they are simple ZIP files containing PNG images.
Hopefully this is the right location to report this bug.
### Steps to reproduce the bug
Inspect my dataset repository here: https://huggingface.co/datasets/ProGamerGov/StableDiffusion-v1-5-Regularization-Images
### Expected behavior
They should not be reported as being pickled.
### Environment info
N/A
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5222/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5222/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4840
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4840/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4840/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4840/events
|
https://github.com/huggingface/datasets/issues/4840
| 1,337,342,672
|
I_kwDODunzps5PtjrQ
| 4,840
|
Dataset Viewer issue for darragh/demo_data_raw3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
|
[] |
open
| false
| null |
[] | null |
[
"do you have an idea of why it can occur @huggingface/datasets? The dataset consists of a single parquet file.",
"Thanks for reporting @severo.\r\n\r\nI'm not able to reproduce that error. I get instead:\r\n```\r\nFileNotFoundError: [Errno 2] No such file or directory: 'orix/data/ChiSig/唐合乐-9-3.jpg'\r\n```\r\n\r\nWhich pyarrow version are you using? Mine is 6.0.1. ",
"OK, I get now your error when not streaming.",
"OK!\r\n\r\nIf it's useful, the pyarrow version is 7.0.0:\r\n\r\nhttps://github.com/huggingface/datasets-server/blob/487c39d87998f8d5a35972f1027d6c8e588e622d/services/worker/poetry.lock#L1537-L1543",
"Apparently, there is something weird with that Parquet file: its schema is:\r\n```\r\nimages: extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>>\r\n```\r\n\r\nI have forced a right schema:\r\n```python\r\nfrom datasets import Features, Image, load_dataset\r\n\r\nfeatures = Features({\"images\": Image()})\r\nds = datasets.load_dataset(\"parquet\", split=\"train\", data_files=\"train-00000-of-00001.parquet\", features=features)\r\n```\r\nand then recreated a new Parquet file:\r\n```python\r\nds.to_parquet(\"train.parquet\")\r\n```\r\n\r\nNow this Parquet file has the right schema:\r\n```\r\nimages: struct<bytes: binary, path: string>\r\n child 0, bytes: binary\r\n child 1, path: string\r\n```\r\nand can be loaded normally:\r\n```python\r\nIn [26]: ds = load_dataset(\"parquet\", split=\"train\", data_files=\"dataset.parquet\")\r\nn [27]: ds\r\nOut[27]: \r\nDataset({\r\n features: ['images'],\r\n num_rows: 20\r\n})\r\n```"
] | 2022-08-12T15:22:58Z
| 2022-09-08T07:55:44Z
| null |
CONTRIBUTOR
| null | null | null |
### Link
https://huggingface.co/datasets/darragh/demo_data_raw3
### Description
```
Exception: ValueError
Message: Arrow type extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>> does not have a datasets dtype equivalent.
```
reported by @NielsRogge
### Owner
No
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4840/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4840/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3787
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3787/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3787/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3787/events
|
https://github.com/huggingface/datasets/pull/3787
| 1,150,235,569
|
PR_kwDODunzps4zdE7b
| 3,787
|
Fix Google Drive URL to avoid Virus scan warning
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks for this @albertvillanova!",
"Once this PR merged into master and until our next `datasets` library release, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```",
"Thanks, that solved a bunch of problems we had downstream!\r\ncf. https://github.com/ElementAI/picard/issues/61"
] | 2022-02-25T09:35:12Z
| 2022-03-04T20:43:32Z
| 2022-02-25T11:56:35Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3787.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3787",
"merged_at": "2022-02-25T11:56:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3787.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3787"
}
|
This PR fixes, in the `datasets` library itself rather than in every specific dataset, the issue of downloading the virus scan warning page instead of the actual data file for Google Drive URLs.
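For context, the usual Google Drive workaround retries the request with the confirmation token that the warning page sets in its cookies. A minimal sketch of that pattern (illustrative only, not necessarily the exact code this PR adds to the library):

```python
import requests

def download_from_gdrive(url: str, path: str) -> None:
    session = requests.Session()
    response = session.get(url, stream=True)
    # If Google Drive served the virus scan warning page, its cookies carry a
    # confirmation token; retry the download with confirm=<token>.
    for name, value in response.cookies.items():
        if name.startswith("download_warning"):
            response = session.get(url, params={"confirm": value}, stream=True)
            break
    with open(path, "wb") as f:
        for chunk in response.iter_content(chunk_size=1 << 20):
            f.write(chunk)
```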
Fix #3786, fix #3784.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3787/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3787/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1435
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1435/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1435/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1435/events
|
https://github.com/huggingface/datasets/pull/1435
| 760,867,325
|
MDExOlB1bGxSZXF1ZXN0NTM1NjIwODE4
| 1,435
|
Add FreebaseQA dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3663322?v=4",
"events_url": "https://api.github.com/users/anaerobeth/events{/privacy}",
"followers_url": "https://api.github.com/users/anaerobeth/followers",
"following_url": "https://api.github.com/users/anaerobeth/following{/other_user}",
"gists_url": "https://api.github.com/users/anaerobeth/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/anaerobeth",
"id": 3663322,
"login": "anaerobeth",
"node_id": "MDQ6VXNlcjM2NjMzMjI=",
"organizations_url": "https://api.github.com/users/anaerobeth/orgs",
"received_events_url": "https://api.github.com/users/anaerobeth/received_events",
"repos_url": "https://api.github.com/users/anaerobeth/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/anaerobeth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anaerobeth/subscriptions",
"type": "User",
"url": "https://api.github.com/users/anaerobeth"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@yjernite @lhoestq Any suggestions on how to get the dummy data generator to recognize the columns? The structure of the json is:\r\n```\r\n{\r\n \"Dataset\": \"FreebaseQA-eval\", \r\n \"Version\": \"1.0\", \r\n \"Questions\": [\r\n {\r\n \"Question-ID\": \"FreebaseQA-eval-0\", \r\n \"RawQuestion\": \"Who is the female presenter of the Channel 4 quiz show '1001 things you should know'?\", \r\n \"ProcessedQuestion\": \"who is the female presenter of the channel 4 quiz show '1001 things you should know'\", \r\n \"Parses\": [\r\n {\r\n \"Parse-Id\": \"FreebaseQA-eval-0.P0\", \r\n \"PotentialTopicEntityMention\": \"1001 things you should know\", \r\n \"TopicEntityName\": \"1001 things you should know\", \r\n \"TopicEntityMid\": \"m.0nd3t34\", \r\n \"InferentialChain\": \"tv.tv_program.regular_personal_appearances..tv.tv_regular_personal_appearance.person\", \r\n \"Answers\": [\r\n {\r\n \"AnswersMid\": \"m.0216y_\", \r\n \"AnswersName\": [\r\n \"sandi toksvig\"\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n }, \r\n ...\r\n ]\r\n}\r\n```\r\n\r\nThanks!",
"Unfortunately this json structure is not recognized by the auto-generation yet, so you'd have to create the dummy data manually. \r\nYou can get some instructions on how to do that with: `python datasets-cli dummy_data datasets/freebase_qa`\r\nWe can definitely help you with that if there are too many files! ",
"@yjernite Thanks for the instructions. I manually added dummy data and created the zip file but one of the splits seem to return an empty list.\r\n\r\n```\r\ntests/test_dataset_common.py F [100%]\r\n\r\n========================= FAILURES ==========================\r\n_ LocalDatasetTest.test_load_dataset_all_configs_freebase_qa _\r\n\r\nself = <tests.test_dataset_common.LocalDatasetTest testMethod=test_load_dataset_all_configs_freebase_qa>\r\ndataset_name = 'freebase_qa'\r\n\r\n @slow\r\n def test_load_dataset_all_configs(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True)\r\n\r\ntests/test_dataset_common.py:237:\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\ntests/test_dataset_common.py:198: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\n\r\nNote that the dataset has `train`, `eval`, and `dev` (no test split). I am not sure if I am mapping them correctly when I called the Split Generator.\r\n",
"The dummy json files must follow the exact same structure as the original json files.\r\n\r\nHowever it looks like the dummy json files you have in your dummy_data.zip file are not structured the same way.\r\nFor example the original json is a dict with a field \"Questions\" that is a list of items.\r\nHowever your dummy json is simply a list of items.\r\n\r\nCan you update your dummy json files to follow the same structure ?",
"And I'm pretty sure that this structure is supported by the dummy data auto-generation tool\r\n```\r\npython datasets-cli dummy_data ./datasets/freebase_qa --json_field \"Questions\"\r\n```",
"Hi @anaerobeth did you manage to get the dummy data right ?\r\n\r\nFeel free to ping me if you have questions or when you're ready for a review",
"Thanks for your help! I am able to create the dummy data with the dict structure as suggested. I'll add the tags and update this PR shortly.",
"Also don't forget to run `make style` to fix the code formatting check in the CI :)",
"Hi @anaerobeth ! Have you had a chance to consider updating the dataset script to yield one example per question ?\r\n\r\nFeel free to ping me if you have questions or if I can help :) ",
"Hi @lhoestq,\r\n\r\nI am willing to take this forward if you and @anaerobeth don't mind.\r\n",
"Hi @gchhablani thanks for proposing your help :) \r\nSure if you want to take this forward feel free to do so.\r\nAlso pinging @anaerobeth to make sure that you both don't work on the same thing at the same time",
"Hi ! Closing this one since the dataset was added in #1814 \r\n\r\nThanks you two @anaerobeth and @gchhablani for adding this dataset !"
] | 2020-12-10T04:03:27Z
| 2021-02-05T09:47:30Z
| 2021-02-05T09:47:30Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1435.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1435",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1435.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1435"
}
|
This PR adds the FreebaseQA dataset: A Trivia-type QA Data Set over the Freebase Knowledge Graph
Repo: https://github.com/kelvin-jiang/FreebaseQA
Paper: https://www.aclweb.org/anthology/N19-1028.pdf
## TODO: create dummy data
Error encountered when running `python datasets-cli dummy_data datasets/freebase_qa --auto_generate`
```
f"Couldn't parse columns {list(json_data.keys())}. "
ValueError: Couldn't parse columns ['Dataset', 'Version', 'Questions']. Maybe specify which json field must be used to read the data with --json_field <my_field>.
```
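As suggested in the discussion above, pointing the generator at the `Questions` field resolves this:
```
python datasets-cli dummy_data ./datasets/freebase_qa --json_field "Questions"
```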
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1435/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1435/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1600
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1600/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1600/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1600/events
|
https://github.com/huggingface/datasets/issues/1600
| 770,582,960
|
MDU6SXNzdWU3NzA1ODI5NjA=
| 1,600
|
AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user}",
"gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/david-waterworth",
"id": 5028974,
"login": "david-waterworth",
"node_id": "MDQ6VXNlcjUwMjg5NzQ=",
"organizations_url": "https://api.github.com/users/david-waterworth/orgs",
"received_events_url": "https://api.github.com/users/david-waterworth/received_events",
"repos_url": "https://api.github.com/users/david-waterworth/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions",
"type": "User",
"url": "https://api.github.com/users/david-waterworth"
}
|
[
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] |
closed
| false
| null |
[] | null |
[
"Hi @david-waterworth!\r\n\r\nAs indicated in the error message, `load_dataset(\"csv\")` returns a `DatasetDict` object, which is mapping of `str` to `Dataset` objects. I believe in this case the behavior is to return a `train` split with all the data.\r\n`train_test_split` is a method of the `Dataset` object, so you will need to do something like this:\r\n```python\r\ndataset_dict = load_dataset(`'csv', data_files='data.txt')\r\ndataset = dataset_dict['split name, eg train']\r\ndataset.train_test_split(test_size=0.1)\r\n```\r\n\r\nPlease let me know if this helps. 🙂 ",
"Thanks, that's working - the same issue also tripped me up with training. \r\n\r\nI also agree https://github.com/huggingface/datasets/issues/767 would be a useful addition. ",
"Closing this now",
"> ```python\r\n> dataset_dict = load_dataset(`'csv', data_files='data.txt')\r\n> dataset = dataset_dict['split name, eg train']\r\n> dataset.train_test_split(test_size=0.1)\r\n> ```\r\n\r\nI am getting error like\r\nKeyError: 'split name, eg train'\r\nCould you please tell me how to solve this?",
"dataset = load_dataset('csv', data_files=['files/datasets/dataset.csv'])\r\ndataset = dataset['train']\r\ndataset = dataset.train_test_split(test_size=0.1)",
"!curl -L \"https://app.roboflow.com/ds/YQYgzFyKns?key=f0IwaEetrr\" > roboflow.zip; unzip roboflow.zip; rm roboflow.zip\r\n\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"imagefolder\", data_dir=\"/content/\")\r\ndataset[\"train\"][0]\r\n\r\ndataset[\"train\"][-1]\r\n\r\ntrain_ds = load_dataset(\"imagefolder\", data_dir=\"/content/train/\")\r\ntest_ds = load_dataset(\"imagefolder\", data_dir=\"/content/test/\")\r\nval_ds = load_dataset(\"imagefolder\", data_dir=\"/content/valid/\")\r\n\r\ntrain_ds.features\r\n\r\nand i got error \r\nAttributeError Traceback (most recent call last)\r\n[<ipython-input-6-289222110c33>](https://localhost:8080/#) in <cell line: 1>()\r\n----> 1 train_ds.features\r\n\r\nAttributeError: 'DatasetDict' object has no attribute 'features'",
"This has been closed, you should open a new issue describing what your problem is."
] | 2020-12-18T05:37:10Z
| 2023-05-03T04:22:55Z
| 2020-12-21T07:38:58Z
|
NONE
| null | null | null |
The following code fails with "'DatasetDict' object has no attribute 'train_test_split'" - am I doing something wrong?
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files='data.txt')
dataset = dataset.train_test_split(test_size=0.1)
```
> AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
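Per the fix in the comments above, selecting the `train` split first makes the snippet work:
```python
from datasets import load_dataset

dataset_dict = load_dataset('csv', data_files='data.txt')  # a DatasetDict, not a Dataset
dataset = dataset_dict['train']                            # pick the split's Dataset
dataset = dataset.train_test_split(test_size=0.1)          # now this method exists
```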
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1600/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1600/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/103
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/103/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/103/comments
|
https://api.github.com/repos/huggingface/datasets/issues/103/events
|
https://github.com/huggingface/datasets/pull/103
| 618,233,637
|
MDExOlB1bGxSZXF1ZXN0NDE3OTk5MDIy
| 103
|
[Manual downloads] add logic proposal for manual downloads and add wikihow
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
|
[] |
closed
| false
| null |
[] | null |
[
"> Wikihow is an example that needs to manually download two files as stated in: https://github.com/mahnazkoupaee/WikiHow-Dataset.\r\n> \r\n> The user can then store these files under a hard-coded name: `wikihowAll.csv` and `wikihowSep.csv` in this case in a directory of his choice, e.g. `~/wikihow/manual_dir`.\r\n> \r\n> The dataset can then be loaded via:\r\n> \r\n> ```python\r\n> import nlp\r\n> nlp.load_dataset(\"wikihow\", data_dir=\"~/wikihow/manual_dir\")\r\n> ```\r\n> \r\n> I added/changed so that there are explicit error messages when using manually downloaded files.\r\n\r\nwouldn't be nicer if we can have `manual_dir/wikihow`? ",
"> > Wikihow is an example that needs to manually download two files as stated in: https://github.com/mahnazkoupaee/WikiHow-Dataset.\r\n> > The user can then store these files under a hard-coded name: `wikihowAll.csv` and `wikihowSep.csv` in this case in a directory of his choice, e.g. `~/wikihow/manual_dir`.\r\n> > The dataset can then be loaded via:\r\n> > ```python\r\n> > import nlp\r\n> > nlp.load_dataset(\"wikihow\", data_dir=\"~/wikihow/manual_dir\")\r\n> > ```\r\n> > \r\n> > \r\n> > I added/changed so that there are explicit error messages when using manually downloaded files.\r\n> \r\n> wouldn't be nicer if we can have `manual_dir/wikihow`?\r\n\r\nSure, I mean the user can decide whatever he likes best :-) The path one puts in `data_dir` will be used as the path to the manual dir. `nlp.load_dataset(\"wikihow\", data_dir=\"~/manual_dir/wikihow\")` would work as well as any other path ;-) ",
"Perfect! You can merge!"
] | 2020-05-14T13:30:36Z
| 2020-05-14T14:27:41Z
| 2020-05-14T14:27:40Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/103.diff",
"html_url": "https://github.com/huggingface/datasets/pull/103",
"merged_at": "2020-05-14T14:27:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/103.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/103"
}
|
Wikihow is an example that needs to manually download two files as stated in: https://github.com/mahnazkoupaee/WikiHow-Dataset.
The user can then store these files under a hard-coded name: `wikihowAll.csv` and `wikihowSep.csv` in this case in a directory of his choice, e.g. `~/wikihow/manual_dir`.
The dataset can then be loaded via:
```python
import nlp
nlp.load_dataset("wikihow", data_dir="~/wikihow/manual_dir")
```
I added/changed the logic so that there are explicit error messages when using manually downloaded files.
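A sketch of what such an explicit check could look like (illustrative; the actual helper name and message in the PR may differ):
```python
import os

REQUIRED_FILES = ("wikihowAll.csv", "wikihowSep.csv")

def check_manual_dir(manual_dir: str) -> None:
    # Fail early with download instructions instead of a cryptic error later.
    for name in REQUIRED_FILES:
        path = os.path.join(os.path.expanduser(manual_dir), name)
        if not os.path.exists(path):
            raise FileNotFoundError(
                f"{path} not found. Please download {name} manually as described at "
                "https://github.com/mahnazkoupaee/WikiHow-Dataset and pass the "
                f"directory via data_dir, e.g. nlp.load_dataset('wikihow', data_dir={manual_dir!r})."
            )
```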
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/103/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/103/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1331
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1331/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1331/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1331/events
|
https://github.com/huggingface/datasets/pull/1331
| 759,677,189
|
MDExOlB1bGxSZXF1ZXN0NTM0NjQwMzc5
| 1,331
|
First version of the new dataset hausa_voa_topics
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1858628?v=4",
"events_url": "https://api.github.com/users/michael-aloys/events{/privacy}",
"followers_url": "https://api.github.com/users/michael-aloys/followers",
"following_url": "https://api.github.com/users/michael-aloys/following{/other_user}",
"gists_url": "https://api.github.com/users/michael-aloys/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/michael-aloys",
"id": 1858628,
"login": "michael-aloys",
"node_id": "MDQ6VXNlcjE4NTg2Mjg=",
"organizations_url": "https://api.github.com/users/michael-aloys/orgs",
"received_events_url": "https://api.github.com/users/michael-aloys/received_events",
"repos_url": "https://api.github.com/users/michael-aloys/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/michael-aloys/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michael-aloys/subscriptions",
"type": "User",
"url": "https://api.github.com/users/michael-aloys"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-08T18:28:52Z
| 2020-12-10T11:09:53Z
| 2020-12-10T11:09:53Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1331.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1331",
"merged_at": "2020-12-10T11:09:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1331.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1331"
}
|
Contains loading script as well as dataset card including YAML tags.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1331/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1331/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1499
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1499/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1499/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1499/events
|
https://github.com/huggingface/datasets/pull/1499
| 763,464,693
|
MDExOlB1bGxSZXF1ZXN0NTM3OTIyNjA3
| 1,499
|
update the dataset id_newspapers_2018
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cahya-wirawan",
"id": 7669893,
"login": "cahya-wirawan",
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cahya-wirawan"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-12T08:47:12Z
| 2020-12-14T15:28:07Z
| 2020-12-14T15:28:07Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1499.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1499",
"merged_at": "2020-12-14T15:28:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1499.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1499"
}
|
Hi, I need to update the link to the dataset. The link in the previous PR was to a small test dataset. Thanks
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1499/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1499/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2854
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2854/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2854/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2854/events
|
https://github.com/huggingface/datasets/pull/2854
| 983,726,084
|
MDExOlB1bGxSZXF1ZXN0NzIzMjU3NDg5
| 2,854
|
Fix caching when moving script
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Merging since the CI failure is unrelated to this PR"
] | 2021-08-31T10:58:35Z
| 2021-08-31T13:13:36Z
| 2021-08-31T13:13:36Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2854.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2854",
"merged_at": "2021-08-31T13:13:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2854.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2854"
}
|
When caching the result of a `map` function, the computed hash depends on many properties of that function: the python objects it uses, its code, and the location of that code.
Using the full path of the python script as the location of the code makes the hash change whenever a script like `run_mlm.py` is moved.
I changed this by simply using the base name of the script instead of the full path.
Note that this change also affects the hash of code used from imported modules, but I think that's fine: the code of the imported modules is hashed anyway, so the location of their python files doesn't matter when computing the hash.
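To see why the base name matters, here is a toy sketch of a location-sensitive hash (illustrative only; the real fingerprinting in `datasets` is more involved):
```python
import hashlib
import os

def toy_fingerprint(func_code: str, location: str) -> str:
    # The cache key covers both the code and where that code lives.
    return hashlib.md5((location + "\n" + func_code).encode()).hexdigest()

code = "def tokenize(x): return x"

# With full paths, moving run_mlm.py to a new directory invalidates the cache:
print(toy_fingerprint(code, "/old/dir/run_mlm.py") ==
      toy_fingerprint(code, "/new/dir/run_mlm.py"))               # False

# With base names only, the cache survives the move:
print(toy_fingerprint(code, os.path.basename("/old/dir/run_mlm.py")) ==
      toy_fingerprint(code, os.path.basename("/new/dir/run_mlm.py")))  # True
```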
Close https://github.com/huggingface/datasets/issues/2825
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2854/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2854/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/89
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/89/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/89/comments
|
https://api.github.com/repos/huggingface/datasets/issues/89/events
|
https://github.com/huggingface/datasets/pull/89
| 617,295,069
|
MDExOlB1bGxSZXF1ZXN0NDE3MjM4MjU4
| 89
|
Add list and inspect methods - cleanup hf_api
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-05-13T09:30:15Z
| 2020-05-13T14:05:00Z
| 2020-05-13T09:33:10Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/89.diff",
"html_url": "https://github.com/huggingface/datasets/pull/89",
"merged_at": "2020-05-13T09:33:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/89.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/89"
}
|
Add a bunch of methods to easily list and inspect the processing scripts uploaded on S3:
```python
nlp.list_datasets()
nlp.list_metrics()
# Copy and prepare the scripts at `local_path` for easy inspection/modification.
nlp.inspect_dataset(path, local_path)
# Copy and prepare the scripts at `local_path` for easy inspection/modification.
nlp.inspect_metric(path, local_path)
```
Also clean up the `HfAPI` to use `dataclasses` for better user-experience
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/89/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/89/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6465
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6465/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6465/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6465/events
|
https://github.com/huggingface/datasets/issues/6465
| 2,022,212,468
|
I_kwDODunzps54iIN0
| 6,465
|
`load_dataset` uses out-of-date cache instead of re-downloading a changed dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3391297?v=4",
"events_url": "https://api.github.com/users/mnoukhov/events{/privacy}",
"followers_url": "https://api.github.com/users/mnoukhov/followers",
"following_url": "https://api.github.com/users/mnoukhov/following{/other_user}",
"gists_url": "https://api.github.com/users/mnoukhov/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mnoukhov",
"id": 3391297,
"login": "mnoukhov",
"node_id": "MDQ6VXNlcjMzOTEyOTc=",
"organizations_url": "https://api.github.com/users/mnoukhov/orgs",
"received_events_url": "https://api.github.com/users/mnoukhov/received_events",
"repos_url": "https://api.github.com/users/mnoukhov/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mnoukhov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mnoukhov/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mnoukhov"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi, thanks for reporting! https://github.com/huggingface/datasets/pull/6459 will fix this."
] | 2023-12-02T21:35:17Z
| 2023-12-04T16:13:10Z
| null |
NONE
| null | null | null |
### Describe the bug
When a dataset is updated on the hub, using `load_dataset` will load the locally cached dataset instead of re-downloading the updated dataset
### Steps to reproduce the bug
Here is a minimal example script to
1. create an initial dataset and upload
2. download it so it is stored in cache
3. change the dataset and re-upload
4. redownload
```python
import time
from datasets import Dataset, DatasetDict, DownloadMode, load_dataset
username = "YOUR_USERNAME_HERE"
initial = Dataset.from_dict({"foo": [1, 2, 3]})
print(f"Intial {initial['foo']}")
initial_ds = DatasetDict({"train": initial})
initial_ds.push_to_hub("test")
time.sleep(1)
download = load_dataset(f"{username}/test", split="train")
changed = download.map(lambda x: {"foo": x["foo"] + 1})
print(f"Changed {changed['foo']}")
changed.push_to_hub("test")
time.sleep(1)
download_again = load_dataset(f"{username}/test", split="train")
print(f"Download Changed {download_again['foo']}")
# >>> gives the out-dated [1,2,3] when it should be changed [2,3,4]
```
The redownloaded dataset should be the changed dataset but it is actually the cached, initial dataset. Force-redownloading gives the correct dataset
```python
download_again_force = load_dataset(f"{username}/test", split="train", download_mode=DownloadMode.FORCE_REDOWNLOAD)
print(f"Force Download Changed {download_again_force['foo']}")
# >>> [2,3,4]
```
### Expected behavior
I assumed there would be some sort of hashing to check for changes in the dataset and re-download if the hashes don't match
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-5.15.0-1028-nvidia-x86_64-with-glibc2.17
- Python version: 3.8.17
- `huggingface_hub` version: 0.19.4
- PyArrow version: 13.0.0
- Pandas version: 2.0.3
- `fsspec` version: 2023.6.0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6465/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6465/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/837
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/837/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/837/comments
|
https://api.github.com/repos/huggingface/datasets/issues/837/events
|
https://github.com/huggingface/datasets/pull/837
| 740,250,215
|
MDExOlB1bGxSZXF1ZXN0NTE4NzcwNDM5
| 837
|
AlloCiné dataset card
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mcmillanmajora",
"id": 26722925,
"login": "mcmillanmajora",
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mcmillanmajora"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-11-10T21:19:53Z
| 2020-11-25T21:56:27Z
| 2020-11-25T21:56:27Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/837.diff",
"html_url": "https://github.com/huggingface/datasets/pull/837",
"merged_at": "2020-11-25T21:56:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/837.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/837"
}
|
Link to the card page: https://github.com/mcmillanmajora/datasets/blob/allocine_card/datasets/allocine/README.md
There wasn't as much information available for this dataset, so I'm wondering what's the best way to address open questions about the dataset. For example, where did the list of films that the dataset creator used come from?
I'm also wondering how best to go about talking about limitations when so little is known about the data.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/837/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/837/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3619
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3619/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3619/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3619/events
|
https://github.com/huggingface/datasets/pull/3619
| 1,112,611,415
|
PR_kwDODunzps4xfnCQ
| 3,619
|
fix meta in mls
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Feel free to merge @polinaeterna as soon as you got an approval from either @lhoestq , @albertvillanova or @mariosasko"
] | 2022-01-24T12:54:38Z
| 2022-01-24T20:53:22Z
| 2022-01-24T20:53:22Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3619.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3619",
"merged_at": "2022-01-24T20:53:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3619.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3619"
}
|
The `monolingual` value of the `multilinguality` param in the YAML meta was changed to `multilingual` :)
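In YAML terms, the change amounts to (a reconstruction; the diff itself isn't shown here):
```yaml
multilinguality:
- multilingual   # was: monolingual
```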
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3619/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3619/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2378
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2378/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2378/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2378/events
|
https://github.com/huggingface/datasets/issues/2378
| 895,131,774
|
MDU6SXNzdWU4OTUxMzE3NzQ=
| 2,378
|
Add missing dataset_infos.json files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
] | null |
[] | 2021-05-19T08:11:12Z
| 2021-05-19T08:11:12Z
| null |
MEMBER
| null | null | null |
Some of the datasets in `datasets` are missing a `dataset_infos.json` file, e.g.
```
[PosixPath('datasets/chr_en/chr_en.py'), PosixPath('datasets/chr_en/README.md')]
[PosixPath('datasets/telugu_books/README.md'), PosixPath('datasets/telugu_books/telugu_books.py')]
[PosixPath('datasets/reclor/README.md'), PosixPath('datasets/reclor/reclor.py')]
[PosixPath('datasets/json/README.md')]
[PosixPath('datasets/csv/README.md')]
[PosixPath('datasets/wikihow/wikihow.py'), PosixPath('datasets/wikihow/README.md')]
[PosixPath('datasets/c4/c4.py'), PosixPath('datasets/c4/README.md')]
[PosixPath('datasets/text/README.md')]
[PosixPath('datasets/lm1b/README.md'), PosixPath('datasets/lm1b/lm1b.py')]
[PosixPath('datasets/pandas/README.md')]
```
For `json`, `text`, `csv`, and `pandas` this is expected, but not for the others, which should be fixed
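A sketch of how such a listing could be produced (a hypothetical reconstruction; the original scan isn't shown):
```python
from pathlib import Path

# Print the contents of every dataset directory missing a dataset_infos.json.
for ds_dir in sorted(Path("datasets").iterdir()):
    if ds_dir.is_dir() and not (ds_dir / "dataset_infos.json").exists():
        print(list(ds_dir.iterdir()))
```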
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2378/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2378/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3758
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3758/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3758/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3758/events
|
https://github.com/huggingface/datasets/issues/3758
| 1,143,366,393
|
I_kwDODunzps5EJmL5
| 3,758
|
head_qa file missing
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"We usually find issues with files hosted at Google Drive...\r\n\r\nIn this case we download the Google Drive Virus scan warning instead of the data file.",
"Fixed: https://huggingface.co/datasets/head_qa/viewer/en/train. Thanks\r\n\r\n<img width=\"1551\" alt=\"Capture d’écran 2022-02-28 à 15 29 04\" src=\"https://user-images.githubusercontent.com/1676121/156000224-fd3f62c6-8b54-4df1-8911-bdcb0bac3f1a.png\">\r\n"
] | 2022-02-18T16:32:43Z
| 2022-02-28T14:29:18Z
| 2022-02-21T14:39:19Z
|
CONTRIBUTOR
| null | null | null |
## Describe the bug
A file for the `head_qa` dataset is missing (https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t/HEAD_EN/train_HEAD_EN.json)
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> load_dataset("head_qa", name="en")
```
## Expected results
The dataset should be loaded
## Actual results
```
Downloading and preparing dataset head_qa/en (download: 75.69 MiB, generated: 2.69 MiB, post-processed: Unknown size, total: 78.38 MiB) to /home/slesage/.cache/huggingface/datasets/head_qa/en/1.1.0/583ab408e8baf54aab378c93715fadc4d8aa51b393e27c3484a877e2ac0278e9...
Downloading data: 2.21kB [00:00, 2.05MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/load.py", line 1729, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 665, in _download_and_prepare
verify_checksums(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t']
```
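One way to confirm what the comments diagnose (Google Drive serving its virus-scan warning page instead of the JSON file): a sketch using the URL from the traceback above.
```python
import urllib.request

# If the first bytes are HTML rather than JSON, Google Drive served its
# virus-scan warning page instead of the dataset file
url = "https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t"
with urllib.request.urlopen(url) as response:
    print(response.read(200))
```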
## Environment info
- `datasets` version: 1.18.4.dev0
- Platform: Linux-5.11.0-1028-aws-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 6.0.1
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3758/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3758/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5039
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5039/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5039/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5039/events
|
https://github.com/huggingface/datasets/issues/5039
| 1,390,353,315
|
I_kwDODunzps5S3xuj
| 5,039
|
Hendrycks Checksum
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9974388?v=4",
"events_url": "https://api.github.com/users/DanielHesslow/events{/privacy}",
"followers_url": "https://api.github.com/users/DanielHesslow/followers",
"following_url": "https://api.github.com/users/DanielHesslow/following{/other_user}",
"gists_url": "https://api.github.com/users/DanielHesslow/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DanielHesslow",
"id": 9974388,
"login": "DanielHesslow",
"node_id": "MDQ6VXNlcjk5NzQzODg=",
"organizations_url": "https://api.github.com/users/DanielHesslow/orgs",
"received_events_url": "https://api.github.com/users/DanielHesslow/received_events",
"repos_url": "https://api.github.com/users/DanielHesslow/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DanielHesslow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DanielHesslow/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DanielHesslow"
}
|
[
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"Thanks for reporting, @DanielHesslow. We are fixing it. ",
"@albertvillanova thanks for taking care of this so quickly!",
"The dataset metadata is fixed. You can download it normally."
] | 2022-09-29T06:56:20Z
| 2022-09-29T10:23:30Z
| 2022-09-29T10:04:20Z
|
NONE
| null | null | null |
Hi,
The checksum for [hendrycks_test](https://huggingface.co/datasets/hendrycks_test) no longer matches; I guess the file has been updated on the remote.
```
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://people.eecs.berkeley.edu/~hendrycks/data.tar']
```
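For anyone who wants to verify the change, a sketch that recomputes the remote file's checksum (assuming SHA-256, which is what `dataset_infos.json` typically records):
```python
import hashlib
import urllib.request

# Stream the remote tar and hash it chunk by chunk to keep memory use low
url = "https://people.eecs.berkeley.edu/~hendrycks/data.tar"
sha256 = hashlib.sha256()
with urllib.request.urlopen(url) as response:
    for chunk in iter(lambda: response.read(1 << 20), b""):
        sha256.update(chunk)
print(sha256.hexdigest())
```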
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5039/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5039/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6329
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6329/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6329/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6329/events
|
https://github.com/huggingface/datasets/issues/6329
| 1,955,858,020
|
I_kwDODunzps50lAZk
| 6,329
|
Text-to-speech networks first convert the given text into an intermediate representation
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/147399213?v=4",
"events_url": "https://api.github.com/users/shabnam706/events{/privacy}",
"followers_url": "https://api.github.com/users/shabnam706/followers",
"following_url": "https://api.github.com/users/shabnam706/following{/other_user}",
"gists_url": "https://api.github.com/users/shabnam706/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shabnam706",
"id": 147399213,
"login": "shabnam706",
"node_id": "U_kgDOCMkiLQ",
"organizations_url": "https://api.github.com/users/shabnam706/orgs",
"received_events_url": "https://api.github.com/users/shabnam706/received_events",
"repos_url": "https://api.github.com/users/shabnam706/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shabnam706/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shabnam706/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shabnam706"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2023-10-22T11:07:46Z
| 2023-10-23T09:22:58Z
| 2023-10-23T09:22:58Z
|
NONE
| null | null | null |
Text-to-speech networks first convert the given text into an intermediate representation
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6329/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6329/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4257
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4257/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4257/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4257/events
|
https://github.com/huggingface/datasets/pull/4257
| 1,221,393,137
|
PR_kwDODunzps43GATC
| 4,257
|
Create metric card for Mahalanobis Distance
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sashavor",
"id": 14205986,
"login": "sashavor",
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"repos_url": "https://api.github.com/users/sashavor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sashavor"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-29T18:37:27Z
| 2022-05-02T14:50:18Z
| 2022-05-02T14:43:24Z
|
NONE
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4257.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4257",
"merged_at": "2022-05-02T14:43:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4257.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4257"
}
|
proposing a metric card to better explain how Mahalanobis distance works (last one for now :sweat_smile:)
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4257/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4257/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1814
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1814/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1814/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1814/events
|
https://github.com/huggingface/datasets/pull/1814
| 800,516,236
|
MDExOlB1bGxSZXF1ZXN0NTY2OTg4NTI1
| 1,814
|
Add Freebase QA Dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi @lhoestq \r\n\r\nThanks for approving. Request you to close PR #1435 as well."
] | 2021-02-03T16:57:49Z
| 2021-02-04T19:47:51Z
| 2021-02-04T16:21:48Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1814.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1814",
"merged_at": "2021-02-04T16:21:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1814.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1814"
}
|
Closes PR #1435. Fixed issues with PR #1809.
Requesting @lhoestq to review.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1814/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1814/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3038
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3038/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3038/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3038/events
|
https://github.com/huggingface/datasets/pull/3038
| 1,018,113,499
|
PR_kwDODunzps4syno_
| 3,038
|
add sberquad dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13781234?v=4",
"events_url": "https://api.github.com/users/Alenush/events{/privacy}",
"followers_url": "https://api.github.com/users/Alenush/followers",
"following_url": "https://api.github.com/users/Alenush/following{/other_user}",
"gists_url": "https://api.github.com/users/Alenush/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Alenush",
"id": 13781234,
"login": "Alenush",
"node_id": "MDQ6VXNlcjEzNzgxMjM0",
"organizations_url": "https://api.github.com/users/Alenush/orgs",
"received_events_url": "https://api.github.com/users/Alenush/received_events",
"repos_url": "https://api.github.com/users/Alenush/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Alenush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Alenush/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Alenush"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-10-06T11:33:39Z
| 2021-10-06T11:58:01Z
| 2021-10-06T11:58:01Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3038.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3038",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3038.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3038"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3038/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3038/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5524
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5524/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5524/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5524/events
|
https://github.com/huggingface/datasets/pull/5524
| 1,580,219,454
|
PR_kwDODunzps5JvbMw
| 5,524
|
[INVALID PR]
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2023-02-10T19:35:50Z
| 2023-02-10T19:51:45Z
| 2023-02-10T19:49:12Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5524.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5524",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5524.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5524"
}
|
Hi to whoever is reading this! 🤗
## What's in this PR?
~~Basically, I've removed the 🤗`datasets` installation as `python -m pip install ".[quality]"` in the `check_code_quality` job in `.github/workflows/ci.yaml`, as we don't need to install the whole package to run the CI, unless that's done on purpose, e.g. to check that the Python package installation succeeds before running the tests over the OS matrix?~~
~~So I just wanted to check whether the time was reduced by doing this (which I assume it would be), plus whether this is something that can be improved, or just discarded in case you're also using that step to make sure that the package can be installed.~~
## What's missing?
~~I was just wondering whether you'd consider replacing `isort` and `flake8` with `ruff` (if possible), since it's way faster; more information at [`ruff`](https://github.com/charliermarsh/ruff). Before creating this PR the average time of the `check_code_quality` job was around 40s.~~
## Edit
Sorry for the inconvenience this may have caused; I didn't realise that the config is defined in `setup.cfg` and `pyproject.toml`, so running those tools without installing the Python package leads to failure. My bad 😞
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5524/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5524/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4453
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4453/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4453/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4453/events
|
https://github.com/huggingface/datasets/issues/4453
| 1,262,674,105
|
I_kwDODunzps5LQuC5
| 4,453
|
Dataset Viewer issue for Yaxin/SemEval2015
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/18160852?v=4",
"events_url": "https://api.github.com/users/WithYouTo/events{/privacy}",
"followers_url": "https://api.github.com/users/WithYouTo/followers",
"following_url": "https://api.github.com/users/WithYouTo/following{/other_user}",
"gists_url": "https://api.github.com/users/WithYouTo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/WithYouTo",
"id": 18160852,
"login": "WithYouTo",
"node_id": "MDQ6VXNlcjE4MTYwODUy",
"organizations_url": "https://api.github.com/users/WithYouTo/orgs",
"received_events_url": "https://api.github.com/users/WithYouTo/received_events",
"repos_url": "https://api.github.com/users/WithYouTo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/WithYouTo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WithYouTo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/WithYouTo"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null |
[
"I understand that the issue is that a remote file (URL) is being loaded as a local file. Right @albertvillanova @lhoestq?\r\n\r\n```\r\nMessage: [Errno 2] No such file or directory: 'https://raw.githubusercontent.com/YaxinCui/ABSADataset/main/SemEval2015Task12Corrected/train/restaurants_train.xml'\r\n```",
"`xml.dom.minidom.parse` is not supported in streaming mode. I opened a PR here to fix it:\r\nhttps://huggingface.co/datasets/Yaxin/SemEval2015/discussions/1\r\n\r\nPlease review the PR @WithYouTo and let me know if it works !",
"Additionally, I'm also patching our library, so that we support streaming datasets that use `xml.dom.minidom.parse`."
] | 2022-06-07T03:30:08Z
| 2022-06-09T08:34:16Z
| 2022-06-09T08:34:16Z
|
NONE
| null | null | null |
### Link
_No response_
### Description
_No response_
### Owner
_No response_
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4453/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4453/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/802
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/802/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/802/comments
|
https://api.github.com/repos/huggingface/datasets/issues/802/events
|
https://github.com/huggingface/datasets/pull/802
| 736,296,343
|
MDExOlB1bGxSZXF1ZXN0NTE1NTM1MDI0
| 802
|
Add XGlue
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Really cool to add XGlue, this will be a nice addition !\r\n\r\nSplits shouldn't depend on the language. There must be configurations for each language, as we're doing for xnli, xtreme, etc.\r\nFor example for XGlue we'll have these configurations: NER.de, NER.en etc."
] | 2020-11-04T17:29:54Z
| 2022-04-28T08:15:36Z
| 2020-12-01T15:58:27Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/802.diff",
"html_url": "https://github.com/huggingface/datasets/pull/802",
"merged_at": "2020-12-01T15:58:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/802.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/802"
}
|
Dataset is ready to merge. An important feature of this dataset is that for each config the train data is in English, while dev and test data are in multiple languages. Therefore, @lhoestq and I decided offline to give the dataset the following API, *e.g.*:
```python
load_dataset("xglue", "ner") # would give the splits 'train', 'validation.en', 'test.en', 'validation.es', 'test.es', ...
```
=> therefore one can load a single language test via
```python
load_dataset("xglue", "ner", split="test.es")
```
Close #749.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/802/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/802/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3686
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3686/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3686/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3686/events
|
https://github.com/huggingface/datasets/issues/3686
| 1,127,137,290
|
I_kwDODunzps5DLsAK
| 3,686
|
`Translation` features cannot be `flatten`ed
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SBrandeis",
"id": 33657802,
"login": "SBrandeis",
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SBrandeis"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
] | null |
[
"Thanks for reporting, @SBrandeis! Some additional feature types that don't behave as expected when flattened: `Audio`, `Image` and `TranslationVariableLanguages`"
] | 2022-02-08T11:33:48Z
| 2022-03-18T17:28:13Z
| 2022-03-18T17:28:13Z
|
CONTRIBUTOR
| null | null | null |
## Describe the bug
[`Dataset.flatten`](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L1265) fails for columns with the feature [`Translation`](https://github.com/huggingface/datasets/blob/3edbeb0ec6519b79f1119adc251a1a6b379a2c12/src/datasets/features/translation.py#L8)
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("europa_ecdc_tm", "en2fr", split="train[:10]")
print(dataset.features)
# {'translation': Translation(languages=['en', 'fr'], id=None)}
print(dataset[0])
# {'translation': {'en': 'Vaccination against hepatitis C is not yet available.', 'fr': 'Aucune vaccination contre l’hépatite C n’est encore disponible.'}}
dataset.flatten()
```
## Expected results
`dataset.flatten` should flatten the `Translation` column as if it were a dict of `Value("string")`
```python
dataset[0]
# {'translation.en': 'Vaccination against hepatitis C is not yet available.', 'translation.fr': 'Aucune vaccination contre l’hépatite C n’est encore disponible.' }
dataset.features
# {'translation.en': Value("string"), 'translation.fr': Value("string")}
```
## Actual results
```python
In [31]: dset.flatten()
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-31-bb88eb5276ee> in <module>
----> 1 dset.flatten()
[...]\site-packages\datasets\fingerprint.py in wrapper(*args, **kwargs)
411 # Call actual function
412
--> 413 out = func(self, *args, **kwargs)
414
415 # Update fingerprint of in-place transforms + update in-place history of transforms
[...]\site-packages\datasets\arrow_dataset.py in flatten(self, new_fingerprint, max_depth)
1294 break
1295 dataset.info.features = self.features.flatten(max_depth=max_depth)
-> 1296 dataset._data = update_metadata_with_features(dataset._data, dataset.features)
1297 logger.info(f'Flattened dataset from depth {depth} to depth {1 if depth + 1 < max_depth else "unknown"}.')
1298 dataset._fingerprint = new_fingerprint
[...]\site-packages\datasets\arrow_dataset.py in update_metadata_with_features(table, features)
534 def update_metadata_with_features(table: Table, features: Features):
535 """To be used in dataset transforms that modify the features of the dataset, in order to update the features stored in the metadata of its schema."""
--> 536 features = Features({col_name: features[col_name] for col_name in table.column_names})
537 if table.schema.metadata is None or b"huggingface" not in table.schema.metadata:
538 pa_metadata = ArrowWriter._build_metadata(DatasetInfo(features=features))
[...]\site-packages\datasets\arrow_dataset.py in <dictcomp>(.0)
534 def update_metadata_with_features(table: Table, features: Features):
535 """To be used in dataset transforms that modify the features of the dataset, in order to update the features stored in the metadata of its schema."""
--> 536 features = Features({col_name: features[col_name] for col_name in table.column_names})
537 if table.schema.metadata is None or b"huggingface" not in table.schema.metadata:
538 pa_metadata = ArrowWriter._build_metadata(DatasetInfo(features=features))
KeyError: 'translation.en'
```
## Environment info
- `datasets` version: 1.18.3
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.10
- PyArrow version: 3.0.0
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3686/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3686/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2370
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2370/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2370/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2370/events
|
https://github.com/huggingface/datasets/pull/2370
| 893,606,432
|
MDExOlB1bGxSZXF1ZXN0NjQ2MDkyNDQy
| 2,370
|
Adding HendrycksTest dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/43451571?v=4",
"events_url": "https://api.github.com/users/andyzoujm/events{/privacy}",
"followers_url": "https://api.github.com/users/andyzoujm/followers",
"following_url": "https://api.github.com/users/andyzoujm/following{/other_user}",
"gists_url": "https://api.github.com/users/andyzoujm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/andyzoujm",
"id": 43451571,
"login": "andyzoujm",
"node_id": "MDQ6VXNlcjQzNDUxNTcx",
"organizations_url": "https://api.github.com/users/andyzoujm/orgs",
"received_events_url": "https://api.github.com/users/andyzoujm/received_events",
"repos_url": "https://api.github.com/users/andyzoujm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/andyzoujm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andyzoujm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/andyzoujm"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@lhoestq Thank you for the review. I've made the suggested changes. There still might be some problems with dummy data though due to some csv loading issues (which I haven't found the cause to).",
"I took a look at the dummy data and some csv lines were cropped. I fixed them :)",
"@andyzoujm Any reason why this dataset scrip was called \"hendrycks_test\" instead of \"mmlu\"?\r\n\r\nWe are thinking of renaming it...",
"That's because we didn't call it MMLU in the paper (the shorthand didn't\nemerge until over a year later), and people at OpenAI were calling it that.\n\nAndy\n\nOn Wed, Apr 26, 2023 at 8:44 AM Albert Villanova del Moral <\n***@***.***> wrote:\n\n> @andyzoujm <https://github.com/andyzoujm> Any reason why this dataset\n> scrip was called \"hendrycks_test\" instead of \"mmlu\"?\n>\n> We are thinking of renaming it...\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/2370#issuecomment-1523358110>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AKLQJMZZFZBGJTBOIFWJ5KDXDEKB5ANCNFSM45BAOSIQ>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n",
"Thanks for your reply. Just for the records: we have renamed it to \"cais/mmlu\": https://huggingface.co/datasets/cais/mmlu"
] | 2021-05-17T18:53:05Z
| 2023-05-11T05:42:57Z
| 2021-05-31T16:37:13Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2370.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2370",
"merged_at": "2021-05-31T16:37:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2370.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2370"
}
|
Adding Hendrycks test from https://arxiv.org/abs/2009.03300.
I'm having a bit of trouble with dummy data creation because some lines in the csv files aren't being loaded properly (only the first entry of a 6-field row gets loaded). The full dataset itself loads just fine. Hope you can kindly help!
Thank you!
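On the dummy-data trouble mentioned above, one plausible cause (purely an illustration, not confirmed in this thread) is splitting CSV lines naively instead of using a quote-aware parser, since the rows have 6 fields and answer texts can contain commas:
```python
import csv

# A 6-field row whose quoted fields contain commas
line = '"Which of the following are prime?","2, 3","4, 6","8, 9","10, 15","A"'
print(line.split(","))           # over-splits the quoted fields
print(next(csv.reader([line])))  # yields the 6 intended fields
```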
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2370/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2370/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2602
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2602/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2602/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2602/events
|
https://github.com/huggingface/datasets/pull/2602
| 938,555,712
|
MDExOlB1bGxSZXF1ZXN0Njg0OTE5MjMy
| 2,602
|
Remove import of transformers
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] |
{
"closed_at": "2021-07-21T15:36:49Z",
"closed_issues": 29,
"created_at": "2021-06-08T18:48:33Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-05T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"id": 6836458,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"open_issues": 0,
"state": "closed",
"title": "1.10",
"updated_at": "2021-07-21T15:36:49Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6"
}
|
[] | 2021-07-07T06:58:18Z
| 2021-07-12T14:10:22Z
| 2021-07-07T08:28:51Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2602.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2602",
"merged_at": "2021-07-07T08:28:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2602.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2602"
}
|
When pickling a tokenizer within multiprocessing, check whether it is an instance of transformers `PreTrainedTokenizerBase` without importing transformers.
Related to huggingface/transformers#12549 and #502.
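A minimal sketch of the pattern (the helper name is hypothetical; the actual check lives in the `datasets` pickling utilities):
```python
import sys

def is_transformers_tokenizer(obj) -> bool:
    # Look up transformers only if the user has already imported it;
    # otherwise obj cannot be a transformers tokenizer
    transformers = sys.modules.get("transformers")
    if transformers is None:
        return False
    return isinstance(obj, transformers.PreTrainedTokenizerBase)
```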
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2602/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2602/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4782
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4782/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4782/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4782/events
|
https://github.com/huggingface/datasets/issues/4782
| 1,326,247,158
|
I_kwDODunzps5PDOz2
| 4,782
|
pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 2147483648
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/25208228?v=4",
"events_url": "https://api.github.com/users/conceptofmind/events{/privacy}",
"followers_url": "https://api.github.com/users/conceptofmind/followers",
"following_url": "https://api.github.com/users/conceptofmind/following{/other_user}",
"gists_url": "https://api.github.com/users/conceptofmind/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/conceptofmind",
"id": 25208228,
"login": "conceptofmind",
"node_id": "MDQ6VXNlcjI1MjA4MjI4",
"organizations_url": "https://api.github.com/users/conceptofmind/orgs",
"received_events_url": "https://api.github.com/users/conceptofmind/received_events",
"repos_url": "https://api.github.com/users/conceptofmind/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/conceptofmind/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/conceptofmind/subscriptions",
"type": "User",
"url": "https://api.github.com/users/conceptofmind"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting @conceptofmind.\r\n\r\nCould you please give details about your environment? \r\n```\r\n## Environment info\r\n<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->\r\n- `datasets` version:\r\n- Platform:\r\n- Python version:\r\n- PyArrow version:\r\n```",
"Hi @albertvillanova ,\r\n\r\nHere is the environment information:\r\n```\r\n- `datasets` version: 2.3.2\r\n- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.27\r\n- Python version: 3.9.12\r\n- PyArrow version: 7.0.0\r\n- Pandas version: 1.4.2\r\n```\r\nThanks,\r\n\r\nEnrico",
"I think this issue is solved here https://discuss.huggingface.co/t/minhash-deduplication/19992/12?u=loubnabnl, this only happens for very large datasets we will update it in CodeParrot code",
"Hi @loubnabnl,\r\n\r\nYes, the issue is solved in the discussion thread.\r\n\r\nI will close this issue.\r\n\r\nThank you again for all of your help.\r\n\r\nEnrico",
"Thanks @loubnabnl for pointing out the solution to this issue."
] | 2022-08-02T18:36:05Z
| 2022-08-22T09:46:28Z
| 2022-08-20T02:11:53Z
|
NONE
| null | null | null |
## Describe the bug
Following the example in CodeParrot, I receive an array size limitation error when deduplicating larger datasets.
## Steps to reproduce the bug
```python
dataset_name = "the_pile"
ds = load_dataset(dataset_name, split="train")
ds = ds.map(preprocess, num_proc=num_workers)
uniques = set(ds.unique("hash"))
```
Gists for minimum reproducible example:
https://gist.github.com/conceptofmind/c5804428ea1bd89767815f9cd5f02d9a
https://gist.github.com/conceptofmind/feafb07e236f28d79c2d4b28ffbdb6e2
## Expected results
Chunking and writing out a deduplicated dataset.
## Actual results
```
return dataset._data.column(column).unique().to_pylist()
File "pyarrow/table.pxi", line 394, in pyarrow.lib.ChunkedArray.unique
File "pyarrow/_compute.pyx", line 531, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 330, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 124, in pyarrow.lib.check_status
pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 2147483648
```
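For reference, a plausible shape of the workaround discussed in the comments (the linked MinHash-deduplication thread): collect the uniques shard by shard so that no single pyarrow array has to hold the whole column. A sketch reusing `ds` from the repro above; the shard count is purely illustrative.
```python
# Collect unique hashes shard by shard instead of in one giant pyarrow array
uniques = set()
num_shards = 100  # illustrative; choose so each shard stays well under 2 GiB
for index in range(num_shards):
    shard = ds.shard(num_shards=num_shards, index=index, contiguous=True)
    uniques.update(shard.unique("hash"))
```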
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4782/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4782/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/522
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/522/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/522/comments
|
https://api.github.com/repos/huggingface/datasets/issues/522/events
|
https://github.com/huggingface/datasets/issues/522
| 682,478,833
|
MDU6SXNzdWU2ODI0Nzg4MzM=
| 522
|
dictionnary typo in docs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4004127?v=4",
"events_url": "https://api.github.com/users/yonigottesman/events{/privacy}",
"followers_url": "https://api.github.com/users/yonigottesman/followers",
"following_url": "https://api.github.com/users/yonigottesman/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigottesman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yonigottesman",
"id": 4004127,
"login": "yonigottesman",
"node_id": "MDQ6VXNlcjQwMDQxMjc=",
"organizations_url": "https://api.github.com/users/yonigottesman/orgs",
"received_events_url": "https://api.github.com/users/yonigottesman/received_events",
"repos_url": "https://api.github.com/users/yonigottesman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yonigottesman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigottesman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yonigottesman"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks!"
] | 2020-08-20T07:11:05Z
| 2020-08-20T07:52:14Z
| 2020-08-20T07:52:13Z
|
CONTRIBUTOR
| null | null | null |
In many places "dictionary" is spelled "dictionnary"; not sure if it's on purpose or not.
Fixed in this pr:
https://github.com/huggingface/nlp/pull/521
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/522/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/522/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/781
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/781/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/781/comments
|
https://api.github.com/repos/huggingface/datasets/issues/781/events
|
https://github.com/huggingface/datasets/pull/781
| 733,168,609
|
MDExOlB1bGxSZXF1ZXN0NTEyOTkyMzQw
| 781
|
Add XNLI train set
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! Thanks for adding the translated MNLI! Do you know what translations system / model you used when you created the datasets in the other languages?",
"According to the [paper](https://arxiv.org/pdf/1809.05053.pdf) it's the result of the work of professional translators ;)",
"Thanks for getting back to me.\n\nThe training data is not from translators. And it appears to be machine\ntranslation for all languages. If we can know what system was used to\ncreate the training data that would be great!\n\nYifan.\n\n\nOn Thu, Jun 9, 2022, 05:51 Quentin Lhoest ***@***.***> wrote:\n\n> According to the paper <https://arxiv.org/pdf/1809.05053.pdf> it's the\n> result of the work of professional translators ;)\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/781#issuecomment-1150914429>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AAKLKWDAPTMGB6BE5GJ4GULVOG5BLANCNFSM4TE67NMQ>\n> .\n> You are receiving this because you commented.Message ID:\n> ***@***.***>\n>\n",
"> The training data is not from translators.\r\n\r\nWhat makes you think that ? The paper litteraly says\r\n\r\n> we hire translators to translate the resulting sentences into 15 languages using the One Hour Translation platform.",
"However the annotators only did test and validation sets, as this was what\nin the paper: “we construct an evaluation set for XLU by extending the\ndevelopment and test sets of the Multi-Genre Natural Language Inference\nCorpus (MultiNLI) to 15 languages\".\n\nOn Thu, Jun 9, 2022 at 10:35 AM Quentin Lhoest ***@***.***>\nwrote:\n\n> The training data is not from translators.\n>\n> What makes you think that ? The paper litteraly says\n>\n> we hire translators to translate the resulting sentences into 15 languages\n> using the One Hour Translation platform.\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/781#issuecomment-1151202195>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AAKLKWFZOQPLK4WSKFRLW6DVOH6LLANCNFSM4TE67NMQ>\n> .\n> You are receiving this because you commented.Message ID:\n> ***@***.***>\n>\n"
] | 2020-10-30T13:21:53Z
| 2022-06-09T23:26:46Z
| 2020-11-09T18:22:49Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/781.diff",
"html_url": "https://github.com/huggingface/datasets/pull/781",
"merged_at": "2020-11-09T18:22:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/781.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/781"
}
|
I added the train set that was built using the translated MNLI.
Now you can load the dataset specifying one language:
```python
from datasets import load_dataset
xnli_en = load_dataset("xnli", "en")
print(xnli_en["train"][0])
# {'hypothesis': 'Product and geography are what make cream skimming work .', 'label': 1, 'premise': 'Conceptually cream skimming has two basic dimensions - product and geography .'}
print(xnli_en["test"][0])
# {'hypothesis': 'I havent spoken to him again.', 'label': 2, 'premise': "Well, I wasn't even thinking about that, but I was so frustrated, and, I ended up talking to him again."}
```
Cc @sgugger
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 2,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/781/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/781/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4148
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4148/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4148/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4148/events
|
https://github.com/huggingface/datasets/issues/4148
| 1,201,169,242
|
I_kwDODunzps5HmGNa
| 4,148
|
fix confusing bleu metric example
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6253193?v=4",
"events_url": "https://api.github.com/users/aizawa-naoki/events{/privacy}",
"followers_url": "https://api.github.com/users/aizawa-naoki/followers",
"following_url": "https://api.github.com/users/aizawa-naoki/following{/other_user}",
"gists_url": "https://api.github.com/users/aizawa-naoki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aizawa-naoki",
"id": 6253193,
"login": "aizawa-naoki",
"node_id": "MDQ6VXNlcjYyNTMxOTM=",
"organizations_url": "https://api.github.com/users/aizawa-naoki/orgs",
"received_events_url": "https://api.github.com/users/aizawa-naoki/received_events",
"repos_url": "https://api.github.com/users/aizawa-naoki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aizawa-naoki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aizawa-naoki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aizawa-naoki"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[] | 2022-04-12T06:18:26Z
| 2022-04-13T14:16:34Z
| 2022-04-13T14:16:34Z
|
NONE
| null | null | null |
**Is your feature request related to a problem? Please describe.**
I would like to see the example in "Metric Card for BLEU" changed.
The 0th element in the predictions list is missing its closing square bracket, and the 1st list is missing a comma between "bar" and "foobar".
The BLEU score is still calculated without error, but the example is difficult to understand, so it would be helpful if you could correct this.
```
>>> predictions = [
... ["hello", "there", "general", "kenobi", # <- no closing square bracket.
... ["foo", "bar" "foobar"] # <- no comma between "bar" and "foobar"
... ]
>>> references = [
... [["hello", "there", "general", "kenobi"]],
... [["foo", "bar", "foobar"]]
... ]
>>> bleu = datasets.load_metric("bleu")
>>> results = bleu.compute(predictions=predictions, references=references)
>>> print(results)
{'bleu': 0.6370964381207871, ...
```
**Describe the solution you'd like**
```
>>> predictions = [
...     ["hello", "there", "general", "kenobi"],  # <- closing square bracket added
...     ["foo", "bar", "foobar"]  # <- comma added between "bar" and "foobar"
... ]
# and
>>> print(results)
{'bleu': 1.0, ...
```
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4148/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4148/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/508
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/508/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/508/comments
|
https://api.github.com/repos/huggingface/datasets/issues/508/events
|
https://github.com/huggingface/datasets/issues/508
| 679,705,734
|
MDU6SXNzdWU2Nzk3MDU3MzQ=
| 508
|
TypeError: Receiver() takes no arguments
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1225851?v=4",
"events_url": "https://api.github.com/users/sebastiantomac/events{/privacy}",
"followers_url": "https://api.github.com/users/sebastiantomac/followers",
"following_url": "https://api.github.com/users/sebastiantomac/following{/other_user}",
"gists_url": "https://api.github.com/users/sebastiantomac/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sebastiantomac",
"id": 1225851,
"login": "sebastiantomac",
"node_id": "MDQ6VXNlcjEyMjU4NTE=",
"organizations_url": "https://api.github.com/users/sebastiantomac/orgs",
"received_events_url": "https://api.github.com/users/sebastiantomac/received_events",
"repos_url": "https://api.github.com/users/sebastiantomac/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sebastiantomac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sebastiantomac/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sebastiantomac"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Which version of Apache Beam do you have (can you copy your full environment info here)?",
"apache-beam==2.23.0\r\nnlp==0.4.0\r\n\r\nFor me this was resolved by running the same python script on Linux (or really WSL). ",
"Do you manage to run a dummy beam pipeline with python on windows ? \r\nYou can test a dummy pipeline with [this code](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/wordcount_minimal.py)\r\n\r\nIf you get the same error, it means that the issue comes from apache beam.\r\nOtherwise we'll investigate what went wrong here",
"Still, same error, so I guess it is on apache beam then. \r\nThanks for the investigation.",
"Thanks for trying\r\nLet us know if you find clues of what caused this issue, or if you find a fix"
] | 2020-08-16T07:18:16Z
| 2020-09-01T14:53:33Z
| 2020-09-01T14:49:03Z
|
NONE
| null | null | null |
I am trying to load a Wikipedia dataset
```
import nlp
from nlp import load_dataset
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner')
#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner')
```
This fails in the Apache Beam runner.
```
Traceback (most recent call last):
File "D:/ML/wikiembedding/gpt2_sv.py", line 36, in <module>
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=my_cache_dir, beam_runner='DirectRunner')
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\load.py", line 548, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\builder.py", line 462, in download_and_prepare
self._download_and_prepare(
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\builder.py", line 969, in _download_and_prepare
pipeline_results = pipeline.run()
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\pipeline.py", line 534, in run
return self.runner.run_pipeline(self, self._options)
....
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 218, in process_encoded
self.output(decoded_value)
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\runners\worker\operations.py", line 332, in output
cython.cast(Receiver, self.receivers[output_index]).receive(windowed_value)
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\Cython\Shadow.py", line 167, in cast
return type(*args)
TypeError: Receiver() takes no arguments
```
This is run on a Windows 10 machine with Python 3.8. I get the same error loading the Swedish Wikipedia dump.
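For reference, a minimal DirectRunner sanity check (a hypothetical snippet, independent of `nlp`) can help isolate whether the failure comes from Apache Beam itself:
```
# Minimal Apache Beam pipeline on the DirectRunner; if this also raises
# "TypeError: Receiver() takes no arguments", the bug is in apache-beam.
import apache_beam as beam

with beam.Pipeline(runner="DirectRunner") as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["hello", "world"])
        | "Print" >> beam.Map(print)
    )
```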
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/508/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/508/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/728
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/728/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/728/comments
|
https://api.github.com/repos/huggingface/datasets/issues/728/events
|
https://github.com/huggingface/datasets/issues/728
| 719,555,780
|
MDU6SXNzdWU3MTk1NTU3ODA=
| 728
|
Passing `cache_dir` to a metric does not work
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-10-12T17:55:14Z
| 2020-10-29T09:34:42Z
| 2020-10-29T09:34:42Z
|
CONTRIBUTOR
| null | null | null |
When passing `cache_dir` to a custom metric, the folder is concatenated to itself at some point and this results in a FileNotFoundError:
## Reproducer
```python
import datasets
import torch
from datasets import Metric
class GatherMetric(Metric):
def _info(self):
return datasets.MetricInfo(
description="description",
citation="citation",
inputs_description="kwargs",
features=datasets.Features({
'predictions': datasets.Value('int64'),
'references': datasets.Value('int64'),
}),
codebase_urls=[],
reference_urls=[],
format='numpy'
)
def _compute(self, predictions, references):
return {"predictions": predictions, "labels": references}
metric = GatherMetric(cache_dir="test-metric")
inputs = torch.randint(0, 2, (1024,))
targets = torch.randint(0, 2, (1024,))
batch_size = 8
for i in range(0, 1024, batch_size):
metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
result = metric.compute()
```
## Stack trace:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
~/git/datasets/src/datasets/metric.py in _finalize(self)
349 reader = ArrowReader(path=self.data_dir, info=DatasetInfo(features=self.features))
--> 350 self.data = Dataset(**reader.read_files([{"filename": f} for f in file_paths]))
351 except FileNotFoundError:
~/git/datasets/src/datasets/arrow_reader.py in read_files(self, files, original_instructions)
227 # Prepend path to filename
--> 228 pa_table = self._read_files(files)
229 files = copy.deepcopy(files)
~/git/datasets/src/datasets/arrow_reader.py in _read_files(self, files)
166 for f_dict in files:
--> 167 pa_table: pa.Table = self._get_dataset_from_filename(f_dict)
168 pa_tables.append(pa_table)
~/git/datasets/src/datasets/arrow_reader.py in _get_dataset_from_filename(self, filename_skip_take)
291 )
--> 292 mmap = pa.memory_map(filename)
293 f = pa.ipc.open_stream(mmap)
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.memory_map()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.MemoryMappedFile._open()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
FileNotFoundError: [Errno 2] Failed to open local file 'test-metric/gather_metric/default/test-metric/gather_metric/default/default_experiment-1-0.arrow'. Detail: [errno 2] No such file or directory
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-17-e42d43cc981f> in <module>
2 for i in range(0, 1024, batch_size):
3 metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
----> 4 result = metric.compute()
~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs)
380 if predictions is not None:
381 self.add_batch(predictions=predictions, references=references)
--> 382 self._finalize()
383
384 self.cache_file_name = None
~/git/datasets/src/datasets/metric.py in _finalize(self)
351 except FileNotFoundError:
352 raise ValueError(
--> 353 "Error in finalize: another metric instance is already using the local cache file. "
354 "Please specify an experiment_id to avoid colision between distributed metric instances."
355 )
ValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric instances.
```
The code works when we remove the `cache_dir=...` from the metric.
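For what it's worth, the error message suggests specifying an `experiment_id`; with the current API that would look as follows (a sketch only — given the doubled cache path in the traceback, the root cause looks like path concatenation rather than a cache collision):
```python
# Hypothetical workaround hinted at by the error message; untested against
# the doubled-path bug shown in the traceback above.
metric = GatherMetric(cache_dir="test-metric", experiment_id="my_experiment")
```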
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/728/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/728/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1706
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1706/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1706/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1706/events
|
https://github.com/huggingface/datasets/issues/1706
| 781,494,476
|
MDU6SXNzdWU3ODE0OTQ0NzY=
| 1,706
|
Error when downloading a large dataset on slow connection.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4",
"events_url": "https://api.github.com/users/lucadiliello/events{/privacy}",
"followers_url": "https://api.github.com/users/lucadiliello/followers",
"following_url": "https://api.github.com/users/lucadiliello/following{/other_user}",
"gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lucadiliello",
"id": 23355969,
"login": "lucadiliello",
"node_id": "MDQ6VXNlcjIzMzU1OTY5",
"organizations_url": "https://api.github.com/users/lucadiliello/orgs",
"received_events_url": "https://api.github.com/users/lucadiliello/received_events",
"repos_url": "https://api.github.com/users/lucadiliello/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lucadiliello"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! Is this an issue you have with `openwebtext` specifically or also with other datasets ?\r\n\r\nIt looks like the downloaded file is corrupted and can't be extracted using `tarfile`.\r\nCould you try loading it again with \r\n```python\r\nimport datasets\r\ndatasets.load_dataset(\"openwebtext\", download_mode=\"force_redownload\")\r\n```"
] | 2021-01-07T17:48:15Z
| 2021-01-13T10:35:02Z
| null |
CONTRIBUTOR
| null | null | null |
I receive the following error after about an hour trying to download the `openwebtext` dataset.
The code used is:
```python
import datasets
datasets.load_dataset("openwebtext")
```
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/datasets/load.py", line 610, in load_dataset
> ignore_verifications=ignore_verifications,
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/datasets/builder.py", line 515, in download_and_prepare
> dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/datasets/builder.py", line 570, in _download_and_prepare
> split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
> File "/home/lucadiliello/.cache/huggingface/modules/datasets_modules/datasets/openwebtext/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02/openwebtext.py", line 62, in _split_generators
> dl_dir = dl_manager.download_and_extract(_URL)
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
> return self.extract(self.download(url_or_urls))
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 235, in extract
> num_proc=num_proc,
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
> return function(data_struct)
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 343, in cached_path
> tar_file.extractall(output_path_extracted)
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/tarfile.py", line 2000, in extractall
> numeric_owner=numeric_owner)
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/tarfile.py", line 2042, in extract
> numeric_owner=numeric_owner)
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/tarfile.py", line 2112, in _extract_member
> self.makefile(tarinfo, targetpath)
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/tarfile.py", line 2161, in makefile
> copyfileobj(source, target, tarinfo.size, ReadError, bufsize)
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/tarfile.py", line 253, in copyfileobj
> buf = src.read(remainder)
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/lzma.py", line 200, in read
> return self._buffer.read(size)
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/_compression.py", line 68, in readinto
> data = self.read(len(byte_view))
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/_compression.py", line 99, in read
> raise EOFError("Compressed file ended before the "
> EOFError: Compressed file ended before the end-of-stream marker was reached
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1706/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1706/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/5367
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5367/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5367/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5367/events
|
https://github.com/huggingface/datasets/pull/5367
| 1,499,174,749
|
PR_kwDODunzps5FlevK
| 5,367
|
Fix remove columns from lazy dict
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-15T22:04:12Z
| 2022-12-15T22:27:53Z
| 2022-12-15T22:24:50Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5367.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5367",
"merged_at": "2022-12-15T22:24:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5367.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5367"
}
|
This was introduced in https://github.com/huggingface/datasets/pull/5252 and is causing the transformers CI to break: https://app.circleci.com/pipelines/github/huggingface/transformers/53886/workflows/522faf2e-a053-454c-94f8-a617fde33393/jobs/648597
Basically this code should return a dataset with only one column:
```python
from datasets import *
ds = Dataset.from_dict({"a": range(5)})
def f(x):
x["b"] = x["a"]
return x
ds = ds.map(f, remove_columns=["a"])
assert ds.column_names == ["b"]
```
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5367/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5367/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1944
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1944/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1944/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1944/events
|
https://github.com/huggingface/datasets/pull/1944
| 816,267,216
|
MDExOlB1bGxSZXF1ZXN0NTc5OTU2Nzc3
| 1,944
|
Add Turkish News Category Dataset (270K - Lite Version)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5150963?v=4",
"events_url": "https://api.github.com/users/yavuzKomecoglu/events{/privacy}",
"followers_url": "https://api.github.com/users/yavuzKomecoglu/followers",
"following_url": "https://api.github.com/users/yavuzKomecoglu/following{/other_user}",
"gists_url": "https://api.github.com/users/yavuzKomecoglu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yavuzKomecoglu",
"id": 5150963,
"login": "yavuzKomecoglu",
"node_id": "MDQ6VXNlcjUxNTA5NjM=",
"organizations_url": "https://api.github.com/users/yavuzKomecoglu/orgs",
"received_events_url": "https://api.github.com/users/yavuzKomecoglu/received_events",
"repos_url": "https://api.github.com/users/yavuzKomecoglu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yavuzKomecoglu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yavuzKomecoglu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yavuzKomecoglu"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I updated your suggestions. Thank you very much for your support. @lhoestq ",
"> Thanks for changing to ClassLabel :)\r\n> This is all good now !\r\n> \r\n> However I can see changes in other files than the ones for interpress_news_category_tr_lite, can you please fix that ?\r\n> To do so you can create another branch and another PR to only include the interpress_news_category_tr_lite files.\r\n> \r\n> Maybe this happened because of a git rebase ? Once you've already pushed your code, please use git merge instead of rebase in order to avoid this.\r\n\r\nThanks for the feedback.\r\nNew PR https://github.com/huggingface/datasets/pull/1967"
] | 2021-02-25T09:45:22Z
| 2021-03-02T17:46:41Z
| 2021-03-01T18:23:21Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1944.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1944",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1944.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1944"
}
|
This PR adds the Turkish News Category Dataset (270K - Lite Version), a text classification dataset by me, @basakbuluz and @serdarakyol.
This dataset contains the same news as the current [interpress_news_category_tr dataset](https://huggingface.co/datasets/interpress_news_category_tr) but with less information; OCR errors were reduced, the texts can be easily separated, and the articles were rearranged into 10 classes ("kültürsanat", "ekonomi", "siyaset", "eğitim", "dünya", "spor", "teknoloji", "magazin", "sağlık", "gündem").
@SBrandeis @lhoestq, can you please review this PR?
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1944/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1944/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3961
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3961/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3961/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3961/events
|
https://github.com/huggingface/datasets/issues/3961
| 1,173,223,086
|
I_kwDODunzps5F7fau
| 3,961
|
Scores from Index at extra positions are not filtered out
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36671559?v=4",
"events_url": "https://api.github.com/users/vishalsrao/events{/privacy}",
"followers_url": "https://api.github.com/users/vishalsrao/followers",
"following_url": "https://api.github.com/users/vishalsrao/following{/other_user}",
"gists_url": "https://api.github.com/users/vishalsrao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vishalsrao",
"id": 36671559,
"login": "vishalsrao",
"node_id": "MDQ6VXNlcjM2NjcxNTU5",
"organizations_url": "https://api.github.com/users/vishalsrao/orgs",
"received_events_url": "https://api.github.com/users/vishalsrao/received_events",
"repos_url": "https://api.github.com/users/vishalsrao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vishalsrao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vishalsrao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vishalsrao"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Hi! Yes, that makes sense! Would you like to submit a PR to fix this?",
"Created PR https://github.com/huggingface/datasets/pull/3971"
] | 2022-03-18T06:13:23Z
| 2022-04-12T14:41:58Z
| 2022-04-12T14:41:58Z
|
CONTRIBUTOR
| null | null | null |
If a FAISS index has fewer records than the requested number of top results (k), it returns -1 in the indices for the additional positions. The `get_nearest_examples` method only filters the extra results out of the dataset samples. It would be better to filter out the extra scores too.
Reference: https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/search.py#L693
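A minimal sketch of the proposed filtering (hypothetical code, not the actual `datasets.search` implementation): drop the scores at the same positions where FAISS padded the indices with -1.
```python
# Toy FAISS output where k=4 was requested but the index holds only 2 records:
scores = [0.9, 0.4, 3.4e38, 3.4e38]
indices = [5, 2, -1, -1]

keep = [i for i, idx in enumerate(indices) if idx >= 0]
scores = [scores[i] for i in keep]    # [0.9, 0.4]
indices = [indices[i] for i in keep]  # [5, 2]
```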
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3961/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3961/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3991
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3991/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3991/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3991/events
|
https://github.com/huggingface/datasets/issues/3991
| 1,177,362,901
|
I_kwDODunzps5GLSHV
| 3,991
|
Add Lung Image Database Consortium image collection (LIDC-IDRI) dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/omarespejel",
"id": 4755430,
"login": "omarespejel",
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/omarespejel"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] |
open
| false
| null |
[] | null |
[] | 2022-03-22T22:16:05Z
| 2022-03-23T12:57:16Z
| null |
NONE
| null | null | null |
## Adding a Dataset
- **Name:** *Lung Image Database Consortium image collection (LIDC-IDRI)*
- **Description:** *Consists of diagnostic and lung cancer screening thoracic computed tomography (CT) scans with marked-up annotated lesions. It is a web-accessible international resource for development, training, and evaluation of computer-assisted diagnostic (CAD) methods for lung cancer detection and diagnosis. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process.*
- **Data:** *[link to the Github repository or current dataset location](https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI)*
- **Motivation:** *Key dataset in the healthcare community*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
FYI @osanseviero @abidlabs
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3991/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3991/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/5530
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5530/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5530/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5530/events
|
https://github.com/huggingface/datasets/pull/5530
| 1,582,938,241
|
PR_kwDODunzps5J4W_4
| 5,530
|
Add missing license in `NumpyFormatter`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008837 / 0.011353 (-0.002516) | 0.004608 / 0.011008 (-0.006400) | 0.101821 / 0.038508 (0.063312) | 0.030300 / 0.023109 (0.007191) | 0.301275 / 0.275898 (0.025377) | 0.365027 / 0.323480 (0.041547) | 0.007043 / 0.007986 (-0.000943) | 0.003493 / 0.004328 (-0.000835) | 0.078444 / 0.004250 (0.074194) | 0.036963 / 0.037052 (-0.000089) | 0.310510 / 0.258489 (0.052020) | 0.343769 / 0.293841 (0.049928) | 0.033560 / 0.128546 (-0.094986) | 0.011427 / 0.075646 (-0.064220) | 0.323542 / 0.419271 (-0.095730) | 0.043063 / 0.043533 (-0.000470) | 0.308869 / 0.255139 (0.053730) | 0.326436 / 0.283200 (0.043236) | 0.091775 / 0.141683 (-0.049908) | 1.471020 / 1.452155 (0.018865) | 1.494328 / 1.492716 (0.001612) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.009299 / 0.018006 (-0.008707) | 0.415705 / 0.000490 (0.415215) | 0.002406 / 0.000200 (0.002206) | 0.000066 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022959 / 0.037411 (-0.014452) | 0.097111 / 0.014526 (0.082585) | 0.103399 / 0.176557 (-0.073157) | 0.144385 / 0.737135 (-0.592750) | 0.109069 / 0.296338 (-0.187269) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417796 / 0.215209 (0.202587) | 4.158198 / 2.077655 (2.080543) | 1.862036 / 1.504120 (0.357916) | 1.650130 / 1.541195 (0.108936) | 1.717150 / 1.468490 
(0.248660) | 0.691704 / 4.584777 (-3.893073) | 3.328254 / 3.745712 (-0.417458) | 1.850070 / 5.269862 (-3.419792) | 1.154331 / 4.565676 (-3.411346) | 0.082199 / 0.424275 (-0.342076) | 0.012226 / 0.007607 (0.004619) | 0.522491 / 0.226044 (0.296446) | 5.244181 / 2.268929 (2.975253) | 2.286651 / 55.444624 (-53.157973) | 1.954439 / 6.876477 (-4.922038) | 1.992052 / 2.142072 (-0.150020) | 0.804779 / 4.805227 (-4.000449) | 0.147341 / 6.500664 (-6.353323) | 0.063863 / 0.075469 (-0.011606) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.270778 / 1.841788 (-0.571010) | 13.676378 / 8.074308 (5.602070) | 14.253498 / 10.191392 (4.062106) | 0.170748 / 0.680424 (-0.509676) | 0.028451 / 0.534201 (-0.505750) | 0.395034 / 0.579283 (-0.184249) | 0.407512 / 0.434364 (-0.026852) | 0.466740 / 0.540337 (-0.073598) | 0.564338 / 1.386936 (-0.822598) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006733 / 0.011353 (-0.004620) | 0.004635 / 0.011008 (-0.006373) | 0.075464 / 0.038508 (0.036956) | 0.027732 / 0.023109 (0.004623) | 0.343622 / 0.275898 (0.067724) | 0.380388 / 0.323480 (0.056908) | 0.005177 / 0.007986 (-0.002808) | 0.003435 / 0.004328 (-0.000893) | 0.074546 / 0.004250 (0.070296) | 0.039115 / 0.037052 (0.002063) | 0.342207 / 0.258489 (0.083718) | 0.390324 / 0.293841 (0.096483) | 0.031665 / 0.128546 (-0.096882) | 0.011695 / 0.075646 (-0.063951) | 0.085788 / 0.419271 (-0.333484) | 0.042423 / 0.043533 (-0.001110) | 0.340748 / 0.255139 (0.085609) | 0.372813 / 0.283200 (0.089614) | 0.092395 / 0.141683 (-0.049288) | 1.502158 / 1.452155 (0.050004) | 1.618233 / 1.492716 (0.125516) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224451 / 0.018006 (0.206444) | 0.398712 / 0.000490 (0.398222) | 0.002739 / 0.000200 (0.002539) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025393 / 0.037411 (-0.012018) | 0.100480 / 0.014526 (0.085954) | 0.106913 / 0.176557 (-0.069644) | 0.148639 / 0.737135 (-0.588496) | 0.110098 / 0.296338 (-0.186240) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.439359 / 0.215209 (0.224150) | 4.396801 / 2.077655 (2.319146) | 2.069809 / 1.504120 (0.565689) | 1.851014 / 1.541195 (0.309820) | 1.885003 / 1.468490 (0.416513) | 0.701387 / 4.584777 (-3.883390) | 3.404943 / 3.745712 (-0.340769) | 1.874506 / 5.269862 (-3.395355) | 1.174925 / 4.565676 (-3.390752) | 0.083282 / 0.424275 (-0.340993) | 0.012352 / 0.007607 (0.004745) | 0.543058 / 0.226044 (0.317013) | 5.458186 / 2.268929 (3.189258) | 2.562159 / 55.444624 (-52.882466) | 2.198810 / 6.876477 (-4.677667) | 2.238976 / 2.142072 (0.096903) | 0.810958 / 4.805227 (-3.994269) | 0.153341 / 6.500664 (-6.347323) | 0.067773 / 0.075469 (-0.007696) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.303938 / 1.841788 (-0.537850) | 14.170363 / 8.074308 (6.096055) | 13.727012 / 10.191392 (3.535620) | 0.129118 / 0.680424 (-0.551306) | 0.016746 / 0.534201 (-0.517455) | 0.382759 / 0.579283 (-0.196524) | 0.391070 / 0.434364 (-0.043294) | 0.461197 / 0.540337 (-0.079141) | 0.557641 / 1.386936 (-0.829295) |\n\n</details>\n</details>\n\n\n"
] | 2023-02-13T19:33:23Z
| 2023-02-14T14:40:41Z
| 2023-02-14T12:23:58Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5530.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5530",
"merged_at": "2023-02-14T12:23:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5530.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5530"
}
|
## What's in this PR?
As discussed with @lhoestq in https://github.com/huggingface/datasets/pull/5522, the license header for `NumpyFormatter` at `datasets/formatting/np_formatter.py` was missing, though it is present in the rest of the `formatting/*.py` files. This PR simply adds it there.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5530/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5530/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4824
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4824/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4824/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4824/events
|
https://github.com/huggingface/datasets/pull/4824
| 1,335,826,639
|
PR_kwDODunzps49BR5H
| 4,824
|
Fix titles in dataset cards
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests are caused by other missing information in the dataset cards."
] | 2022-08-11T11:27:48Z
| 2022-08-11T13:46:11Z
| 2022-08-11T12:56:49Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4824.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4824",
"merged_at": "2022-08-11T12:56:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4824.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4824"
}
|
Fix all the titles in the dataset cards, so that they conform to the required format.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4824/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4824/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6233
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6233/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6233/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6233/events
|
https://github.com/huggingface/datasets/pull/6233
| 1,891,804,286
|
PR_kwDODunzps5aF3kd
| 6,233
|
Update README.md
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/95188570?v=4",
"events_url": "https://api.github.com/users/NinoRisteski/events{/privacy}",
"followers_url": "https://api.github.com/users/NinoRisteski/followers",
"following_url": "https://api.github.com/users/NinoRisteski/following{/other_user}",
"gists_url": "https://api.github.com/users/NinoRisteski/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NinoRisteski",
"id": 95188570,
"login": "NinoRisteski",
"node_id": "U_kgDOBax2Wg",
"organizations_url": "https://api.github.com/users/NinoRisteski/orgs",
"received_events_url": "https://api.github.com/users/NinoRisteski/received_events",
"repos_url": "https://api.github.com/users/NinoRisteski/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NinoRisteski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NinoRisteski/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NinoRisteski"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008370 / 0.011353 (-0.002983) | 0.004674 / 0.011008 (-0.006334) | 0.103912 / 0.038508 (0.065404) | 0.101668 / 0.023109 (0.078559) | 0.417945 / 0.275898 (0.142047) | 0.454805 / 0.323480 (0.131325) | 0.004763 / 0.007986 (-0.003223) | 0.003934 / 0.004328 (-0.000394) | 0.078446 / 0.004250 (0.074196) | 0.068383 / 0.037052 (0.031331) | 0.415100 / 0.258489 (0.156611) | 0.475272 / 0.293841 (0.181431) | 0.036884 / 0.128546 (-0.091662) | 0.010097 / 0.075646 (-0.065549) | 0.354962 / 0.419271 (-0.064309) | 0.062688 / 0.043533 (0.019155) | 0.420643 / 0.255139 (0.165504) | 0.446504 / 0.283200 (0.163304) | 0.029075 / 0.141683 (-0.112608) | 1.791517 / 1.452155 (0.339363) | 1.859820 / 1.492716 (0.367104) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246929 / 0.018006 (0.228923) | 0.519593 / 0.000490 (0.519103) | 0.006848 / 0.000200 (0.006648) | 0.000168 / 0.000054 (0.000114) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035179 / 0.037411 (-0.002232) | 0.115582 / 0.014526 (0.101057) | 0.128235 / 0.176557 (-0.048321) | 0.187123 / 0.737135 (-0.550012) | 0.120862 / 0.296338 (-0.175477) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463406 / 0.215209 (0.248197) | 4.615517 / 2.077655 (2.537863) | 2.250513 / 1.504120 (0.746393) | 2.061226 / 1.541195 (0.520032) | 2.189938 / 1.468490 
(0.721448) | 0.582984 / 4.584777 (-4.001793) | 4.299464 / 3.745712 (0.553751) | 4.037274 / 5.269862 (-1.232588) | 2.608967 / 4.565676 (-1.956710) | 0.068944 / 0.424275 (-0.355331) | 0.009501 / 0.007607 (0.001894) | 0.567436 / 0.226044 (0.341392) | 5.662738 / 2.268929 (3.393809) | 2.849094 / 55.444624 (-52.595530) | 2.461013 / 6.876477 (-4.415464) | 2.663245 / 2.142072 (0.521172) | 0.704528 / 4.805227 (-4.100699) | 0.163583 / 6.500664 (-6.337081) | 0.075719 / 0.075469 (0.000250) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.604743 / 1.841788 (-0.237044) | 24.512054 / 8.074308 (16.437746) | 17.870939 / 10.191392 (7.679547) | 0.199188 / 0.680424 (-0.481236) | 0.023820 / 0.534201 (-0.510381) | 0.487520 / 0.579283 (-0.091763) | 0.512543 / 0.434364 (0.078179) | 0.575138 / 0.540337 (0.034801) | 0.759863 / 1.386936 (-0.627073) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010516 / 0.011353 (-0.000837) | 0.004779 / 0.011008 (-0.006229) | 0.078482 / 0.038508 (0.039974) | 0.108533 / 0.023109 (0.085424) | 0.498692 / 0.275898 (0.222794) | 0.534698 / 0.323480 (0.211218) | 0.007624 / 0.007986 (-0.000362) | 0.003938 / 0.004328 (-0.000391) | 0.077317 / 0.004250 (0.073067) | 0.078056 / 0.037052 (0.041004) | 0.493648 / 0.258489 (0.235159) | 0.540891 / 0.293841 (0.247050) | 0.040377 / 0.128546 (-0.088169) | 0.010155 / 0.075646 (-0.065491) | 0.084384 / 0.419271 (-0.334888) | 0.061419 / 0.043533 (0.017886) | 0.494474 / 0.255139 (0.239335) | 0.524656 / 0.283200 (0.241456) | 0.029052 / 0.141683 (-0.112631) | 1.794584 / 1.452155 (0.342429) | 1.939987 / 1.492716 (0.447270) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.377404 / 0.018006 (0.359398) | 0.516562 / 0.000490 (0.516072) | 0.109555 / 0.000200 (0.109356) | 0.001126 / 0.000054 (0.001071) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039793 / 0.037411 (0.002382) | 0.123001 / 0.014526 (0.108475) | 0.127536 / 0.176557 (-0.049021) | 0.191681 / 0.737135 (-0.545455) | 0.128590 / 0.296338 (-0.167748) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.513689 / 0.215209 (0.298480) | 5.135114 / 2.077655 (3.057459) | 2.797885 / 1.504120 (1.293765) | 2.715332 / 1.541195 (1.174137) | 2.746437 / 1.468490 (1.277947) | 0.596480 / 4.584777 (-3.988297) | 4.382013 / 3.745712 (0.636301) | 3.965956 / 5.269862 (-1.303906) | 2.545206 / 4.565676 (-2.020471) | 0.069620 / 0.424275 (-0.354655) | 0.009321 / 0.007607 (0.001714) | 0.612424 / 0.226044 (0.386379) | 6.107037 / 2.268929 (3.838109) | 3.447246 / 55.444624 (-51.997379) | 3.073262 / 6.876477 (-3.803215) | 3.280185 / 2.142072 (1.138113) | 0.704776 / 4.805227 (-4.100451) | 0.160488 / 6.500664 (-6.340176) | 0.075730 / 0.075469 (0.000261) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.697035 / 1.841788 (-0.144753) | 24.766118 / 8.074308 (16.691809) | 18.476699 / 10.191392 (8.285307) | 0.176594 / 0.680424 (-0.503830) | 0.024249 / 0.534201 (-0.509952) | 0.478743 / 0.579283 (-0.100541) | 0.518774 / 0.434364 (0.084410) | 0.581498 / 0.540337 (0.041161) | 0.797784 / 1.386936 (-0.589152) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-12T06:53:06Z
| 2023-09-13T18:20:50Z
| 2023-09-13T18:10:04Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6233.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6233",
"merged_at": "2023-09-13T18:10:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6233.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6233"
}
|
fixed a typo
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6233/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6233/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2343
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2343/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2343/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2343/events
|
https://github.com/huggingface/datasets/issues/2343
| 883,208,539
|
MDU6SXNzdWU4ODMyMDg1Mzk=
| 2,343
|
Columns are removed before or after map function applied?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8199406?v=4",
"events_url": "https://api.github.com/users/taghizad3h/events{/privacy}",
"followers_url": "https://api.github.com/users/taghizad3h/followers",
"following_url": "https://api.github.com/users/taghizad3h/following{/other_user}",
"gists_url": "https://api.github.com/users/taghizad3h/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/taghizad3h",
"id": 8199406,
"login": "taghizad3h",
"node_id": "MDQ6VXNlcjgxOTk0MDY=",
"organizations_url": "https://api.github.com/users/taghizad3h/orgs",
"received_events_url": "https://api.github.com/users/taghizad3h/received_events",
"repos_url": "https://api.github.com/users/taghizad3h/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/taghizad3h/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taghizad3h/subscriptions",
"type": "User",
"url": "https://api.github.com/users/taghizad3h"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] | null |
[
"Hi! Columns are removed **after** applying the function and **before** updating the examples with the function's output (as per the docs [here](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.map.remove_columns)). I agree the docs on this should be more clear."
] | 2021-05-10T02:36:20Z
| 2022-10-24T11:31:55Z
| null |
NONE
| null | null | null |
## Describe the bug
According to the documentation, when applying the map function the columns listed in [remove_columns](https://huggingface.co/docs/datasets/processing.html#removing-columns) will be removed after they are passed to the function, but in the [source code](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map) it's documented that they are removed before applying the function. I think the source code doc is more accurate, right?
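A minimal sketch (assuming a toy two-column dataset) of the behavior described in the comments: a column listed in `remove_columns` is still visible inside the mapped function and only disappears from the result:
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": [10, 20]})

def add_c(example):
    # "a" is still available here: removal happens after the function runs,
    # but before its output is merged back into the example
    return {"c": example["a"] + example["b"]}

out = ds.map(add_c, remove_columns=["a"])
print(out.column_names)  # ['b', 'c']
```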
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2343/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2343/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2752
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2752/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2752/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2752/events
|
https://github.com/huggingface/datasets/pull/2752
| 959,023,608
|
MDExOlB1bGxSZXF1ZXN0NzAyMjAxMjAy
| 2,752
|
Generate metadata JSON for lm1b dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-08-03T11:34:56Z
| 2021-08-04T06:40:40Z
| 2021-08-04T06:40:39Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2752.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2752",
"merged_at": "2021-08-04T06:40:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2752.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2752"
}
|
Related to #2743.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2752/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2752/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6202
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6202/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6202/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6202/events
|
https://github.com/huggingface/datasets/issues/6202
| 1,876,630,351
|
I_kwDODunzps5v2xtP
| 6,202
|
avoid downgrading jax version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1332458?v=4",
"events_url": "https://api.github.com/users/chrisflesher/events{/privacy}",
"followers_url": "https://api.github.com/users/chrisflesher/followers",
"following_url": "https://api.github.com/users/chrisflesher/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisflesher/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/chrisflesher",
"id": 1332458,
"login": "chrisflesher",
"node_id": "MDQ6VXNlcjEzMzI0NTg=",
"organizations_url": "https://api.github.com/users/chrisflesher/orgs",
"received_events_url": "https://api.github.com/users/chrisflesher/received_events",
"repos_url": "https://api.github.com/users/chrisflesher/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/chrisflesher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisflesher/subscriptions",
"type": "User",
"url": "https://api.github.com/users/chrisflesher"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"https://github.com/huggingface/datasets/blob/main/setup.py#L236\r\nCurrently has the highest version at 0.3.25; Not sure if there is any reason for this, other than that was the tested version?"
] | 2023-09-01T02:57:57Z
| 2023-10-12T16:28:59Z
| 2023-10-12T16:28:59Z
|
NONE
| null | null | null |
### Feature request
Whenever I `pip install datasets[jax]`, it downgrades jax to version 0.3.25. I seem to be able to install this library first and then upgrade jax back to version 0.4.13.
### Motivation
It would be nice not to overwrite the currently installed version of jax if possible.
### Your contribution
I would be willing to beta test. Or maybe write some code if I could get pointed in the right direction; I'm not super familiar with this codebase.
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6202/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6202/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5328
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5328/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5328/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5328/events
|
https://github.com/huggingface/datasets/pull/5328
| 1,471,661,437
|
PR_kwDODunzps5EFAyT
| 5,328
|
Fix docs building for main
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"EDIT\r\nAt least the docs for ~~main~~ PR branch are now built:\r\n- https://github.com/huggingface/datasets/actions/runs/3594847760/jobs/6053620813",
"Build documentation for main branch was triggered after this PR being merged: https://github.com/huggingface/datasets/actions/runs/3603370082/jobs/6071482470"
] | 2022-12-01T17:07:45Z
| 2022-12-02T16:29:00Z
| 2022-12-02T16:26:00Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5328.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5328",
"merged_at": "2022-12-02T16:26:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5328.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5328"
}
|
This PR reverts the triggering event for building documentation introduced by:
- #5250
Fix #5326.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5328/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5328/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4470
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4470/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4470/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4470/events
|
https://github.com/huggingface/datasets/pull/4470
| 1,267,470,051
|
PR_kwDODunzps45dnYw
| 4,470
|
Reorder returned validation/test splits in script template
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-10T12:21:13Z
| 2022-06-10T18:04:10Z
| 2022-06-10T17:54:50Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4470.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4470",
"merged_at": "2022-06-10T17:54:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4470.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4470"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4470/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4470/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1810
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1810/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1810/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1810/events
|
https://github.com/huggingface/datasets/issues/1810
| 799,168,650
|
MDU6SXNzdWU3OTkxNjg2NTA=
| 1,810
|
Add Hateful Memes Dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] |
open
| false
| null |
[] | null |
[
"I am not sure, but would `datasets.Sequence(datasets.Sequence(datasets.Sequence(datasets.Value(\"int\")))` work?",
"Also, I found the information for loading only subsets of the data [here](https://github.com/huggingface/datasets/blob/master/docs/source/splits.rst).",
"Hi @lhoestq,\r\n\r\nRequest you to check this once.\r\n\r\nThanks,\r\nGunjan",
"Hi @gchhablani since Array2D doesn't support images of different sizes, I would suggest to store in the dataset the paths to the image file instead of the image data. This has the advantage of not decompressing the data (images are often compressed using jpeg, png etc.). Users can still apply `.map` to load the images if they want to. Though it would en up being Sequences features.\r\n\r\nIn the future we'll add support for ragged tensors for this case and update the relevant dataset with this feature."
] | 2021-02-02T10:53:59Z
| 2021-12-08T12:03:59Z
| null |
CONTRIBUTOR
| null | null | null |
## Add Hateful Memes Dataset
- **Name:** Hateful Memes
- **Description:** [https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set]( https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set)
- **Paper:** [https://arxiv.org/pdf/2005.04790.pdf](https://arxiv.org/pdf/2005.04790.pdf)
- **Data:** [This link](https://drivendata-competition-fb-hateful-memes-data.s3.amazonaws.com/XjiOc5ycDBRRNwbhRlgH.zip?AWSAccessKeyId=AKIARVBOBDCY4MWEDJKS&Signature=DaUuGgZWUgDHzEPPbyJ2PhSJ56Q%3D&Expires=1612816874)
- **Motivation:** Including multi-modal datasets to 🤗 datasets.
I will be adding this dataset. It requires the user to sign an agreement on DrivenData. So, it will be used with a manual download.
The issue with this dataset is that the images are of different sizes. The image datasets added so far (CIFAR-10 and MNIST) have a uniform shape throughout.
So something like
```python
datasets.Array2D(shape=(28, 28), dtype="uint8")
```
won't work for the images. How would I add image features then? I checked `datasets/features.py` but couldn't figure out the appropriate class for this. I'm assuming I would want to avoid re-sizing at all since we want the user to be able to access the original images.
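For reference, a minimal sketch of a path-based workaround (the feature names below are assumptions for illustration, not a final schema): store the image file path as a string instead of a fixed-shape array, so variable-sized images don't need `Array2D`:
```python
import datasets

# hypothetical schema: keep the path, not the pixels
features = datasets.Features(
    {
        "img_path": datasets.Value("string"),
        "text": datasets.Value("string"),
        "label": datasets.ClassLabel(names=["not-hateful", "hateful"]),
    }
)
```
Users could then `.map` a PIL-based loading function over the paths when they need the decoded images.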
Also, in case I want to load only a subset of the data (since the actual data is around 8.8 GB), how would that be possible?
Thanks,
Gunjan
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1810/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1810/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/5681
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5681/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5681/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5681/events
|
https://github.com/huggingface/datasets/issues/5681
| 1,645,630,784
|
I_kwDODunzps5iFlVA
| 5,681
|
Add information about patterns search order to the doc about structuring repo
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
|
[
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
}
] | null |
[
"Good idea, I think I've seen this a couple of times before too on the forums. I can work on this :)",
"Closed in #5693 "
] | 2023-03-29T11:44:49Z
| 2023-04-03T18:31:11Z
| 2023-04-03T18:31:11Z
|
CONTRIBUTOR
| null | null | null |
Following [this](https://github.com/huggingface/datasets/issues/5650) issue, I think we should add a note about the order of patterns that is used to find splits; see [my comment](https://github.com/huggingface/datasets/issues/5650#issuecomment-1488412527). We should also reference this page in the pages about packaged loaders.
I have a déjà vu that this had already been discussed at some point, but I don't remember where....
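A quick way to inspect the resolved patterns (a sketch; `get_data_patterns` lives under `datasets.data_files` in recent releases, and the path is a placeholder):
```python
from datasets.data_files import get_data_patterns

# prints the split -> glob-pattern mapping inferred from the directory layout,
# which reflects the search order the note should document
print(get_data_patterns("/path/to/repo"))
```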
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5681/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5681/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6359
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6359/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6359/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6359/events
|
https://github.com/huggingface/datasets/issues/6359
| 1,965,378,583
|
I_kwDODunzps51JUwX
| 6,359
|
Stuck in "Resolving data files..."
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20135317?v=4",
"events_url": "https://api.github.com/users/Luciennnnnnn/events{/privacy}",
"followers_url": "https://api.github.com/users/Luciennnnnnn/followers",
"following_url": "https://api.github.com/users/Luciennnnnnn/following{/other_user}",
"gists_url": "https://api.github.com/users/Luciennnnnnn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Luciennnnnnn",
"id": 20135317,
"login": "Luciennnnnnn",
"node_id": "MDQ6VXNlcjIwMTM1MzE3",
"organizations_url": "https://api.github.com/users/Luciennnnnnn/orgs",
"received_events_url": "https://api.github.com/users/Luciennnnnnn/received_events",
"repos_url": "https://api.github.com/users/Luciennnnnnn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Luciennnnnnn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Luciennnnnnn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Luciennnnnnn"
}
|
[] |
open
| false
| null |
[] | null |
[
"Most likely, the data file inference logic is the problem here.\r\n\r\nYou can run the following code to verify this:\r\n```python\r\nimport time\r\nfrom datasets.data_files import get_data_patterns\r\nstart_time = time.time()\r\nget_data_patterns(\"/path/to/img_dir\")\r\nend_time = time.time()\r\nprint(f\"Elapsed time: {end_time - start_time:.2f}s\")\r\n```\r\n \r\nWe plan to optimize this for the next version (or version after that). In the meantime, specifying the split patterns manually should give better performance:\r\n```python\r\nds = load_dataset(\"imagefolder\", data_files={\"train\": \"path/to/img_dir/train/**\", ...}, split=\"train\")\r\n```",
"Hi, @mariosasko, you are right; data file inference logic is extremely slow.\r\n\r\nI have done a similar test, that is I modify the source code of datasets/load.py to measure the cost of two suspicious operations:\r\n```python\r\ndef get_module(self) -> DatasetModule:\r\n base_path = Path(self.data_dir or \"\").expanduser().resolve().as_posix()\r\n start = time.time()\r\n patterns = sanitize_patterns(self.data_files) if self.data_files is not None else get_data_patterns(base_path)\r\n print(f\"patterns: {time.time() - start}\")\r\n start = time.time()\r\n data_files = DataFilesDict.from_patterns(\r\n patterns,\r\n download_config=self.download_config,\r\n base_path=base_path,\r\n )\r\n print(f\"data_files: {time.time() - start}\")\r\n```\r\nIt gaves:\r\npatterns: 3062.2050700187683\r\ndata_files: 413.9576675891876\r\n\r\nThus, these two operations contribute to almost all of load time. What's going on in them?",
"Furthermore, what's my current workaround about this problem? Should I save it by `save_to_disk()` and load dataset through `load_from_disk`?"
] | 2023-10-27T12:01:51Z
| 2023-10-28T01:38:21Z
| null |
NONE
| null | null | null |
### Describe the bug
I have an image dataset with 300k images; the size of each image is 768 * 768.
When I run `dataset = load_dataset("imagefolder", data_dir="/path/to/img_dir", split='train')` for the second time, it takes 50 minutes to finish the "Resolving data files" part. What's going on in that part?
From my understanding, after the Arrow files have been created in the first run, the second run should not take longer than one or two minutes.
### Steps to reproduce the bug
```python
# Run the following code two times
dataset = load_dataset("imagefolder", data_dir="/path/to/img_dir", split='train')
```
### Expected behavior
Fast dataset building
### Environment info
- `datasets` version: 2.14.5
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- Huggingface_hub version: 0.17.3
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
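A hedged sketch of the `save_to_disk` workaround raised in the comments (paths are placeholders): resolve the image folder once, then reload the saved Arrow copy, which skips file resolution entirely:
```python
from datasets import load_dataset, load_from_disk

# first run: slow file resolution, then persist the Arrow data
ds = load_dataset("imagefolder", data_dir="/path/to/img_dir", split="train")
ds.save_to_disk("/path/to/arrow_copy")

# later runs: load the Arrow copy directly, no "Resolving data files" step
ds = load_from_disk("/path/to/arrow_copy")
```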
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6359/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6359/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2055
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2055/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2055/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2055/events
|
https://github.com/huggingface/datasets/issues/2055
| 831,684,312
|
MDU6SXNzdWU4MzE2ODQzMTI=
| 2,055
|
is there a way to override a dataset object saved with save_to_disk?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi\r\nYou can rename the arrow file and update the name in `state.json`",
"I tried this way, but when there is a mapping process to the dataset, it again uses a random cache name. atm, I am trying to use the following method by setting an exact cache file,\r\n\r\n```\r\n dataset_with_embedding =csv_dataset.map(\r\n partial(self.embed, ctx_encoder=ctx_encoder, ctx_tokenizer=self.context_tokenizer),\r\n batched=True,\r\n batch_size=1,\r\n features=new_features,\r\n cache_file_name=cache_arrow_path,\r\n load_from_cache_file=False\r\n )\r\n```\r\nSo here we set a cache_file_name , after this it uses the same file name when saving again and again. ",
"I'm not sure I understand your issue, can you elaborate ?\r\n\r\n`cache_file_name` is indeed an argument you can set to specify the cache file that will be used for the processed dataset. By default the file is named with something like `cache-<fingerprint>.arrow` where the fingerprint is a hash.",
"Let's say I am updating a set of embedding in a dataset that is around 40GB inside a training loop every 500 steps (Ex: calculating the embeddings in updated ctx_encoder in RAG and saving it to the passage path). So when we use **dataset_object.save_to_disk('passage_path_directory')** it will save the new dataset object every time with a random file name, especially when we do some transformations to dataset objects such as map or shards. This way, we keep collecting unwanted files that will eventually eat up all the disk space. \r\n\r\nBut if we can save the dataset object every time by a single name like **data_shard_1.arrow**, it will automatically remove the previous file and save the new one in the same directory. I found the above-mentioned code snippet useful to complete this task. \r\n\r\nIs this clear?"
] | 2021-03-15T10:50:53Z
| 2021-03-22T04:06:17Z
| 2021-03-22T04:06:17Z
|
NONE
| null | null | null |
At the moment, when I use save_to_disk, it uses an arbitrary name for the arrow file. Is there a way to override such an object?
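A minimal sketch of the `cache_file_name` approach worked out in the comments (the dataset, function, and path are placeholders): pinning the cache file name makes repeated runs write to the same file instead of accumulating `cache-<fingerprint>.arrow` files:
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})

def embed_fn(batch):
    # placeholder embedding; real code would run a context encoder here
    return {"embeddings": [[0.0] * 4 for _ in batch["text"]]}

ds = ds.map(
    embed_fn,
    batched=True,
    cache_file_name="/tmp/data_shard_1.arrow",  # fixed name, reused on every run
    load_from_cache_file=False,
)
```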
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2055/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2055/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/88
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/88/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/88/comments
|
https://api.github.com/repos/huggingface/datasets/issues/88/events
|
https://github.com/huggingface/datasets/pull/88
| 617,284,664
|
MDExOlB1bGxSZXF1ZXN0NDE3MjI5ODQw
| 88
|
Add wiki40b
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Looks good to me. I have not really looked too much into the Beam Datasets yet though - so I think you can merge whenever you think is good for Beam datasets :-) "
] | 2020-05-13T09:16:01Z
| 2020-05-13T12:31:55Z
| 2020-05-13T12:31:54Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/88.diff",
"html_url": "https://github.com/huggingface/datasets/pull/88",
"merged_at": "2020-05-13T12:31:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/88.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/88"
}
|
This one is a Beam dataset that downloads files using tensorflow.
I tested it on a small config and it works fine.
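For reference, loading one config of the processed dataset would look like this (the `"en"` config name is an assumption based on the dataset's per-language configs):
```python
from datasets import load_dataset

# Beam datasets like wiki40b are served as already-processed data
ds = load_dataset("wiki40b", "en", split="train")
```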
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/88/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/88/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4194
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4194/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4194/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4194/events
|
https://github.com/huggingface/datasets/pull/4194
| 1,210,958,602
|
PR_kwDODunzps42jjD3
| 4,194
|
Support lists of multi-dimensional numpy arrays
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-21T12:22:26Z
| 2022-05-12T15:16:34Z
| 2022-05-12T15:08:40Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4194.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4194",
"merged_at": "2022-05-12T15:08:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4194.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4194"
}
|
Fix #4191.
CC: @SaulLu
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4194/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4194/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5744
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5744/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5744/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5744/events
|
https://github.com/huggingface/datasets/issues/5744
| 1,667,076,620
|
I_kwDODunzps5jXZIM
| 5,744
|
[BUG] With Pandas 2.0.0, `load_dataset` raises `TypeError: read_csv() got an unexpected keyword argument 'mangle_dupe_cols'`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15572698?v=4",
"events_url": "https://api.github.com/users/keyboardAnt/events{/privacy}",
"followers_url": "https://api.github.com/users/keyboardAnt/followers",
"following_url": "https://api.github.com/users/keyboardAnt/following{/other_user}",
"gists_url": "https://api.github.com/users/keyboardAnt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/keyboardAnt",
"id": 15572698,
"login": "keyboardAnt",
"node_id": "MDQ6VXNlcjE1NTcyNjk4",
"organizations_url": "https://api.github.com/users/keyboardAnt/orgs",
"received_events_url": "https://api.github.com/users/keyboardAnt/received_events",
"repos_url": "https://api.github.com/users/keyboardAnt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/keyboardAnt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keyboardAnt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/keyboardAnt"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"Thanks for reporting, @keyboardAnt.\r\n\r\nWe haven't noticed any crash in our CI tests. Could you please indicate specifically the `load_dataset` command that crashes in your side, so that we can reproduce it?",
"This has been fixed in `datasets` 2.11"
] | 2023-04-13T20:21:28Z
| 2023-07-06T17:01:59Z
| 2023-07-06T17:01:59Z
|
NONE
| null | null | null |
The `load_dataset` function with Pandas `1.5.3` has no issue (just a FutureWarning) but crashes with Pandas `2.0.0`.
For your convenience, I opened a draft Pull Request to fix it quickly: https://github.com/huggingface/datasets/pull/5745
---
* The FutureWarning mentioned above:
```
FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols'
```
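A sketch of one compatibility approach consistent with the report (not necessarily the linked PR's exact fix): only forward `mangle_dupe_cols` to `read_csv` on pandas versions where the keyword still exists:
```python
from packaging import version

import pandas as pd

read_csv_kwargs = {}
if version.parse(pd.__version__) < version.parse("2.0.0"):
    # deprecated in pandas 1.x, removed in 2.0
    read_csv_kwargs["mangle_dupe_cols"] = True

df = pd.read_csv("data.csv", **read_csv_kwargs)  # "data.csv" is a placeholder
```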
|
{
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5744/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5744/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4431
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4431/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4431/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4431/events
|
https://github.com/huggingface/datasets/pull/4431
| 1,254,618,948
|
PR_kwDODunzps44x5aG
| 4,431
|
Add personaldialog datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/silverriver",
"id": 2529049,
"login": "silverriver",
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"repos_url": "https://api.github.com/users/silverriver/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"type": "User",
"url": "https://api.github.com/users/silverriver"
}
|
[] |
closed
| false
| null |
[] | null |
[
"These test errors are related to issue #4428 \r\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"I only made a trivial modification in my commit https://github.com/huggingface/datasets/pull/4431/commits/402c893d35224d7828176717233909ac5f1e7b3e\r\n\r\nI have submitted a PR #4434 for the about issue.",
"> Awesome thanks for adding this dataset :)\r\n> \r\n> I just have one comment about the licensing.\r\n> \r\n> Also it seems that you already have the dataset in https://huggingface.co/datasets/silver/personal_dialog, so it's unnecessary to add it here\r\n\r\nThank you very much for your comment.\r\n\r\nSo, should I close this PR?",
"Thanks for fixing the licensing section :)\r\n\r\n> So, should I close this PR?\r\n\r\nYes you can close this PR, it's better if your dataset is under your namespace at https://huggingface.co/datasets/silver/personal_dialog :)\r\n\r\nDon't forget to update the licensing section on https://huggingface.co/datasets/silver/personal_dialog as well"
] | 2022-06-01T01:20:40Z
| 2022-06-11T12:40:23Z
| 2022-06-11T12:31:16Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4431.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4431",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4431.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4431"
}
|
It seems that all tests have passed.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4431/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4431/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/756
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/756/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/756/comments
|
https://api.github.com/repos/huggingface/datasets/issues/756/events
|
https://github.com/huggingface/datasets/pull/756
| 728,211,373
|
MDExOlB1bGxSZXF1ZXN0NTA4OTYwNTc3
| 756
|
Start community-provided dataset docs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Oh, really cool @sshleifer!"
] | 2020-10-23T13:17:41Z
| 2020-10-26T12:55:20Z
| 2020-10-26T12:55:19Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/756.diff",
"html_url": "https://github.com/huggingface/datasets/pull/756",
"merged_at": "2020-10-26T12:55:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/756.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/756"
}
|
Continuation of #736 with clean fork.
#### Old description
This is what I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. It was pretty easy because all I had to do was make properly formatted directories and change URLs.
In Slack, @thomwolf called it a user-namespace dataset, but the docs call it a community dataset.
I think the first naming is clearer, but I didn't address that here.
I didn't add metadata; I will try that.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/756/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/756/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2815
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2815/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2815/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2815/events
|
https://github.com/huggingface/datasets/pull/2815
| 973,862,024
|
MDExOlB1bGxSZXF1ZXN0NzE1MjUxNDQ5
| 2,815
|
Tiny typo fixes of "fo" -> "of"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9934829?v=4",
"events_url": "https://api.github.com/users/aronszanto/events{/privacy}",
"followers_url": "https://api.github.com/users/aronszanto/followers",
"following_url": "https://api.github.com/users/aronszanto/following{/other_user}",
"gists_url": "https://api.github.com/users/aronszanto/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aronszanto",
"id": 9934829,
"login": "aronszanto",
"node_id": "MDQ6VXNlcjk5MzQ4Mjk=",
"organizations_url": "https://api.github.com/users/aronszanto/orgs",
"received_events_url": "https://api.github.com/users/aronszanto/received_events",
"repos_url": "https://api.github.com/users/aronszanto/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aronszanto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aronszanto/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aronszanto"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-08-18T16:36:11Z
| 2021-08-19T08:03:02Z
| 2021-08-19T08:03:02Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2815.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2815",
"merged_at": "2021-08-19T08:03:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2815.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2815"
}
|
Noticed a few of these when reading the docs; feel free to ignore the PR and just fix them on some main contributor branch if that's more helpful. Thanks for the great library! :)
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2815/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2815/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4644
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4644/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4644/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4644/events
|
https://github.com/huggingface/datasets/pull/4644
| 1,296,018,052
|
PR_kwDODunzps468mQb
| 4,644
|
[Minor fix] Typo correction
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cakiki",
"id": 3664563,
"login": "cakiki",
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"repos_url": "https://api.github.com/users/cakiki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cakiki"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-06T15:37:02Z
| 2022-07-06T15:56:32Z
| 2022-07-06T15:45:16Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4644.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4644",
"merged_at": "2022-07-06T15:45:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4644.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4644"
}
|
recieve -> receive
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4644/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4644/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1393
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1393/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1393/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1393/events
|
https://github.com/huggingface/datasets/pull/1393
| 760,436,267
|
MDExOlB1bGxSZXF1ZXN0NTM1MjY4MjUx
| 1,393
|
Add script_version suggestion when dataset/metric not found
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/joeddav",
"id": 9353833,
"login": "joeddav",
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"repos_url": "https://api.github.com/users/joeddav/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"type": "User",
"url": "https://api.github.com/users/joeddav"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-09T15:37:38Z
| 2020-12-10T18:17:05Z
| 2020-12-10T18:17:05Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1393.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1393",
"merged_at": "2020-12-10T18:17:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1393.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1393"
}
|
Adds a helpful prompt to the error message when a dataset/metric is not found, suggesting the user might need to pass `script_version="master"` if the dataset was added recently. The whole error looks like:
> Couldn't find file locally at blah/blah.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1/metrics/blah/blah.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/metrics/blah/blah.py.
> If the dataset was added recently, you may need to pass script_version="master" to find the loading script on the master branch.
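Usage of the hint the message points to (the dataset name is a placeholder; newer releases of `datasets` call this parameter `revision`):
```python
from datasets import load_dataset

# pick up a loading script that only exists on the master branch so far
dataset = load_dataset("blah", script_version="master")
```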
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1393/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1393/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6426
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6426/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6426/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6426/events
|
https://github.com/huggingface/datasets/pull/6426
| 1,995,363,264
|
PR_kwDODunzps5fjOEK
| 6,426
|
More robust temporary directory deletion
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6426). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004750 / 0.011353 (-0.006603) | 0.002928 / 0.011008 (-0.008080) | 0.061962 / 0.038508 (0.023454) | 0.029878 / 0.023109 (0.006768) | 0.233380 / 0.275898 (-0.042518) | 0.262221 / 0.323480 (-0.061259) | 0.002982 / 0.007986 (-0.005004) | 0.003698 / 0.004328 (-0.000630) | 0.048565 / 0.004250 (0.044314) | 0.046107 / 0.037052 (0.009055) | 0.240090 / 0.258489 (-0.018399) | 0.267294 / 0.293841 (-0.026547) | 0.023335 / 0.128546 (-0.105211) | 0.007221 / 0.075646 (-0.068425) | 0.200903 / 0.419271 (-0.218369) | 0.059237 / 0.043533 (0.015705) | 0.234929 / 0.255139 (-0.020210) | 0.256326 / 0.283200 (-0.026874) | 0.018549 / 0.141683 (-0.123134) | 1.103519 / 1.452155 (-0.348635) | 1.156573 / 1.492716 (-0.336143) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091205 / 0.018006 (0.073199) | 0.303533 / 0.000490 (0.303043) | 0.000204 / 0.000200 (0.000004) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018572 / 0.037411 (-0.018839) | 0.062323 / 0.014526 (0.047797) | 0.074528 / 0.176557 (-0.102029) | 0.120295 / 0.737135 (-0.616841) | 0.076786 / 0.296338 (-0.219552) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278814 / 0.215209 (0.063605) | 2.745483 / 2.077655 (0.667829) | 1.486073 / 1.504120 (-0.018047) | 1.385334 / 1.541195 (-0.155861) | 1.386351 / 
1.468490 (-0.082139) | 0.395545 / 4.584777 (-4.189232) | 2.409468 / 3.745712 (-1.336244) | 2.670702 / 5.269862 (-2.599159) | 1.629245 / 4.565676 (-2.936432) | 0.045990 / 0.424275 (-0.378286) | 0.004782 / 0.007607 (-0.002825) | 0.332912 / 0.226044 (0.106867) | 3.249277 / 2.268929 (0.980349) | 1.888690 / 55.444624 (-53.555934) | 1.533462 / 6.876477 (-5.343015) | 1.576045 / 2.142072 (-0.566027) | 0.473090 / 4.805227 (-4.332138) | 0.099448 / 6.500664 (-6.401216) | 0.042613 / 0.075469 (-0.032857) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.944229 / 1.841788 (-0.897559) | 12.103621 / 8.074308 (4.029313) | 10.643471 / 10.191392 (0.452079) | 0.143004 / 0.680424 (-0.537420) | 0.013872 / 0.534201 (-0.520329) | 0.272026 / 0.579283 (-0.307257) | 0.298701 / 0.434364 (-0.135663) | 0.310299 / 0.540337 (-0.230038) | 0.420934 / 1.386936 (-0.966002) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004904 / 0.011353 (-0.006449) | 0.003064 / 0.011008 (-0.007945) | 0.047982 / 0.038508 (0.009474) | 0.056354 / 0.023109 (0.033245) | 0.292893 / 0.275898 (0.016995) | 0.348744 / 0.323480 (0.025264) | 0.003988 / 0.007986 (-0.003997) | 0.002431 / 0.004328 (-0.001898) | 0.049108 / 0.004250 (0.044857) | 0.039055 / 0.037052 (0.002002) | 0.278129 / 0.258489 (0.019640) | 0.318547 / 0.293841 (0.024706) | 0.025040 / 0.128546 (-0.103507) | 0.007166 / 0.075646 (-0.068480) | 0.053967 / 0.419271 (-0.365305) | 0.033128 / 0.043533 (-0.010405) | 0.272849 / 0.255139 (0.017710) | 0.312143 / 0.283200 (0.028943) | 0.017942 / 0.141683 (-0.123741) | 1.192297 / 1.452155 (-0.259857) | 1.328102 / 1.492716 (-0.164615) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090903 / 0.018006 (0.072896) | 0.301260 / 0.000490 (0.300770) | 0.000215 / 0.000200 (0.000015) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021112 / 0.037411 (-0.016300) | 0.070181 / 0.014526 (0.055656) | 0.082431 / 0.176557 (-0.094126) | 0.121973 / 0.737135 (-0.615163) | 0.083617 / 0.296338 (-0.212721) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289587 / 0.215209 (0.074378) | 2.877895 / 2.077655 (0.800240) | 1.721417 / 1.504120 (0.217297) | 1.536023 / 1.541195 (-0.005171) | 1.550917 / 1.468490 (0.082427) | 0.402978 / 4.584777 (-4.181799) | 2.431767 / 3.745712 (-1.313946) | 2.544419 / 5.269862 (-2.725442) | 1.554562 / 4.565676 (-3.011115) | 0.046260 / 0.424275 (-0.378015) | 0.004923 / 0.007607 (-0.002684) | 0.341584 / 0.226044 (0.115540) | 3.362133 / 2.268929 (1.093205) | 1.928741 / 55.444624 (-53.515884) | 1.654798 / 6.876477 (-5.221679) | 1.715111 / 2.142072 (-0.426962) | 0.471029 / 4.805227 (-4.334198) | 0.098912 / 6.500664 (-6.401752) | 0.041018 / 0.075469 (-0.034451) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.992880 / 1.841788 (-0.848907) | 12.083890 / 8.074308 (4.009582) | 11.023833 / 10.191392 (0.832441) | 0.139217 / 0.680424 (-0.541207) | 0.015183 / 0.534201 (-0.519018) | 0.271637 / 0.579283 (-0.307646) | 0.278910 / 0.434364 (-0.155454) | 0.306891 / 0.540337 (-0.233447) | 0.424412 / 1.386936 (-0.962524) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004545 / 0.011353 (-0.006808) | 0.002955 / 0.011008 (-0.008054) | 0.062119 / 0.038508 (0.023611) | 0.029357 / 0.023109 (0.006248) | 0.240068 / 0.275898 (-0.035830) | 0.273376 / 0.323480 (-0.050104) | 0.003884 / 0.007986 (-0.004102) | 0.002390 / 0.004328 (-0.001938) | 0.048621 / 0.004250 (0.044371) | 0.043867 / 0.037052 (0.006815) | 0.247240 / 0.258489 (-0.011249) | 0.279187 / 0.293841 (-0.014654) | 0.023377 / 0.128546 (-0.105169) | 0.007261 / 0.075646 (-0.068385) | 0.201913 / 0.419271 (-0.217359) | 0.057063 / 0.043533 (0.013530) | 0.245698 / 0.255139 (-0.009441) | 0.265644 / 0.283200 (-0.017556) | 0.018077 / 0.141683 (-0.123606) | 1.133225 / 1.452155 (-0.318930) | 1.186380 / 1.492716 (-0.306336) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089639 / 0.018006 (0.071632) | 0.298918 / 0.000490 (0.298428) | 0.000198 / 0.000200 (-0.000002) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019037 / 0.037411 (-0.018374) | 0.062580 / 0.014526 (0.048055) | 0.072974 / 0.176557 (-0.103582) | 0.119909 / 0.737135 (-0.617226) | 0.075021 / 0.296338 (-0.221317) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276561 / 0.215209 (0.061352) | 2.697281 / 2.077655 (0.619626) | 1.419772 / 1.504120 (-0.084348) | 1.302079 / 1.541195 (-0.239115) | 1.329143 / 
1.468490 (-0.139347) | 0.395528 / 4.584777 (-4.189249) | 2.365788 / 3.745712 (-1.379925) | 2.583802 / 5.269862 (-2.686059) | 1.561983 / 4.565676 (-3.003694) | 0.045269 / 0.424275 (-0.379006) | 0.004826 / 0.007607 (-0.002781) | 0.331041 / 0.226044 (0.104996) | 3.292523 / 2.268929 (1.023595) | 1.797865 / 55.444624 (-53.646759) | 1.509229 / 6.876477 (-5.367248) | 1.498884 / 2.142072 (-0.643188) | 0.458518 / 4.805227 (-4.346709) | 0.098076 / 6.500664 (-6.402588) | 0.042290 / 0.075469 (-0.033179) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.922331 / 1.841788 (-0.919457) | 11.605041 / 8.074308 (3.530732) | 10.471664 / 10.191392 (0.280272) | 0.130325 / 0.680424 (-0.550098) | 0.014084 / 0.534201 (-0.520117) | 0.278877 / 0.579283 (-0.300406) | 0.263104 / 0.434364 (-0.171259) | 0.306723 / 0.540337 (-0.233615) | 0.416238 / 1.386936 (-0.970698) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005094 / 0.011353 (-0.006259) | 0.002794 / 0.011008 (-0.008214) | 0.048189 / 0.038508 (0.009680) | 0.050409 / 0.023109 (0.027300) | 0.272618 / 0.275898 (-0.003280) | 0.293589 / 0.323480 (-0.029891) | 0.003995 / 0.007986 (-0.003991) | 0.002373 / 0.004328 (-0.001956) | 0.048269 / 0.004250 (0.044018) | 0.038751 / 0.037052 (0.001698) | 0.273495 / 0.258489 (0.015006) | 0.309244 / 0.293841 (0.015403) | 0.024681 / 0.128546 (-0.103866) | 0.007390 / 0.075646 (-0.068256) | 0.053844 / 0.419271 (-0.365427) | 0.032395 / 0.043533 (-0.011137) | 0.271963 / 0.255139 (0.016824) | 0.289557 / 0.283200 (0.006357) | 0.018659 / 0.141683 (-0.123024) | 1.154478 / 1.452155 (-0.297676) | 1.199772 / 1.492716 (-0.292944) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089771 / 0.018006 (0.071764) | 0.299468 / 0.000490 (0.298978) | 0.000219 / 0.000200 (0.000020) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021854 / 0.037411 (-0.015558) | 0.070280 / 0.014526 (0.055754) | 0.080956 / 0.176557 (-0.095600) | 0.119430 / 0.737135 (-0.617705) | 0.082778 / 0.296338 (-0.213561) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.304273 / 0.215209 (0.089064) | 2.968264 / 2.077655 (0.890609) | 1.592363 / 1.504120 (0.088243) | 1.460795 / 1.541195 (-0.080400) | 1.501545 / 1.468490 (0.033055) | 0.411001 / 4.584777 (-4.173776) | 2.464273 / 3.745712 (-1.281439) | 2.524585 / 5.269862 (-2.745277) | 1.537443 / 4.565676 (-3.028234) | 0.046163 / 0.424275 (-0.378112) | 0.004783 / 0.007607 (-0.002824) | 0.354251 / 0.226044 (0.128206) | 3.512087 / 2.268929 (1.243158) | 1.968156 / 55.444624 (-53.476468) | 1.664966 / 6.876477 (-5.211510) | 1.685013 / 2.142072 (-0.457060) | 0.485793 / 4.805227 (-4.319435) | 0.099789 / 6.500664 (-6.400875) | 0.040705 / 0.075469 (-0.034764) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.966570 / 1.841788 (-0.875218) | 12.023188 / 8.074308 (3.948880) | 11.122602 / 10.191392 (0.931210) | 0.141002 / 0.680424 (-0.539422) | 0.015955 / 0.534201 (-0.518246) | 0.270293 / 0.579283 (-0.308990) | 0.281839 / 0.434364 (-0.152525) | 0.307279 / 0.540337 (-0.233058) | 0.434687 / 1.386936 (-0.952249) |\n\n</details>\n</details>\n\n\n",
"What would be the impact for non-windows users ?\r\n\r\nAlso I wonder if a gc.collect() after the `del` could help to remove the PermissionError ? Or register the dataset for deletion on copy/pickle maybe ?",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004973 / 0.011353 (-0.006380) | 0.002753 / 0.011008 (-0.008256) | 0.061489 / 0.038508 (0.022981) | 0.051122 / 0.023109 (0.028012) | 0.228783 / 0.275898 (-0.047115) | 0.256982 / 0.323480 (-0.066498) | 0.002873 / 0.007986 (-0.005112) | 0.003544 / 0.004328 (-0.000784) | 0.048721 / 0.004250 (0.044471) | 0.039137 / 0.037052 (0.002085) | 0.244988 / 0.258489 (-0.013501) | 0.275230 / 0.293841 (-0.018611) | 0.023034 / 0.128546 (-0.105513) | 0.006988 / 0.075646 (-0.068658) | 0.202780 / 0.419271 (-0.216492) | 0.035325 / 0.043533 (-0.008207) | 0.241722 / 0.255139 (-0.013417) | 0.259671 / 0.283200 (-0.023528) | 0.019875 / 0.141683 (-0.121808) | 1.098667 / 1.452155 (-0.353488) | 1.161444 / 1.492716 (-0.331272) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093591 / 0.018006 (0.075585) | 0.298703 / 0.000490 (0.298213) | 0.000219 / 0.000200 (0.000019) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018319 / 0.037411 (-0.019092) | 0.062993 / 0.014526 (0.048467) | 0.074313 / 0.176557 (-0.102244) | 0.123089 / 0.737135 (-0.614046) | 0.075177 / 0.296338 (-0.221162) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.268584 / 0.215209 (0.053375) | 2.633116 / 2.077655 (0.555461) | 1.390743 / 1.504120 (-0.113377) | 1.277385 / 1.541195 (-0.263810) | 1.287934 / 
1.468490 (-0.180556) | 0.387934 / 4.584777 (-4.196843) | 2.345819 / 3.745712 (-1.399893) | 2.558169 / 5.269862 (-2.711693) | 1.569812 / 4.565676 (-2.995865) | 0.045297 / 0.424275 (-0.378978) | 0.005238 / 0.007607 (-0.002369) | 0.359704 / 0.226044 (0.133659) | 3.204688 / 2.268929 (0.935759) | 1.753321 / 55.444624 (-53.691303) | 1.492223 / 6.876477 (-5.384254) | 1.498207 / 2.142072 (-0.643865) | 0.459830 / 4.805227 (-4.345397) | 0.098194 / 6.500664 (-6.402470) | 0.042632 / 0.075469 (-0.032837) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963020 / 1.841788 (-0.878768) | 11.500470 / 8.074308 (3.426161) | 10.451882 / 10.191392 (0.260490) | 0.127706 / 0.680424 (-0.552718) | 0.014084 / 0.534201 (-0.520117) | 0.269728 / 0.579283 (-0.309555) | 0.260283 / 0.434364 (-0.174080) | 0.303717 / 0.540337 (-0.236620) | 0.397028 / 1.386936 (-0.989908) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004823 / 0.011353 (-0.006529) | 0.002751 / 0.011008 (-0.008257) | 0.048719 / 0.038508 (0.010211) | 0.051409 / 0.023109 (0.028300) | 0.267139 / 0.275898 (-0.008759) | 0.287659 / 0.323480 (-0.035821) | 0.003959 / 0.007986 (-0.004027) | 0.002376 / 0.004328 (-0.001953) | 0.047942 / 0.004250 (0.043692) | 0.039742 / 0.037052 (0.002690) | 0.268348 / 0.258489 (0.009859) | 0.297201 / 0.293841 (0.003360) | 0.024226 / 0.128546 (-0.104320) | 0.007103 / 0.075646 (-0.068544) | 0.053310 / 0.419271 (-0.365961) | 0.032716 / 0.043533 (-0.010816) | 0.269469 / 0.255139 (0.014330) | 0.287752 / 0.283200 (0.004553) | 0.018191 / 0.141683 (-0.123492) | 1.114086 / 1.452155 (-0.338069) | 1.188054 / 1.492716 (-0.304662) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091072 / 0.018006 (0.073066) | 0.300367 / 0.000490 (0.299877) | 0.000218 / 0.000200 (0.000018) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020970 / 0.037411 (-0.016441) | 0.070356 / 0.014526 (0.055830) | 0.081339 / 0.176557 (-0.095218) | 0.120741 / 0.737135 (-0.616394) | 0.081677 / 0.296338 (-0.214662) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290405 / 0.215209 (0.075196) | 2.863877 / 2.077655 (0.786222) | 1.524603 / 1.504120 (0.020483) | 1.397917 / 1.541195 (-0.143278) | 1.402635 / 1.468490 (-0.065855) | 0.405525 / 4.584777 (-4.179252) | 2.432474 / 3.745712 (-1.313239) | 2.446277 / 5.269862 (-2.823585) | 1.550300 / 4.565676 (-3.015377) | 0.046545 / 0.424275 (-0.377730) | 0.004824 / 0.007607 (-0.002783) | 0.343578 / 0.226044 (0.117534) | 3.436850 / 2.268929 (1.167922) | 1.897200 / 55.444624 (-53.547425) | 1.625222 / 6.876477 (-5.251255) | 1.730488 / 2.142072 (-0.411585) | 0.482099 / 4.805227 (-4.323129) | 0.097828 / 6.500664 (-6.402836) | 0.040385 / 0.075469 (-0.035084) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.950975 / 1.841788 (-0.890812) | 11.875024 / 8.074308 (3.800715) | 10.430301 / 10.191392 (0.238909) | 0.130546 / 0.680424 (-0.549878) | 0.015423 / 0.534201 (-0.518778) | 0.269592 / 0.579283 (-0.309691) | 0.282505 / 0.434364 (-0.151859) | 0.305567 / 0.540337 (-0.234771) | 0.522142 / 1.386936 (-0.864794) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004983 / 0.011353 (-0.006369) | 0.003346 / 0.011008 (-0.007662) | 0.062233 / 0.038508 (0.023725) | 0.050246 / 0.023109 (0.027137) | 0.305738 / 0.275898 (0.029839) | 0.321863 / 0.323480 (-0.001617) | 0.003870 / 0.007986 (-0.004116) | 0.002610 / 0.004328 (-0.001718) | 0.047734 / 0.004250 (0.043483) | 0.037611 / 0.037052 (0.000559) | 0.299121 / 0.258489 (0.040632) | 0.327370 / 0.293841 (0.033529) | 0.027009 / 0.128546 (-0.101537) | 0.010816 / 0.075646 (-0.064830) | 0.204627 / 0.419271 (-0.214645) | 0.035708 / 0.043533 (-0.007825) | 0.291837 / 0.255139 (0.036698) | 0.313646 / 0.283200 (0.030447) | 0.017277 / 0.141683 (-0.124405) | 1.097907 / 1.452155 (-0.354248) | 1.163203 / 1.492716 (-0.329513) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091933 / 0.018006 (0.073926) | 0.298787 / 0.000490 (0.298297) | 0.000204 / 0.000200 (0.000004) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018349 / 0.037411 (-0.019062) | 0.061520 / 0.014526 (0.046994) | 0.073159 / 0.176557 (-0.103397) | 0.118657 / 0.737135 (-0.618478) | 0.073601 / 0.296338 (-0.222737) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276297 / 0.215209 (0.061088) | 2.725668 / 2.077655 (0.648013) | 1.458079 / 1.504120 (-0.046041) | 1.331236 / 1.541195 (-0.209959) | 1.347919 / 
1.468490 (-0.120571) | 0.565954 / 4.584777 (-4.018823) | 2.380883 / 3.745712 (-1.364829) | 2.800533 / 5.269862 (-2.469329) | 1.740534 / 4.565676 (-2.825142) | 0.065617 / 0.424275 (-0.358658) | 0.004907 / 0.007607 (-0.002700) | 0.335973 / 0.226044 (0.109929) | 3.337405 / 2.268929 (1.068476) | 1.819852 / 55.444624 (-53.624772) | 1.542724 / 6.876477 (-5.333752) | 1.509508 / 2.142072 (-0.632565) | 0.648618 / 4.805227 (-4.156609) | 0.116812 / 6.500664 (-6.383852) | 0.041561 / 0.075469 (-0.033909) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.943488 / 1.841788 (-0.898299) | 11.184770 / 8.074308 (3.110462) | 10.406311 / 10.191392 (0.214919) | 0.129841 / 0.680424 (-0.550583) | 0.013736 / 0.534201 (-0.520465) | 0.287281 / 0.579283 (-0.292002) | 0.267403 / 0.434364 (-0.166961) | 0.325319 / 0.540337 (-0.215019) | 0.454207 / 1.386936 (-0.932729) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005169 / 0.011353 (-0.006183) | 0.003155 / 0.011008 (-0.007854) | 0.048101 / 0.038508 (0.009593) | 0.048726 / 0.023109 (0.025617) | 0.275768 / 0.275898 (-0.000130) | 0.291209 / 0.323480 (-0.032271) | 0.003984 / 0.007986 (-0.004001) | 0.002586 / 0.004328 (-0.001742) | 0.047751 / 0.004250 (0.043500) | 0.040176 / 0.037052 (0.003124) | 0.279161 / 0.258489 (0.020672) | 0.297371 / 0.293841 (0.003530) | 0.028502 / 0.128546 (-0.100044) | 0.010103 / 0.075646 (-0.065544) | 0.056920 / 0.419271 (-0.362351) | 0.032174 / 0.043533 (-0.011359) | 0.271925 / 0.255139 (0.016786) | 0.289572 / 0.283200 (0.006372) | 0.017981 / 0.141683 (-0.123702) | 1.192972 / 1.452155 (-0.259183) | 1.223231 / 1.492716 (-0.269485) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091363 / 0.018006 (0.073356) | 0.298106 / 0.000490 (0.297616) | 0.000216 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021509 / 0.037411 (-0.015902) | 0.068377 / 0.014526 (0.053851) | 0.079798 / 0.176557 (-0.096759) | 0.120546 / 0.737135 (-0.616589) | 0.080602 / 0.296338 (-0.215737) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300809 / 0.215209 (0.085600) | 2.921144 / 2.077655 (0.843489) | 1.621096 / 1.504120 (0.116976) | 1.504265 / 1.541195 (-0.036930) | 1.508050 / 1.468490 (0.039560) | 0.554291 / 4.584777 (-4.030486) | 2.418798 / 3.745712 (-1.326914) | 2.768088 / 5.269862 (-2.501773) | 1.728267 / 4.565676 (-2.837410) | 0.062943 / 0.424275 (-0.361332) | 0.004891 / 0.007607 (-0.002716) | 0.350298 / 0.226044 (0.124254) | 3.442782 / 2.268929 (1.173853) | 1.960163 / 55.444624 (-53.484461) | 1.682000 / 6.876477 (-5.194477) | 1.680311 / 2.142072 (-0.461761) | 0.631201 / 4.805227 (-4.174026) | 0.115211 / 6.500664 (-6.385453) | 0.041279 / 0.075469 (-0.034190) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962478 / 1.841788 (-0.879310) | 11.671463 / 8.074308 (3.597155) | 10.640129 / 10.191392 (0.448737) | 0.130649 / 0.680424 (-0.549775) | 0.016169 / 0.534201 (-0.518032) | 0.286894 / 0.579283 (-0.292389) | 0.269319 / 0.434364 (-0.165045) | 0.324512 / 0.540337 (-0.215825) | 0.550874 / 1.386936 (-0.836062) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005078 / 0.011353 (-0.006275) | 0.003950 / 0.011008 (-0.007058) | 0.063345 / 0.038508 (0.024837) | 0.054486 / 0.023109 (0.031377) | 0.243213 / 0.275898 (-0.032685) | 0.264079 / 0.323480 (-0.059401) | 0.003922 / 0.007986 (-0.004064) | 0.002631 / 0.004328 (-0.001698) | 0.048660 / 0.004250 (0.044409) | 0.037205 / 0.037052 (0.000153) | 0.244577 / 0.258489 (-0.013912) | 0.276025 / 0.293841 (-0.017816) | 0.027134 / 0.128546 (-0.101412) | 0.010921 / 0.075646 (-0.064726) | 0.209792 / 0.419271 (-0.209479) | 0.035999 / 0.043533 (-0.007534) | 0.245671 / 0.255139 (-0.009468) | 0.262807 / 0.283200 (-0.020393) | 0.018173 / 0.141683 (-0.123510) | 1.084417 / 1.452155 (-0.367738) | 1.148284 / 1.492716 (-0.344432) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093128 / 0.018006 (0.075122) | 0.301606 / 0.000490 (0.301117) | 0.000221 / 0.000200 (0.000021) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018718 / 0.037411 (-0.018693) | 0.060819 / 0.014526 (0.046293) | 0.073050 / 0.176557 (-0.103507) | 0.120043 / 0.737135 (-0.617092) | 0.075374 / 0.296338 (-0.220965) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291080 / 0.215209 (0.075871) | 2.808802 / 2.077655 (0.731148) | 1.485686 / 1.504120 (-0.018434) | 1.354356 / 1.541195 (-0.186839) | 1.347863 / 
1.468490 (-0.120627) | 0.571501 / 4.584777 (-4.013276) | 2.377960 / 3.745712 (-1.367752) | 2.768023 / 5.269862 (-2.501839) | 1.754360 / 4.565676 (-2.811316) | 0.063115 / 0.424275 (-0.361160) | 0.004941 / 0.007607 (-0.002666) | 0.338281 / 0.226044 (0.112237) | 3.340587 / 2.268929 (1.071658) | 1.849479 / 55.444624 (-53.595145) | 1.551846 / 6.876477 (-5.324631) | 1.539090 / 2.142072 (-0.602983) | 0.644522 / 4.805227 (-4.160705) | 0.117398 / 6.500664 (-6.383266) | 0.042239 / 0.075469 (-0.033230) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.949496 / 1.841788 (-0.892291) | 11.548352 / 8.074308 (3.474044) | 10.478065 / 10.191392 (0.286673) | 0.129534 / 0.680424 (-0.550890) | 0.015378 / 0.534201 (-0.518822) | 0.287221 / 0.579283 (-0.292062) | 0.262944 / 0.434364 (-0.171419) | 0.321727 / 0.540337 (-0.218611) | 0.432354 / 1.386936 (-0.954582) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005256 / 0.011353 (-0.006097) | 0.003491 / 0.011008 (-0.007517) | 0.048647 / 0.038508 (0.010139) | 0.054011 / 0.023109 (0.030901) | 0.271786 / 0.275898 (-0.004112) | 0.291964 / 0.323480 (-0.031516) | 0.004035 / 0.007986 (-0.003950) | 0.002671 / 0.004328 (-0.001657) | 0.048108 / 0.004250 (0.043857) | 0.040421 / 0.037052 (0.003368) | 0.278594 / 0.258489 (0.020105) | 0.300707 / 0.293841 (0.006867) | 0.028924 / 0.128546 (-0.099623) | 0.010600 / 0.075646 (-0.065047) | 0.057649 / 0.419271 (-0.361623) | 0.034221 / 0.043533 (-0.009312) | 0.276692 / 0.255139 (0.021553) | 0.293545 / 0.283200 (0.010345) | 0.017908 / 0.141683 (-0.123775) | 1.135108 / 1.452155 (-0.317047) | 1.190823 / 1.492716 (-0.301893) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095243 / 0.018006 (0.077237) | 0.301885 / 0.000490 (0.301396) | 0.000235 / 0.000200 (0.000035) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021561 / 0.037411 (-0.015850) | 0.069054 / 0.014526 (0.054529) | 0.080466 / 0.176557 (-0.096091) | 0.121323 / 0.737135 (-0.615812) | 0.081891 / 0.296338 (-0.214448) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293957 / 0.215209 (0.078748) | 2.869035 / 2.077655 (0.791380) | 1.608837 / 1.504120 (0.104717) | 1.440594 / 1.541195 (-0.100601) | 1.464775 / 1.468490 (-0.003715) | 0.565663 / 4.584777 (-4.019114) | 2.439456 / 3.745712 (-1.306256) | 2.794775 / 5.269862 (-2.475087) | 1.750026 / 4.565676 (-2.815651) | 0.063291 / 0.424275 (-0.360984) | 0.004930 / 0.007607 (-0.002677) | 0.347169 / 0.226044 (0.121125) | 3.408260 / 2.268929 (1.139331) | 1.920933 / 55.444624 (-53.523691) | 1.648821 / 6.876477 (-5.227656) | 1.639022 / 2.142072 (-0.503051) | 0.642870 / 4.805227 (-4.162357) | 0.117077 / 6.500664 (-6.383587) | 0.040784 / 0.075469 (-0.034685) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.993501 / 1.841788 (-0.848287) | 12.012423 / 8.074308 (3.938115) | 10.740932 / 10.191392 (0.549540) | 0.132409 / 0.680424 (-0.548015) | 0.015294 / 0.534201 (-0.518907) | 0.287902 / 0.579283 (-0.291381) | 0.281350 / 0.434364 (-0.153014) | 0.329201 / 0.540337 (-0.211137) | 0.553199 / 1.386936 (-0.833737) |\n\n</details>\n</details>\n\n\n"
] | 2023-11-15T19:06:42Z
| 2023-12-01T15:37:32Z
| 2023-12-01T15:31:19Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6426.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6426",
"merged_at": "2023-12-01T15:31:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6426.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6426"
}
|
While fixing the Windows errors in #6362, I noticed that `PermissionError` can still easily be thrown at session exit by the temporary cache directory's finalizer (we would also have to keep track of intermediate datasets, copies, etc.). ~~Due to the low usage of `datasets` on Windows, this PR takes a simpler approach to the issue than https://github.com/huggingface/datasets/pull/2403 - it tries to delete the temporary cache directory, and if this fails, it logs a warning message about using a `delete-temp-cache` CLI command to delete it manually. The problematic references are freed after the session exits, so the CLI command should then succeed.~~ This PR implements `Dataset.__setstate__` to register datasets with temporary cache files for deletion.
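A minimal sketch of the registration idea, assuming a hypothetical `_temp_cache_dir` attribute and a module-level registry (illustrative only, not the actual `datasets` internals):
```python
import atexit
import shutil

# Hypothetical registry of temporary cache directories to remove at exit
_TEMP_CACHE_DIRS = set()

@atexit.register
def _cleanup_temp_cache_dirs():
    # By interpreter exit, the objects memory-mapping the cache files are gone,
    # so deletion succeeds even on Windows
    for path in _TEMP_CACHE_DIRS:
        shutil.rmtree(path, ignore_errors=True)

class DatasetSketch:
    def __setstate__(self, state):
        self.__dict__ = state
        # Unpickled/copied datasets re-register their temporary cache directory
        # so it is still cleaned up at session exit
        temp_dir = state.get("_temp_cache_dir")
        if temp_dir is not None:
            _TEMP_CACHE_DIRS.add(temp_dir)
```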
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6426/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6426/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6349
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6349/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6349/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6349/events
|
https://github.com/huggingface/datasets/issues/6349
| 1,961,435,673
|
I_kwDODunzps506SIZ
| 6,349
|
Can't load ds = load_dataset("imdb")
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/86415736?v=4",
"events_url": "https://api.github.com/users/vivianc2/events{/privacy}",
"followers_url": "https://api.github.com/users/vivianc2/followers",
"following_url": "https://api.github.com/users/vivianc2/following{/other_user}",
"gists_url": "https://api.github.com/users/vivianc2/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vivianc2",
"id": 86415736,
"login": "vivianc2",
"node_id": "MDQ6VXNlcjg2NDE1NzM2",
"organizations_url": "https://api.github.com/users/vivianc2/orgs",
"received_events_url": "https://api.github.com/users/vivianc2/received_events",
"repos_url": "https://api.github.com/users/vivianc2/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vivianc2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vivianc2/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vivianc2"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I'm unable to reproduce this error. The server hosting the files may have been down temporarily, so try again."
] | 2023-10-25T13:29:51Z
| 2023-10-31T19:59:35Z
| 2023-10-31T19:59:35Z
|
NONE
| null | null | null |
### Describe the bug
I did `from datasets import load_dataset, load_metric` and then `ds = load_dataset("imdb")`, and it gave me the error:
`ExpectedMoreDownloadedFiles: {'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'}`
I tried `ds = load_dataset("imdb", download_mode="force_redownload")` as well as reinstalling `datasets`, but I still face this problem.
### Steps to reproduce the bug
1. `from datasets import load_dataset, load_metric`
2. `ds = load_dataset("imdb")`
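As a single runnable snippet, the reproduction from the report is:
```python
from datasets import load_dataset

ds = load_dataset("imdb")
# In the report this raises:
# ExpectedMoreDownloadedFiles: {'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'}
```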
### Expected behavior
It should load, and running `ds` should give:
DatasetDict({
train: Dataset({
features: ['text', 'label'],
num_rows: 25000
})
test: Dataset({
features: ['text', 'label'],
num_rows: 25000
})
unsupervised: Dataset({
features: ['text', 'label'],
num_rows: 50000
})
})
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-5.4.0-164-generic-x86_64-with-glibc2.17
- Python version: 3.8.18
- Huggingface_hub version: 0.16.2
- PyArrow version: 13.0.0
- Pandas version: 2.0.2
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6349/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6349/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1897
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1897/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1897/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1897/events
|
https://github.com/huggingface/datasets/pull/1897
| 810,113,263
|
MDExOlB1bGxSZXF1ZXN0NTc0ODU3MTIy
| 1,897
|
Fix PandasArrayExtensionArray conversion to native type
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-02-17T11:48:24Z
| 2021-02-17T13:15:16Z
| 2021-02-17T13:15:15Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1897.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1897",
"merged_at": "2021-02-17T13:15:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1897.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1897"
}
|
To make the conversion to CSV work in #1887, we need the PandasArrayExtensionArray used for multidimensional numpy arrays to be convertible to pandas native types.
However, previously `pandas.core.internals.ExtensionBlock.to_native_types` would fail with a PandasExtensionArray because:
1. the `PandasExtensionArray.isna` method was wrong
2. the conversion of a PandasExtensionArray to a numpy array with `dtype=object` was returning a multidimensional array, while pandas expects a 1D array in this case (more info [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.api.extensions.ExtensionArray.html#pandas.api.extensions.ExtensionArray))
I fixed these two issues, and now the conversion to native types works, and so does the export to CSV.
cc @SBrandeis
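For illustration, a minimal sketch of the two fixes on a toy wrapper (class and method bodies are illustrative, not the actual `datasets` implementation):
```python
import numpy as np

class PandasExtensionArraySketch:
    """Toy stand-in for PandasArrayExtensionArray: one numpy array per row."""

    def __init__(self, rows):
        self._rows = list(rows)  # each row: an np.ndarray, possibly multidimensional

    def isna(self) -> np.ndarray:
        # Fix 1: report missingness per row as a 1-D boolean array,
        # not per inner element
        return np.array([row is None for row in self._rows], dtype=bool)

    def to_numpy(self, dtype=None) -> np.ndarray:
        if dtype == object:
            # Fix 2: pandas expects a 1-D object array here, so place each
            # (possibly multidimensional) row into a single object cell
            out = np.empty(len(self._rows), dtype=object)
            for i, row in enumerate(self._rows):
                out[i] = row
            return out
        return np.stack(self._rows)
```
With the 1-D object array in place, `ExtensionBlock.to_native_types` can iterate over rows normally, which is what unblocks the CSV export.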
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1897/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1897/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/339
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/339/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/339/comments
|
https://api.github.com/repos/huggingface/datasets/issues/339/events
|
https://github.com/huggingface/datasets/pull/339
| 650,156,468
|
MDExOlB1bGxSZXF1ZXN0NDQzNzAyNTcw
| 339
|
Add dataset.export() to TFRecords
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Really cool @jarednielsen !\r\nDo you think we can make it work with dataset with nested features like `squad` ?\r\n\r\nI just did a PR to fix `.set_format` for datasets with nested features, but as soon as it's merged we could try to make the conversion work on a dataset like `squad`.",
"For datasets with nested features we have two aspects to take into account:\r\n1) There can be nested dict of features. What is done in tensorflow_datasets to make things work is to flatten the dictionaries to end up with one single dictionary. A dict like `{\"column1\": {\"subfeature\": ...}}` is converted to `{\"column1/subfeature\":...}`\r\n2) There can be ragged tensors, i.e. lists of objects with non-fixed shapes. For example in squad there are often multiple possible answers per question. What is done in tensorflow_datasets to make things work is to concatenate everything and add ragged attributes (cf serialization code [here](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/example_serializer.py))",
"Note that we have `flatten` method in `ArrowDataset`",
"I added support for nested dictionaries. A few more design decisions popped up:\r\n\r\n_Should we serialize from NumPy arrays or from tf.Tensors?_\r\n- The [tfds example serializer](url) works from NumPy arrays.\r\n- Calling `dset.set_format(\"tensorflow\")` makes `__getitem__` return a tf.Tensor. So serializing from NumPy arrays would mean calling `dset.export()` before setting the format, which is confusing.\r\n- NumPy arrays can be serialized as their underlying datatype (int, float), while tf.Tensors must be converted to strings before serialization. This adds another step when serializing and deserializing, and removes the static-typing advantages of the TFRecord format.\r\n\r\nI think we should export directly from the underlying NumPy arrays into TFRecords, rather than using an intermediate step of tf.Tensor.\r\n\r\n_Should we serialize lists of dictionaries?_\r\n- The test_format_nested() test creates a list of dictionaries: https://github.com/huggingface/nlp/blob/911d5596f9b500e39af8642fe3d1b891758999c7/tests/test_arrow_dataset.py#L278-L288\r\n- This is difficult to serialize effectively, and I'm not aware of any dataset that has this format. SQuAD has a dictionary of lists, such as the `answers` key. Is this necessary?",
"Thanks @thomwolf, used dset.flatten() to simplify. That handles the case of nested dictionaries, and then lists can be read into a tf.io.RaggedFeature in the case of something like squad answers.",
"@jarednielsen I just checked and indeed we don't have lists of dicts, we can just focus on the squad format as a reference then :) I'll change the test to remove this format that's not supposed to happen",
"Actually I realised that `flatten` also handles nested things like pyarrow's list<struct> so it's fine :D \r\nThis is so cool !\r\n\r\nCould you also add a test with a squad-like dataset ? As soon as we have that I think we'll be good to merge @jarednielsen :)\r\nGood job !",
"Great, done! I think this could be a great canonical way to generate a dataset.",
"I tried to match the format of Dataset.sort() and Dataset.shuffle() with the docstring. What difference are you referring to specifically?",
"Oh my bad they're fine actually (I was thinking of the backticks that we don't use in the docstrings of the transformers repo for argument names)",
"One final thing: now that we have a brand new documentation, could you just add `export` to the list of documented methods in [docs/source/package_reference/main_classes.rst](https://github.com/huggingface/nlp/blob/master/docs/source/package_reference/main_classes.rst) (so that it will appear in the docs [here](https://huggingface.co/nlp/package_reference/main_classes.html)) ?\r\n",
"Done",
"Cool thanks :)",
"Since #403 (it just got merged), we return python objects and not numpy arrays anymore (unless format=\"numpy\" is specified).\r\nDo you think it can break the export method ? Could you try to rebase from master to run the CI to make sure it's fine ?",
"Good catch. I fixed it up so it works with the new format. By the way, when dset.format == \"numpy\", it now returns single items (like `0`) as a 0-dimensional NumPy array. Not sure if that is desired.",
"I played a little bit with the code and it works quite well :)\r\n\r\nI found two cases for which it doesn't work though:\r\n- if the features dict depth is > 2 (ex: wikisql), because `flatten` only flattens the first level of nesting (it can be fixed by calling `flatten` several times in a row, see [here](https://issues.apache.org/jira/browse/ARROW-4090))\r\n- Or if there are 2d features (ex: wikisql, `table.rows` is a sequence of sequences of strings), because tf.train.Features only support 1-d lists. That's why tensorflow-datasets flattens these 2-d features to 1-d and adds ragged features that are the shapes of the arrays, so that they can be reconstructed.\r\n\r\nI think we can ignore the 2d stuff right now (some work is being done in #363 ), but I'd like to see the `flatten` issue fixed soon\r\n",
"That seems like a bug in `pyarrow`, or at least in `flatten()`. Looks like it should be a separate PR.",
"I made `.flatten` work on our side (it calls pyarrow's flatten several times until it's really flat).\r\n\r\nThe only datasets that won't work are those with lists of lists of features, which is a rare case. Hopefully we can make this work with the multi-dimensional arrays changes we're also doing.\r\n\r\nI think we can merge now :) cc @thomwolf "
] | 2020-07-02T19:26:27Z
| 2020-07-22T09:16:12Z
| 2020-07-22T09:16:12Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/339.diff",
"html_url": "https://github.com/huggingface/datasets/pull/339",
"merged_at": "2020-07-22T09:16:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/339.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/339"
}
|
Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic, and users can rely on other methods (`select`, `shard`, etc.) for custom sharding or splitting.
- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193.
- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.
- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.
Also, I noticed that
```python
dataset = dataset.select(indices)
dataset.set_format("tensorflow")
# dataset._format_type is "tensorflow"
```
gives a different output than
```python
dataset.set_format("tensorflow")
dataset = dataset.select(indices)
# dataset._format_type is None
```
The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns `self` (so calls can be chained), and in having derived datasets maintain the format of their parent?
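Stepping back to the export design above, here is a minimal sketch of the approach described in this PR: flatten the dataset so nested dictionaries become top-level columns, then serialize each example's underlying Python/NumPy values into `tf.train.Example` records. The helper names (`to_feature`, `export_to_tfrecord`) are illustrative, not the library's actual internals.

```python
import numpy as np
import tensorflow as tf

def to_feature(values):
    """Map a 1-d list/array of values to the matching tf.train.Feature type."""
    values = np.asarray(values).ravel()
    if values.dtype.kind == "i":
        return tf.train.Feature(int64_list=tf.train.Int64List(value=values))
    if values.dtype.kind == "f":
        return tf.train.Feature(float_list=tf.train.FloatList(value=values))
    # fall back to bytes for strings and anything else
    return tf.train.Feature(
        bytes_list=tf.train.BytesList(value=[str(v).encode("utf-8") for v in values])
    )

def export_to_tfrecord(dataset, filename):
    dataset = dataset.flatten()  # nested dicts become top-level "a.b" columns
    with tf.io.TFRecordWriter(filename) as writer:
        for example in dataset:
            feature = {k: to_feature(v if isinstance(v, (list, np.ndarray)) else [v])
                       for k, v in example.items()}
            proto = tf.train.Example(features=tf.train.Features(feature=feature))
            writer.write(proto.SerializeToString())
```

This keeps the static typing of the TFRecord format (ints stay ints, floats stay floats), matching the argument above for serializing from the underlying values rather than from intermediate `tf.Tensor`s.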
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 3,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/339/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/339/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1008
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1008/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1008/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1008/events
|
https://github.com/huggingface/datasets/pull/1008
| 755,372,798
|
MDExOlB1bGxSZXF1ZXN0NTMxMDk1ODQy
| 1,008
|
Adding C3 dataset: the first free-form multiple-Choice Chinese machine reading Comprehension dataset. https://github.com/nlpdata/c3 https://arxiv.org/abs/1904.09679
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Narsil",
"id": 204321,
"login": "Narsil",
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"repos_url": "https://api.github.com/users/Narsil/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Narsil"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Dupe of #1009 "
] | 2020-12-02T15:28:05Z
| 2020-12-02T15:40:55Z
| 2020-12-02T15:40:55Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1008.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1008",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1008.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1008"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1008/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1008/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/915
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/915/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/915/comments
|
https://api.github.com/repos/huggingface/datasets/issues/915/events
|
https://github.com/huggingface/datasets/issues/915
| 753,118,481
|
MDU6SXNzdWU3NTMxMTg0ODE=
| 915
|
Shall we change the hashing to encoding to reduce potential replicated cache files?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10428324?v=4",
"events_url": "https://api.github.com/users/zhuzilin/events{/privacy}",
"followers_url": "https://api.github.com/users/zhuzilin/followers",
"following_url": "https://api.github.com/users/zhuzilin/following{/other_user}",
"gists_url": "https://api.github.com/users/zhuzilin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zhuzilin",
"id": 10428324,
"login": "zhuzilin",
"node_id": "MDQ6VXNlcjEwNDI4MzI0",
"organizations_url": "https://api.github.com/users/zhuzilin/orgs",
"received_events_url": "https://api.github.com/users/zhuzilin/received_events",
"repos_url": "https://api.github.com/users/zhuzilin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zhuzilin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhuzilin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zhuzilin"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] |
open
| false
| null |
[] | null |
[
"This is an interesting idea !\r\nDo you have ideas about how to approach the decoding and the normalization ?",
"@lhoestq\r\nI think we first need to save the transformation chain to a list in `self._fingerprint`. Then we can\r\n- decode all the current saved datasets to see if there is already one that is equivalent to the transformation we need now.\r\n- or, calculate all the possible hash value of the current chain for comparison so that we could continue to use hashing.\r\nIf we find one, we can adjust the list in `self._fingerprint` to it.\r\n\r\nAs for the transformation reordering rules, we can just start with some manual rules, like two sort on the same column should merge to one, filter and select can change orders.\r\n\r\nAnd for encoding and decoding, we can just manually specify `sort` is 0, `shuffling` is 2 and create a base-n number or use some general algorithm like `base64.urlsafe_b64encode`.\r\n\r\nBecause we are not doing lazy evaluation now, we may not be able to normalize the transformation to its minimal form. If we want to support that, we can provde a `Sequential` api and let user input a list or transformation, so that user would not use the intermediate datasets. This would look like tf.data.Dataset."
] | 2020-11-30T03:50:46Z
| 2020-12-24T05:11:49Z
| null |
NONE
| null | null | null |
Hi there. For now, we are using `xxhash` to hash the transformations into a fingerprint, and we save a copy of the processed dataset to disk whenever there is a new hash value. However, some transformations are idempotent or commutative with each other. I think that encoding the transformation chain as the fingerprint may help in those cases, for example with `base64.urlsafe_b64encode`. That way, before saving a new copy, we can decode the transformation chain and normalize it so that we do not miss potential reuse. As the main targets of this project are really large datasets that cannot be loaded entirely in memory, I believe it would save a lot of time if we can avoid some writes.
If you're interested in this, I'd love to help :).
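To make the proposal concrete, here is a rough sketch (hypothetical names throughout, not the library's actual fingerprinting code) of encoding a normalized transformation chain so that equivalent chains map to the same cache key:

```python
import base64
import json

def normalize(chain):
    """Apply simple rewrite rules, e.g. two consecutive sorts on the same
    column collapse into one."""
    out = []
    for op, arg in chain:
        if out and op == "sort" and out[-1] == ("sort", arg):
            continue  # idempotent: sorting twice by the same column is a no-op
        out.append((op, arg))
    return out

def encode_chain(chain):
    payload = json.dumps(normalize(chain)).encode("utf-8")
    return base64.urlsafe_b64encode(payload).decode("ascii")

chain_a = [("sort", "col1"), ("sort", "col1"), ("filter", "col2>0")]
chain_b = [("sort", "col1"), ("filter", "col2>0")]
assert encode_chain(chain_a) == encode_chain(chain_b)  # same cache entry
```

Unlike a one-way hash, the encoded string can also be decoded back into the chain for comparison against existing cache files, as suggested in the comments above.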
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/915/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/915/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6415
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6415/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6415/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6415/events
|
https://github.com/huggingface/datasets/pull/6415
| 1,992,917,248
|
PR_kwDODunzps5fa4n7
| 6,415
|
Fix multi gpu map example
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004537 / 0.011353 (-0.006816) | 0.002844 / 0.011008 (-0.008164) | 0.062506 / 0.038508 (0.023998) | 0.029675 / 0.023109 (0.006566) | 0.238080 / 0.275898 (-0.037818) | 0.259858 / 0.323480 (-0.063622) | 0.004015 / 0.007986 (-0.003970) | 0.002432 / 0.004328 (-0.001897) | 0.049477 / 0.004250 (0.045227) | 0.045383 / 0.037052 (0.008331) | 0.241934 / 0.258489 (-0.016555) | 0.270759 / 0.293841 (-0.023082) | 0.023207 / 0.128546 (-0.105339) | 0.007107 / 0.075646 (-0.068539) | 0.207626 / 0.419271 (-0.211645) | 0.056706 / 0.043533 (0.013173) | 0.239713 / 0.255139 (-0.015426) | 0.256639 / 0.283200 (-0.026560) | 0.017514 / 0.141683 (-0.124169) | 1.105201 / 1.452155 (-0.346953) | 1.173087 / 1.492716 (-0.319629) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093391 / 0.018006 (0.075384) | 0.302673 / 0.000490 (0.302184) | 0.000218 / 0.000200 (0.000018) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019447 / 0.037411 (-0.017965) | 0.063349 / 0.014526 (0.048823) | 0.075600 / 0.176557 (-0.100957) | 0.121098 / 0.737135 (-0.616037) | 0.075028 / 0.296338 (-0.221311) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291479 / 0.215209 (0.076270) | 2.787231 / 2.077655 (0.709576) | 1.480205 / 1.504120 (-0.023915) | 1.417656 / 1.541195 (-0.123538) | 1.394529 / 
1.468490 (-0.073962) | 0.408843 / 4.584777 (-4.175934) | 2.398691 / 3.745712 (-1.347021) | 2.635457 / 5.269862 (-2.634404) | 1.591722 / 4.565676 (-2.973955) | 0.048445 / 0.424275 (-0.375830) | 0.004864 / 0.007607 (-0.002743) | 0.349014 / 0.226044 (0.122969) | 3.436962 / 2.268929 (1.168033) | 1.839266 / 55.444624 (-53.605359) | 1.535252 / 6.876477 (-5.341225) | 1.581048 / 2.142072 (-0.561025) | 0.491150 / 4.805227 (-4.314078) | 0.101279 / 6.500664 (-6.399385) | 0.041938 / 0.075469 (-0.033532) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.946986 / 1.841788 (-0.894801) | 11.766196 / 8.074308 (3.691888) | 10.425615 / 10.191392 (0.234223) | 0.129957 / 0.680424 (-0.550467) | 0.014859 / 0.534201 (-0.519342) | 0.268046 / 0.579283 (-0.311237) | 0.263724 / 0.434364 (-0.170640) | 0.311028 / 0.540337 (-0.229309) | 0.434715 / 1.386936 (-0.952221) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004874 / 0.011353 (-0.006479) | 0.002942 / 0.011008 (-0.008067) | 0.048250 / 0.038508 (0.009742) | 0.053726 / 0.023109 (0.030617) | 0.268870 / 0.275898 (-0.007028) | 0.289152 / 0.323480 (-0.034328) | 0.003982 / 0.007986 (-0.004004) | 0.002488 / 0.004328 (-0.001840) | 0.047902 / 0.004250 (0.043652) | 0.038732 / 0.037052 (0.001680) | 0.271021 / 0.258489 (0.012532) | 0.299967 / 0.293841 (0.006126) | 0.024672 / 0.128546 (-0.103874) | 0.007311 / 0.075646 (-0.068336) | 0.053721 / 0.419271 (-0.365550) | 0.032407 / 0.043533 (-0.011126) | 0.266604 / 0.255139 (0.011465) | 0.286816 / 0.283200 (0.003617) | 0.018973 / 0.141683 (-0.122710) | 1.122460 / 1.452155 (-0.329695) | 1.177720 / 1.492716 (-0.314997) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093968 / 0.018006 (0.075962) | 0.304010 / 0.000490 (0.303521) | 0.000228 / 0.000200 (0.000028) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021203 / 0.037411 (-0.016208) | 0.070318 / 0.014526 (0.055793) | 0.081688 / 0.176557 (-0.094869) | 0.120916 / 0.737135 (-0.616219) | 0.083452 / 0.296338 (-0.212886) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293961 / 0.215209 (0.078752) | 2.858514 / 2.077655 (0.780860) | 1.556169 / 1.504120 (0.052049) | 1.431523 / 1.541195 (-0.109671) | 1.478145 / 1.468490 (0.009654) | 0.408927 / 4.584777 (-4.175850) | 2.440630 / 3.745712 (-1.305082) | 2.586327 / 5.269862 (-2.683534) | 1.529495 / 4.565676 (-3.036182) | 0.047387 / 0.424275 (-0.376888) | 0.004817 / 0.007607 (-0.002790) | 0.345009 / 0.226044 (0.118965) | 3.386313 / 2.268929 (1.117384) | 1.922361 / 55.444624 (-53.522264) | 1.640814 / 6.876477 (-5.235663) | 1.657005 / 2.142072 (-0.485068) | 0.483844 / 4.805227 (-4.321383) | 0.099470 / 6.500664 (-6.401194) | 0.040735 / 0.075469 (-0.034734) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.986311 / 1.841788 (-0.855476) | 12.327425 / 8.074308 (4.253117) | 10.995135 / 10.191392 (0.803743) | 0.146814 / 0.680424 (-0.533610) | 0.015820 / 0.534201 (-0.518381) | 0.272319 / 0.579283 (-0.306964) | 0.274858 / 0.434364 (-0.159506) | 0.305728 / 0.540337 (-0.234609) | 0.421400 / 1.386936 (-0.965536) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007995 / 0.011353 (-0.003358) | 0.004596 / 0.011008 (-0.006412) | 0.099818 / 0.038508 (0.061310) | 0.053539 / 0.023109 (0.030429) | 0.367757 / 0.275898 (0.091859) | 0.409351 / 0.323480 (0.085871) | 0.007423 / 0.007986 (-0.000563) | 0.003770 / 0.004328 (-0.000558) | 0.075635 / 0.004250 (0.071385) | 0.078844 / 0.037052 (0.041791) | 0.374523 / 0.258489 (0.116034) | 0.423378 / 0.293841 (0.129537) | 0.038901 / 0.128546 (-0.089645) | 0.009985 / 0.075646 (-0.065661) | 0.342793 / 0.419271 (-0.076479) | 0.098045 / 0.043533 (0.054512) | 0.368077 / 0.255139 (0.112938) | 0.394251 / 0.283200 (0.111051) | 0.030624 / 0.141683 (-0.111059) | 1.782728 / 1.452155 (0.330574) | 1.867571 / 1.492716 (0.374855) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265550 / 0.018006 (0.247544) | 0.504045 / 0.000490 (0.503555) | 0.016523 / 0.000200 (0.016323) | 0.000757 / 0.000054 (0.000702) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034239 / 0.037411 (-0.003172) | 0.099953 / 0.014526 (0.085427) | 0.113728 / 0.176557 (-0.062829) | 0.180113 / 0.737135 (-0.557023) | 0.114506 / 0.296338 (-0.181833) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.507186 / 0.215209 (0.291977) | 5.033590 / 2.077655 (2.955935) | 2.480111 / 1.504120 (0.975991) | 2.258966 / 1.541195 (0.717771) | 2.316045 / 1.468490 
(0.847555) | 0.622482 / 4.584777 (-3.962295) | 4.400909 / 3.745712 (0.655197) | 4.012443 / 5.269862 (-1.257419) | 2.408294 / 4.565676 (-2.157383) | 0.067608 / 0.424275 (-0.356668) | 0.008638 / 0.007607 (0.001031) | 0.546558 / 0.226044 (0.320513) | 5.472973 / 2.268929 (3.204044) | 2.795147 / 55.444624 (-52.649477) | 2.371153 / 6.876477 (-4.505324) | 2.440883 / 2.142072 (0.298811) | 0.682380 / 4.805227 (-4.122847) | 0.156819 / 6.500664 (-6.343845) | 0.071969 / 0.075469 (-0.003500) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.500200 / 1.841788 (-0.341588) | 22.854103 / 8.074308 (14.779795) | 16.691945 / 10.191392 (6.500553) | 0.210945 / 0.680424 (-0.469479) | 0.023234 / 0.534201 (-0.510967) | 0.475641 / 0.579283 (-0.103642) | 0.491553 / 0.434364 (0.057189) | 0.549311 / 0.540337 (0.008974) | 0.858498 / 1.386936 (-0.528439) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009020 / 0.011353 (-0.002333) | 0.004768 / 0.011008 (-0.006240) | 0.082841 / 0.038508 (0.044333) | 0.095111 / 0.023109 (0.072002) | 0.486050 / 0.275898 (0.210151) | 0.527074 / 0.323480 (0.203594) | 0.006622 / 0.007986 (-0.001364) | 0.003961 / 0.004328 (-0.000367) | 0.083361 / 0.004250 (0.079111) | 0.068571 / 0.037052 (0.031518) | 0.494575 / 0.258489 (0.236086) | 0.545593 / 0.293841 (0.251752) | 0.047671 / 0.128546 (-0.080875) | 0.010715 / 0.075646 (-0.064932) | 0.096239 / 0.419271 (-0.323033) | 0.061556 / 0.043533 (0.018023) | 0.484301 / 0.255139 (0.229162) | 0.492189 / 0.283200 (0.208989) | 0.029374 / 0.141683 (-0.112309) | 1.911833 / 1.452155 (0.459678) | 2.005744 / 1.492716 (0.513028) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265402 / 0.018006 (0.247396) | 0.501034 / 0.000490 (0.500545) | 0.004039 / 0.000200 (0.003839) | 0.000105 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.041005 / 0.037411 (0.003594) | 0.119204 / 0.014526 (0.104678) | 0.134583 / 0.176557 (-0.041973) | 0.195995 / 0.737135 (-0.541140) | 0.133125 / 0.296338 (-0.163214) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.503012 / 0.215209 (0.287803) | 5.021972 / 2.077655 (2.944318) | 2.912987 / 1.504120 (1.408867) | 2.707637 / 1.541195 (1.166442) | 2.824065 / 1.468490 (1.355575) | 0.664285 / 4.584777 (-3.920492) | 4.341905 / 3.745712 (0.596193) | 4.152839 / 5.269862 (-1.117022) | 2.438138 / 4.565676 (-2.127539) | 0.076169 / 0.424275 (-0.348106) | 0.010471 / 0.007607 (0.002864) | 0.680918 / 0.226044 (0.454874) | 6.424209 / 2.268929 (4.155281) | 3.285353 / 55.444624 (-52.159271) | 2.865458 / 6.876477 (-4.011019) | 2.946246 / 2.142072 (0.804173) | 0.700051 / 4.805227 (-4.105176) | 0.155299 / 6.500664 (-6.345365) | 0.069372 / 0.075469 (-0.006097) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.749517 / 1.841788 (-0.092271) | 23.382582 / 8.074308 (15.308274) | 17.708718 / 10.191392 (7.517326) | 0.197042 / 0.680424 (-0.483382) | 0.023874 / 0.534201 (-0.510327) | 0.471631 / 0.579283 (-0.107652) | 0.512649 / 0.434364 (0.078285) | 0.614479 / 0.540337 (0.074142) | 0.771859 / 1.386936 (-0.615077) |\n\n</details>\n</details>\n\n\n",
"Merging this one, but lmk if you have more comments for subsequent improvements @NielsRogge ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004874 / 0.011353 (-0.006479) | 0.002866 / 0.011008 (-0.008142) | 0.061761 / 0.038508 (0.023253) | 0.052185 / 0.023109 (0.029076) | 0.242264 / 0.275898 (-0.033634) | 0.267816 / 0.323480 (-0.055664) | 0.002844 / 0.007986 (-0.005142) | 0.002349 / 0.004328 (-0.001979) | 0.048393 / 0.004250 (0.044142) | 0.038590 / 0.037052 (0.001538) | 0.257483 / 0.258489 (-0.001006) | 0.279704 / 0.293841 (-0.014137) | 0.023125 / 0.128546 (-0.105421) | 0.007044 / 0.075646 (-0.068602) | 0.203606 / 0.419271 (-0.215665) | 0.035489 / 0.043533 (-0.008044) | 0.248419 / 0.255139 (-0.006719) | 0.266357 / 0.283200 (-0.016843) | 0.020178 / 0.141683 (-0.121505) | 1.163674 / 1.452155 (-0.288481) | 1.191340 / 1.492716 (-0.301376) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092972 / 0.018006 (0.074966) | 0.295260 / 0.000490 (0.294770) | 0.000214 / 0.000200 (0.000014) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018109 / 0.037411 (-0.019302) | 0.061743 / 0.014526 (0.047217) | 0.073965 / 0.176557 (-0.102592) | 0.119493 / 0.737135 (-0.617642) | 0.075646 / 0.296338 (-0.220692) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275700 / 0.215209 (0.060491) | 2.666846 / 2.077655 (0.589191) | 1.401452 / 1.504120 (-0.102668) | 1.276009 / 1.541195 (-0.265186) | 1.309914 / 
1.468490 (-0.158576) | 0.396411 / 4.584777 (-4.188365) | 2.347193 / 3.745712 (-1.398519) | 2.568006 / 5.269862 (-2.701856) | 1.564572 / 4.565676 (-3.001105) | 0.045450 / 0.424275 (-0.378825) | 0.004827 / 0.007607 (-0.002780) | 0.333092 / 0.226044 (0.107048) | 3.284295 / 2.268929 (1.015367) | 1.809928 / 55.444624 (-53.634696) | 1.486041 / 6.876477 (-5.390436) | 1.528198 / 2.142072 (-0.613875) | 0.470053 / 4.805227 (-4.335174) | 0.098559 / 6.500664 (-6.402105) | 0.041637 / 0.075469 (-0.033832) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.948915 / 1.841788 (-0.892873) | 11.513211 / 8.074308 (3.438903) | 10.386419 / 10.191392 (0.195027) | 0.129513 / 0.680424 (-0.550910) | 0.021772 / 0.534201 (-0.512429) | 0.295627 / 0.579283 (-0.283656) | 0.261008 / 0.434364 (-0.173355) | 0.305869 / 0.540337 (-0.234469) | 0.399676 / 1.386936 (-0.987260) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004799 / 0.011353 (-0.006553) | 0.002764 / 0.011008 (-0.008244) | 0.048469 / 0.038508 (0.009961) | 0.051346 / 0.023109 (0.028236) | 0.274853 / 0.275898 (-0.001045) | 0.300770 / 0.323480 (-0.022710) | 0.003986 / 0.007986 (-0.003999) | 0.002376 / 0.004328 (-0.001952) | 0.048545 / 0.004250 (0.044294) | 0.039854 / 0.037052 (0.002801) | 0.280053 / 0.258489 (0.021564) | 0.312797 / 0.293841 (0.018957) | 0.024513 / 0.128546 (-0.104033) | 0.006971 / 0.075646 (-0.068675) | 0.053030 / 0.419271 (-0.366241) | 0.035580 / 0.043533 (-0.007953) | 0.276078 / 0.255139 (0.020939) | 0.299345 / 0.283200 (0.016145) | 0.020423 / 0.141683 (-0.121260) | 1.103053 / 1.452155 (-0.349102) | 1.179747 / 1.492716 (-0.312969) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093042 / 0.018006 (0.075036) | 0.299421 / 0.000490 (0.298932) | 0.000232 / 0.000200 (0.000033) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021966 / 0.037411 (-0.015445) | 0.070978 / 0.014526 (0.056452) | 0.083841 / 0.176557 (-0.092715) | 0.121223 / 0.737135 (-0.615912) | 0.082829 / 0.296338 (-0.213510) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289436 / 0.215209 (0.074227) | 2.838074 / 2.077655 (0.760419) | 1.597013 / 1.504120 (0.092893) | 1.476888 / 1.541195 (-0.064307) | 1.504582 / 1.468490 (0.036092) | 0.398050 / 4.584777 (-4.186727) | 2.434446 / 3.745712 (-1.311266) | 2.493545 / 5.269862 (-2.776316) | 1.584159 / 4.565676 (-2.981517) | 0.046461 / 0.424275 (-0.377814) | 0.004876 / 0.007607 (-0.002731) | 0.344166 / 0.226044 (0.118122) | 3.388530 / 2.268929 (1.119602) | 1.939585 / 55.444624 (-53.505039) | 1.672495 / 6.876477 (-5.203982) | 1.811825 / 2.142072 (-0.330247) | 0.470798 / 4.805227 (-4.334429) | 0.097522 / 6.500664 (-6.403142) | 0.040887 / 0.075469 (-0.034582) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.990081 / 1.841788 (-0.851707) | 12.619827 / 8.074308 (4.545519) | 10.748062 / 10.191392 (0.556670) | 0.130409 / 0.680424 (-0.550015) | 0.016624 / 0.534201 (-0.517577) | 0.272381 / 0.579283 (-0.306902) | 0.270597 / 0.434364 (-0.163767) | 0.306458 / 0.540337 (-0.233879) | 0.408700 / 1.386936 (-0.978236) |\n\n</details>\n</details>\n\n\n"
] | 2023-11-14T14:57:18Z
| 2023-11-22T15:48:27Z
| 2023-11-22T15:42:19Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6415.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6415",
"merged_at": "2023-11-22T15:42:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6415.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6415"
}
|
- use `torch.cuda.set_device` instead of `CUDA_VISIBLE_DEVICES`
- add `if __name__ == "__main__"`
fix https://github.com/huggingface/datasets/issues/6186
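A condensed sketch of the pattern this fix promotes (the dataset name and the function body are placeholders, not the exact doc example): each `map` worker pins itself to a GPU with `torch.cuda.set_device` based on its rank, and the script guards its entry point so spawned workers don't re-execute the top-level code.

```python
import torch
from multiprocess import set_start_method
from datasets import load_dataset

def gpu_computation(batch, rank):
    # pin this worker to one GPU based on its rank, instead of CUDA_VISIBLE_DEVICES
    device = f"cuda:{rank % torch.cuda.device_count()}"
    torch.cuda.set_device(device)
    # ... run the model on `batch` using `device` and add the outputs to it ...
    return batch

if __name__ == "__main__":  # required so spawned workers don't re-run top-level code
    set_start_method("spawn")  # CUDA state does not survive fork
    ds = load_dataset("imdb", split="train")  # placeholder dataset
    ds = ds.map(gpu_computation, batched=True, with_rank=True,
                num_proc=torch.cuda.device_count())
```

`with_rank=True` makes `map` pass each worker's rank as the second argument, which is what lets the function pick a distinct GPU per process.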
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6415/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6415/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/758
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/758/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/758/comments
|
https://api.github.com/repos/huggingface/datasets/issues/758/events
|
https://github.com/huggingface/datasets/issues/758
| 728,638,559
|
MDU6SXNzdWU3Mjg2Mzg1NTk=
| 758
|
Process 0 very slow when using num_procs with map to tokenizer
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4",
"events_url": "https://api.github.com/users/ksjae/events{/privacy}",
"followers_url": "https://api.github.com/users/ksjae/followers",
"following_url": "https://api.github.com/users/ksjae/following{/other_user}",
"gists_url": "https://api.github.com/users/ksjae/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ksjae",
"id": 17930170,
"login": "ksjae",
"node_id": "MDQ6VXNlcjE3OTMwMTcw",
"organizations_url": "https://api.github.com/users/ksjae/orgs",
"received_events_url": "https://api.github.com/users/ksjae/received_events",
"repos_url": "https://api.github.com/users/ksjae/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ksjae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ksjae/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ksjae"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! Thanks for reporting.\r\nIs the distribution of text length of your data evenly distributed across your dataset ? I mean, could it be because the examples in the first part of your dataset are slower to process ?\r\nAlso could how many CPUs can you use for multiprocessing ?\r\n```python\r\nimport multiprocessing\r\nprint(multiprocessing.cpu_count())\r\n```\r\nWhich tokenizer are you using ?",
"Using pre trained HF tokenizer. The result is the same with tokenizer multiprocessing off and on.\r\nI have (absolutely) no idea about the distribution, but since this issue occurs on all of my datasets(regardless of files), I don't think distribution is the problems.\r\n\r\nI can use up to 16 cores.",
"Ok weird, I don't manage to reproduce this issue on my side.\r\nDoes it happen even with `num_proc=2` for example ?\r\nAlso could you provide more details about your OS and the versions of tokenizers/datasets/multiprocess that you're using ?",
"Yes, I can confirm it also happens with ```num_proc=2```.\r\n```\r\ntokenizers 0.9.2\r\ndatasets 1.1.2\r\nmultiprocess 0.70.10\r\n```\r\n```\r\nLinux nipa2020-0629 4.4.0-178-generic #208-Ubuntu SMP Sun Apr 5 23:45:10 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux\r\n```",
"I can't reproduce on my side unfortunately with the same versions.\r\n\r\nDo you have issues when doing multiprocessing with python ?\r\n```python\r\nfrom tqdm.auto import tqdm\r\nfrom multiprocess import Pool, RLock\r\n\r\ndef process_data(shard):\r\n # implement\r\n\r\nnum_proc = 8\r\nshards = [] # implement, this must be a list of size num_proc\r\n\r\nwith Pool(num_proc, initargs=(RLock(),), initializer=tqdm.set_lock) as pool:\r\n results = [pool.apply_async(process_data, shard=shard) for shard in shards]\r\n transformed_shards = [r.get() for r in results]\r\n```",
"Nah, I'll just wait a few hours. Thank you for helping, though."
] | 2020-10-24T02:40:20Z
| 2020-10-28T03:59:46Z
| 2020-10-28T03:59:45Z
|
NONE
| null | null | null |
<img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
                                           truncation=True, max_length=args.block_size),
                      num_proc=8)
dataset.set_format(type='torch', columns=['input_ids'])
dataset.save_to_disk(file_path+'.arrow')
```
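Not part of the report, but a variant often worth trying in this situation: a batched `map` lets a fast tokenizer encode many texts per call, which cuts per-example overhead and tends to make uneven worker progress less visible. In this sketch, `tokenizer` and `args` are assumed to be the same objects as in the snippet above.

```python
def tokenize(batch):
    # with batched=True, batch["text"] is a list of strings
    return tokenizer(batch["text"], add_special_tokens=True,
                     truncation=True, max_length=args.block_size)

dataset = dataset.map(tokenize, batched=True, batch_size=1000, num_proc=8)
```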
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/758/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/758/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2754
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2754/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2754/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2754/events
|
https://github.com/huggingface/datasets/pull/2754
| 959,105,577
|
MDExOlB1bGxSZXF1ZXN0NzAyMjcxMjM4
| 2,754
|
Generate metadata JSON for telugu_books dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-08-03T13:14:52Z
| 2021-08-04T08:49:02Z
| 2021-08-04T08:49:02Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2754.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2754",
"merged_at": "2021-08-04T08:49:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2754.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2754"
}
|
Related to #2743.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2754/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2754/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5179
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5179/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5179/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5179/events
|
https://github.com/huggingface/datasets/issues/5179
| 1,430,826,100
|
I_kwDODunzps5VSKx0
| 5,179
|
`map()` fails midway due to format incompatibility
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayakpaul",
"id": 22957388,
"login": "sayakpaul",
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayakpaul"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Cc: @lhoestq ",
"You can end up with a list instead of a tensor if all the tensors inside the list can't be stacked together - can you make sure all your inputs are tensors with the same shape ?",
"Is there an easy way to ensure it?",
"You can make sure your `tokenize` function always return tensors of the same shape",
"I modified my `tokenize()` function to be like so:\r\n\r\n```py\r\ndef tokenize(batch):\r\n return tokenizer(batch[\"text\"], padding=\"longest\")\r\n```\r\n\r\nso that the padding always happens w.r.t to the length of the longest sequence in a batch. The issue still persists. Is there any other way? ",
"tbh I though your first implementation was fine\r\n```python\r\ndef tokenize(batch):\r\n return tokenizer(batch[\"text\"], padding=True, truncation=True)\r\n```\r\n\r\nMaybe you can try to see what the erroring data looks like by adding a try/except in `get_test_accuracy` ?",
"This is what I got. \r\n\r\nFor the non-erroring data, it looks like (without the labels):\r\n\r\n```\r\ntensor([[ 101, 10047, 3110, ..., 0, 0, 0],\r\n [ 101, 1045, 2514, ..., 0, 0, 0],\r\n [ 101, 1045, 2514, ..., 0, 0, 0],\r\n ...,\r\n [ 101, 1045, 2005, ..., 0, 0, 0],\r\n [ 101, 1045, 2572, ..., 0, 0, 0],\r\n [ 101, 10047, 7481, ..., 0, 0, 0]]) 128\r\ntensor([[1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0],\r\n ...,\r\n [1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0]]) 128\r\n```\r\n\r\nFor the erroring part:\r\n\r\n```\r\n[tensor([ 101, 1045, 2064, 2102, 2393, 3110, 2066, 2242, 6355, 3047, 2004, 2574,\r\n 2004, 1996, 8629, 2357, 2125, 4299, 1045, 2071, 2424, 2009, 2006, 7858,\r\n 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0]), tensor([ 101, 10047, 5458, 1997, 3110, 11654, 1998, 11055, 102, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0]), tensor([ 101, 1045, 2074, 2064, 2102, 6073, 1996, 3110, 2008, 2026,\r\n 14982, 2000, 5587, 2203, 16650, 29563, 2030, 2569, 4506, 2052,\r\n 2191, 1037, 2738, 11552, 2208, 17044, 14540, 2100, 3375, 102,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0]),\r\n...\r\n\r\n[tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]), tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),\r\n...\r\n```\r\n\r\nI also tried investigating the shapes of the individual entries within a `batch` without the labels:\r\n\r\n```py\r\ndef get_test_accuracy(model):\r\n def fn(batch): \r\n try:\r\n inputs = {k:v.to(device) for k,v in batch.items() \r\n if k in tokenizer.model_input_names}\r\n with torch.no_grad():\r\n output = model(**inputs)\r\n pred_label = torch.argmax(output.logits, axis=-1)\r\n return {\"predicted_label\": pred_label.cpu().numpy()}\r\n except:\r\n for k in batch:\r\n if k != \"label\":\r\n for i in range(len(batch[k])):\r\n print(batch[k][i].shape)\r\n return fn\r\n```\r\n\r\nThey are:\r\n\r\n```\r\n...\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([66])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\ntorch.Size([69])\r\n```\r\n\r\nThere are differing shapes. I understand if I set `batch_size=None` in `emotions_encoded = emotions.map(tokenize, batched=True)` the problem should be fixed as the whole dataset would be treated as a single batch. 
But is there a way to do that in batches? ",
"If you use the same batch_size for your two maps, you should get the exact same batches - therefore all containing the same shapes",
"Oh I see. Thanks. Closing this issue. "
] | 2022-11-01T03:57:59Z
| 2022-11-08T11:35:26Z
| 2022-11-08T11:35:26Z
|
MEMBER
| null | null | null |
### Describe the bug
I am using the `emotion` dataset from Hub for sequence classification. After training the model, I am using it to generate predictions for all the entries present in the `validation` split of the dataset.
```py
def get_test_accuracy(model):
def fn(batch):
inputs = {k:v.to(device) for k,v in batch.items()
if k in tokenizer.model_input_names}
with torch.no_grad():
output = model(**inputs)
pred_label = torch.argmax(output.logits, axis=-1)
return {"predicted_label": pred_label.cpu().numpy()}
return fn
```
This is how `get_test_accuracy()` is used:
```py
emotions = load_dataset("emotion")
def tokenize(batch):
return tokenizer(batch["text"], padding=True, truncation=True)
emotions_encoded = emotions.map(tokenize, batched=True)
emotions_encoded.set_format("torch",
columns=["input_ids", "attention_mask", "label"])
new_dataset = emotions_encoded["validation"].map(
accuracy_fn, batched=True, batch_size=128
)
```
Complete code is available in the Colab Notebook provided below.
The `map()` process fails midway giving:
```shell
AttributeError Traceback (most recent call last)
<ipython-input-8-ad24ac288eb4> in <module>
2
3 new_dataset = emotions_encoded["validation"].map(
----> 4 accuracy_fn, batched=True, batch_size=128
5 )
7 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2588 new_fingerprint=new_fingerprint,
2589 disable_tqdm=disable_tqdm,
-> 2590 desc=desc,
2591 )
2592 else:
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
582 self: "Dataset" = kwargs.pop("self")
583 # apply actual function
--> 584 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
585 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
586 for dataset in datasets:
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
549 }
550 # apply actual function
--> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
553 # re-apply format to the output
/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
478 # Call actual function
479
--> 480 out = func(self, *args, **kwargs)
481
482 # Update fingerprint of in-place transforms + update in-place history of transforms
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2970 indices,
2971 check_same_num_examples=len(input_dataset.list_indexes()) > 0,
-> 2972 offset=offset,
2973 )
2974 except NumExamplesMismatchError:
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
2850 if with_rank:
2851 additional_args += (rank,)
-> 2852 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
2853 if update_data is None:
2854 # Check if the function returns updated examples
<ipython-input-6-4e0d280426f6> in fn(batch)
1 def get_test_accuracy(model):
2 def fn(batch):
----> 3 inputs = {k:v.to(device) for k,v in batch.items()
4 if k in tokenizer.model_input_names}
5 with torch.no_grad():
<ipython-input-6-4e0d280426f6> in <dictcomp>(.0)
2 def fn(batch):
3 inputs = {k:v.to(device) for k,v in batch.items()
----> 4 if k in tokenizer.model_input_names}
5 with torch.no_grad():
6 output = model(**inputs)
AttributeError: 'list' object has no attribute 'to'
```
As you'd notice in the notebook, the process fails _midway_ and not at the beginning.
Is this expected?
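For what it's worth, a defensive conversion along these lines might avoid the crash (a rough sketch with a hypothetical `to_tensor` helper, using `batch`, `device`, and `tokenizer` as in the snippet above; I'm assuming the offending batches hold per-example sequences of uneven length that could not be stacked into a single tensor):
```py
import torch

def to_tensor(v):
    # Some batches apparently come back as plain Python lists instead of
    # tensors (presumably when the examples have different lengths), so
    # stack/pad them before calling .to(device).
    if torch.is_tensor(v):
        return v
    # padding with 0 is fine for attention_mask; for input_ids you may
    # want padding_value=tokenizer.pad_token_id instead
    return torch.nn.utils.rnn.pad_sequence(
        [torch.as_tensor(x) for x in v], batch_first=True
    )

inputs = {k: to_tensor(v).to(device) for k, v in batch.items()
          if k in tokenizer.model_input_names}
```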
### Steps to reproduce the bug
Colab Notebook:
https://colab.research.google.com/gist/sayakpaul/d1570d537faf39040d02d77b1ed7de07/scratchpad.ipynb
### Expected behavior
The mapping process should complete as is. If you switch the `split` to `test` it works as expected.
### Environment info
Colab
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5179/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5179/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2331
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2331/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2331/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2331/events
|
https://github.com/huggingface/datasets/issues/2331
| 879,031,427
|
MDU6SXNzdWU4NzkwMzE0Mjc=
| 2,331
|
Add Topical-Chat
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22266659?v=4",
"events_url": "https://api.github.com/users/ktangri/events{/privacy}",
"followers_url": "https://api.github.com/users/ktangri/followers",
"following_url": "https://api.github.com/users/ktangri/following{/other_user}",
"gists_url": "https://api.github.com/users/ktangri/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ktangri",
"id": 22266659,
"login": "ktangri",
"node_id": "MDQ6VXNlcjIyMjY2NjU5",
"organizations_url": "https://api.github.com/users/ktangri/orgs",
"received_events_url": "https://api.github.com/users/ktangri/received_events",
"repos_url": "https://api.github.com/users/ktangri/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ktangri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ktangri/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ktangri"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
open
| false
| null |
[] | null |
[] | 2021-05-07T13:43:59Z
| 2021-05-07T13:43:59Z
| null |
NONE
| null | null | null |
## Adding a Dataset
- **Name:** Topical-Chat
- **Description:** a knowledge-grounded human-human conversation dataset where the underlying knowledge spans 8 broad topics and conversation partners don’t have explicitly defined roles
- **Paper:** https://www.isca-speech.org/archive/Interspeech_2019/pdfs/3079.pdf
- **Data:** https://github.com/alexa/Topical-Chat
- **Motivation:** Good quality, knowledge-grounded dataset that spans a broad range of topics
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2331/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2331/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/5294
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5294/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5294/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5294/events
|
https://github.com/huggingface/datasets/pull/5294
| 1,463,679,582
|
PR_kwDODunzps5DqgLW
| 5,294
|
Support streaming datasets with pathlib.Path.with_suffix
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-24T18:04:38Z
| 2022-11-29T07:09:08Z
| 2022-11-29T07:06:32Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5294.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5294",
"merged_at": "2022-11-29T07:06:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5294.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5294"
}
|
This PR extends streaming-mode support to datasets that use `pathlib.Path.with_suffix`.
Fix #5293.
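For context, streaming mode works by making `pathlib.Path`-style operations URL-aware. A minimal sketch of the idea (illustrative only — the function name and the `"::"` hop handling here are my assumptions, not the actual implementation):
```python
from pathlib import PurePosixPath

def url_with_suffix(url_or_path: str, suffix: str) -> str:
    # Pathlib-style with_suffix that also works on URLs, keeping
    # chained fsspec hops ("::") intact.
    main_hop, *rest_hops = url_or_path.split("::")
    if "://" in main_hop:
        protocol, _, path = main_hop.partition("://")
        main_hop = f"{protocol}://{PurePosixPath(path).with_suffix(suffix)}"
    else:
        main_hop = str(PurePosixPath(main_hop).with_suffix(suffix))
    return "::".join([main_hop] + rest_hops)

# url_with_suffix("zip://file.txt::https://host/archive.zip", ".json")
# -> "zip://file.json::https://host/archive.zip"
```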
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5294/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5294/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3842
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3842/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3842/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3842/events
|
https://github.com/huggingface/datasets/pull/3842
| 1,161,336,483
|
PR_kwDODunzps40CZvE
| 3,842
|
Align IterableDataset.shuffle with Dataset.shuffle
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3842). All of your documentation changes will be reflected on that endpoint.",
"We should also add `generator` as a param to `shuffle` to fully align the APIs, no?",
"I added the `generator` argument.\r\n\r\nI had to make a few other adjustments to make it work. In particular when you call `set_epoch()` on a streaming dataset, it updates the underlying random generator by using a new effective seed. The effective seed is generated using the previous generator and the epoch number."
] | 2022-03-07T12:10:46Z
| 2022-03-07T19:03:43Z
| 2022-03-07T19:03:42Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3842.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3842",
"merged_at": "2022-03-07T19:03:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3842.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3842"
}
|
From #3444, `Dataset.shuffle` can have the same API as `IterableDataset.shuffle` (i.e. in streaming mode).
Currently you can pass an optional seed to both if you want, but `IterableDataset.shuffle` always requires a `buffer_size`, used for approximate shuffling. I propose using a reasonable default value (maybe 1,000) instead.
In this PR, I set the default `buffer_size` value to 1,000, and I reorder the `IterableDataset.shuffle` arguments to match `Dataset.shuffle`, i.e. making `seed` the first argument.
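In practice the call sites line up like this (a small usage sketch; `c4` is just an example of a streamable dataset):
```python
from datasets import load_dataset

ds = load_dataset("c4", "en", split="train", streaming=True)

shuffled = ds.shuffle(seed=42)                    # uses the default buffer_size (1,000)
shuffled = ds.shuffle(seed=42, buffer_size=5000)  # or an explicit buffer
```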
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3842/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3842/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3894
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3894/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3894/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3894/events
|
https://github.com/huggingface/datasets/pull/3894
| 1,166,611,270
|
PR_kwDODunzps40TzXW
| 3,894
|
[docs] make dummy data creation optional
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3894). All of your documentation changes will be reflected on that endpoint.",
"The dev doc build rendering doesn't seem to be updated with my last commit for some reason",
"Merging it anyway since I'd like to share this page with users 🙃 "
] | 2022-03-11T16:21:34Z
| 2022-03-11T17:27:56Z
| 2022-03-11T17:27:55Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3894.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3894",
"merged_at": "2022-03-11T17:27:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3894.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3894"
}
|
Related to #3507: dummy data for datasets created on the Hugging Face Hub is optional.
We can discuss later whether to make it optional for datasets in this repository as well.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3894/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3894/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1537
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1537/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1537/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1537/events
|
https://github.com/huggingface/datasets/pull/1537
| 765,095,210
|
MDExOlB1bGxSZXF1ZXN0NTM4ODY1NzIz
| 1,537
|
added ohsumed
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9033954?v=4",
"events_url": "https://api.github.com/users/skyprince999/events{/privacy}",
"followers_url": "https://api.github.com/users/skyprince999/followers",
"following_url": "https://api.github.com/users/skyprince999/following{/other_user}",
"gists_url": "https://api.github.com/users/skyprince999/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/skyprince999",
"id": 9033954,
"login": "skyprince999",
"node_id": "MDQ6VXNlcjkwMzM5NTQ=",
"organizations_url": "https://api.github.com/users/skyprince999/orgs",
"received_events_url": "https://api.github.com/users/skyprince999/received_events",
"repos_url": "https://api.github.com/users/skyprince999/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/skyprince999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skyprince999/subscriptions",
"type": "User",
"url": "https://api.github.com/users/skyprince999"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-13T06:58:23Z
| 2020-12-17T18:28:16Z
| 2020-12-17T18:28:16Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1537.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1537",
"merged_at": "2020-12-17T18:28:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1537.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1537"
}
|
UPDATE 2: The PR passed all tests. Now waiting for review.
UPDATE: Pushed a new version. Fingers crossed that it completes all the tests! :)
If it passes all tests, then it's no longer a draft version.
This is a draft version.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1537/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1537/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2038
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2038/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2038/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2038/events
|
https://github.com/huggingface/datasets/issues/2038
| 830,036,875
|
MDU6SXNzdWU4MzAwMzY4NzU=
| 2,038
|
outdated dataset_infos.json might fail verifications
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/songfeng",
"id": 2062185,
"login": "songfeng",
"node_id": "MDQ6VXNlcjIwNjIxODU=",
"organizations_url": "https://api.github.com/users/songfeng/orgs",
"received_events_url": "https://api.github.com/users/songfeng/received_events",
"repos_url": "https://api.github.com/users/songfeng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songfeng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/songfeng"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! Thanks for reporting.\r\n\r\nTo update the dataset_infos.json you can run:\r\n```\r\ndatasets-cli test ./datasets/doc2dial --all_configs --save_infos --ignore_verifications\r\n```",
"Fixed by #2041, thanks again @songfeng !"
] | 2021-03-12T11:41:54Z
| 2021-03-16T16:27:40Z
| 2021-03-16T16:27:40Z
|
CONTRIBUTOR
| null | null | null |
The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It causes the data loader to fail when verifying download checksums, etc.
Could you please update this file, or point me to how to update it?
Thank you.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2038/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2038/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4752
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4752/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4752/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4752/events
|
https://github.com/huggingface/datasets/issues/4752
| 1,319,464,409
|
I_kwDODunzps5OpW3Z
| 4,752
|
DatasetInfo issue when testing multiple configs: mixed task_templates
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] | null |
[
"I've narrowed down the issue to the `dataset_module_factory` which already creates a `dataset_infos.json` file down in the `.cache/modules/dataset_modules/..` folder. That JSON file already contains the wrong task_templates for `unfiltered`.",
"Ugh. Found the issue: apparently `datasets` was reusing the already existing `dataset_infos.json` that is inside `datasets/datasets/hebban-reviews`! Is this desired behavior?\r\n\r\nPerhaps when `--save_infos` and `--all_configs` are given, an existing `dataset_infos.json` file should first be deleted before continuing with the test? Because that would assume that the user wants to create a new infos file for all configs anyway.",
"Hi! I think this is a reasonable solution. Would you be interested in submitting a PR?"
] | 2022-07-27T12:04:54Z
| 2022-08-08T18:20:50Z
| null |
CONTRIBUTOR
| null | null | null |
## Describe the bug
When running `datasets-cli test`, it seems that some config properties in a `DatasetInfo` get mangled, leading to issues, e.g. about the `ClassLabel`.
## Steps to reproduce the bug
In summary, what I want to do is create three configs:
- unfiltered: no classlabel, no tasks. Gets data from unfiltered.json.gz (I'd want this without splits, just one chunk of data, but that does not seem possible?)
- filtered_sentiment: `review_sentiment` as ClassLabel, TextClassification task with `review_sentiment` as label. Gets train/test split from respective json.gz files
- filtered_rating: `review_rating0` as ClassLabel, TextClassification task with `review_rating0` as label. Gets train/test split from respective json.gz files
This might be a bit tedious to reproduce, so I am sorry, but these are the steps:
- Clone datasets -> `datasets/` and install it
- Clone `https://huggingface.co/datasets/BramVanroy/hebban-reviews` into `datasets/datasets` so that you have a new folder `datasets/datasets/hebban-reviews/`.
- Replace the HebbanReviews class with this new one:
```python
class HebbanReviews(datasets.GeneratorBasedBuilder):
"""The Hebban book reviews dataset."""
BUILDER_CONFIGS = [
HebbanReviewsConfig(
name="unfiltered",
description=_HEBBAN_REVIEWS_UNFILTERED_DESCRIPTION,
version=datasets.Version(_HEBBAN_VERSION)
),
HebbanReviewsConfig(
name="filtered_sentiment",
description=f"This config has the negative, neutral, and positive sentiment scores as ClassLabel in the 'review_sentiment' column.\n{_HEBBAN_REVIEWS_FILTERED_DESCRIPTION}",
version=datasets.Version(_HEBBAN_VERSION)
),
HebbanReviewsConfig(
name="filtered_rating",
description=f"This config has the 5-class ratings as ClassLabel in the 'review_rating0' column (which is a variant of 'review_rating' that starts counting from 0 instead of 1).\n{_HEBBAN_REVIEWS_FILTERED_DESCRIPTION}",
version=datasets.Version(_HEBBAN_VERSION)
)
]
DEFAULT_CONFIG_NAME = "filtered_sentiment"
_URLS = {
"train": "train.jsonl.gz",
"test": "test.jsonl.gz",
"unfiltered": "unfiltered.jsonl.gz",
}
def _info(self):
features = {
"review_title": datasets.Value("string"),
"review_text": datasets.Value("string"),
"review_text_without_quotes": datasets.Value("string"),
"review_n_quotes": datasets.Value("int32"),
"review_n_tokens": datasets.Value("int32"),
"review_rating": datasets.Value("int32"),
"review_rating0": datasets.Value("int32"),
"review_author_url": datasets.Value("string"),
"review_author_type": datasets.Value("string"),
"review_n_likes": datasets.Value("int32"),
"review_n_comments": datasets.Value("int32"),
"review_url": datasets.Value("string"),
"review_published_date": datasets.Value("string"),
"review_crawl_date": datasets.Value("string"),
"lid": datasets.Value("string"),
"lid_probability": datasets.Value("float32"),
"review_sentiment": datasets.features.ClassLabel(names=["negative", "neutral", "positive"]),
"review_sentiment_label": datasets.Value("string"),
"book_id": datasets.Value("int32"),
}
if self.config.name == "filtered_sentiment":
task_templates = [datasets.TextClassification(text_column="review_text_without_quotes", label_column="review_sentiment")]
elif self.config.name == "filtered_rating":
# For CrossEntropy, our classes need to start at index 0 -- not 1
features["review_rating0"] = datasets.features.ClassLabel(names=["1", "2", "3", "4", "5"])
features["review_sentiment"] = datasets.Value("int32")
task_templates = [datasets.TextClassification(text_column="review_text_without_quotes", label_column="review_rating0")]
elif self.config.name == "unfiltered": # no ClassLabels in unfiltered
features["review_sentiment"] = datasets.Value("int32")
task_templates = None
else:
raise ValueError(f"Unsupported config {self.config.name}. Expected one of 'filtered_sentiment' (default),"
f" 'filtered_rating', or 'unfiltered'")
print("AT INFO", self.config.name, task_templates)
return datasets.DatasetInfo(
description=self.config.description,
features=datasets.Features(features),
homepage="https://huggingface.co/datasets/BramVanroy/hebban-reviews",
citation=_HEBBAN_REVIEWS_CITATION,
task_templates=task_templates,
license="cc-by-4.0"
)
def _split_generators(self, dl_manager):
if self.config.name.startswith("filtered"):
files = dl_manager.download_and_extract({"train": "train.jsonl.gz",
"test": "test.jsonl.gz"})
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
gen_kwargs={
"data_file": files["train"]
},
),
datasets.SplitGenerator(
name=datasets.Split.TEST,
gen_kwargs={
"data_file": files["test"]
},
),
]
elif self.config.name == "unfiltered":
files = dl_manager.download_and_extract({"train": "unfiltered.jsonl.gz"})
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
gen_kwargs={
"data_file": files["train"]
},
),
]
else:
raise ValueError(f"Unsupported config {self.config.name}. Expected one of 'filtered_sentiment' (default),"
f" 'filtered_rating', or 'unfiltered'")
def _generate_examples(self, data_file):
lines = Path(data_file).open(encoding="utf-8").readlines()
for line_idx, line in enumerate(lines):
row = json.loads(line)
yield line_idx, row
```
- finally, run `datasets-cli test ./datasets/hebban-reviews/ --save_infos --all_configs` from within the topmost `datasets` directory
## Expected results
Succeeding tests for three different configs.
## Actual results
I printed out the values that are given to `DatasetInfo` for config name and task_templates, as you can see. There, as expected, I get `unfiltered None`. I also modified datasets/info.py and added this line [at L.170](https://github.com/huggingface/datasets/blob/f5847a304aa1b38b3a3c54a8318b4df60f1299bc/src/datasets/info.py#L170):
```python
print("INTERNALLY AT INFO.PY", self.config_name, self.task_templates)
```
To my surprise, here I get `unfiltered [TextClassification(task='text-classification', text_column='review_text_without_quotes', label_column='review_sentiment')]`. So one way or another, `unfiltered` suddenly does have a task_template here -- even though that is not what is written in the data loading script, as the first print statement correctly shows.
I do not quite understand how, but it seems that the config name and task_templates get mixed up.
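One thing I would check (just a guess at this point) is whether a previously generated `dataset_infos.json` is being reused instead of regenerated per config, e.g. from the modules cache. A quick way to look, assuming the default cache location (the exact path may differ per setup):
```python
from pathlib import Path

# Hypothetical sanity check: list any cached dataset_infos.json files
# that datasets-cli might be picking up instead of regenerating them.
cache = Path.home() / ".cache" / "huggingface" / "modules" / "datasets_modules" / "datasets"
for info_file in cache.glob("**/dataset_infos.json"):
    print(info_file)
```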
The mixed task_templates ultimately lead to the following error, though the trace may not be very useful in itself:
```
Traceback (most recent call last):
File "C:\Users\bramv\.virtualenvs\hebban-U6poXNQd\Scripts\datasets-cli-script.py", line 33, in <module>
sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())
File "c:\dev\python\hebban\datasets\src\datasets\commands\datasets_cli.py", line 39, in main
service.run()
File "c:\dev\python\hebban\datasets\src\datasets\commands\test.py", line 144, in run
builder.as_dataset()
File "c:\dev\python\hebban\datasets\src\datasets\builder.py", line 899, in as_dataset
datasets = map_nested(
File "c:\dev\python\hebban\datasets\src\datasets\utils\py_utils.py", line 393, in map_nested
mapped = [
File "c:\dev\python\hebban\datasets\src\datasets\utils\py_utils.py", line 394, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "c:\dev\python\hebban\datasets\src\datasets\utils\py_utils.py", line 330, in _single_map_nested
return function(data_struct)
File "c:\dev\python\hebban\datasets\src\datasets\builder.py", line 930, in _build_single_dataset
ds = self._as_dataset(
File "c:\dev\python\hebban\datasets\src\datasets\builder.py", line 1006, in _as_dataset
return Dataset(fingerprint=fingerprint, **dataset_kwargs)
File "c:\dev\python\hebban\datasets\src\datasets\arrow_dataset.py", line 661, in __init__
info = info.copy() if info is not None else DatasetInfo()
File "c:\dev\python\hebban\datasets\src\datasets\info.py", line 286, in copy
return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
File "<string>", line 20, in __init__
File "c:\dev\python\hebban\datasets\src\datasets\info.py", line 176, in __post_init__
self.task_templates = [
File "c:\dev\python\hebban\datasets\src\datasets\info.py", line 177, in <listcomp>
template.align_with_features(self.features) for template in (self.task_templates)
File "c:\dev\python\hebban\datasets\src\datasets\tasks\text_classification.py", line 22, in align_with_features
raise ValueError(f"Column {self.label_column} is not a ClassLabel.")
ValueError: Column review_sentiment is not a ClassLabel.
```
## Environment info
- `datasets` version: 2.4.1.dev0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.8
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4752/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4752/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/305
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/305/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/305/comments
|
https://api.github.com/repos/huggingface/datasets/issues/305/events
|
https://github.com/huggingface/datasets/issues/305
| 644,148,149
|
MDU6SXNzdWU2NDQxNDgxNDk=
| 305
|
Importing downloaded package repository fails
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
}
|
[
{
"color": "25b21e",
"default": false,
"description": "A bug in a metric script",
"id": 2067393914,
"name": "metric bug",
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug"
}
] |
closed
| false
| null |
[] | null |
[] | 2020-06-23T21:09:05Z
| 2020-07-30T16:44:23Z
| 2020-07-30T16:44:23Z
|
MEMBER
| null | null | null |
The `get_imports` function in `src/nlp/load.py` has a feature to download a package as a zip archive of the github repository and import functions from the unpacked directory. This is used for example in the `metrics/coval.py` file, and would be useful to add BLEURT (@ankparikh).
Currently, however, the code seems to have trouble with imports within the package. For example:
```
import nlp
coval = nlp.load_metric('coval')
```
yields:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/yacine/Code/nlp/src/nlp/load.py", line 432, in load_metric
metric_cls = import_main_class(module_path, dataset=False)
File "/home/yacine/Code/nlp/src/nlp/load.py", line 57, in import_main_class
module = importlib.import_module(module_path)
File "/home/yacine/anaconda3/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/yacine/Code/nlp/src/nlp/metrics/coval/a78807df33ac45edbb71799caf2b3b47e55df4fd690267808fe963a5e8b30952/coval.py", line 21, in <module>
from .coval_backend.conll import reader # From: https://github.com/ns-moosavi/coval
File "/home/yacine/Code/nlp/src/nlp/metrics/coval/a78807df33ac45edbb71799caf2b3b47e55df4fd690267808fe963a5e8b30952/coval_backend/conll/reader.py", line 2, in <module>
from conll import mention
ModuleNotFoundError: No module named 'conll'
```
Not sure what the fix would be there.
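One possible direction (just a sketch — `package_dir` stands for the unpacked repository root, and the helper name is made up) would be to put the unpacked directory on `sys.path` before importing, since the repo's own modules use absolute imports like `from conll import mention`:
```
import importlib
import sys

def import_with_repo_root(module_path, package_dir):
    # The zipped repo expects to be imported from its own root
    # (e.g. `from conll import mention`), so expose that root while importing.
    sys.path.insert(0, package_dir)
    try:
        return importlib.import_module(module_path)
    finally:
        sys.path.remove(package_dir)
```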
|
{
"+1": 0,
"-1": 0,
"confused": 1,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/305/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/305/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5501
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5501/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5501/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5501/events
|
https://github.com/huggingface/datasets/pull/5501
| 1,569,644,159
|
PR_kwDODunzps5JMTn8
| 5,501
|
Increase chunk size for speeding up file downloads
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Narsil",
"id": 204321,
"login": "Narsil",
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"repos_url": "https://api.github.com/users/Narsil/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Narsil"
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5501). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008407 / 0.011353 (-0.002946) | 0.004651 / 0.011008 (-0.006357) | 0.100367 / 0.038508 (0.061859) | 0.029107 / 0.023109 (0.005998) | 0.302798 / 0.275898 (0.026900) | 0.354379 / 0.323480 (0.030899) | 0.006985 / 0.007986 (-0.001001) | 0.003365 / 0.004328 (-0.000963) | 0.078312 / 0.004250 (0.074062) | 0.034205 / 0.037052 (-0.002847) | 0.310431 / 0.258489 (0.051941) | 0.346239 / 0.293841 (0.052398) | 0.033800 / 0.128546 (-0.094747) | 0.011515 / 0.075646 (-0.064131) | 0.323588 / 0.419271 (-0.095684) | 0.040766 / 0.043533 (-0.002767) | 0.300914 / 0.255139 (0.045775) | 0.332983 / 0.283200 (0.049784) | 0.087500 / 0.141683 (-0.054182) | 1.469505 / 1.452155 (0.017350) | 1.505119 / 1.492716 (0.012403) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.187319 / 0.018006 (0.169313) | 0.405498 / 0.000490 (0.405008) | 0.001000 / 0.000200 (0.000800) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022583 / 0.037411 (-0.014828) | 0.098096 / 0.014526 (0.083570) | 0.104272 / 0.176557 (-0.072284) | 0.142801 / 0.737135 (-0.594335) | 0.109749 / 0.296338 (-0.186590) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423343 / 0.215209 (0.208134) | 4.215116 / 2.077655 (2.137461) | 1.899714 / 1.504120 (0.395594) | 1.689579 / 1.541195 (0.148384) | 1.710292 / 1.468490 
(0.241801) | 0.690976 / 4.584777 (-3.893801) | 3.432501 / 3.745712 (-0.313212) | 1.899600 / 5.269862 (-3.370261) | 1.279801 / 4.565676 (-3.285876) | 0.082763 / 0.424275 (-0.341512) | 0.012545 / 0.007607 (0.004938) | 0.531381 / 0.226044 (0.305336) | 5.320077 / 2.268929 (3.051148) | 2.370705 / 55.444624 (-53.073919) | 2.007089 / 6.876477 (-4.869388) | 2.062412 / 2.142072 (-0.079661) | 0.814998 / 4.805227 (-3.990229) | 0.149822 / 6.500664 (-6.350842) | 0.064399 / 0.075469 (-0.011070) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.226196 / 1.841788 (-0.615591) | 13.823443 / 8.074308 (5.749134) | 13.813667 / 10.191392 (3.622275) | 0.161289 / 0.680424 (-0.519135) | 0.028569 / 0.534201 (-0.505632) | 0.390360 / 0.579283 (-0.188923) | 0.396217 / 0.434364 (-0.038147) | 0.483120 / 0.540337 (-0.057217) | 0.570041 / 1.386936 (-0.816895) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006422 / 0.011353 (-0.004931) | 0.004528 / 0.011008 (-0.006481) | 0.076043 / 0.038508 (0.037535) | 0.027631 / 0.023109 (0.004522) | 0.340622 / 0.275898 (0.064724) | 0.376694 / 0.323480 (0.053214) | 0.004993 / 0.007986 (-0.002992) | 0.003403 / 0.004328 (-0.000926) | 0.074521 / 0.004250 (0.070270) | 0.037568 / 0.037052 (0.000516) | 0.343423 / 0.258489 (0.084934) | 0.387729 / 0.293841 (0.093888) | 0.031790 / 0.128546 (-0.096757) | 0.011767 / 0.075646 (-0.063879) | 0.085182 / 0.419271 (-0.334090) | 0.042867 / 0.043533 (-0.000666) | 0.341269 / 0.255139 (0.086130) | 0.368460 / 0.283200 (0.085261) | 0.090153 / 0.141683 (-0.051530) | 1.536490 / 1.452155 (0.084335) | 1.596403 / 1.492716 (0.103686) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222373 / 0.018006 (0.204367) | 0.396145 / 0.000490 (0.395655) | 0.000384 / 0.000200 (0.000184) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024801 / 0.037411 (-0.012610) | 0.099711 / 0.014526 (0.085185) | 0.106094 / 0.176557 (-0.070463) | 0.147819 / 0.737135 (-0.589316) | 0.110065 / 0.296338 (-0.186274) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442863 / 0.215209 (0.227654) | 4.420043 / 2.077655 (2.342388) | 2.070136 / 1.504120 (0.566016) | 1.862363 / 1.541195 (0.321168) | 1.910890 / 1.468490 (0.442400) | 0.702570 / 4.584777 (-3.882207) | 3.435855 / 3.745712 (-0.309857) | 1.871290 / 5.269862 (-3.398572) | 1.169321 / 4.565676 (-3.396355) | 0.083674 / 0.424275 (-0.340601) | 0.012823 / 0.007607 (0.005216) | 0.539330 / 0.226044 (0.313285) | 5.403317 / 2.268929 (3.134389) | 2.536508 / 55.444624 (-52.908117) | 2.179629 / 6.876477 (-4.696847) | 2.207586 / 2.142072 (0.065514) | 0.812256 / 4.805227 (-3.992972) | 0.152915 / 6.500664 (-6.347749) | 0.068431 / 0.075469 (-0.007038) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.294982 / 1.841788 (-0.546806) | 13.912811 / 8.074308 (5.838503) | 13.415658 / 10.191392 (3.224266) | 0.149531 / 0.680424 (-0.530893) | 0.016785 / 0.534201 (-0.517416) | 0.381055 / 0.579283 (-0.198228) | 0.392084 / 0.434364 (-0.042280) | 0.472614 / 0.540337 (-0.067724) | 0.559799 / 1.386936 (-0.827137) |\n\n</details>\n</details>\n\n\n",
"We simply do GET requests to hf.co to download files from the Hub right now. We may switch to hfh when we update how we do caching \r\n\r\nYou can try on any dataset hosted on the hub like `imagenet-1k`",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010931 / 0.011353 (-0.000422) | 0.005730 / 0.011008 (-0.005278) | 0.116653 / 0.038508 (0.078145) | 0.041439 / 0.023109 (0.018330) | 0.359559 / 0.275898 (0.083661) | 0.408398 / 0.323480 (0.084918) | 0.009193 / 0.007986 (0.001208) | 0.006024 / 0.004328 (0.001695) | 0.087743 / 0.004250 (0.083492) | 0.048636 / 0.037052 (0.011584) | 0.363133 / 0.258489 (0.104643) | 0.407144 / 0.293841 (0.113303) | 0.044610 / 0.128546 (-0.083936) | 0.014075 / 0.075646 (-0.061571) | 0.396506 / 0.419271 (-0.022766) | 0.057014 / 0.043533 (0.013482) | 0.358254 / 0.255139 (0.103115) | 0.399887 / 0.283200 (0.116687) | 0.115337 / 0.141683 (-0.026346) | 1.731655 / 1.452155 (0.279500) | 1.813276 / 1.492716 (0.320560) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210197 / 0.018006 (0.192191) | 0.475887 / 0.000490 (0.475397) | 0.003323 / 0.000200 (0.003123) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031686 / 0.037411 (-0.005725) | 0.131167 / 0.014526 (0.116641) | 0.137919 / 0.176557 (-0.038637) | 0.184843 / 0.737135 (-0.552293) | 0.144998 / 0.296338 (-0.151340) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471371 / 0.215209 (0.256162) | 4.693739 / 2.077655 (2.616084) | 2.251567 / 1.504120 (0.747447) | 1.993653 / 1.541195 (0.452458) | 2.053236 / 1.468490 
(0.584746) | 0.809226 / 4.584777 (-3.775551) | 4.494120 / 3.745712 (0.748408) | 2.436921 / 5.269862 (-2.832940) | 1.541973 / 4.565676 (-3.023704) | 0.098401 / 0.424275 (-0.325874) | 0.014329 / 0.007607 (0.006722) | 0.597813 / 0.226044 (0.371769) | 5.964035 / 2.268929 (3.695107) | 2.709283 / 55.444624 (-52.735341) | 2.323537 / 6.876477 (-4.552940) | 2.401707 / 2.142072 (0.259635) | 0.976379 / 4.805227 (-3.828848) | 0.194638 / 6.500664 (-6.306026) | 0.076904 / 0.075469 (0.001435) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.516877 / 1.841788 (-0.324911) | 18.228010 / 8.074308 (10.153702) | 16.631750 / 10.191392 (6.440358) | 0.176030 / 0.680424 (-0.504394) | 0.033769 / 0.534201 (-0.500432) | 0.520511 / 0.579283 (-0.058773) | 0.531764 / 0.434364 (0.097400) | 0.648658 / 0.540337 (0.108321) | 0.779124 / 1.386936 (-0.607812) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008635 / 0.011353 (-0.002718) | 0.005785 / 0.011008 (-0.005223) | 0.087042 / 0.038508 (0.048534) | 0.039632 / 0.023109 (0.016523) | 0.419719 / 0.275898 (0.143821) | 0.463860 / 0.323480 (0.140380) | 0.006621 / 0.007986 (-0.001364) | 0.004655 / 0.004328 (0.000327) | 0.087003 / 0.004250 (0.082753) | 0.057122 / 0.037052 (0.020069) | 0.417820 / 0.258489 (0.159331) | 0.485981 / 0.293841 (0.192140) | 0.042606 / 0.128546 (-0.085940) | 0.014369 / 0.075646 (-0.061278) | 0.101939 / 0.419271 (-0.317333) | 0.058303 / 0.043533 (0.014770) | 0.415053 / 0.255139 (0.159914) | 0.439914 / 0.283200 (0.156714) | 0.134628 / 0.141683 (-0.007055) | 1.765464 / 1.452155 (0.313309) | 1.843963 / 1.492716 (0.351247) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.307156 / 0.018006 (0.289150) | 0.476657 / 0.000490 (0.476167) | 0.019718 / 0.000200 (0.019518) | 0.000160 / 0.000054 (0.000105) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035286 / 0.037411 (-0.002125) | 0.138094 / 0.014526 (0.123568) | 0.144768 / 0.176557 (-0.031789) | 0.191386 / 0.737135 (-0.545750) | 0.151988 / 0.296338 (-0.144350) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.504733 / 0.215209 (0.289523) | 5.027048 / 2.077655 (2.949394) | 2.441571 / 1.504120 (0.937451) | 2.198242 / 1.541195 (0.657047) | 2.298473 / 1.468490 (0.829983) | 0.848048 / 4.584777 (-3.736729) | 4.613102 / 3.745712 (0.867390) | 2.522824 / 5.269862 (-2.747037) | 1.610159 / 4.565676 (-2.955517) | 0.105197 / 0.424275 (-0.319078) | 0.015195 / 0.007607 (0.007588) | 0.626976 / 0.226044 (0.400932) | 6.268459 / 2.268929 (3.999530) | 3.014387 / 55.444624 (-52.430237) | 2.554102 / 6.876477 (-4.322375) | 2.656051 / 2.142072 (0.513979) | 1.027978 / 4.805227 (-3.777249) | 0.200686 / 6.500664 (-6.299978) | 0.077104 / 0.075469 (0.001635) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.485228 / 1.841788 (-0.356560) | 18.319949 / 8.074308 (10.245641) | 15.855739 / 10.191392 (5.664347) | 0.204365 / 0.680424 (-0.476059) | 0.023824 / 0.534201 (-0.510377) | 0.505000 / 0.579283 (-0.074283) | 0.502866 / 0.434364 (0.068502) | 0.629574 / 0.540337 (0.089237) | 0.746602 / 1.386936 (-0.640334) |\n\n</details>\n</details>\n\n\n"
] | 2023-02-03T10:50:10Z
| 2023-02-09T11:04:11Z
| null |
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5501.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5501",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5501.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5501"
}
|
Original fix: https://github.com/huggingface/huggingface_hub/pull/1267
Not sure this function is actually still called, though.
I haven't run benchmarks on this. Is there a dataset whose files are hosted on the Hub through CloudFront, so we can have the same setup as in `hf_hub`?
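For reference, the idea is just to stream downloads in much larger chunks. A minimal sketch (the URL is hypothetical, and the 10 MiB constant mirrors the value in the linked `huggingface_hub` PR, if I read it correctly):
```python
import requests

DOWNLOAD_CHUNK_SIZE = 10 * 1024 * 1024  # 10 MiB instead of a ~KiB-sized default

url = "https://huggingface.co/datasets/some-dataset/resolve/main/data.bin"  # hypothetical
with requests.get(url, stream=True) as response, open("data.bin", "wb") as f:
    for chunk in response.iter_content(chunk_size=DOWNLOAD_CHUNK_SIZE):
        f.write(chunk)
```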
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5501/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5501/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2003
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2003/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2003/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2003/events
|
https://github.com/huggingface/datasets/issues/2003
| 824,034,678
|
MDU6SXNzdWU4MjQwMzQ2Nzg=
| 2,003
|
Messages are being printed to the `stdout`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1367529?v=4",
"events_url": "https://api.github.com/users/mahnerak/events{/privacy}",
"followers_url": "https://api.github.com/users/mahnerak/followers",
"following_url": "https://api.github.com/users/mahnerak/following{/other_user}",
"gists_url": "https://api.github.com/users/mahnerak/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mahnerak",
"id": 1367529,
"login": "mahnerak",
"node_id": "MDQ6VXNlcjEzNjc1Mjk=",
"organizations_url": "https://api.github.com/users/mahnerak/orgs",
"received_events_url": "https://api.github.com/users/mahnerak/received_events",
"repos_url": "https://api.github.com/users/mahnerak/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mahnerak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mahnerak/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mahnerak"
}
|
[] |
closed
| false
| null |
[] | null |
[
"This is expected to show this message to the user via stdout.\r\nThis way the users see it directly and can cancel the downloading if they want to.\r\nCould you elaborate why it would be better to have it in stderr instead of stdout ?",
"@lhoestq, sorry for the late reply\r\n\r\nI completely understand why you decided to output a message that is always shown. The only problem is that the message is printed to the `stdout`. For example, if the user runs `python run_glue.py > log_file`, it will redirect `stdout` to the file named `log_file`, and the message will not be shown to the user.\r\n\r\nInstead, we should print this message to `stderr`. Even in the case of `python run_glue.py > log_file` only `stdout` is being redirected and so the message is always shown.",
"We now log these messages to `stderr` (the built-in `logging` module's default behavior). "
] | 2021-03-07T22:09:34Z
| 2023-07-25T16:35:21Z
| 2023-07-25T16:35:21Z
|
NONE
| null | null | null |
In this code segment, we can see some messages are being printed to the `stdout`.
https://github.com/huggingface/datasets/blob/7e60bb509b595e8edc60a87f32b2bacfc065d607/src/datasets/builder.py#L545-L554
According to the comment, it is done intentionally, but I don't really understand why we don't log it at a higher level or print it directly to `stderr`.
In my opinion, this kind of message should never be printed to stdout. At the very least, some configuration option or flag should make it possible to explicitly prevent the package from contaminating stdout.
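For illustration, the standard library already makes the suggested behavior easy: `logging.StreamHandler()` with no argument writes to `sys.stderr`, so the message would survive a redirected stdout (a sketch; the logger name and message are illustrative):
```python
import logging

logger = logging.getLogger("datasets.builder")
logger.addHandler(logging.StreamHandler())  # no stream argument -> sys.stderr
logger.warning("Downloading and preparing dataset ...")
# still visible after `python run_glue.py > log_file`
```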
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2003/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2003/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3987
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3987/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3987/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3987/events
|
https://github.com/huggingface/datasets/pull/3987
| 1,176,481,659
|
PR_kwDODunzps40zAxF
| 3,987
|
Fix Faiss custom_index device
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-22T09:11:24Z
| 2022-03-24T12:18:59Z
| 2022-03-24T12:14:12Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3987.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3987",
"merged_at": "2022-03-24T12:14:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3987.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3987"
}
|
Currently, if both `custom_index` and `device` are passed to `FaissIndex`, `device` is silently ignored.
This PR fixes that by raising a `ValueError` if both arguments are passed.
Alternatively, the `custom_index` could be transferred to the target `device`.
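A minimal sketch of the check described above; the function name and signature here are illustrative assumptions, not the exact `datasets` API:
```python
# Illustrative only: `add_faiss_index` is a stand-in, not the real signature.
def add_faiss_index(column, custom_index=None, device=None):
    if custom_index is not None and device is not None:
        raise ValueError(
            "Pass either `custom_index` (already placed on the desired device) "
            "or `device`, but not both."
        )
    # ... build the default index on `device`, or use `custom_index` as-is ...
```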
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3987/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3987/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/320
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/320/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/320/comments
|
https://api.github.com/repos/huggingface/datasets/issues/320/events
|
https://github.com/huggingface/datasets/issues/320
| 647,188,167
|
MDU6SXNzdWU2NDcxODgxNjc=
| 320
|
Blog Authorship Corpus, Non Matching Splits Sizes Error, nlp viewer
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
}
|
[
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] |
closed
| false
| null |
[] | null |
[
"I wonder if this means downloading failed? That corpus has a really slow server.",
"This dataset seems to have a decoding problem that results in inconsistencies in the number of generated examples.\r\nSee #215.\r\nThat's why we end up with a `NonMatchingSplitsSizesError `."
] | 2020-06-29T07:36:35Z
| 2020-06-29T14:44:42Z
| 2020-06-29T14:44:42Z
|
CONTRIBUTOR
| null | null | null |
Selecting `blog_authorship_corpus` in the nlp viewer throws the following error:
```
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation', num_bytes=32553710, num_examples=28521, dataset_name='blog_authorship_corpus')}]
Traceback:
File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 172, in <module>
dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None)
File "/home/sasha/streamlit/lib/streamlit/caching.py", line 591, in wrapped_func
return get_or_create_cached_value()
File "/home/sasha/streamlit/lib/streamlit/caching.py", line 575, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "/home/sasha/nlp-viewer/run.py", line 132, in get
builder_instance.download_and_prepare()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 488, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 70, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
```
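For context, a hedged sketch of the verification step that raises here (an approximation of the logic in `nlp/utils/info_utils.py`, not the exact code):
```python
# Approximate sketch: compare the expected split infos recorded in the dataset
# metadata against the split infos produced by the current download/generation.
class NonMatchingSplitsSizesError(Exception):
    pass

def verify_splits(expected_splits, recorded_splits):
    bad_splits = [
        {"expected": expected_splits[name], "recorded": recorded_splits[name]}
        for name in expected_splits
        if expected_splits[name] != recorded_splits[name]
    ]
    if bad_splits:
        raise NonMatchingSplitsSizesError(str(bad_splits))
```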
@srush @lhoestq
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/320/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/320/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/690
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/690/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/690/comments
|
https://api.github.com/repos/huggingface/datasets/issues/690/events
|
https://github.com/huggingface/datasets/issues/690
| 712,150,321
|
MDU6SXNzdWU3MTIxNTAzMjE=
| 690
|
XNLI dataset: NonMatchingChecksumError
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13307358?v=4",
"events_url": "https://api.github.com/users/xiey1/events{/privacy}",
"followers_url": "https://api.github.com/users/xiey1/followers",
"following_url": "https://api.github.com/users/xiey1/following{/other_user}",
"gists_url": "https://api.github.com/users/xiey1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xiey1",
"id": 13307358,
"login": "xiey1",
"node_id": "MDQ6VXNlcjEzMzA3MzU4",
"organizations_url": "https://api.github.com/users/xiey1/orgs",
"received_events_url": "https://api.github.com/users/xiey1/received_events",
"repos_url": "https://api.github.com/users/xiey1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xiey1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiey1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xiey1"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting.\r\nThe data file must have been updated by the host.\r\nI'll update the checksum with the new one.",
"Well actually it looks like the link isn't working anymore :(",
"The new link is https://cims.nyu.edu/~sbowman/xnli/XNLI-1.0.zip\r\nI'll update the dataset script",
"I'll do a release in the next few days to make the fix available for everyone.\r\nIn the meantime you can load `xnli` with\r\n```\r\nxnli = load_dataset('xnli', script_version=\"master\")\r\n```\r\nThis will use the latest version of the xnli script (available on master branch), instead of the old one.",
"That's awesome! Thanks a lot!"
] | 2020-09-30T17:50:03Z
| 2020-10-01T17:15:08Z
| 2020-10-01T14:01:14Z
|
NONE
| null | null | null |
Hi,
I tried to download "xnli" dataset in colab using
`xnli = load_dataset(path='xnli')`
but got a 'NonMatchingChecksumError'
`NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-27-a87bedc82eeb> in <module>()
----> 1 xnli = load_dataset(path='xnli')
3 frames
/usr/local/lib/python3.6/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
37 if len(bad_urls) > 0:
38 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls))
40 logger.info("All the checksums matched successfully" + for_verification_name)
41
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']`
The same code worked well several days ago in Colab but has stopped working now. Thanks!
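A possible stopgap sketch (assumption: the host file changed, so the recorded checksum is stale; the proper fix, per the comments, is the updated script on `master`). `ignore_verifications` was the checksum-skipping flag in the `datasets` 1.x API used here:
```python
from datasets import load_dataset

# Skip the stale checksum check while waiting for the fixed script/release.
xnli = load_dataset("xnli", ignore_verifications=True)
```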
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/690/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/690/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2615
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2615/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2615/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2615/events
|
https://github.com/huggingface/datasets/issues/2615
| 940,794,339
|
MDU6SXNzdWU5NDA3OTQzMzk=
| 2,615
|
Jsonlines export error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TevenLeScao",
"id": 26709476,
"login": "TevenLeScao",
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TevenLeScao"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null |
[
"Thanks for reporting @TevenLeScao! I'm having a look...",
"(not sure what just happened on the assignations sorry)",
"For some reason this happens (both `datasets` version are on master) only on Python 3.6 and not Python 3.8.",
"@TevenLeScao we are using `pandas` to serialize the dataset to JSON Lines. So it must be due to pandas. Could you please check the pandas version causing the issue?",
"@TevenLeScao I have just checked it: this was a bug in `pandas` and it was fixed in version 1.2: https://github.com/pandas-dev/pandas/pull/36898",
"Thanks ! I'm creating a PR",
"Well I though it was me who has taken on this issue... 😅 ",
"Sorry, I was also talking to teven offline so I already had the PR ready before noticing x)",
"I was also already working in my PR... Nevermind. Next time we should pay attention if there is somebody (self-)assigned to an issue and if he/she is still working on it before overtaking it... 😄 ",
"The fix is available on `master` @TevenLeScao , thanks for reporting"
] | 2021-07-09T14:02:05Z
| 2021-07-09T15:29:07Z
| 2021-07-09T15:28:33Z
|
CONTRIBUTOR
| null | null | null |
## Describe the bug
When exporting large datasets in jsonlines (c4 in my case), the created file has an error every 9999 lines: the 9999th and 10000th lines are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is done in batches of 10000 by default.
## Steps to reproduce the bug
This is what I'm running:
in python:
```
from datasets import load_dataset
ptb = load_dataset("ptb_text_only")
ptb["train"].to_json("ptb.jsonl")
```
then out of python:
```
head -10000 ptb.jsonl
```
## Expected results
Properly separated lines
## Actual results
The last line is a concatenation of two lines
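For illustration (not part of the original report), a minimal check that every exported line parses as exactly one JSON object; with the bug above, the concatenated line fails with `JSONDecodeError: Extra data`:
```python
import json

# Scan the exported file and flag any line that is not a single JSON object.
with open("ptb.jsonl", encoding="utf-8") as f:
    for lineno, line in enumerate(f, start=1):
        try:
            json.loads(line)
        except json.JSONDecodeError as e:
            print(f"Line {lineno} is malformed: {e}")
```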
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.1.dev0
- Platform: Linux-5.4.0-1046-gcp-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyArrow version: 4.0.1
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2615/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2615/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6180
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6180/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6180/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6180/events
|
https://github.com/huggingface/datasets/pull/6180
| 1,867,032,578
|
PR_kwDODunzps5Yy1r-
| 6,180
|
Use `hf-internal-testing` repos for hosting test dataset repos
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006505 / 0.011353 (-0.004847) | 0.003950 / 0.011008 (-0.007058) | 0.084554 / 0.038508 (0.046046) | 0.074376 / 0.023109 (0.051267) | 0.350184 / 0.275898 (0.074286) | 0.380704 / 0.323480 (0.057224) | 0.004011 / 0.007986 (-0.003975) | 0.003890 / 0.004328 (-0.000438) | 0.065483 / 0.004250 (0.061232) | 0.054912 / 0.037052 (0.017860) | 0.359586 / 0.258489 (0.101097) | 0.403360 / 0.293841 (0.109519) | 0.030614 / 0.128546 (-0.097932) | 0.008530 / 0.075646 (-0.067117) | 0.288220 / 0.419271 (-0.131052) | 0.052270 / 0.043533 (0.008737) | 0.352557 / 0.255139 (0.097418) | 0.380509 / 0.283200 (0.097309) | 0.025513 / 0.141683 (-0.116170) | 1.488469 / 1.452155 (0.036315) | 1.559182 / 1.492716 (0.066466) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266163 / 0.018006 (0.248157) | 0.596345 / 0.000490 (0.595855) | 0.004368 / 0.000200 (0.004168) | 0.000211 / 0.000054 (0.000156) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027137 / 0.037411 (-0.010274) | 0.082251 / 0.014526 (0.067725) | 0.094745 / 0.176557 (-0.081812) | 0.148756 / 0.737135 (-0.588379) | 0.094580 / 0.296338 (-0.201758) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.383506 / 0.215209 (0.168297) | 3.823147 / 2.077655 (1.745493) | 1.859627 / 1.504120 (0.355507) | 1.687969 / 1.541195 (0.146775) | 1.720786 / 1.468490 
(0.252296) | 0.476552 / 4.584777 (-4.108225) | 3.539558 / 3.745712 (-0.206154) | 3.209032 / 5.269862 (-2.060830) | 1.999643 / 4.565676 (-2.566034) | 0.056484 / 0.424275 (-0.367791) | 0.007443 / 0.007607 (-0.000164) | 0.456089 / 0.226044 (0.230044) | 4.562522 / 2.268929 (2.293593) | 2.348286 / 55.444624 (-53.096338) | 1.984323 / 6.876477 (-4.892154) | 2.148988 / 2.142072 (0.006915) | 0.570761 / 4.805227 (-4.234466) | 0.131439 / 6.500664 (-6.369225) | 0.059752 / 0.075469 (-0.015717) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.276803 / 1.841788 (-0.564985) | 19.406812 / 8.074308 (11.332504) | 13.979088 / 10.191392 (3.787696) | 0.157418 / 0.680424 (-0.523006) | 0.018051 / 0.534201 (-0.516150) | 0.392307 / 0.579283 (-0.186976) | 0.406603 / 0.434364 (-0.027760) | 0.458450 / 0.540337 (-0.081888) | 0.622569 / 1.386936 (-0.764367) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006552 / 0.011353 (-0.004800) | 0.004060 / 0.011008 (-0.006948) | 0.063522 / 0.038508 (0.025014) | 0.072537 / 0.023109 (0.049428) | 0.398452 / 0.275898 (0.122554) | 0.422059 / 0.323480 (0.098579) | 0.005577 / 0.007986 (-0.002409) | 0.003413 / 0.004328 (-0.000916) | 0.064095 / 0.004250 (0.059845) | 0.056883 / 0.037052 (0.019831) | 0.407119 / 0.258489 (0.148630) | 0.435889 / 0.293841 (0.142048) | 0.031549 / 0.128546 (-0.096998) | 0.008418 / 0.075646 (-0.067228) | 0.070315 / 0.419271 (-0.348957) | 0.047828 / 0.043533 (0.004295) | 0.398705 / 0.255139 (0.143566) | 0.416986 / 0.283200 (0.133786) | 0.022304 / 0.141683 (-0.119379) | 1.512597 / 1.452155 (0.060442) | 1.570588 / 1.492716 (0.077871) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.295100 / 0.018006 (0.277094) | 0.541883 / 0.000490 (0.541393) | 0.007375 / 0.000200 (0.007175) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030877 / 0.037411 (-0.006534) | 0.090807 / 0.014526 (0.076281) | 0.106155 / 0.176557 (-0.070402) | 0.155546 / 0.737135 (-0.581589) | 0.103847 / 0.296338 (-0.192492) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441176 / 0.215209 (0.225967) | 4.401025 / 2.077655 (2.323371) | 2.394764 / 1.504120 (0.890644) | 2.226434 / 1.541195 (0.685239) | 2.247248 / 1.468490 (0.778758) | 0.489149 / 4.584777 (-4.095628) | 3.642468 / 3.745712 (-0.103244) | 3.235597 / 5.269862 (-2.034265) | 1.992660 / 4.565676 (-2.573016) | 0.057457 / 0.424275 (-0.366818) | 0.007192 / 0.007607 (-0.000415) | 0.515978 / 0.226044 (0.289934) | 5.147728 / 2.268929 (2.878800) | 2.837394 / 55.444624 (-52.607230) | 2.505753 / 6.876477 (-4.370723) | 2.653090 / 2.142072 (0.511018) | 0.583274 / 4.805227 (-4.221954) | 0.132116 / 6.500664 (-6.368548) | 0.058794 / 0.075469 (-0.016675) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.331630 / 1.841788 (-0.510158) | 20.056890 / 8.074308 (11.982582) | 14.950561 / 10.191392 (4.759169) | 0.165449 / 0.680424 (-0.514975) | 0.020161 / 0.534201 (-0.514040) | 0.395791 / 0.579283 (-0.183492) | 0.415631 / 0.434364 (-0.018733) | 0.474440 / 0.540337 (-0.065898) | 0.643060 / 1.386936 (-0.743876) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007440 / 0.011353 (-0.003913) | 0.004456 / 0.011008 (-0.006552) | 0.099498 / 0.038508 (0.060990) | 0.077579 / 0.023109 (0.054470) | 0.374934 / 0.275898 (0.099036) | 0.409590 / 0.323480 (0.086110) | 0.005876 / 0.007986 (-0.002110) | 0.003642 / 0.004328 (-0.000687) | 0.076781 / 0.004250 (0.072531) | 0.060185 / 0.037052 (0.023133) | 0.374762 / 0.258489 (0.116273) | 0.445608 / 0.293841 (0.151767) | 0.036557 / 0.128546 (-0.091990) | 0.009941 / 0.075646 (-0.065706) | 0.345214 / 0.419271 (-0.074058) | 0.061912 / 0.043533 (0.018379) | 0.378346 / 0.255139 (0.123207) | 0.415275 / 0.283200 (0.132076) | 0.027396 / 0.141683 (-0.114287) | 1.776602 / 1.452155 (0.324447) | 1.827683 / 1.492716 (0.334967) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235227 / 0.018006 (0.217221) | 0.491846 / 0.000490 (0.491356) | 0.004987 / 0.000200 (0.004787) | 0.000127 / 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032517 / 0.037411 (-0.004894) | 0.099217 / 0.014526 (0.084691) | 0.109749 / 0.176557 (-0.066807) | 0.176190 / 0.737135 (-0.560946) | 0.109868 / 0.296338 (-0.186471) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.455188 / 0.215209 (0.239979) | 4.560143 / 2.077655 (2.482489) | 2.249928 / 1.504120 (0.745809) | 2.032808 / 1.541195 (0.491614) | 2.090096 / 1.468490 
(0.621605) | 0.567813 / 4.584777 (-4.016964) | 4.338299 / 3.745712 (0.592587) | 3.701589 / 5.269862 (-1.568273) | 2.404805 / 4.565676 (-2.160871) | 0.067931 / 0.424275 (-0.356344) | 0.009011 / 0.007607 (0.001404) | 0.542565 / 0.226044 (0.316521) | 5.406578 / 2.268929 (3.137650) | 2.773508 / 55.444624 (-52.671116) | 2.402926 / 6.876477 (-4.473550) | 2.679318 / 2.142072 (0.537246) | 0.683781 / 4.805227 (-4.121446) | 0.155970 / 6.500664 (-6.344694) | 0.070108 / 0.075469 (-0.005361) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.541583 / 1.841788 (-0.300205) | 21.592562 / 8.074308 (13.518254) | 16.426868 / 10.191392 (6.235476) | 0.168618 / 0.680424 (-0.511806) | 0.021560 / 0.534201 (-0.512641) | 0.467062 / 0.579283 (-0.112221) | 0.479968 / 0.434364 (0.045604) | 0.540747 / 0.540337 (0.000410) | 0.775502 / 1.386936 (-0.611434) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008632 / 0.011353 (-0.002721) | 0.004523 / 0.011008 (-0.006485) | 0.075814 / 0.038508 (0.037306) | 0.087096 / 0.023109 (0.063987) | 0.482136 / 0.275898 (0.206238) | 0.529969 / 0.323480 (0.206489) | 0.006882 / 0.007986 (-0.001103) | 0.003720 / 0.004328 (-0.000609) | 0.076232 / 0.004250 (0.071981) | 0.069307 / 0.037052 (0.032254) | 0.491554 / 0.258489 (0.233065) | 0.528989 / 0.293841 (0.235148) | 0.042219 / 0.128546 (-0.086327) | 0.009717 / 0.075646 (-0.065929) | 0.103006 / 0.419271 (-0.316266) | 0.060377 / 0.043533 (0.016844) | 0.484454 / 0.255139 (0.229315) | 0.536072 / 0.283200 (0.252872) | 0.027482 / 0.141683 (-0.114201) | 1.844677 / 1.452155 (0.392522) | 2.001800 / 1.492716 (0.509083) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252367 / 0.018006 (0.234361) | 0.483601 / 0.000490 (0.483111) | 0.007445 / 0.000200 (0.007245) | 0.000163 / 0.000054 (0.000108) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036463 / 0.037411 (-0.000948) | 0.108837 / 0.014526 (0.094311) | 0.122256 / 0.176557 (-0.054300) | 0.186455 / 0.737135 (-0.550681) | 0.122270 / 0.296338 (-0.174069) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.506291 / 0.215209 (0.291082) | 5.038044 / 2.077655 (2.960389) | 2.751017 / 1.504120 (1.246897) | 2.553655 / 1.541195 (1.012460) | 2.612724 / 1.468490 (1.144234) | 0.581755 / 4.584777 (-4.003022) | 4.376012 / 3.745712 (0.630300) | 3.749755 / 5.269862 (-1.520107) | 2.394059 / 4.565676 (-2.171618) | 0.068727 / 0.424275 (-0.355548) | 0.008714 / 0.007607 (0.001107) | 0.607371 / 0.226044 (0.381326) | 6.062053 / 2.268929 (3.793125) | 3.278378 / 55.444624 (-52.166247) | 2.866417 / 6.876477 (-4.010060) | 3.056150 / 2.142072 (0.914077) | 0.695090 / 4.805227 (-4.110137) | 0.155274 / 6.500664 (-6.345390) | 0.071106 / 0.075469 (-0.004363) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.584552 / 1.841788 (-0.257236) | 23.092569 / 8.074308 (15.018260) | 17.381905 / 10.191392 (7.190513) | 0.206535 / 0.680424 (-0.473888) | 0.025401 / 0.534201 (-0.508800) | 0.514297 / 0.579283 (-0.064986) | 0.507487 / 0.434364 (0.073123) | 0.566883 / 0.540337 (0.026545) | 0.811074 / 1.386936 (-0.575862) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008400 / 0.011353 (-0.002953) | 0.004872 / 0.011008 (-0.006136) | 0.104434 / 0.038508 (0.065926) | 0.074411 / 0.023109 (0.051302) | 0.395970 / 0.275898 (0.120072) | 0.431661 / 0.323480 (0.108181) | 0.005365 / 0.007986 (-0.002621) | 0.005495 / 0.004328 (0.001167) | 0.081255 / 0.004250 (0.077004) | 0.057141 / 0.037052 (0.020089) | 0.397242 / 0.258489 (0.138753) | 0.456052 / 0.293841 (0.162211) | 0.048362 / 0.128546 (-0.080184) | 0.014077 / 0.075646 (-0.061569) | 0.351128 / 0.419271 (-0.068143) | 0.067842 / 0.043533 (0.024309) | 0.372820 / 0.255139 (0.117681) | 0.407917 / 0.283200 (0.124717) | 0.037707 / 0.141683 (-0.103976) | 1.677136 / 1.452155 (0.224981) | 1.764614 / 1.492716 (0.271897) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269850 / 0.018006 (0.251844) | 0.601458 / 0.000490 (0.600969) | 0.006500 / 0.000200 (0.006300) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030340 / 0.037411 (-0.007072) | 0.098041 / 0.014526 (0.083515) | 0.107270 / 0.176557 (-0.069287) | 0.173502 / 0.737135 (-0.563633) | 0.113296 / 0.296338 (-0.183043) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.575788 / 0.215209 (0.360579) | 5.723878 / 2.077655 (3.646223) | 2.326339 / 1.504120 (0.822219) | 2.130667 / 1.541195 (0.589472) | 2.080885 / 1.468490 
(0.612395) | 0.800936 / 4.584777 (-3.783841) | 5.227888 / 3.745712 (1.482176) | 4.592647 / 5.269862 (-0.677214) | 2.935765 / 4.565676 (-1.629911) | 0.095909 / 0.424275 (-0.328367) | 0.008763 / 0.007607 (0.001156) | 0.697362 / 0.226044 (0.471318) | 6.968105 / 2.268929 (4.699176) | 3.129070 / 55.444624 (-52.315554) | 2.554818 / 6.876477 (-4.321658) | 2.776005 / 2.142072 (0.633933) | 1.017064 / 4.805227 (-3.788163) | 0.211552 / 6.500664 (-6.289112) | 0.072132 / 0.075469 (-0.003338) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.517072 / 1.841788 (-0.324716) | 23.737742 / 8.074308 (15.663433) | 22.236447 / 10.191392 (12.045055) | 0.235408 / 0.680424 (-0.445016) | 0.031889 / 0.534201 (-0.502312) | 0.458997 / 0.579283 (-0.120286) | 0.610513 / 0.434364 (0.176149) | 0.536508 / 0.540337 (-0.003830) | 0.750137 / 1.386936 (-0.636799) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008696 / 0.011353 (-0.002657) | 0.005374 / 0.011008 (-0.005634) | 0.077974 / 0.038508 (0.039466) | 0.083471 / 0.023109 (0.060362) | 0.498890 / 0.275898 (0.222992) | 0.517570 / 0.323480 (0.194090) | 0.006523 / 0.007986 (-0.001462) | 0.004315 / 0.004328 (-0.000013) | 0.082262 / 0.004250 (0.078012) | 0.064828 / 0.037052 (0.027776) | 0.473101 / 0.258489 (0.214612) | 0.534172 / 0.293841 (0.240331) | 0.051884 / 0.128546 (-0.076662) | 0.015191 / 0.075646 (-0.060455) | 0.084307 / 0.419271 (-0.334965) | 0.066050 / 0.043533 (0.022517) | 0.518007 / 0.255139 (0.262868) | 0.511145 / 0.283200 (0.227946) | 0.045302 / 0.141683 (-0.096381) | 1.670973 / 1.452155 (0.218818) | 1.829225 / 1.492716 (0.336509) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.436537 / 0.018006 (0.418531) | 0.608380 / 0.000490 (0.607890) | 0.075211 / 0.000200 (0.075011) | 0.000733 / 0.000054 (0.000679) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039117 / 0.037411 (0.001706) | 0.103525 / 0.014526 (0.088999) | 0.124413 / 0.176557 (-0.052144) | 0.192352 / 0.737135 (-0.544783) | 0.120379 / 0.296338 (-0.175959) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.673338 / 0.215209 (0.458129) | 6.799435 / 2.077655 (4.721780) | 3.600913 / 1.504120 (2.096793) | 2.881008 / 1.541195 (1.339814) | 2.667154 / 1.468490 (1.198664) | 0.868775 / 4.584777 (-3.716002) | 5.517063 / 3.745712 (1.771351) | 4.646706 / 5.269862 (-0.623156) | 2.914825 / 4.565676 (-1.650852) | 0.098784 / 0.424275 (-0.325491) | 0.011504 / 0.007607 (0.003897) | 0.724233 / 0.226044 (0.498188) | 7.311045 / 2.268929 (5.042117) | 3.685490 / 55.444624 (-51.759135) | 2.892360 / 6.876477 (-3.984117) | 3.253189 / 2.142072 (1.111117) | 0.983065 / 4.805227 (-3.822162) | 0.201097 / 6.500664 (-6.299567) | 0.068020 / 0.075469 (-0.007450) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.793904 / 1.841788 (-0.047884) | 24.451356 / 8.074308 (16.377048) | 21.697191 / 10.191392 (11.505799) | 0.228545 / 0.680424 (-0.451879) | 0.034600 / 0.534201 (-0.499601) | 0.483253 / 0.579283 (-0.096030) | 0.615103 / 0.434364 (0.180739) | 0.564600 / 0.540337 (0.024262) | 0.799688 / 1.386936 (-0.587248) |\n\n</details>\n</details>\n\n\n"
] | 2023-08-25T13:10:26Z
| 2023-08-25T16:58:02Z
| 2023-08-25T16:46:22Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6180.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6180",
"merged_at": "2023-08-25T16:46:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6180.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6180"
}
|
Use `hf-internal-testing` for hosting instead of the maintainers' dataset repos.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6180/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6180/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3637
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3637/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3637/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3637/events
|
https://github.com/huggingface/datasets/issues/3637
| 1,115,526,438
|
I_kwDODunzps5CfZUm
| 3,637
|
[TypeError: Couldn't cast array of type] Cannot load dataset in v1.18
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Hi @lewtun!\r\n \r\nThis one was tricky to debug. Initially, I tought there is a bug in the recently-added (by @lhoestq ) `cast_array_to_feature` function because `git bisect` points to the https://github.com/huggingface/datasets/commit/6ca96c707502e0689f9b58d94f46d871fa5a3c9c commit. Then, I noticed that the feature tpye of the `dialogue` field is `list`, which explains why you didn't get an error in earlier versions. Is there a specific reason why you use `list` instead of `Sequence` in the script? Maybe to avoid turning list of dicts to dicts of lists as it's done by `Sequence` for compatibility with TFDS or for performance reasons? If the field was `Sequence`, you would get an error in `encode_nested_example` because **the scripts yields some additional (nested) columns which are not specified in the `features` dictionary**. Previously, these additional columns would've been ignored by PyArrow (1), but now we have a check for them (2).\r\n(1) See PyArrow behavior:\r\n```\r\n>>> pa.array([{\"a\": 2, \"b\": 3}], type=pa.struct({\"a\": pa.int32()})) # pyarrow ignores the extra column\r\n-- is_valid: all not null\r\n-- child 0 type: int32\r\n [\r\n 2\r\n ]\r\n ```\r\n\r\n(2) Check:\r\nhttps://github.com/huggingface/datasets/blob/4c417d52def6e20359ca16c6723e0a2855e5c3fd/src/datasets/table.py#L1059\r\n\r\nThe fix is very simple: just add the missing columns to the _EMPTY_BELIEF_STATE list:\r\n```python\r\n_EMPTY_BELIEF_STATE.extend(['通用-产品类别', '火车-舱位档次', '通用-系列', '通用-价格区间', '通用-品牌'])\r\n```",
"Hey @mariosasko, thank you so much for figuring this one out - it certainly looks like a tricky bug 😱 ! I don't think there's a specific reason to use `list` instead of `Sequence` with the script, but I'll let the dataset creators know to see if your suggestion is acceptable.\r\n\r\nThank you again!",
"Thanks, this was indeed the fix! Would it make sense to produce a more informative error message in such cases? \r\n\r\nThe issue can be closed. \r\n\r\n"
] | 2022-01-26T21:38:02Z
| 2022-02-09T16:15:53Z
| 2022-02-09T16:15:53Z
|
MEMBER
| null | null | null |
## Describe the bug
I am trying to load the [`GEM/RiSAWOZ` dataset](https://huggingface.co/datasets/GEM/RiSAWOZ) in `datasets` v1.18.1 and am running into a type error when casting the features. The strange thing is that I can load the dataset with v1.17.0. Note that the error is also present if I install from `master`.
As far as I can tell, the dataset loading script is correct and the problematic features [here](https://huggingface.co/datasets/GEM/RiSAWOZ/blob/main/RiSAWOZ.py#L237) also look fine to me.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dset = load_dataset("GEM/RiSAWOZ")
```
## Expected results
I can load the dataset without error.
## Actual results
<details><summary>Traceback</summary>
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/builder.py in _prepare_split(self, split_generator)
1083 example = self.info.features.encode_example(record)
-> 1084 writer.write(example, key)
1085 finally:
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write(self, example, key, writer_batch_size)
445
--> 446 self.write_examples_on_file()
447
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write_examples_on_file(self)
403 batch_examples[col] = [row[0][col] for row in self.current_examples]
--> 404 self.write_batch(batch_examples=batch_examples)
405 self.current_examples = []
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
496 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
--> 497 arrays.append(pa.array(typed_sequence))
498 inferred_features[col] = typed_sequence.get_inferred_type()
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib.array()
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol()
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type)
204 # We only do it if trying_type is False - since this is what the user asks for.
--> 205 out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
206 return out
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
943 array = _sanitize(array)
--> 944 return func(array, *args, **kwargs)
945
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
919 else:
--> 920 return func(array, *args, **kwargs)
921
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str)
1064 if isinstance(feature, list):
-> 1065 return pa.ListArray.from_arrays(array.offsets, _c(array.values, feature[0]))
1066 elif isinstance(feature, Sequence):
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
943 array = _sanitize(array)
--> 944 return func(array, *args, **kwargs)
945
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
919 else:
--> 920 return func(array, *args, **kwargs)
921
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str)
1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature):
-> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
1061 return pa.StructArray.from_arrays(arrays, names=list(feature))
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0)
1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature):
-> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
1061 return pa.StructArray.from_arrays(arrays, names=list(feature))
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
943 array = _sanitize(array)
--> 944 return func(array, *args, **kwargs)
945
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
919 else:
--> 920 return func(array, *args, **kwargs)
921
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str)
1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature):
-> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
1061 return pa.StructArray.from_arrays(arrays, names=list(feature))
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0)
1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature):
-> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
1061 return pa.StructArray.from_arrays(arrays, names=list(feature))
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
943 array = _sanitize(array)
--> 944 return func(array, *args, **kwargs)
945
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
919 else:
--> 920 return func(array, *args, **kwargs)
921
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str)
1086 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
-> 1087 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
1088
TypeError: Couldn't cast array of type
struct<医院-3.0T MRI: string, 医院-CT: string, 医院-DSA: string, 医院-公交线路: string, 医院-区域: string, 医院-名称: string, 医院-地址: string, 医院-地铁可达: string, 医院-地铁线路: string, 医院-性质: string, 医院-挂号时间: string, 医院-电话: string, 医院-等级: string, 医院-类别: string, 医院-重点科室: string, 医院-门诊时间: string, 天气-城市: string, 天气-天气: string, 天气-日期: string, 天气-温度: string, 天气-紫外线强度: string, 天气-风力风向: string, 旅游景点-区域: string, 旅游景点-名称: string, 旅游景点-地址: string, 旅游景点-开放时间: string, 旅游景点-是否地铁直达: string, 旅游景点-景点类型: string, 旅游景点-最适合人群: string, 旅游景点-消费: string, 旅游景点-特点: string, 旅游景点-电话号码: string, 旅游景点-评分: string, 旅游景点-门票价格: string, 汽车-价格(万元): string, 汽车-倒车影像: string, 汽车-动力水平: string, 汽车-厂商: string, 汽车-发动机排量(L): string, 汽车-发动机马力(Ps): string, 汽车-名称: string, 汽车-定速巡航: string, 汽车-巡航系统: string, 汽车-座位数: string, 汽车-座椅加热: string, 汽车-座椅通风: string, 汽车-所属价格区间: string, 汽车-油耗水平: string, 汽车-环保标准: string, 汽车-级别: string, 汽车-综合油耗(L/100km): string, 汽车-能源类型: string, 汽车-车型: string, 汽车-车系: string, 汽车-车身尺寸(mm): string, 汽车-驱动方式: string, 汽车-驾驶辅助影像: string, 火车-出发地: string, 火车-出发时间: string, 火车-到达时间: string, 火车-坐席: string, 火车-日期: string, 火车-时长: string, 火车-目的地: string, 火车-票价: string, 火车-舱位档次: string, 火车-车型: string, 火车-车次信息: string, 电影-主演: string, 电影-主演名单: string, 电影-具体上映时间: string, 电影-制片国家/地区: string, 电影-导演: string, 电影-年代: string, 电影-片名: string, 电影-片长: string, 电影-类型: string, 电影-豆瓣评分: string, 电脑-CPU: string, 电脑-CPU型号: string, 电脑-产品类别: string, 电脑-价格: string, 电脑-价格区间: string, 电脑-内存容量: string, 电脑-分类: string, 电脑-品牌: string, 电脑-商品名称: string, 电脑-屏幕尺寸: string, 电脑-待机时长: string, 电脑-显卡型号: string, 电脑-显卡类别: string, 电脑-游戏性能: string, 电脑-特性: string, 电脑-硬盘容量: string, 电脑-系列: string, 电脑-系统: string, 电脑-色系: string, 电脑-裸机重量: string, 电视剧-主演: string, 电视剧-主演名单: string, 电视剧-制片国家/地区: string, 电视剧-单集片长: string, 电视剧-导演: string, 电视剧-年代: string, 电视剧-片名: string, 电视剧-类型: string, 电视剧-豆瓣评分: string, 电视剧-集数: string, 电视剧-首播时间: string, 辅导班-上课方式: string, 辅导班-上课时间: string, 辅导班-下课时间: string, 辅导班-价格: string, 辅导班-区域: string, 辅导班-年级: string, 辅导班-开始日期: string, 辅导班-教室地点: string, 辅导班-教师: string, 辅导班-教师网址: string, 辅导班-时段: string, 辅导班-校区: string, 辅导班-每周: string, 辅导班-班号: string, 辅导班-科目: string, 辅导班-结束日期: string, 辅导班-课时: string, 辅导班-课次: string, 辅导班-课程网址: string, 辅导班-难度: string, 通用-产品类别: string, 通用-价格区间: string, 通用-品牌: string, 通用-系列: string, 酒店-价位: string, 酒店-停车场: string, 酒店-区域: string, 酒店-名称: string, 酒店-地址: string, 酒店-房型: string, 酒店-房费: string, 酒店-星级: string, 酒店-电话号码: string, 酒店-评分: string, 酒店-酒店类型: string, 飞机-准点率: string, 飞机-出发地: string, 飞机-到达时间: string, 飞机-日期: string, 飞机-目的地: string, 飞机-票价: string, 飞机-航班信息: string, 飞机-舱位档次: string, 飞机-起飞时间: string, 餐厅-人均消费: string, 餐厅-价位: string, 餐厅-区域: string, 餐厅-名称: string, 餐厅-地址: string, 餐厅-推荐菜: string, 餐厅-是否地铁直达: string, 餐厅-电话号码: string, 餐厅-菜系: string, 餐厅-营业时间: string, 餐厅-评分: string>
to
{'旅游景点-名称': Value(dtype='string', id=None), '旅游景点-区域': Value(dtype='string', id=None), '旅游景点-景点类型': Value(dtype='string', id=None), '旅游景点-最适合人群': Value(dtype='string', id=None), '旅游景点-消费': Value(dtype='string', id=None), '旅游景点-是否地铁直达': Value(dtype='string', id=None), '旅游景点-门票价格': Value(dtype='string', id=None), '旅游景点-电话号码': Value(dtype='string', id=None), '旅游景点-地址': Value(dtype='string', id=None), '旅游景点-评分': Value(dtype='string', id=None), '旅游景点-开放时间': Value(dtype='string', id=None), '旅游景点-特点': Value(dtype='string', id=None), '餐厅-名称': Value(dtype='string', id=None), '餐厅-区域': Value(dtype='string', id=None), '餐厅-菜系': Value(dtype='string', id=None), '餐厅-价位': Value(dtype='string', id=None), '餐厅-是否地铁直达': Value(dtype='string', id=None), '餐厅-人均消费': Value(dtype='string', id=None), '餐厅-地址': Value(dtype='string', id=None), '餐厅-电话号码': Value(dtype='string', id=None), '餐厅-评分': Value(dtype='string', id=None), '餐厅-营业时间': Value(dtype='string', id=None), '餐厅-推荐菜': Value(dtype='string', id=None), '酒店-名称': Value(dtype='string', id=None), '酒店-区域': Value(dtype='string', id=None), '酒店-星级': Value(dtype='string', id=None), '酒店-价位': Value(dtype='string', id=None), '酒店-酒店类型': Value(dtype='string', id=None), '酒店-房型': Value(dtype='string', id=None), '酒店-停车场': Value(dtype='string', id=None), '酒店-房费': Value(dtype='string', id=None), '酒店-地址': Value(dtype='string', id=None), '酒店-电话号码': Value(dtype='string', id=None), '酒店-评分': Value(dtype='string', id=None), '电脑-品牌': Value(dtype='string', id=None), '电脑-产品类别': Value(dtype='string', id=None), '电脑-分类': Value(dtype='string', id=None), '电脑-内存容量': Value(dtype='string', id=None), '电脑-屏幕尺寸': Value(dtype='string', id=None), '电脑-CPU': Value(dtype='string', id=None), '电脑-价格区间': Value(dtype='string', id=None), '电脑-系列': Value(dtype='string', id=None), '电脑-商品名称': Value(dtype='string', id=None), '电脑-系统': Value(dtype='string', id=None), '电脑-游戏性能': Value(dtype='string', id=None), '电脑-CPU型号': Value(dtype='string', id=None), '电脑-裸机重量': Value(dtype='string', id=None), '电脑-显卡类别': Value(dtype='string', id=None), '电脑-显卡型号': Value(dtype='string', id=None), '电脑-特性': Value(dtype='string', id=None), '电脑-色系': Value(dtype='string', id=None), '电脑-待机时长': Value(dtype='string', id=None), '电脑-硬盘容量': Value(dtype='string', id=None), '电脑-价格': Value(dtype='string', id=None), '火车-出发地': Value(dtype='string', id=None), '火车-目的地': Value(dtype='string', id=None), '火车-日期': Value(dtype='string', id=None), '火车-车型': Value(dtype='string', id=None), '火车-坐席': Value(dtype='string', id=None), '火车-车次信息': Value(dtype='string', id=None), '火车-时长': Value(dtype='string', id=None), '火车-出发时间': Value(dtype='string', id=None), '火车-到达时间': Value(dtype='string', id=None), '火车-票价': Value(dtype='string', id=None), '飞机-出发地': Value(dtype='string', id=None), '飞机-目的地': Value(dtype='string', id=None), '飞机-日期': Value(dtype='string', id=None), '飞机-舱位档次': Value(dtype='string', id=None), '飞机-航班信息': Value(dtype='string', id=None), '飞机-起飞时间': Value(dtype='string', id=None), '飞机-到达时间': Value(dtype='string', id=None), '飞机-票价': Value(dtype='string', id=None), '飞机-准点率': Value(dtype='string', id=None), '天气-城市': Value(dtype='string', id=None), '天气-日期': Value(dtype='string', id=None), '天气-天气': Value(dtype='string', id=None), '天气-温度': Value(dtype='string', id=None), '天气-风力风向': Value(dtype='string', id=None), '天气-紫外线强度': Value(dtype='string', id=None), '电影-制片国家/地区': Value(dtype='string', id=None), '电影-类型': Value(dtype='string', id=None), '电影-年代': Value(dtype='string', id=None), '电影-主演': Value(dtype='string', id=None), '电影-导演': Value(dtype='string', id=None), 
'电影-片名': Value(dtype='string', id=None), '电影-主演名单': Value(dtype='string', id=None), '电影-具体上映时间': Value(dtype='string', id=None), '电影-片长': Value(dtype='string', id=None), '电影-豆瓣评分': Value(dtype='string', id=None), '电视剧-制片国家/地区': Value(dtype='string', id=None), '电视剧-类型': Value(dtype='string', id=None), '电视剧-年代': Value(dtype='string', id=None), '电视剧-主演': Value(dtype='string', id=None), '电视剧-导演': Value(dtype='string', id=None), '电视剧-片名': Value(dtype='string', id=None), '电视剧-主演名单': Value(dtype='string', id=None), '电视剧-首播时间': Value(dtype='string', id=None), '电视剧-集数': Value(dtype='string', id=None), '电视剧-单集片长': Value(dtype='string', id=None), '电视剧-豆瓣评分': Value(dtype='string', id=None), '辅导班-班号': Value(dtype='string', id=None), '辅导班-难度': Value(dtype='string', id=None), '辅导班-科目': Value(dtype='string', id=None), '辅导班-年级': Value(dtype='string', id=None), '辅导班-区域': Value(dtype='string', id=None), '辅导班-校区': Value(dtype='string', id=None), '辅导班-上课方式': Value(dtype='string', id=None), '辅导班-开始日期': Value(dtype='string', id=None), '辅导班-结束日期': Value(dtype='string', id=None), '辅导班-每周': Value(dtype='string', id=None), '辅导班-上课时间': Value(dtype='string', id=None), '辅导班-下课时间': Value(dtype='string', id=None), '辅导班-时段': Value(dtype='string', id=None), '辅导班-课次': Value(dtype='string', id=None), '辅导班-课时': Value(dtype='string', id=None), '辅导班-教室地点': Value(dtype='string', id=None), '辅导班-教师': Value(dtype='string', id=None), '辅导班-价格': Value(dtype='string', id=None), '辅导班-课程网址': Value(dtype='string', id=None), '辅导班-教师网址': Value(dtype='string', id=None), '汽车-名称': Value(dtype='string', id=None), '汽车-车型': Value(dtype='string', id=None), '汽车-级别': Value(dtype='string', id=None), '汽车-座位数': Value(dtype='string', id=None), '汽车-车身尺寸(mm)': Value(dtype='string', id=None), '汽车-厂商': Value(dtype='string', id=None), '汽车-能源类型': Value(dtype='string', id=None), '汽车-发动机排量(L)': Value(dtype='string', id=None), '汽车-发动机马力(Ps)': Value(dtype='string', id=None), '汽车-驱动方式': Value(dtype='string', id=None), '汽车-综合油耗(L/100km)': Value(dtype='string', id=None), '汽车-环保标准': Value(dtype='string', id=None), '汽车-驾驶辅助影像': Value(dtype='string', id=None), '汽车-巡航系统': Value(dtype='string', id=None), '汽车-价格(万元)': Value(dtype='string', id=None), '汽车-车系': Value(dtype='string', id=None), '汽车-动力水平': Value(dtype='string', id=None), '汽车-油耗水平': Value(dtype='string', id=None), '汽车-倒车影像': Value(dtype='string', id=None), '汽车-定速巡航': Value(dtype='string', id=None), '汽车-座椅加热': Value(dtype='string', id=None), '汽车-座椅通风': Value(dtype='string', id=None), '汽车-所属价格区间': Value(dtype='string', id=None), '医院-名称': Value(dtype='string', id=None), '医院-等级': Value(dtype='string', id=None), '医院-类别': Value(dtype='string', id=None), '医院-性质': Value(dtype='string', id=None), '医院-区域': Value(dtype='string', id=None), '医院-地址': Value(dtype='string', id=None), '医院-电话': Value(dtype='string', id=None), '医院-挂号时间': Value(dtype='string', id=None), '医院-门诊时间': Value(dtype='string', id=None), '医院-公交线路': Value(dtype='string', id=None), '医院-地铁可达': Value(dtype='string', id=None), '医院-地铁线路': Value(dtype='string', id=None), '医院-重点科室': Value(dtype='string', id=None), '医院-CT': Value(dtype='string', id=None), '医院-3.0T MRI': Value(dtype='string', id=None), '医院-DSA': Value(dtype='string', id=None)}
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
/var/folders/28/k4cy5q7s2hs92xq7_h89_vgm0000gn/T/ipykernel_44306/2896005239.py in <module>
----> 1 dset = load_dataset("GEM/RiSAWOZ")
2 dset
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1692
1693 # Download and prepare data
-> 1694 builder_instance.download_and_prepare(
1695 download_config=download_config,
1696 download_mode=download_mode,
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
593 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
594 if not downloaded_from_gcs:
--> 595 self._download_and_prepare(
596 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
597 )
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
682 try:
683 # Prepare split will record examples associated to the split
--> 684 self._prepare_split(split_generator, **prepare_split_kwargs)
685 except OSError as e:
686 raise OSError(
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/builder.py in _prepare_split(self, split_generator)
1084 writer.write(example, key)
1085 finally:
-> 1086 num_examples, num_bytes = writer.finalize()
1087
1088 split_generator.split_info.num_examples = num_examples
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in finalize(self, close_stream)
525 # Re-intializing to empty list for next batch
526 self.hkey_record = []
--> 527 self.write_examples_on_file()
528 if self.pa_writer is None:
529 if self.schema:
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write_examples_on_file(self)
402 # Since current_examples contains (example, key) tuples
403 batch_examples[col] = [row[0][col] for row in self.current_examples]
--> 404 self.write_batch(batch_examples=batch_examples)
405 self.current_examples = []
406
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
495 col_try_type = try_features[col] if try_features is not None and col in try_features else None
496 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
--> 497 arrays.append(pa.array(typed_sequence))
498 inferred_features[col] = typed_sequence.get_inferred_type()
499 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib.array()
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol()
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type)
203 # Also, when trying type "string", we don't want to convert integers or floats to "string".
204 # We only do it if trying_type is False - since this is what the user asks for.
--> 205 out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
206 return out
207 except (TypeError, pa.lib.ArrowInvalid) as e: # handle type errors and overflows
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
942 if pa.types.is_list(array.type) and config.PYARROW_VERSION < version.parse("4.0.0"):
943 array = _sanitize(array)
--> 944 return func(array, *args, **kwargs)
945
946 return wrapper
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
918 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
919 else:
--> 920 return func(array, *args, **kwargs)
921
922 return wrapper
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str)
1063 # feature must be either [subfeature] or Sequence(subfeature)
1064 if isinstance(feature, list):
-> 1065 return pa.ListArray.from_arrays(array.offsets, _c(array.values, feature[0]))
1066 elif isinstance(feature, Sequence):
1067 if feature.length > -1:
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
942 if pa.types.is_list(array.type) and config.PYARROW_VERSION < version.parse("4.0.0"):
943 array = _sanitize(array)
--> 944 return func(array, *args, **kwargs)
945
946 return wrapper
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
918 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
919 else:
--> 920 return func(array, *args, **kwargs)
921
922 return wrapper
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str)
1058 }
1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature):
-> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
1061 return pa.StructArray.from_arrays(arrays, names=list(feature))
1062 elif pa.types.is_list(array.type):
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0)
1058 }
1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature):
-> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
1061 return pa.StructArray.from_arrays(arrays, names=list(feature))
1062 elif pa.types.is_list(array.type):
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
942 if pa.types.is_list(array.type) and config.PYARROW_VERSION < version.parse("4.0.0"):
943 array = _sanitize(array)
--> 944 return func(array, *args, **kwargs)
945
946 return wrapper
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
918 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
919 else:
--> 920 return func(array, *args, **kwargs)
921
922 return wrapper
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str)
1058 }
1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature):
-> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
1061 return pa.StructArray.from_arrays(arrays, names=list(feature))
1062 elif pa.types.is_list(array.type):
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0)
1058 }
1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature):
-> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
1061 return pa.StructArray.from_arrays(arrays, names=list(feature))
1062 elif pa.types.is_list(array.type):
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
942 if pa.types.is_list(array.type) and config.PYARROW_VERSION < version.parse("4.0.0"):
943 array = _sanitize(array)
--> 944 return func(array, *args, **kwargs)
945
946 return wrapper
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
918 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
919 else:
--> 920 return func(array, *args, **kwargs)
921
922 return wrapper
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str)
1085 elif not isinstance(feature, (Sequence, dict, list, tuple)):
1086 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
-> 1087 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
1088
1089
TypeError: Couldn't cast array of type
struct<医院-3.0T MRI: string, 医院-CT: string, 医院-DSA: string, 医院-公交线路: string, 医院-区域: string, 医院-名称: string, 医院-地址: string, 医院-地铁可达: string, 医院-地铁线路: string, 医院-性质: string, 医院-挂号时间: string, 医院-电话: string, 医院-等级: string, 医院-类别: string, 医院-重点科室: string, 医院-门诊时间: string, 天气-城市: string, 天气-天气: string, 天气-日期: string, 天气-温度: string, 天气-紫外线强度: string, 天气-风力风向: string, 旅游景点-区域: string, 旅游景点-名称: string, 旅游景点-地址: string, 旅游景点-开放时间: string, 旅游景点-是否地铁直达: string, 旅游景点-景点类型: string, 旅游景点-最适合人群: string, 旅游景点-消费: string, 旅游景点-特点: string, 旅游景点-电话号码: string, 旅游景点-评分: string, 旅游景点-门票价格: string, 汽车-价格(万元): string, 汽车-倒车影像: string, 汽车-动力水平: string, 汽车-厂商: string, 汽车-发动机排量(L): string, 汽车-发动机马力(Ps): string, 汽车-名称: string, 汽车-定速巡航: string, 汽车-巡航系统: string, 汽车-座位数: string, 汽车-座椅加热: string, 汽车-座椅通风: string, 汽车-所属价格区间: string, 汽车-油耗水平: string, 汽车-环保标准: string, 汽车-级别: string, 汽车-综合油耗(L/100km): string, 汽车-能源类型: string, 汽车-车型: string, 汽车-车系: string, 汽车-车身尺寸(mm): string, 汽车-驱动方式: string, 汽车-驾驶辅助影像: string, 火车-出发地: string, 火车-出发时间: string, 火车-到达时间: string, 火车-坐席: string, 火车-日期: string, 火车-时长: string, 火车-目的地: string, 火车-票价: string, 火车-舱位档次: string, 火车-车型: string, 火车-车次信息: string, 电影-主演: string, 电影-主演名单: string, 电影-具体上映时间: string, 电影-制片国家/地区: string, 电影-导演: string, 电影-年代: string, 电影-片名: string, 电影-片长: string, 电影-类型: string, 电影-豆瓣评分: string, 电脑-CPU: string, 电脑-CPU型号: string, 电脑-产品类别: string, 电脑-价格: string, 电脑-价格区间: string, 电脑-内存容量: string, 电脑-分类: string, 电脑-品牌: string, 电脑-商品名称: string, 电脑-屏幕尺寸: string, 电脑-待机时长: string, 电脑-显卡型号: string, 电脑-显卡类别: string, 电脑-游戏性能: string, 电脑-特性: string, 电脑-硬盘容量: string, 电脑-系列: string, 电脑-系统: string, 电脑-色系: string, 电脑-裸机重量: string, 电视剧-主演: string, 电视剧-主演名单: string, 电视剧-制片国家/地区: string, 电视剧-单集片长: string, 电视剧-导演: string, 电视剧-年代: string, 电视剧-片名: string, 电视剧-类型: string, 电视剧-豆瓣评分: string, 电视剧-集数: string, 电视剧-首播时间: string, 辅导班-上课方式: string, 辅导班-上课时间: string, 辅导班-下课时间: string, 辅导班-价格: string, 辅导班-区域: string, 辅导班-年级: string, 辅导班-开始日期: string, 辅导班-教室地点: string, 辅导班-教师: string, 辅导班-教师网址: string, 辅导班-时段: string, 辅导班-校区: string, 辅导班-每周: string, 辅导班-班号: string, 辅导班-科目: string, 辅导班-结束日期: string, 辅导班-课时: string, 辅导班-课次: string, 辅导班-课程网址: string, 辅导班-难度: string, 通用-产品类别: string, 通用-价格区间: string, 通用-品牌: string, 通用-系列: string, 酒店-价位: string, 酒店-停车场: string, 酒店-区域: string, 酒店-名称: string, 酒店-地址: string, 酒店-房型: string, 酒店-房费: string, 酒店-星级: string, 酒店-电话号码: string, 酒店-评分: string, 酒店-酒店类型: string, 飞机-准点率: string, 飞机-出发地: string, 飞机-到达时间: string, 飞机-日期: string, 飞机-目的地: string, 飞机-票价: string, 飞机-航班信息: string, 飞机-舱位档次: string, 飞机-起飞时间: string, 餐厅-人均消费: string, 餐厅-价位: string, 餐厅-区域: string, 餐厅-名称: string, 餐厅-地址: string, 餐厅-推荐菜: string, 餐厅-是否地铁直达: string, 餐厅-电话号码: string, 餐厅-菜系: string, 餐厅-营业时间: string, 餐厅-评分: string>
to
{'旅游景点-名称': Value(dtype='string', id=None), '旅游景点-区域': Value(dtype='string', id=None), '旅游景点-景点类型': Value(dtype='string', id=None), '旅游景点-最适合人群': Value(dtype='string', id=None), '旅游景点-消费': Value(dtype='string', id=None), '旅游景点-是否地铁直达': Value(dtype='string', id=None), '旅游景点-门票价格': Value(dtype='string', id=None), '旅游景点-电话号码': Value(dtype='string', id=None), '旅游景点-地址': Value(dtype='string', id=None), '旅游景点-评分': Value(dtype='string', id=None), '旅游景点-开放时间': Value(dtype='string', id=None), '旅游景点-特点': Value(dtype='string', id=None), '餐厅-名称': Value(dtype='string', id=None), '餐厅-区域': Value(dtype='string', id=None), '餐厅-菜系': Value(dtype='string', id=None), '餐厅-价位': Value(dtype='string', id=None), '餐厅-是否地铁直达': Value(dtype='string', id=None), '餐厅-人均消费': Value(dtype='string', id=None), '餐厅-地址': Value(dtype='string', id=None), '餐厅-电话号码': Value(dtype='string', id=None), '餐厅-评分': Value(dtype='string', id=None), '餐厅-营业时间': Value(dtype='string', id=None), '餐厅-推荐菜': Value(dtype='string', id=None), '酒店-名称': Value(dtype='string', id=None), '酒店-区域': Value(dtype='string', id=None), '酒店-星级': Value(dtype='string', id=None), '酒店-价位': Value(dtype='string', id=None), '酒店-酒店类型': Value(dtype='string', id=None), '酒店-房型': Value(dtype='string', id=None), '酒店-停车场': Value(dtype='string', id=None), '酒店-房费': Value(dtype='string', id=None), '酒店-地址': Value(dtype='string', id=None), '酒店-电话号码': Value(dtype='string', id=None), '酒店-评分': Value(dtype='string', id=None), '电脑-品牌': Value(dtype='string', id=None), '电脑-产品类别': Value(dtype='string', id=None), '电脑-分类': Value(dtype='string', id=None), '电脑-内存容量': Value(dtype='string', id=None), '电脑-屏幕尺寸': Value(dtype='string', id=None), '电脑-CPU': Value(dtype='string', id=None), '电脑-价格区间': Value(dtype='string', id=None), '电脑-系列': Value(dtype='string', id=None), '电脑-商品名称': Value(dtype='string', id=None), '电脑-系统': Value(dtype='string', id=None), '电脑-游戏性能': Value(dtype='string', id=None), '电脑-CPU型号': Value(dtype='string', id=None), '电脑-裸机重量': Value(dtype='string', id=None), '电脑-显卡类别': Value(dtype='string', id=None), '电脑-显卡型号': Value(dtype='string', id=None), '电脑-特性': Value(dtype='string', id=None), '电脑-色系': Value(dtype='string', id=None), '电脑-待机时长': Value(dtype='string', id=None), '电脑-硬盘容量': Value(dtype='string', id=None), '电脑-价格': Value(dtype='string', id=None), '火车-出发地': Value(dtype='string', id=None), '火车-目的地': Value(dtype='string', id=None), '火车-日期': Value(dtype='string', id=None), '火车-车型': Value(dtype='string', id=None), '火车-坐席': Value(dtype='string', id=None), '火车-车次信息': Value(dtype='string', id=None), '火车-时长': Value(dtype='string', id=None), '火车-出发时间': Value(dtype='string', id=None), '火车-到达时间': Value(dtype='string', id=None), '火车-票价': Value(dtype='string', id=None), '飞机-出发地': Value(dtype='string', id=None), '飞机-目的地': Value(dtype='string', id=None), '飞机-日期': Value(dtype='string', id=None), '飞机-舱位档次': Value(dtype='string', id=None), '飞机-航班信息': Value(dtype='string', id=None), '飞机-起飞时间': Value(dtype='string', id=None), '飞机-到达时间': Value(dtype='string', id=None), '飞机-票价': Value(dtype='string', id=None), '飞机-准点率': Value(dtype='string', id=None), '天气-城市': Value(dtype='string', id=None), '天气-日期': Value(dtype='string', id=None), '天气-天气': Value(dtype='string', id=None), '天气-温度': Value(dtype='string', id=None), '天气-风力风向': Value(dtype='string', id=None), '天气-紫外线强度': Value(dtype='string', id=None), '电影-制片国家/地区': Value(dtype='string', id=None), '电影-类型': Value(dtype='string', id=None), '电影-年代': Value(dtype='string', id=None), '电影-主演': Value(dtype='string', id=None), '电影-导演': Value(dtype='string', id=None), 
'电影-片名': Value(dtype='string', id=None), '电影-主演名单': Value(dtype='string', id=None), '电影-具体上映时间': Value(dtype='string', id=None), '电影-片长': Value(dtype='string', id=None), '电影-豆瓣评分': Value(dtype='string', id=None), '电视剧-制片国家/地区': Value(dtype='string', id=None), '电视剧-类型': Value(dtype='string', id=None), '电视剧-年代': Value(dtype='string', id=None), '电视剧-主演': Value(dtype='string', id=None), '电视剧-导演': Value(dtype='string', id=None), '电视剧-片名': Value(dtype='string', id=None), '电视剧-主演名单': Value(dtype='string', id=None), '电视剧-首播时间': Value(dtype='string', id=None), '电视剧-集数': Value(dtype='string', id=None), '电视剧-单集片长': Value(dtype='string', id=None), '电视剧-豆瓣评分': Value(dtype='string', id=None), '辅导班-班号': Value(dtype='string', id=None), '辅导班-难度': Value(dtype='string', id=None), '辅导班-科目': Value(dtype='string', id=None), '辅导班-年级': Value(dtype='string', id=None), '辅导班-区域': Value(dtype='string', id=None), '辅导班-校区': Value(dtype='string', id=None), '辅导班-上课方式': Value(dtype='string', id=None), '辅导班-开始日期': Value(dtype='string', id=None), '辅导班-结束日期': Value(dtype='string', id=None), '辅导班-每周': Value(dtype='string', id=None), '辅导班-上课时间': Value(dtype='string', id=None), '辅导班-下课时间': Value(dtype='string', id=None), '辅导班-时段': Value(dtype='string', id=None), '辅导班-课次': Value(dtype='string', id=None), '辅导班-课时': Value(dtype='string', id=None), '辅导班-教室地点': Value(dtype='string', id=None), '辅导班-教师': Value(dtype='string', id=None), '辅导班-价格': Value(dtype='string', id=None), '辅导班-课程网址': Value(dtype='string', id=None), '辅导班-教师网址': Value(dtype='string', id=None), '汽车-名称': Value(dtype='string', id=None), '汽车-车型': Value(dtype='string', id=None), '汽车-级别': Value(dtype='string', id=None), '汽车-座位数': Value(dtype='string', id=None), '汽车-车身尺寸(mm)': Value(dtype='string', id=None), '汽车-厂商': Value(dtype='string', id=None), '汽车-能源类型': Value(dtype='string', id=None), '汽车-发动机排量(L)': Value(dtype='string', id=None), '汽车-发动机马力(Ps)': Value(dtype='string', id=None), '汽车-驱动方式': Value(dtype='string', id=None), '汽车-综合油耗(L/100km)': Value(dtype='string', id=None), '汽车-环保标准': Value(dtype='string', id=None), '汽车-驾驶辅助影像': Value(dtype='string', id=None), '汽车-巡航系统': Value(dtype='string', id=None), '汽车-价格(万元)': Value(dtype='string', id=None), '汽车-车系': Value(dtype='string', id=None), '汽车-动力水平': Value(dtype='string', id=None), '汽车-油耗水平': Value(dtype='string', id=None), '汽车-倒车影像': Value(dtype='string', id=None), '汽车-定速巡航': Value(dtype='string', id=None), '汽车-座椅加热': Value(dtype='string', id=None), '汽车-座椅通风': Value(dtype='string', id=None), '汽车-所属价格区间': Value(dtype='string', id=None), '医院-名称': Value(dtype='string', id=None), '医院-等级': Value(dtype='string', id=None), '医院-类别': Value(dtype='string', id=None), '医院-性质': Value(dtype='string', id=None), '医院-区域': Value(dtype='string', id=None), '医院-地址': Value(dtype='string', id=None), '医院-电话': Value(dtype='string', id=None), '医院-挂号时间': Value(dtype='string', id=None), '医院-门诊时间': Value(dtype='string', id=None), '医院-公交线路': Value(dtype='string', id=None), '医院-地铁可达': Value(dtype='string', id=None), '医院-地铁线路': Value(dtype='string', id=None), '医院-重点科室': Value(dtype='string', id=None), '医院-CT': Value(dtype='string', id=None), '医院-3.0T MRI': Value(dtype='string', id=None), '医院-DSA': Value(dtype='string', id=None)}
```
</details>
## Environment info
- `datasets` version: 1.18.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.10
- PyArrow version: 3.0.0
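For reference, here is a minimal sketch that reproduces the same `TypeError` (an assumption about the root cause: some examples carry struct fields, e.g. the `通用-*` ones, that the declared features don't include, so the field-name check in `cast_array_to_feature` falls through to the raise):
```python
import pyarrow as pa
from datasets import Value
from datasets.table import cast_array_to_feature

# The source struct carries an extra field ("extra") that the target
# features don't declare, so the names-match branch is skipped and the
# function ends up raising TypeError, as in the traceback above.
arr = pa.array([{"name": "x", "extra": "y"}])
cast_array_to_feature(arr, {"name": Value("string")})
# TypeError: Couldn't cast array of type struct<...> to {'name': Value(dtype='string', id=None)}
```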
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3637/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3637/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3201
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3201/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3201/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3201/events
|
https://github.com/huggingface/datasets/issues/3201
| 1,043,209,142
|
I_kwDODunzps4-Lhu2
| 3,201
|
Add GSM8K dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NielsRogge",
"id": 48327001,
"login": "NielsRogge",
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NielsRogge"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
closed
| false
| null |
[] | null |
[
"Closed via https://github.com/huggingface/datasets/pull/4103"
] | 2021-11-03T08:36:44Z
| 2022-04-13T11:56:12Z
| 2022-04-13T11:56:11Z
|
CONTRIBUTOR
| null | null | null |
## Adding a Dataset
- **Name:** GSM8K (short for Grade School Math 8k)
- **Description:** GSM8K is a dataset of 8.5K high-quality, linguistically diverse grade school math word problems created by human problem writers.
- **Paper:** https://openai.com/blog/grade-school-math/
- **Data:** https://github.com/openai/grade-school-math
- **Motivation:** The dataset is useful to investigate the reasoning abilities of large Transformer models, such as GPT-3.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
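Once merged, loading should look roughly like this (a sketch; the Hub id `gsm8k`, the `main`/`socratic` config names and the `question`/`answer` fields are assumptions based on the upstream repo):
```python
from datasets import load_dataset

# "main" holds the plain word problems; "socratic" adds guided sub-questions
gsm8k = load_dataset("gsm8k", "main")
print(gsm8k["train"][0]["question"])
print(gsm8k["train"][0]["answer"])
```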
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3201/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3201/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2189
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2189/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2189/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2189/events
|
https://github.com/huggingface/datasets/issues/2189
| 853,052,891
|
MDU6SXNzdWU4NTMwNTI4OTE=
| 2,189
|
save_to_disk doesn't work when we use the concatenate_datasets function before creating the final dataset object.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! We refactored save_to_disk in #2025 so this doesn't happen.\r\nFeel free to try it on master for now\r\nWe'll do a new release soon"
] | 2021-04-08T04:42:53Z
| 2022-06-01T16:32:15Z
| 2022-06-01T16:32:15Z
|
NONE
| null | null | null |
Instead of saving only the concatenated shards, it saves the entire dataset.
@lhoestq
You can check by going through the following example,
```
from datasets import load_from_disk, concatenate_datasets

# load the full dataset and split it into 20 contiguous shards
loaded_data = load_from_disk('/home/gsir059/HNSW-ori/my_knowledge_dataset')
n = 20
kb_list = [loaded_data.shard(n, i, contiguous=True) for i in range(n)]

# concatenate two of the shards and save the result
final_dataset = concatenate_datasets([kb_list[1], kb_list[2]])
final_dataset.save_to_disk('/home/gsir059/haha/k.arrow')
```
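A quick way to see the bug (a sketch reusing the paths above): reload what was saved and compare sizes.
```python
from datasets import load_from_disk

reloaded = load_from_disk('/home/gsir059/haha/k.arrow')
# On affected versions this assertion fails: the reloaded dataset has the
# size of the full dataset, not of the two concatenated shards.
assert len(reloaded) == len(final_dataset)
```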
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2189/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2189/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/124
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/124/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/124/comments
|
https://api.github.com/repos/huggingface/datasets/issues/124/events
|
https://github.com/huggingface/datasets/pull/124
| 618,864,284
|
MDExOlB1bGxSZXF1ZXN0NDE4NTA3NDUx
| 124
|
Xsum, requires manual download of some files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-05-15T10:26:13Z
| 2020-05-15T11:04:48Z
| 2020-05-15T11:04:46Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/124.diff",
"html_url": "https://github.com/huggingface/datasets/pull/124",
"merged_at": "2020-05-15T11:04:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/124.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/124"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/124/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/124/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/1439
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1439/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1439/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1439/events
|
https://github.com/huggingface/datasets/pull/1439
| 760,968,410
|
MDExOlB1bGxSZXF1ZXN0NTM1NzA4NDU1
| 1,439
|
Update README.md
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/46425391?v=4",
"events_url": "https://api.github.com/users/tuner007/events{/privacy}",
"followers_url": "https://api.github.com/users/tuner007/followers",
"following_url": "https://api.github.com/users/tuner007/following{/other_user}",
"gists_url": "https://api.github.com/users/tuner007/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tuner007",
"id": 46425391,
"login": "tuner007",
"node_id": "MDQ6VXNlcjQ2NDI1Mzkx",
"organizations_url": "https://api.github.com/users/tuner007/orgs",
"received_events_url": "https://api.github.com/users/tuner007/received_events",
"repos_url": "https://api.github.com/users/tuner007/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tuner007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuner007/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tuner007"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-10T06:57:01Z
| 2020-12-11T15:22:53Z
| 2020-12-11T15:22:53Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1439.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1439",
"merged_at": "2020-12-11T15:22:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1439.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1439"
}
|
1K-10K -> 1K-1M
3 separate configs are available, with a minimum of 1K and a maximum of 211.3K examples.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1439/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1439/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6229
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6229/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6229/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6229/events
|
https://github.com/huggingface/datasets/issues/6229
| 1,889,050,954
|
I_kwDODunzps5wmKFK
| 6,229
|
Apply inference on all images in the dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/andysingal",
"id": 20493493,
"login": "andysingal",
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"repos_url": "https://api.github.com/users/andysingal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/andysingal"
}
|
[] |
closed
| false
| null |
[] | null |
[
"From what I see, `MMSegInferencer` supports NumPy arrays, so replace the line `image_path = example['image']` with `image_path = np.array(example['image'])` to fix the issue (`example[\"image\"]` is a `PIL.Image` object). ",
"> From what I see, `MMSegInferencer` supports NumPy arrays, so replace the line `image_path = example['image']` with `image_path = np.array(example['image'])` to fix the issue (`example[\"image\"]` is a `PIL.Image` object).\r\n\r\nThanks @mariosasko for your reply...\r\ni tried :\r\n```\r\n# Define a function to apply the code to each image in the dataset\r\ndef process_image(image_path):\r\n print(\"Processing image:\", image_path)\r\n result = inferencer(image_path)['predictions']\r\n mask = np.where(result == 12, 255, 0).astype('uint8')\r\n return Image.fromarray(mask)\r\n\r\n# Process and save masks for each image in the dataset\r\nfor idx, example in enumerate(dataset['train']):\r\n image_path = np.array(example['image'])\r\n mask_image = process_image(image_path)\r\n mask_image.save(f\"mask_{idx}.png\")\r\n```\r\nand got\r\n```\r\nProcessing image: [[[202 165 87]\r\n [203 166 88]\r\n [207 168 91]\r\n ...\r\n [243 205 122]\r\n [244 202 120]\r\n [242 200 118]]\r\n\r\n [[202 165 87]\r\n [203 166 88]\r\n [207 168 91]\r\n ...\r\n [244 206 123]\r\n [245 203 121]\r\n [243 201 119]]\r\n\r\n [[203 164 87]\r\n [204 165 88]\r\n [207 168 91]\r\n ...\r\n [245 207 126]\r\n [246 204 122]\r\n [245 203 121]]\r\n\r\n ...\r\n\r\n [[154 123 56]\r\n [155 124 57]\r\n [158 125 56]\r\n ...\r\n [ 3 3 1]\r\n [ 3 3 1]\r\n [ 3 3 1]]\r\n\r\n [[154 123 56]\r\n [154 123 56]\r\n [155 124 57]\r\n ...\r\n [ 2 2 0]\r\n [ 2 2 0]\r\n [ 2 2 0]]\r\n\r\n [[152 121 54]\r\n [152 121 54]\r\n [153 122 55]\r\n ...\r\n [ 2 2 0]\r\n [ 2 2 0]\r\n [ 2 2 0]]]\r\nInference ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ \r\nProcessing image: [[[ 39 44 40]\r\n [ 39 44 40]\r\n [ 39 43 44]\r\n ...\r\n [187 185 164]\r\n [208 204 175]\r\n [203 198 166]]\r\n\r\n [[ 42 47 43]\r\n [ 40 45 41]\r\n [ 40 44 45]\r\n ...\r\n [188 186 165]\r\n [202 198 169]\r\n [201 196 164]]\r\n\r\n [[ 41 46 42]\r\n [ 39 44 40]\r\n [ 40 44 45]\r\n ...\r\n [187 184 165]\r\n [197 193 166]\r\n [201 196 166]]\r\n\r\n ...\r\n\r\n [[ 29 27 30]\r\n [ 28 26 29]\r\n [ 25 23 26]\r\n ...\r\n [ 48 33 28]\r\n [ 44 31 25]\r\n [ 39 26 20]]\r\n\r\n [[ 34 29 33]\r\n [ 32 27 31]\r\n [ 29 24 28]\r\n ...\r\n [ 30 17 11]\r\n [ 36 23 15]\r\n [ 41 28 20]]\r\n\r\n [[ 35 30 34]\r\n [ 33 28 32]\r\n [ 28 23 27]\r\n ...\r\n [ 28 15 9]\r\n [ 41 28 20]\r\n [ 46 33 25]]]\r\nInference ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ \r\nProcessing image: [[[ 65 53 55]\r\n [ 65 53 55]\r\n [ 51 39 41]\r\n ...\r\n [133 127 111]\r\n [150 141 124]\r\n [133 124 107]]\r\n\r\n [[ 58 45 52]\r\n [ 61 48 55]\r\n [ 51 38 45]\r\n ...\r\n [148 141 123]\r\n [178 169 152]\r\n [144 135 118]]\r\n\r\n [[ 79 66 83]\r\n [ 73 60 77]\r\n [ 65 51 66]\r\n ...\r\n [140 131 114]\r\n [142 133 116]\r\n [147 136 118]]\r\n\r\n ...\r\n\r\n [[132 122 133]\r\n [ 95 85 94]\r\n [ 61 51 60]\r\n ...\r\n [ 39 28 42]\r\n [ 46 36 45]\r\n [ 25 16 21]]\r\n\r\n [[150 143 151]\r\n [114 107 115]\r\n [ 64 54 63]\r\n ...\r\n [ 47 35 47]\r\n [ 38 27 35]\r\n [140 129 133]]\r\n\r\n [[145 138 146]\r\n [115 108 116]\r\n [ 69 59 67]\r\n ...\r\n [ 31 19 31]\r\n [128 117 123]\r\n [196 185 189]]]\r\nInference ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ \r\nProcessing image: [[[159 151 140]\r\n [171 163 152]\r\n [161 148 142]\r\n ...\r\n [198 184 171]\r\n [189 175 162]\r\n [183 169 156]]\r\n\r\n [[128 118 106]\r\n [138 128 116]\r\n [138 125 116]\r\n ...\r\n [200 186 173]\r\n [190 176 163]\r\n [187 173 160]]\r\n\r\n [[165 153 137]\r\n [170 158 142]\r\n [174 162 148]\r\n ...\r\n [200 187 171]\r\n [188 175 159]\r\n [182 169 153]]\r\n```\r\nHowever , when trying to add 
to:\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('Andyrasika/cat_kingdom')\r\ndataset\r\n```\r\ni did \r\n```\r\nnew_column = [\"mask\"] * len(dataset[\"train\"])\r\nnew_column\r\ndataset = dataset.add_column(\"/workspace/data\", new_column)\r\n\r\nprint(dataset)\r\n```\r\ngot error:\r\n```\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\nCell In[11], line 3\r\n 1 new_column = [\"mask\"] * len(dataset[\"train\"])\r\n 2 new_column\r\n----> 3 dataset = dataset.add_column(\"/workspace/data\", new_column)\r\n 5 print(dataset)\r\n\r\nAttributeError: 'DatasetDict' object has no attribute 'add_column'\r\n```",
"https://github.com/huggingface/datasets/issues/6246 resolved the `add_column` error, so I'm closing this issue :) "
] | 2023-09-10T08:36:12Z
| 2023-09-20T16:11:53Z
| 2023-09-20T16:11:52Z
|
NONE
| null | null | null |
### Describe the bug
```
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
Cell In[14], line 11
9 for idx, example in enumerate(dataset['train']):
10 image_path = example['image']
---> 11 mask_image = process_image(image_path)
12 mask_image.save(f"mask_{idx}.png")
Cell In[14], line 4, in process_image(image_path)
2 def process_image(image_path):
3 print("Processing image:", image_path)
----> 4 result = inferencer(image_path)['predictions']
5 mask = np.where(result == 12, 255, 0).astype('uint8')
6 return Image.fromarray(mask)
File /usr/local/lib/python3.10/dist-packages/mmseg/apis/mmseg_inferencer.py:183, in MMSegInferencer.__call__(self, inputs, return_datasamples, batch_size, show, wait_time, out_dir, img_out_dir, pred_out_dir, **kwargs)
180 pred_out_dir = ''
181 img_out_dir = ''
--> 183 return super().__call__(
184 inputs=inputs,
185 return_datasamples=return_datasamples,
186 batch_size=batch_size,
187 show=show,
188 wait_time=wait_time,
189 img_out_dir=img_out_dir,
190 pred_out_dir=pred_out_dir,
191 **kwargs)
File /usr/local/lib/python3.10/dist-packages/mmengine/infer/infer.py:221, in BaseInferencer.__call__(self, inputs, return_datasamples, batch_size, **kwargs)
218 inputs = self.preprocess(
219 ori_inputs, batch_size=batch_size, **preprocess_kwargs)
220 preds = []
--> 221 for data in (track(inputs, description='Inference')
222 if self.show_progress else inputs):
223 preds.extend(self.forward(data, **forward_kwargs))
224 visualization = self.visualize(
225 ori_inputs, preds,
226 **visualize_kwargs) # type: ignore # noqa: E501
File /usr/local/lib/python3.10/dist-packages/rich/progress.py:168, in track(sequence, description, total, auto_refresh, console, transient, get_time, refresh_per_second, style, complete_style, finished_style, pulse_style, update_period, disable, show_speed)
157 progress = Progress(
158 *columns,
159 auto_refresh=auto_refresh,
(...)
164 disable=disable,
165 )
167 with progress:
--> 168 yield from progress.track(
169 sequence, total=total, description=description, update_period=update_period
170 )
File /usr/local/lib/python3.10/dist-packages/rich/progress.py:1210, in Progress.track(self, sequence, total, task_id, description, update_period)
1208 if self.live.auto_refresh:
1209 with _TrackThread(self, task_id, update_period) as track_thread:
-> 1210 for value in sequence:
1211 yield value
1212 track_thread.completed += 1
File /usr/local/lib/python3.10/dist-packages/mmengine/infer/infer.py:291, in BaseInferencer.preprocess(self, inputs, batch_size, **kwargs)
266 """Process the inputs into a model-feedable format.
267
268 Customize your preprocess by overriding this method. Preprocess should
(...)
287 Any: Data processed by the ``pipeline`` and ``collate_fn``.
288 """
289 chunked_data = self._get_chunk_data(
290 map(self.pipeline, inputs), batch_size)
--> 291 yield from map(self.collate_fn, chunked_data)
File /usr/local/lib/python3.10/dist-packages/mmengine/infer/infer.py:588, in BaseInferencer._get_chunk_data(self, inputs, chunk_size)
586 chunk_data = []
587 for _ in range(chunk_size):
--> 588 processed_data = next(inputs_iter)
589 chunk_data.append(processed_data)
590 yield chunk_data
File /usr/local/lib/python3.10/dist-packages/mmcv/transforms/base.py:12, in BaseTransform.__call__(self, results)
9 def __call__(self,
10 results: Dict) -> Optional[Union[Dict, Tuple[List, List]]]:
---> 12 return self.transform(results)
File /usr/local/lib/python3.10/dist-packages/mmcv/transforms/wrappers.py:88, in Compose.transform(self, results)
79 """Call function to apply transforms sequentially.
80
81 Args:
(...)
85 dict or None: Transformed results.
86 """
87 for t in self.transforms:
---> 88 results = t(results) # type: ignore
89 if results is None:
90 return None
File /usr/local/lib/python3.10/dist-packages/mmcv/transforms/base.py:12, in BaseTransform.__call__(self, results)
9 def __call__(self,
10 results: Dict) -> Optional[Union[Dict, Tuple[List, List]]]:
---> 12 return self.transform(results)
File /usr/local/lib/python3.10/dist-packages/mmseg/datasets/transforms/loading.py:496, in InferencerLoader.transform(self, single_input)
494 inputs = single_input
495 else:
--> 496 raise NotImplementedError
498 if 'img' in inputs:
499 return self.from_ndarray(inputs)
NotImplementedError:
```
### Steps to reproduce the bug
```
import numpy as np
from PIL import Image  # np and Image are used below but were missing from the snippet

from datasets import load_dataset
dataset = load_dataset('Andyrasika/cat_kingdom')
dataset
from mmseg.apis import MMSegInferencer
checkpoint_name = 'segformer_mit-b5_8xb2-160k_ade20k-640x640'
inferencer = MMSegInferencer(model=checkpoint_name)
# Define a function to apply the code to each image in the dataset
def process_image(image_path):
print("Processing image:", image_path)
result = inferencer(image_path)['predictions']
mask = np.where(result == 12, 255, 0).astype('uint8')
return Image.fromarray(mask)
# Process and save masks for each image in the dataset
for idx, example in enumerate(dataset['train']):
image_path = example['image']
mask_image = process_image(image_path)
mask_image.save(f"mask_{idx}.png")
```
### Expected behavior
Create a separate column with masks in the dataset, which then shows up as a separate column on the Hub.
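A sketch of one way to get there (an assumption, not a confirmed fix: let `map` create the column on the split, since `add_column` exists on `Dataset` but not on `DatasetDict`):
```python
import numpy as np

def add_mask(example):
    # inferencer is the MMSegInferencer built in the snippet above;
    # returning a new key makes map() add a "mask" column to the split
    result = inferencer(np.array(example["image"]))["predictions"]
    example["mask"] = np.where(result == 12, 255, 0).astype("uint8")
    return example

dataset["train"] = dataset["train"].map(add_mask)
```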
### Environment info
jupyter notebook RTX 3090
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6229/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6229/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/253
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/253/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/253/comments
|
https://api.github.com/repos/huggingface/datasets/issues/253/events
|
https://github.com/huggingface/datasets/pull/253
| 634,791,939
|
MDExOlB1bGxSZXF1ZXN0NDMxMjgwOTYz
| 253
|
add flue dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The dummy data file was wrong. I only fixed it for the book config. Even though the tests are all green here, this should also be fixed for all other configs. Could you take a look there @mariamabarham ? ",
"Hi @mariamabarham \r\n\r\nFLUE can indeed become a very interesting benchmark for french NLP !\r\nUnfortunately, it seems that we've both been working on adding it to the repo...\r\nI was going to open a pull request before I came across yours.\r\nI didn't want to open a duplicate, that's why I'm commenting here (I hope it's not rude).\r\n\r\nWhen I look at your code there is one issue that jump out at me: for both `vsd` and `nsd`, the labels are missing. I believe this is more a data issue, as they were not kept in the cleaned dataframes of #223. I think the *word sense disambiguation* task was a bit misunderstood. \r\n\r\nMaybe you should directly use the data provided by FLUE for these ?",
"Hi @TheophileBlard thanks for pointing this out. I will give a look at it or maybe if you already done it you can update this PR. Also I haven't added yet the parsing datasets, I submited a request to get access to them. If you already have them, you can also add them.",
"Hi,\r\n\r\nAs @TheophileBlard pointed out, the labels for the vsd and nsd stains are missing.\r\n\r\nFor the wsd, it is my mistake, I added the files containing the labels on the drive.\r\nThere is still the join to do between the files that I didn't have time to do. It can be done after importing the two files, however if you wish to have a single dataframe already containing all the information, I could do it but only when I have free time because I have a lot of work at the moment at INSERM with the covid.\r\n\r\nFor the nsd, I've downloaded the files at https://zenodo.org/record/3549806, and if you do the same you'll see that they don't contain any labels.\r\nIn the files, you can see that some words have a WN code. I don't know what it corresponds to. On the FLUE github, they say to use the disambiguate tool (https://github.com/getalp/disambiguate) but I don't understand what he's doing.\r\n\r\n@mariamabarham for the parsing datasets, I have them in my possession. What it does that I haven't shared them is that they are licensed and you have to make a request to their creators. They give them away very easily for research purposes. For another use, you have to ask a commercial licence. All this means that if the data is freely available on your librairy, their licence and their application form are no longer of interest, which is why I did not add them.\r\nAfterwards, maybe the authors will change their policies and decide to make the data freely available through your librairy",
"@mariamabarham @lbourdois, Yea I don't think we can had the parsing datasets without asking the authors permission first. I also hope they'll change their policy.\r\n\r\nRegarding `vsd` and `nsd`, if I understand well the task, the labels are \"word senses\" and the goal is to find the correct word sense for each ambiguous word. For `vsd` there is one ambiguous verb per sentence, and the labels we manually annotated with \"wiktionary senses\". For `nsd`, there are multiple ambiguous word per sentence, and the labels are WordNet Princeton Identifiers (hence the WN tag). This dataset was translated in french & automatically aligned.\r\n\r\nImo, for these 2 datasets, each example should be made of:\r\n- a list of string tokens (the words of the sentence)\r\n- a list of string labels (the word senses or 'O' when the word is not ambiguous.\r\n\r\nIn fact, for `vsd` it could be even simpler, with a single string label (as there is only one ambiguous verb), + some \"idx\" feature to indicate the location of the ambiguous verb.\r\n\r\nUnfortunately, I cannot update your PR as I'm not a maintainer of the project. Maybe we could work together on a fork ? Here's [mine](https://github.com/TheophileBlard/nlp/commits/flue-benchmark).\r\n",
"Hi\r\n\r\nAny news about this PR ?\r\nBecause thinking back FLUE basically offers only two new datasets : those for the Word Sense Disambiguation task (vsd and nsd).\r\n\r\nWouldn't it be more clever to make separate PRs to add the datasets of the other tasks which are multi-lingual (and therefore can be used for other languages) ?\r\n\r\nXNLI being already present on your library, there would only be PAWS-X (datasets and bibtex available here : https://github.com/google-research-datasets/paws/tree/master/pawsx) and the Webis-CLS-10 dataset (dataset : https://zenodo.org/record/3251672#.XvCXN-d8taQ and bibtex : https://zenodo.org/record/3251672/export/hx#.XvCXZ-d8taQ) to do.\r\n\r\nAnd next for the FLUE benchmark, all you would have to do would be to use your own library by making an nlp.load_dataset() (for example nlp.load_dataset('xnli') which is already present in your library) for each of the datasets of the benchmark tasks and to keep only the 'fr' data.\r\n\r\n\r\n\r\nAlso @mariamabarham , did you get any feedback for the parsing task dataset request?\r\nIn case of refusal from the authors, there are other datasets in French to perform this task and in this case, I would open a new topic\r\n",
"Hi @lbourdois ,\r\nPAWS-X is also present in the lib, it's part of `xtreme` dataset, so it can be loaded by `nlp.load_dataset('xtreme', 'PAWS-X.fr')` for the french version.\r\nI think the parsing and the Word Sense Disambiguation task datasets are the only missing in the lib now. \r\nI did not get a feedback yet for the parsing dataset.\r\n",
"By the way, @TheophileBlard I commented some days ago in your fork. It would be great if you can maybe open a new PR with your code or if you have a better way to make it available to others for review.",
"> By the way, @TheophileBlard I commented some days ago in your fork. It would be great if you can maybe open a new PR with your code or if you have a better way to make it available to others for review.\r\n\r\nYea sorry, missed that! I think @lbourdois has a point, it helps no one to have the same dataset in multiple places. I will try to find some time to adapt the code of my fork and open PRs for `Webis-CLS-10` and `nsd`/`vsd`. Maybe we should group `nsd`/`vsd` together ?",
"Shall we close this PR then ? @mariamabarham @TheophileBlard @lbourdois "
] | 2020-06-08T17:11:09Z
| 2023-09-24T09:46:03Z
| 2020-07-16T07:50:59Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/253.diff",
"html_url": "https://github.com/huggingface/datasets/pull/253",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/253.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/253"
}
|
This PR adds the FLUE dataset as requested in issue #223. @lbourdois gave a detailed description in that issue.
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/253/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/253/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5233
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5233/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5233/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5233/events
|
https://github.com/huggingface/datasets/pull/5233
| 1,447,906,868
|
PR_kwDODunzps5C1JVh
| 5,233
|
Fix shards in IterableDataset.from_generator
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-14T11:42:09Z
| 2022-11-14T14:16:03Z
| 2022-11-14T14:13:22Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5233.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5233",
"merged_at": "2022-11-14T14:13:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5233.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5233"
}
|
Allow defining a sharded iterable dataset.
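
A minimal sketch of what this enables (the shard file names below are hypothetical placeholders):

```python
from datasets import IterableDataset

def generate_examples(shards):
    # Examples are yielded as plain dicts; `shards` arrives via gen_kwargs.
    for shard in shards:
        with open(shard) as f:
            for line in f:
                yield {"text": line.strip()}

# Passing a list in gen_kwargs defines the shards, so the dataset can later
# be split across workers; "data-0.txt" etc. are placeholder files.
shards = [f"data-{i}.txt" for i in range(4)]
ds = IterableDataset.from_generator(generate_examples, gen_kwargs={"shards": shards})
```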
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5233/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5233/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3857
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3857/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3857/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3857/events
|
https://github.com/huggingface/datasets/issues/3857
| 1,162,525,353
|
I_kwDODunzps5FSrqp
| 3,857
|
Order of dataset changes due to glob.glob.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
|
[
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] |
open
| false
| null |
[] | null |
[
"I agree using `glob.glob` alone is bad practice because it's not deterministic. Using `sorted` is a nice solution.\r\n\r\nNote that the `xglob` function you are referring to in the `streaming_download_manager.py` code just extends `glob.glob` for URLs - we don't change its behavior. That's why it has no `sorted()`"
] | 2022-03-08T11:10:30Z
| 2022-03-14T11:08:22Z
| null |
MEMBER
| null | null | null |
## Describe the bug
After discussion with @lhoestq, just want to mention here that `glob.glob(...)` should always be used in combination with `sorted(...)` to make sure the list of files returned by `glob.glob(...)` doesn't change depending on the operating system.
There are currently multiple datasets that use `glob.glob()` without making use of `sorted(...)`, even the streaming download manager (if I'm not mistaken):
https://github.com/huggingface/datasets/blob/c14bfeb4af89da14f870de5ddaa584b08aa08eeb/src/datasets/utils/streaming_download_manager.py#L483
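
For illustration, a deterministic version of the pattern (the glob pattern here is just a placeholder):

```python
import glob

# glob.glob returns matches in an arbitrary, filesystem-dependent order,
# so sort the result before relying on it.
files = sorted(glob.glob("data/*.txt"))
```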
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3857/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3857/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2145
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2145/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2145/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2145/events
|
https://github.com/huggingface/datasets/pull/2145
| 844,603,518
|
MDExOlB1bGxSZXF1ZXN0NjAzODMxOTE2
| 2,145
|
Implement Dataset add_column
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] |
{
"closed_at": "2021-05-31T16:20:53Z",
"closed_issues": 3,
"created_at": "2021-04-09T13:16:31Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-05-14T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/3",
"id": 6644287,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/3/labels",
"node_id": "MDk6TWlsZXN0b25lNjY0NDI4Nw==",
"number": 3,
"open_issues": 0,
"state": "closed",
"title": "1.7",
"updated_at": "2021-05-31T16:20:53Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/3"
}
|
[
"#2274 has been merged. You can now merge master into this branch and use `assert_arrow_metadata_are_synced_with_dataset_features(dset)` to make sure that the metadata are good :)"
] | 2021-03-30T14:02:14Z
| 2021-04-29T14:50:44Z
| 2021-04-29T14:50:43Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2145.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2145",
"merged_at": "2021-04-29T14:50:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2145.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2145"
}
|
Implement `Dataset.add_column`.
Close #1954.
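
A quick usage sketch of the added method (the column name and values are illustrative):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})
# The new column must have exactly one value per existing row.
ds = ds.add_column("label", [0, 1, 0])
print(ds.column_names)  # ['text', 'label']
```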
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2145/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2145/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6267
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6267/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6267/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6267/events
|
https://github.com/huggingface/datasets/issues/6267
| 1,916,443,262
|
I_kwDODunzps5yOpp-
| 6,267
|
Multi label class encoding
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4",
"events_url": "https://api.github.com/users/jmif/events{/privacy}",
"followers_url": "https://api.github.com/users/jmif/followers",
"following_url": "https://api.github.com/users/jmif/following{/other_user}",
"gists_url": "https://api.github.com/users/jmif/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jmif",
"id": 1000442,
"login": "jmif",
"node_id": "MDQ6VXNlcjEwMDA0NDI=",
"organizations_url": "https://api.github.com/users/jmif/orgs",
"received_events_url": "https://api.github.com/users/jmif/received_events",
"repos_url": "https://api.github.com/users/jmif/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jmif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmif/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jmif"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"You can use a `Sequence(ClassLabel(...))` feature type to represent a list of labels, and `cast_column`/`cast` to perform the \"string to label\" conversion (`class_encode_column` does support nested fields), e.g., in your case:\r\n```python\r\nfrom datasets import Dataset, Sequence, ClassLabel\r\ndata = {\r\n 'text': ['one', 'two', 'three', 'four'],\r\n 'labels': [['a', 'b'], ['b'], ['b', 'c'], ['a', 'd']]\r\n}\r\n\r\ndataset = Dataset.from_dict(data)\r\ndataset = dataset.cast_column('labels', Sequence(ClassLabel(names=[\"a\", \"b\", \"c\", \"d\"])))\r\n```",
"Great! Can you elaborate on \"class_encode_column does support nested fields\"? Do you mean that there is a way to `class_encode_column` on a Sequence?",
"Yes, exactly! This would be a nice contribution, though.",
"Sorry, I'm still not following. Are you saying that there currently exists a way to call `class_encode_column` on a `Sequence(ClassLabel)` type? Or that the underlying data structures support it and a contribution of a method to do that would be welcome?",
"`class_encode_column ` currently does not support `Sequence(ClassLabel)`. Implementing support for this would be a nice contribution.\r\n\r\nIn the meantime, this limitation can be circumvented by fetching (unique) labels and calling `.cast_column(col, Sequence(ClassLabel(names=labels)))`.",
"Ok makes sense, can you take a look at the POC implementation I did [here](https://github.com/huggingface/datasets/commit/15443098e9ce053943172f7ec6fce3769d7dff6e)? Happy to take another pass / submit as a PR but would be helpful if I got a thumbs up that this was directionally correct with respect to implementation / architecture. ",
"There is no need to introduce a new type (`MultiLabel`) for this feature. Also, I think we can keep the logic inside a single method instead of separating the two cases.\r\n\r\nMaybe https://github.com/huggingface/datasets/pull/4277 can help with the implementation. We extended `align_labels_with_mapping` to support `Sequence(ClassLabel(...))` in that PR (initially, it only worked with `ClassLabel(...)`)"
] | 2023-09-27T22:48:08Z
| 2023-10-26T18:46:08Z
| null |
NONE
| null | null | null |
### Feature request
I have a multi-label dataset and I'd like to be able to class-encode the column and store the mapping directly in the features, just as I can with a single-label column. `class_encode_column` currently does not support multi-label columns.
Here's an example of what I'd like to encode:
```
data = {
'text': ['one', 'two', 'three', 'four'],
'labels': [['a', 'b'], ['b'], ['b', 'c'], ['a', 'd']]
}
dataset = Dataset.from_dict(data)
dataset = dataset.class_encode_column('labels')
```
I did some digging into the code base to evaluate the feasibility of this (note I'm very new to this code base). From what I noticed, the `ClassLabel` feature is still stored as an underlying raw data type of int, so I thought a `MultiLabel` feature could similarly be stored as a Sequence of ints, thus not requiring significant serialization / conversion work to / from Arrow.
I did a POC of this [here](https://github.com/huggingface/datasets/commit/15443098e9ce053943172f7ec6fce3769d7dff6e) and included a simple test case (please excuse all the commented-out tests; I was going for speed of POC here and didn't want to fight the IDE to debug a single test). In the test I just assert that `num_classes` is the same to show that things are properly serializing, but if you break after loading from disk you'll see the data is correct and the dataset feature is as expected.
After digging more I did notice a few issues
- After loading from disk I noticed type of the `labels` class is `Sequence` not `MultiLabel` (though the added `feature` attribute came through). This doesn't happen for `ClassLabel` but I couldn't find the encode / decode code paths that handle this.
- I subclass `Sequence` in `MultiLabel` to leverage existing serialization, but this does miss the custom encode logic that `ClassLabel` has. I'm not sure of the best way to approach this as I haven't fully understood the encode / decode flow for datasets. I suspect my simple implementation will need some improvement as it'll require a significant amount of repeated logic to mimic `ClassLabel` behavior.
### Motivation
See above - would like to support multi label class encodings.
### Your contribution
This would be a big help for us and we're open to contributing, but I'll likely need some guidance on how to implement it to fit the encode / decode flow. Some suggestions on tests would be great too; I'm guessing in addition to the class encode tests (which I'll need to expand) we'll need encode / decode tests.
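
For reference, a sketch of the `cast_column` workaround suggested in the comments above, which stores the string-to-index mapping in the features while `class_encode_column` lacks multi-label support (the label names are collected manually here):

```python
from datasets import ClassLabel, Dataset, Sequence

data = {
    "text": ["one", "two", "three", "four"],
    "labels": [["a", "b"], ["b"], ["b", "c"], ["a", "d"]],
}
dataset = Dataset.from_dict(data)

# Gather the unique label strings, then cast so each label becomes a class index.
names = sorted({label for labels in data["labels"] for label in labels})
dataset = dataset.cast_column("labels", Sequence(ClassLabel(names=names)))
print(dataset.features["labels"])  # Sequence of ClassLabel with names ['a', 'b', 'c', 'd']
```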
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6267/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6267/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/5162
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5162/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5162/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5162/events
|
https://github.com/huggingface/datasets/issues/5162
| 1,422,461,112
|
I_kwDODunzps5UyQi4
| 5,162
|
Pip-compile: Could not find a version that matches dill<0.3.6,>=0.3.6
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8604946?v=4",
"events_url": "https://api.github.com/users/Rijgersberg/events{/privacy}",
"followers_url": "https://api.github.com/users/Rijgersberg/followers",
"following_url": "https://api.github.com/users/Rijgersberg/following{/other_user}",
"gists_url": "https://api.github.com/users/Rijgersberg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rijgersberg",
"id": 8604946,
"login": "Rijgersberg",
"node_id": "MDQ6VXNlcjg2MDQ5NDY=",
"organizations_url": "https://api.github.com/users/Rijgersberg/orgs",
"received_events_url": "https://api.github.com/users/Rijgersberg/received_events",
"repos_url": "https://api.github.com/users/Rijgersberg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rijgersberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rijgersberg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rijgersberg"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"Thanks for reporting, @Rijgersberg.\r\n\r\nWe were waiting for the release of `dill` 0.3.6, that happened 2 days ago (24 Oct 2022): https://github.com/uqfoundation/dill/releases/tag/dill-0.3.6\r\n- See comment: https://github.com/huggingface/datasets/pull/4397#discussion_r880629543\r\n\r\nAlso `multiprocess` 0.70.14 was released 2 days ago: https://github.com/uqfoundation/multiprocess/releases/tag/multiprocess-0.70.14\r\n\r\nWe are addressing this issue to align dependencies.",
"In your specific setup, I guess the compatible configuration is with `multiprocess` 0.70.13 (instead of 0.70.14).",
"@Rijgersberg this issue is fixed. It will be available in our next `datasets` release.",
"Thanks!",
"> @Rijgersberg this issue is fixed. It will be available in our next `datasets` release.\n\nAny chance you have a eta? ",
"@StefanSamba we are disussing about making a release early this week.",
"@Rijgersberg, please also that you can make `pip-compile` work by using the backtracking resolver (instead of the legacy one): https://pip-tools.readthedocs.io/en/latest/#a-note-on-resolvers\r\n```\r\npip-compile --resolver=backtracking requirements.in\r\n```\r\nThis resolver will automatically use `multiprocess` 0.70.13 version. "
] | 2022-10-25T13:23:50Z
| 2022-11-14T08:25:37Z
| 2022-10-28T05:38:15Z
|
NONE
| null | null | null |
### Describe the bug
When using `pip-compile` (part of `pip-tools`) to generate a pinned requirements file that includes `datasets`, a version conflict of `dill` appears.
It is caused by a transitive dependency conflict between `datasets` and `multiprocess`.
### Steps to reproduce the bug
```bash
$ echo "datasets" > requirements.in
$ pip install pip-tools
$ pip-compile requirements.in
Could not find a version that matches dill<0.3.6,>=0.3.6 (from datasets==2.6.1->-r requirements.in (line 1))
Tried: 0.2, 0.2, 0.2.1, 0.2.1, 0.2.2, 0.2.2, 0.2.3, 0.2.3, 0.2.4, 0.2.4, 0.2.5, 0.2.5, 0.2.6, 0.2.7, 0.2.7.1, 0.2.8, 0.2.8.1, 0.2.8.2, 0.2.9, 0.3.0, 0.3.1, 0.3.1.1, 0.3.2, 0.3.3, 0.3.3, 0.3.4, 0.3.4, 0.3.5, 0.3.5, 0.3.5.1, 0.3.5.1, 0.3.6, 0.3.6
Skipped pre-versions: 0.1a1, 0.2a1, 0.2a1, 0.2b1, 0.2b1
There are incompatible versions in the resolved dependencies:
dill<0.3.6 (from datasets==2.6.1->-r requirements.in (line 1))
dill>=0.3.6 (from multiprocess==0.70.14->datasets==2.6.1->-r requirements.in (line 1))
```
### Expected behavior
A correctly generated file `requirements.txt` with pinned dependencies
### Environment info
Tested with versions `2.6.1`, `2.6.0` and `2.5.2` on Python 3.8 and 3.10 on Ubuntu 20.04 LTS, and Python 3.10 on macOS 12.6 (M1).
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5162/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5162/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4928
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4928/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4928/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4928/events
|
https://github.com/huggingface/datasets/pull/4928
| 1,360,941,172
|
PR_kwDODunzps4-Ubi4
| 4,928
|
Add ability to read-write to SQL databases.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Dref360",
"id": 8976546,
"login": "Dref360",
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"repos_url": "https://api.github.com/users/Dref360/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Dref360"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Ah CI runs with `pandas=1.3.5` which doesn't return the number of row inserted.",
"wow this is super cool!",
"@lhoestq I'm getting error in integration tests, not sure if it's related to my PR. Any help would be appreciated :) \r\n\r\n```\r\nif not self._is_valid_token(token):\r\n> raise ValueError(\"Invalid token passed!\")\r\nE ValueError: Invalid token passed!\r\n```",
"I just relaunched the tests, it should be fixed now",
"Thanks a lot for working on this!\r\n\r\nI have some concerns with the current design:\r\n* Besides SQLite, the loader should also work with the other engines supported by SQLAlchemy. (A better name for it in the current state would be `sqlite` :))\r\n* It should support arbitrary queries/table names - only the latter currently works.\r\n* Exposing this loader as a packaged builder (`load_dataset(\"sql\", ...)`) is not a good idea for the following reasons:\r\n * Considering the scenario where a table with the same name is present in multiple files is very unlikely, the data files resolution is not needed here. And if we remove that, what the name of the default split should be? \"train\"?\r\n * `load_dataset(\"sql\", ...)` also implies that streaming should work, but that's not the case. And I don't think we can change that, considering how hard it is to make SQLite files streamable.\r\n\r\nAll this makes me think we shouldn't expose this builder as a packaged module and, instead, limit the API to `Dataset.from_sql`/`Dataset.to_sql` (with the signatures matching the ones in pandas as much as possible; regarding this, note that SQLAlchemy connections are not hashable/picklable, which is required for caching, but I think it's OK only to allow URI strings as connections to bypass that (Dask has the same limitation).\r\n\r\nWDYT?",
"Hi @mariosasko thank you for your review.\r\n\r\nI agree that `load_dataset('sql',...)` is a bit weird and I would be happy to remove it. To be honest, I only added it when I saw that it was the preferred way in `loading.mdx`. \r\n\r\nI agree that the `SELECT` should be a parameters as well. I'll add it.\r\n\r\nSo far, only `Dataset.to_sql` explicitly supports any SQLAlchemy Connexion, I'm pretty sure that `Dataset.from_sql` would work with a Connexion as well, but it would break the typing from the parent class which is `path_or_paths: NestedDataStructureLike[PathLike]`. I would prefer not to break this API Contract.\r\n\r\n\r\nI will have time to work on this over the weekend. Please let me know what you think if I do the following:\r\n* Remove `load_dataset('sql', ...)` and edit the documentation to use `to_sql, from_sql`.\r\n* Tentatively make `Dataset.from_sql` typing work with SQLAlchemy Connexion.\r\n* Add support for custom queries (Default would be `SELECT * FROM {table_name}`).\r\n\r\nCheers!",
"Perhaps after we merge https://github.com/huggingface/datasets/pull/4957 (**Done!**), you can subclass `AbstractDatasetInputStream` instead of `AbstractDatasetReader` to not break the contract with the connection object. Also, let's avoid having the default value for the query/table (you can set it to `None` in the builder and raise an error in the builder config's `__post_init__` if it's not provided). Other than that, sounds good!",
"@Dref360 I've made final changes/refinements to align the SQL API with Pandas/Dask. Let me know what you think.\r\n",
"Thank you so much! I was missing a lot of things sorry about that.\r\nLGTM",
"I think we can merge if the tests pass. \r\n\r\nOne last thing I would like to get your opinion on - currently, if SQLAlchemy is not installed, the missing dependency error will be thrown inside `pandas.read_sql`. Do you think we should be the ones throwing this error, e.g. after the imports in `packaged_modules/sql/sql.py` if `SQLALCHEMY_AVAILABLE` is `False` (note that this would mean making `sqlalchemy` a required dependency for the docs to be able to add `SqlConfig` to the package reference)?",
"> One last thing I would like to get your opinion on - currently, if SQLAlchemy is not installed, the missing dependency error will be thrown inside pandas.read_sql\r\n\r\nIs sqlalchemy always required for pd.read_sql ? If so, I think we can raise the error on our side.\r\nBut sqlalchemy should still be an optional dependency for `datasets` IMO",
"@lhoestq \r\n> Is sqlalchemy always required for pd.read_sql ? If so, I think we can raise the error on our side.\r\n\r\nIn our case, it's always required as we only support database URIs.\r\n\r\n> But sqlalchemy should still be an optional dependency for datasets IMO\r\n\r\nYes, it will remain optional for datasets but will be required for building the docs (as is`s3fs`, for instance). ",
"Ok I see ! Sounds good :)"
] | 2022-09-03T19:09:08Z
| 2022-10-03T16:34:36Z
| 2022-10-03T16:32:28Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4928.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4928",
"merged_at": "2022-10-03T16:32:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4928.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4928"
}
|
Fixes #3094
Add ability to read/write to SQLite files and also read from any SQL database supported by SQLAlchemy.
I didn't add SQLAlchemy as a dependency, as it is fairly big, so it remains optional.
I also recorded a Loom to showcase the feature.
https://www.loom.com/share/f0e602c2de8a46f58bca4b43333d541f
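
A hedged usage sketch of the resulting API (the database URI and table name are illustrative; the optional SQLAlchemy dependency is required):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})

# Write the dataset to a SQLite table through a SQLAlchemy-style URI.
ds.to_sql("my_table", "sqlite:///data.db")

# Read it back; from_sql also accepts an arbitrary SELECT query as the first argument.
ds2 = Dataset.from_sql("my_table", "sqlite:///data.db")
```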
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 2,
"heart": 4,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 8,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4928/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4928/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1151
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1151/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1151/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1151/events
|
https://github.com/huggingface/datasets/pull/1151
| 757,517,092
|
MDExOlB1bGxSZXF1ZXN0NTMyODc5ODk4
| 1,151
|
adding psc dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1654113?v=4",
"events_url": "https://api.github.com/users/abecadel/events{/privacy}",
"followers_url": "https://api.github.com/users/abecadel/followers",
"following_url": "https://api.github.com/users/abecadel/following{/other_user}",
"gists_url": "https://api.github.com/users/abecadel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abecadel",
"id": 1654113,
"login": "abecadel",
"node_id": "MDQ6VXNlcjE2NTQxMTM=",
"organizations_url": "https://api.github.com/users/abecadel/orgs",
"received_events_url": "https://api.github.com/users/abecadel/received_events",
"repos_url": "https://api.github.com/users/abecadel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abecadel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abecadel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abecadel"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-05T02:40:01Z
| 2020-12-09T11:38:41Z
| 2020-12-09T11:38:41Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1151.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1151",
"merged_at": "2020-12-09T11:38:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1151.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1151"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1151/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1151/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/2688
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2688/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2688/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2688/events
|
https://github.com/huggingface/datasets/issues/2688
| 949,182,074
|
MDU6SXNzdWU5NDkxODIwNzQ=
| 2,688
|
hebrew language codes he and iw should be treated as aliases
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4436747?v=4",
"events_url": "https://api.github.com/users/eyaler/events{/privacy}",
"followers_url": "https://api.github.com/users/eyaler/followers",
"following_url": "https://api.github.com/users/eyaler/following{/other_user}",
"gists_url": "https://api.github.com/users/eyaler/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/eyaler",
"id": 4436747,
"login": "eyaler",
"node_id": "MDQ6VXNlcjQ0MzY3NDc=",
"organizations_url": "https://api.github.com/users/eyaler/orgs",
"received_events_url": "https://api.github.com/users/eyaler/received_events",
"repos_url": "https://api.github.com/users/eyaler/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/eyaler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eyaler/subscriptions",
"type": "User",
"url": "https://api.github.com/users/eyaler"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Hi @eyaler, thanks for reporting.\r\n\r\nWhile you are true with respect the Hebrew language tag (\"iw\" is deprecated and \"he\" is the preferred value), in the \"mc4\" dataset (which is a derived dataset) we have kept the language tags present in the original dataset: [Google C4](https://www.tensorflow.org/datasets/catalog/c4).",
"For discoverability on the website I updated the YAML tags at the top of the mC4 dataset card https://github.com/huggingface/datasets/commit/38288087b1b02f97586e0346e8f28f4960f1fd37\r\n\r\nOnce the website is updated, mC4 will be listed in https://huggingface.co/datasets?filter=languages:he\r\n\r\n"
] | 2021-07-20T23:13:52Z
| 2021-07-21T16:34:53Z
| 2021-07-21T16:34:53Z
|
NONE
| null | null | null |
https://huggingface.co/datasets/mc4 is not listed when searching for Hebrew datasets (he), as it uses the older language code iw, preventing discoverability.
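
For context, a sketch of how the data would be loaded under the legacy tag, assuming the config names follow the original C4 language codes (the streaming flag is shown only for illustration):

```python
from datasets import load_dataset

# mC4 keeps the original C4 language tags, so Hebrew sits under "iw", not "he".
ds = load_dataset("mc4", "iw", split="train", streaming=True)
```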
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2688/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2688/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2089
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2089/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2089/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2089/events
|
https://github.com/huggingface/datasets/issues/2089
| 836,788,019
|
MDU6SXNzdWU4MzY3ODgwMTk=
| 2,089
|
Add documentation for dataset README.md files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PhilipMay",
"id": 229382,
"login": "PhilipMay",
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PhilipMay"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! We are using the [datasets-tagging app](https://github.com/huggingface/datasets-tagging) to select the tags to add.\r\n\r\nWe are also adding the full list of tags in #2107 \r\nThis covers multilinguality, language_creators, licenses, size_categories and task_categories.\r\n\r\nIn general if you want to add a tag that doesn't exist (for example for a custom license) you must make it start with `other-` and then a custom tag name.\r\n\r\nedit (@theo-m) if you ever find yourself resorting to adding an `other-*` tag, please do ping us somewhere so we can think about adding it to the \"official\" list :)",
"@lhoestq hmm - ok thanks for the answer.\r\nTo be honest I am not sure if this issue can be closed now.\r\nI just wanted to point out that this should either be documented or linked in the documentation.\r\nIf you feel like it is (will be) please just close this.",
"We're still working on the validation+documentation in this.\r\nFeel free to keep this issue open till we've added them",
"@lhoestq what is the status on this? Did you add documentation?",
"Hi ! There's the tagging app at https://huggingface.co/datasets/tagging/ that you can use.\r\nIt shows the list of all the tags you can use.\r\n\r\nIt is based on all the tag sets defined in this folder:\r\nhttps://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources",
"@lhoestq is there something like this form Models?",
"I don't think so. Feel free to take a look at the tags of other models (example [here](https://huggingface.co/bert-base-uncased/blob/main/README.md)). But we should definitely have some docs or an app to write the tags. Feel free to open an issue in the `transformers` repo or in the `huggingface_hub` repo so we can discuss this",
"When modifying a README file, the Hub now displays a special UI with allowed values (see https://huggingface.co/docs/datasets/main/en/upload_dataset#create-a-dataset-card)."
] | 2021-03-20T11:44:38Z
| 2023-07-25T16:45:38Z
| 2023-07-25T16:45:37Z
|
CONTRIBUTOR
| null | null | null |
Hi,
the dataset README files have special headers.
Somehow, documentation of the allowed values and tags is missing.
Could you add that?
Just to give some concrete questions that should be answered imo:
- which values can be passed to multilinguality?
- what should be passed to language_creators?
- which values should licenses have? What do I say when it is a custom license? Should I add a link?
- how should I choose size_categories? What are valid ranges?
- what are valid task_categories?
Thanks
Philip
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2089/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2089/timeline
| null |
completed
| false
|