Dataset schema:

| Column | Type | Lengths / values |
|---|---|---|
| url | string | lengths 62–66 |
| repository_url | string | 1 class |
| labels_url | string | lengths 76–80 |
| comments_url | string | lengths 71–75 |
| events_url | string | lengths 69–73 |
| html_url | string | lengths 50–56 |
| id | int64 | 377M–2.15B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–29.2k |
| title | string | lengths 1–487 |
| user | dict | — |
| labels | list | — |
| state | string | 2 classes |
| locked | bool | 2 classes |
| assignee | dict | — |
| assignees | list | — |
| comments | list | — |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | string | 4 classes |
| active_lock_reason | string | 2 classes |
| body | string | lengths 0–234k |
| reactions | dict | — |
| timeline_url | string | lengths 71–75 |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| pull_request | dict | — |
https://api.github.com/repos/huggingface/transformers/issues/16971
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16971/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16971/comments
https://api.github.com/repos/huggingface/transformers/issues/16971/events
https://github.com/huggingface/transformers/issues/16971
1,217,721,631
I_kwDOCUB6oc5IlPUf
16,971
AttributeError: 'DataParallel' object has no attribute 'save_pretrained'
{ "login": "bilalghanem", "id": 47889448, "node_id": "MDQ6VXNlcjQ3ODg5NDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/47889448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bilalghanem", "html_url": "https://github.com/bilalghanem", "followers_url": "https://api.github.com/users/bilalghanem/followers", "following_url": "https://api.github.com/users/bilalghanem/following{/other_user}", "gists_url": "https://api.github.com/users/bilalghanem/gists{/gist_id}", "starred_url": "https://api.github.com/users/bilalghanem/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bilalghanem/subscriptions", "organizations_url": "https://api.github.com/users/bilalghanem/orgs", "repos_url": "https://api.github.com/users/bilalghanem/repos", "events_url": "https://api.github.com/users/bilalghanem/events{/privacy}", "received_events_url": "https://api.github.com/users/bilalghanem/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "`DataParallel` wraps the model. To access the underlying module, you can use the `module` attribute:\r\n```py\r\n>>> from torch.nn import DataParallel\r\n>>> model = nn.DataParallel(model)\r\n>>> model.module.save_pretrained(<directory>)\r\n```", "> `DataParallel` wraps the model. To access the underlying module, you can use the `module` attribute:\r\n> \r\n> ```python\r\n> >>> from torch.nn import DataParallel\r\n> >>> model = nn.DataParallel(model)\r\n> >>> model.module.save_pretrained(<directory>)\r\n> ```\r\n\r\nThanks @LysandreJik! " ]
1,651
1,651
1,651
NONE
null
### System Info ```shell torch==1.10.2+cu113 transformers==4.18.0 Python 3.6.9 Linux "18.04.6 LTS (Bionic Beaver)" ``` I am training a T5 transformer (T5ForConditionalGeneration.from_pretrained(model_params["MODEL"])) to generate text. The model works well when I train it on a single GPU. But when I want to parallelize the data across several GPUs by doing `model = nn.DataParallel(model)`, I can't save the model. The error is: > File "run.py", line 288, in T5Trainer > model.save_pretrained(path) > File "/home/USER_NAME/venv/pt_110/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1178, in __getattr__ > type(self).__name__, name)) > AttributeError: 'DataParallel' object has no attribute 'save_pretrained' ### Reproduction Wrap the model with `model = nn.DataParallel(model)`. ### Expected behavior ```shell The model should be saved without any issues. ```
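The fix suggested in the comments above, as a self-contained sketch (the checkpoint name and output directory are placeholders):

```python
import torch.nn as nn
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")  # placeholder checkpoint
model = nn.DataParallel(model)

# ... training ...

# DataParallel only proxies nn.Module attributes, so transformers-specific
# methods like save_pretrained must be called on the wrapped module.
to_save = model.module if isinstance(model, nn.DataParallel) else model
to_save.save_pretrained("./t5-checkpoint")  # placeholder directory
```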
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16971/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16971/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16970
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16970/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16970/comments
https://api.github.com/repos/huggingface/transformers/issues/16970/events
https://github.com/huggingface/transformers/pull/16970
1,217,708,582
PR_kwDOCUB6oc425Ki1
16,970
Fix check_all_models_are_tested
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,651
1,651
1,651
COLLABORATOR
null
# What does this PR do? The block from L396 to L398 should be in the `else` block if I understand correctly. https://github.com/huggingface/transformers/blob/8d3f952adb8c98cec2ea1f59bb7acfbc08232381/utils/check_repo.py#L394-L398 Otherwise, when a model has no test file, I get errors like below, and the program stops immediately. (with `test_file = []` passed to `check_models_are_tested`) ```python File "/home/yih_dar_huggingface_co/transformers/utils/check_repo.py", line 362, in check_models_are_tested tested_models = find_tested_models(test_file) File "/home/yih_dar_huggingface_co/transformers/utils/check_repo.py", line 343, in find_tested_models with open(os.path.join(PATH_TO_TESTS, test_file), "r", encoding="utf-8", newline="\n") as f: File "/usr/lib/python3.9/posixpath.py", line 90, in join genericpath._check_arg_types('join', a, *p) File "/usr/lib/python3.9/genericpath.py", line 152, in _check_arg_types raise TypeError(f'{funcname}() argument must be str, bytes, or ' TypeError: join() argument must be str, bytes, or os.PathLike object, not 'list' ```
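A sketch of the control-flow fix described above, with the surrounding logic reduced to stubs (the real code lives in `utils/check_repo.py`; names here are simplified):

```python
def find_tested_models(test_file):
    """Stub standing in for the real helper in utils/check_repo.py."""
    return []

def check_models_are_tested(model_name, test_file):
    # The guard for a missing test file sits on one branch and the lookup on
    # the `else` branch, so find_tested_models is never called with an empty
    # list (which is what triggered the os.path.join TypeError above).
    if len(test_file) == 0:
        return [f"{model_name} has no test file."]
    else:
        return find_tested_models(test_file[0])

print(check_models_are_tested("modeling_foo", []))  # failure message instead of a crash
```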
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16970/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16970/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16970", "html_url": "https://github.com/huggingface/transformers/pull/16970", "diff_url": "https://github.com/huggingface/transformers/pull/16970.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16970.patch", "merged_at": 1651087109000 }
https://api.github.com/repos/huggingface/transformers/issues/16969
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16969/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16969/comments
https://api.github.com/repos/huggingface/transformers/issues/16969/events
https://github.com/huggingface/transformers/pull/16969
1,217,668,845
PR_kwDOCUB6oc425B_j
16,969
Fix doc notebooks links
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,651
1,651
1,651
COLLABORATOR
null
# What does this PR do? Notebooks for the documentation have moved under the `en` folder; this PR fixes all the links we have.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16969/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16969/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16969", "html_url": "https://github.com/huggingface/transformers/pull/16969", "diff_url": "https://github.com/huggingface/transformers/pull/16969.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16969.patch", "merged_at": 1651085993000 }
https://api.github.com/repos/huggingface/transformers/issues/16968
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16968/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16968/comments
https://api.github.com/repos/huggingface/transformers/issues/16968/events
https://github.com/huggingface/transformers/pull/16968
1,217,550,926
PR_kwDOCUB6oc424qN2
16,968
Fixup no_trainer save logic
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "id": 1834053813, "node_id": "MDU6TGFiZWwxODM0MDUzODEz", "url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch", "name": "PyTorch", "color": "a12bef", "default": false, "description": "Anything PyTorch" }, { "id": 1936351150, "node_id": "MDU6TGFiZWwxOTM2MzUxMTUw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Examples", "name": "Examples", "color": "d4c5f9", "default": false, "description": "Which is related to examples in general" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,651
1,651
1,651
CONTRIBUTOR
null
# Fix save logic in all `no_trainer` examples ## What does this add? This PR fixes a bug pointed out in https://github.com/huggingface/accelerate/issues/322, where the save and load logic was wrong in how it skipped over the steps in the training loop. This PR fixes it and changes the internals slightly so that saved checkpoints are named correctly (before, the name always started at `epoch_0`, even if we resumed from epoch 1).
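A hedged sketch of the resume behavior the PR is after; the checkpoint-name parsing and loop structure are illustrative, not the examples' actual code:

```python
# Resuming from a checkpoint named like "epoch_1": completed epochs must be
# skipped, and the next save must continue the numbering rather than reset to 0.
resume_epoch = 1  # parsed from the checkpoint name (placeholder)
num_epochs = 3

for epoch in range(num_epochs):
    if epoch < resume_epoch:
        continue  # skip work already done before the interruption
    # ... training steps ...
    print(f"saving epoch_{epoch}")  # continues at epoch_1, epoch_2, ...
```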
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16968/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16968/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16968", "html_url": "https://github.com/huggingface/transformers/pull/16968", "diff_url": "https://github.com/huggingface/transformers/pull/16968.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16968.patch", "merged_at": 1651085209000 }
https://api.github.com/repos/huggingface/transformers/issues/16967
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16967/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16967/comments
https://api.github.com/repos/huggingface/transformers/issues/16967/events
https://github.com/huggingface/transformers/issues/16967
1,217,488,489
I_kwDOCUB6oc5IkWZp
16,967
cannot import name 'RegNetModel' from 'transformers'
{ "login": "MorningStarOvO", "id": 35036784, "node_id": "MDQ6VXNlcjM1MDM2Nzg0", "avatar_url": "https://avatars.githubusercontent.com/u/35036784?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MorningStarOvO", "html_url": "https://github.com/MorningStarOvO", "followers_url": "https://api.github.com/users/MorningStarOvO/followers", "following_url": "https://api.github.com/users/MorningStarOvO/following{/other_user}", "gists_url": "https://api.github.com/users/MorningStarOvO/gists{/gist_id}", "starred_url": "https://api.github.com/users/MorningStarOvO/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MorningStarOvO/subscriptions", "organizations_url": "https://api.github.com/users/MorningStarOvO/orgs", "repos_url": "https://api.github.com/users/MorningStarOvO/repos", "events_url": "https://api.github.com/users/MorningStarOvO/events{/privacy}", "received_events_url": "https://api.github.com/users/MorningStarOvO/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "RegNet is currently only available from the main branch, it will be included it in the next release. You can install it as follows:\r\n\r\n`pip install git+https://github.com/huggingface/transformers.git`\r\n\r\n" ]
1,651
1,652
1,652
NONE
null
### System Info ```shell python 3.8 transformers 4.18.0 ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import RegNetModel ### Expected behavior ```shell How can I import RegNetModel? ```
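A small illustration of the situation, assuming (per the comment above) that transformers 4.18.0 predates RegNet:

```python
try:
    from transformers import RegNetModel  # only present on the main branch at this time
except ImportError:
    import transformers
    print(f"RegNetModel not in transformers {transformers.__version__}; "
          "install from source: pip install git+https://github.com/huggingface/transformers.git")
```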
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16967/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16967/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16966
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16966/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16966/comments
https://api.github.com/repos/huggingface/transformers/issues/16966/events
https://github.com/huggingface/transformers/pull/16966
1,217,483,649
PR_kwDOCUB6oc424bze
16,966
Fix add-new-model-like when model doesn't support all frameworks
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,651
1,651
1,651
COLLABORATOR
null
# What does this PR do? This fixes the `transformers-cli add-new-model-like` command when the model used as a template is not implemented in all frameworks.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16966/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16966/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16966", "html_url": "https://github.com/huggingface/transformers/pull/16966", "diff_url": "https://github.com/huggingface/transformers/pull/16966.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16966.patch", "merged_at": 1651072526000 }
https://api.github.com/repos/huggingface/transformers/issues/16965
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16965/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16965/comments
https://api.github.com/repos/huggingface/transformers/issues/16965/events
https://github.com/huggingface/transformers/pull/16965
1,217,448,933
PR_kwDOCUB6oc424UX4
16,965
Move test model folders new
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,651
1,651
1,651
COLLABORATOR
null
# What does this PR do?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16965/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16965/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16965", "html_url": "https://github.com/huggingface/transformers/pull/16965", "diff_url": "https://github.com/huggingface/transformers/pull/16965.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16965.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/16964
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16964/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16964/comments
https://api.github.com/repos/huggingface/transformers/issues/16964/events
https://github.com/huggingface/transformers/pull/16964
1,217,446,178
PR_kwDOCUB6oc424Txu
16,964
Update custom_models.mdx
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,651
1,651
1,651
CONTRIBUTOR
null
BertModelForSequenceClassification -> [BertForSequenceClassification](https://huggingface.co/docs/transformers/main/en/model_doc/bert#transformers.BertForSequenceClassification) # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16964/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16964/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16964", "html_url": "https://github.com/huggingface/transformers/pull/16964", "diff_url": "https://github.com/huggingface/transformers/pull/16964.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16964.patch", "merged_at": 1651070815000 }
https://api.github.com/repos/huggingface/transformers/issues/16963
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16963/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16963/comments
https://api.github.com/repos/huggingface/transformers/issues/16963/events
https://github.com/huggingface/transformers/pull/16963
1,217,386,742
PR_kwDOCUB6oc424G4R
16,963
Fix `distributed_concat` with scalar tensor
{ "login": "Yard1", "id": 10364161, "node_id": "MDQ6VXNlcjEwMzY0MTYx", "avatar_url": "https://avatars.githubusercontent.com/u/10364161?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Yard1", "html_url": "https://github.com/Yard1", "followers_url": "https://api.github.com/users/Yard1/followers", "following_url": "https://api.github.com/users/Yard1/following{/other_user}", "gists_url": "https://api.github.com/users/Yard1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Yard1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Yard1/subscriptions", "organizations_url": "https://api.github.com/users/Yard1/orgs", "repos_url": "https://api.github.com/users/Yard1/repos", "events_url": "https://api.github.com/users/Yard1/events{/privacy}", "received_events_url": "https://api.github.com/users/Yard1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sgugger ", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,651
1,651
1,651
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> If a scalar tensor is passed to `distributed_concat`, the output tensors are correctly converted to one element vectors. However, this is not done for the tensor itself, which causes an exception to be thrown in `dist.all_gather` due to a tensor length mismatch. This PR fixes that. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
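A simplified sketch of the fix direction the PR describes (not the verbatim `trainer_pt_utils.py` code):

```python
import torch
import torch.distributed as dist

def distributed_concat(tensor: torch.Tensor) -> torch.Tensor:
    # all_gather needs every rank's tensor to share a shape, so a 0-dim scalar
    # is expanded to a one-element vector, matching the output buffers below.
    if tensor.ndim == 0:
        tensor = tensor[None]
    output_tensors = [tensor.clone() for _ in range(dist.get_world_size())]
    dist.all_gather(output_tensors, tensor)
    return torch.cat(output_tensors, dim=0)
```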
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16963/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16963/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16963", "html_url": "https://github.com/huggingface/transformers/pull/16963", "diff_url": "https://github.com/huggingface/transformers/pull/16963.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16963.patch", "merged_at": 1651069582000 }
https://api.github.com/repos/huggingface/transformers/issues/16962
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16962/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16962/comments
https://api.github.com/repos/huggingface/transformers/issues/16962/events
https://github.com/huggingface/transformers/issues/16962
1,217,329,497
I_kwDOCUB6oc5IjvlZ
16,962
Can't reproduce training of wav2vec2-large from documentation
{ "login": "HLasse", "id": 23191638, "node_id": "MDQ6VXNlcjIzMTkxNjM4", "avatar_url": "https://avatars.githubusercontent.com/u/23191638?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HLasse", "html_url": "https://github.com/HLasse", "followers_url": "https://api.github.com/users/HLasse/followers", "following_url": "https://api.github.com/users/HLasse/following{/other_user}", "gists_url": "https://api.github.com/users/HLasse/gists{/gist_id}", "starred_url": "https://api.github.com/users/HLasse/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HLasse/subscriptions", "organizations_url": "https://api.github.com/users/HLasse/orgs", "repos_url": "https://api.github.com/users/HLasse/repos", "events_url": "https://api.github.com/users/HLasse/events{/privacy}", "received_events_url": "https://api.github.com/users/HLasse/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @HLasse,\r\n\r\nCould you increase this parameter: https://huggingface.co/facebook/wav2vec2-large-lv60/blob/main/config.json#L62 to `0.5` and see if it works then? It seems like given the sequence length you are not sampling enough negative targets.\r\n\r\nAlso it'll be really hard / impossible to do a full pretraining on a single T4 GPU", "That works, thanks! \r\n\r\n> Also it'll be really hard / impossible to do a full pretraining on a single T4 GPU\r\n\r\nI know - this was mainly to get an estimate of training time on different hardware setups. Danish wav2vec models coming up soon! :) ", "Hi. I encountered exactly the same issue.\r\nI'm using the Wav2Vec2ConformerForPreTraining model 'facebook/wav2vec2-conformer-rope-large', training on a single NVIDIA TITAN Xp with a very small speech dataset(pilot).\r\n\r\nI've already changed the mask_time_prob, but it didn't work for me. \r\nThe error message I got was the same one above.\r\n\r\nCould you guys help me with this problem??\r\nThank you in advance!! " ]
1,651
1,697
1,651
NONE
null
### System Info ```shell - `transformers` version: 4.19.0.dev0 - Platform: Linux-5.4.0-109-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ``` ### Who can help? @patrickvonplaten ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Pretraining a wav2vec-large model using the documentation under [examples/speech-pretraining](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-pretraining) does not work. Running the following code (copy-pasted from the README) gives an error due to `model_path_or_dir` not found: ``` accelerate launch run_wav2vec2_pretraining_no_trainer.py \ --dataset_name=librispeech_asr \ --dataset_config_names clean clean other \ --dataset_split_names train.100 train.360 train.500 \ --output_dir=./test \ --max_train_steps=200000 \ --num_warmup_steps=32000 \ --gradient_accumulation_steps=8 \ --learning_rate=0.001 \ --weight_decay=0.01 \ --max_duration_in_seconds=20.0 \ --min_duration_in_seconds=2.0 \ --model_name_or_path=./ --logging_steps=1 \ --saving_steps=10000 \ --per_device_train_batch_size=2 \ --per_device_eval_batch_size=4 \ --adam_beta1=0.9 \ --adam_beta2=0.98 \ --adam_epsilon=1e-06 \ --gradient_checkpointing \ ``` I tried using `facebook/wav2vec-large-lv60` in `model_name_or_path` but receive the following error: ``` Traceback (most recent call last): File "run_wav2vec2_pretraining_no_trainer.py", line 730, in <module> main() File "run_wav2vec2_pretraining_no_trainer.py", line 572, in main for step, batch in enumerate(train_dataloader): File "/home/ucloud/.local/lib/python3.8/site-packages/accelerate/data_loader.py", line 303, in __iter__ for batch in super().__iter__(): File "/home/ucloud/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 530, in __next__ data = self._next_data() File "/home/ucloud/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 570, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/home/ucloud/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch return self.collate_fn(data) File "run_wav2vec2_pretraining_no_trainer.py", line 326, in __call__ sampled_negative_indices = _sample_negative_indices( File "/home/ucloud/.local/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 336, in _sample_negative_indices sampled_indices = np.random.randint(0, high, size=(high + 1, num_negatives)) File "mtrand.pyx", line 748, in numpy.random.mtrand.RandomState.randint File "_bounded_integers.pyx", line 1247, in numpy.random._bounded_integers._rand_int64 ValueError: high <= 0 ``` The demo script trains without issue. Using the parameters from the demo script and changing `model_name_or_path` from `patrickvonplaten/wav2vec2-base-v2` to `facebook/wav2vec-large-lv60` gives the above error. Training on a single T4 GPU (benchmarking purposes) ### Expected behavior ```shell Wav2vec-large pretraining to run. ```
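The workaround from the comments above (raising `mask_time_prob`), expressed as a hedged sketch; config kwargs passed to `from_pretrained` override the checkpoint's config:

```python
from transformers import Wav2Vec2ForPreTraining

# mask_time_prob is raised so that enough masked positions exist to sample
# negatives from at these sequence lengths; 0.5 is the value suggested above.
model = Wav2Vec2ForPreTraining.from_pretrained(
    "facebook/wav2vec2-large-lv60",
    mask_time_prob=0.5,
)
```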
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16962/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16962/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16961
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16961/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16961/comments
https://api.github.com/repos/huggingface/transformers/issues/16961/events
https://github.com/huggingface/transformers/pull/16961
1,217,106,874
PR_kwDOCUB6oc423KVb
16,961
Add parameter --config_overrides for run_mlm_wwm.py
{ "login": "conan1024hao", "id": 50416856, "node_id": "MDQ6VXNlcjUwNDE2ODU2", "avatar_url": "https://avatars.githubusercontent.com/u/50416856?v=4", "gravatar_id": "", "url": "https://api.github.com/users/conan1024hao", "html_url": "https://github.com/conan1024hao", "followers_url": "https://api.github.com/users/conan1024hao/followers", "following_url": "https://api.github.com/users/conan1024hao/following{/other_user}", "gists_url": "https://api.github.com/users/conan1024hao/gists{/gist_id}", "starred_url": "https://api.github.com/users/conan1024hao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/conan1024hao/subscriptions", "organizations_url": "https://api.github.com/users/conan1024hao/orgs", "repos_url": "https://api.github.com/users/conan1024hao/repos", "events_url": "https://api.github.com/users/conan1024hao/events{/privacy}", "received_events_url": "https://api.github.com/users/conan1024hao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger @wlhgtc @LowinLi Please review this PR if you have time thank you.", "Thanks for your PR. Note that we don't maintain research projects, they are pinned to work with a specific version of Transformers. You will need approval from the original author of the script to have this merged :-)", "@sgugger Thank you for reply. However I can not find the earliest history of `run_mlm_wwm.py`... Is @wlhgtc the original author?", "@conan1024hao LGTM\r\nAnd @sgugger can you help merge this PR ? ", "Sure thing!" ]
1,651
1,651
1,651
CONTRIBUTOR
null
## WHY - I noticed that the parameter `--config_overrides` is only available in `run_clm.py`, `run_plm.py` and `run_mlm.py` in `examples/pytorch/language-modeling`, but not available in `run_mlm_wwm.py` in `examples/research_projects/mlm_wwm/run_mlm_wwm.py`. - However, I want to train a wwm model from scratch too, so we need this parameter. ## WHAT - Added the parameter `--config_overrides` in `run_mlm_wwm.py`.
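For context, a sketch of what `--config_overrides` does in the maintained language-modeling scripts (the mechanism this PR ports to `run_mlm_wwm.py`; the override string here is illustrative):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("bert-base-uncased")
# Parse a "key=value,key=value" string and update the config before
# training a model from scratch.
config.update_from_string("hidden_size=256,num_hidden_layers=4")
print(config.hidden_size, config.num_hidden_layers)  # 256 4
```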
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16961/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16961/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16961", "html_url": "https://github.com/huggingface/transformers/pull/16961", "diff_url": "https://github.com/huggingface/transformers/pull/16961.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16961.patch", "merged_at": 1651157096000 }
https://api.github.com/repos/huggingface/transformers/issues/16960
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16960/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16960/comments
https://api.github.com/repos/huggingface/transformers/issues/16960/events
https://github.com/huggingface/transformers/issues/16960
1,217,074,446
I_kwDOCUB6oc5IixUO
16,960
Word limit with mBART-50 translation
{ "login": "phayat", "id": 99869100, "node_id": "U_kgDOBfPhrA", "avatar_url": "https://avatars.githubusercontent.com/u/99869100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phayat", "html_url": "https://github.com/phayat", "followers_url": "https://api.github.com/users/phayat/followers", "following_url": "https://api.github.com/users/phayat/following{/other_user}", "gists_url": "https://api.github.com/users/phayat/gists{/gist_id}", "starred_url": "https://api.github.com/users/phayat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phayat/subscriptions", "organizations_url": "https://api.github.com/users/phayat/orgs", "repos_url": "https://api.github.com/users/phayat/repos", "events_url": "https://api.github.com/users/phayat/events{/privacy}", "received_events_url": "https://api.github.com/users/phayat/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "Hey @phayat ! Could you please post a code-snippet so we could reproduce the issue ? Thanks!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "hey @patil-suraj @phayat have you found the solution " ]
1,651
1,674
1,655
NONE
null
I'm using the facebook/mbart-large-50-many-to-many-mmt model to translate French texts to English, but it seems the translation is limited to the first 110 words of the input text. Can you confirm, and is there a way to fix this? Thanks in advance.
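One likely cause, offered as an assumption since the thread never resolved it: `generate()` stops at the model's default `max_length`, so long inputs look truncated. A sketch of raising the cap (the input text is a placeholder):

```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

name = "facebook/mbart-large-50-many-to-many-mmt"
tokenizer = MBart50TokenizerFast.from_pretrained(name)
model = MBartForConditionalGeneration.from_pretrained(name)

tokenizer.src_lang = "fr_XX"
inputs = tokenizer("Un long texte en français ...", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"],
    max_length=1024,  # raise the cap so long translations are not cut off
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```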
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16960/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16960/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16959
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16959/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16959/comments
https://api.github.com/repos/huggingface/transformers/issues/16959/events
https://github.com/huggingface/transformers/pull/16959
1,216,998,923
PR_kwDOCUB6oc422zNW
16,959
Add -e flag to some GH workflow yml files
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The added check will produce something like (if the transformers is not from the expected location)\r\n\r\n<img width=\"777\" alt=\"Screenshot 2022-04-27 131320\" src=\"https://user-images.githubusercontent.com/2521628/165506295-901fd17d-17f6-4fb7-8eec-f96b87d63b03.png\">\r\n\r\n", "after the new cache is built the first time and job completes (pass the check I added too), the next run when we have \r\n\r\n```\r\nCache restored from key: v3-tests_model_like-ce386d6c28d7afcca58dc875de2ef1b7477e8246a0bfdb6ff4de0eb222eafef2\r\n```\r\nthe check failed with\r\n\r\n```\r\ntransformers is from but it shoud be from /home/runner/work/transformers/transformers/src.\r\nA fix is required. Stop testing.\r\n```\r\nwhich means `pip show transformers` gives empty location!\r\n\r\nI will try to make it work - I really like to have this test.\r\n~~(But things should work now if we just remove this test)~~\r\n\r\n", "Confirmed this (with `-e`) currently not working for the subsequentially run (i.e. cache loaded).\r\n\r\nI still think there might be some workaround, let me try. Set to draft for now\r\n\r\n### currently error\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/runner/venv/bin/transformers-cli\", line 33, in <module>\r\n sys.exit(load_entry_point('transformers', 'console_scripts', 'transformers-cli')())\r\n File \"/home/runner/venv/bin/transformers-cli\", line [22](https://github.com/huggingface/transformers/runs/6197315688?check_suite_focus=true#step:7:22), in importlib_load_entry_point\r\n for entry_point in distribution(dist_name).entry_points\r\n File \"/home/runner/venv/lib/python3.6/site-packages/importlib_metadata/__init__.py\", line 815, in distribution\r\n return Distribution.from_name(distribution_name)\r\n File \"/home/runner/venv/lib/python3.6/site-packages/importlib_metadata/__init__.py\", line 430, in from_name\r\n raise PackageNotFoundError(name)\r\nimportlib_metadata.PackageNotFoundError: No package metadata was found for transformers\r\nError: Process completed with exit code 1.\r\n```" ]
1,651
1,651
1,651
COLLABORATOR
null
# What does this PR do? Currently, two GitHub Actions workflow YAML files use `pip install .[dev]`. This installs `transformers` in `/home/runner/venv/lib/python3.6/site-packages`, and this is cached. In future job runs, the cache is restored and that `transformers` version is used instead of the latest commit, i.e. the copy we want to use in `/home/runner/work/transformers/transformers/src`. Without this PR, I have trouble after updating `add_new_model.py` (to change test model folders from `tests/` to `tests/models/`), because the `add_new_model.py` from `/home/runner/venv/lib/python3.6/site-packages` would be used, which would put the test template models under `tests/`. This PR makes sure the latest `transformers` is used by using `pip install -e .[dev]`, and builds a new cache with it. - Add the `-e` flag to some GH workflow yml files - Change the cache key in order to make the change effective - Add a check on the `transformers` location
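A sketch of the kind of location check the PR mentions; the expected path and the failure handling are assumptions, not the merged workflow code:

```python
import os
import transformers

# After `pip install -e .[dev]`, transformers should resolve to the checkout's
# src/ directory rather than a cached site-packages copy.
expected = os.path.join(os.getcwd(), "src")
actual = os.path.dirname(os.path.dirname(transformers.__file__))
if actual != expected:
    raise SystemExit(f"transformers is from {actual} but it should be from {expected}.")
```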
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16959/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16959/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16959", "html_url": "https://github.com/huggingface/transformers/pull/16959", "diff_url": "https://github.com/huggingface/transformers/pull/16959.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16959.patch", "merged_at": 1651088661000 }
https://api.github.com/repos/huggingface/transformers/issues/16958
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16958/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16958/comments
https://api.github.com/repos/huggingface/transformers/issues/16958/events
https://github.com/huggingface/transformers/pull/16958
1,216,979,828
PR_kwDOCUB6oc422vMK
16,958
Misc. fixes for Pytorch QA examples:
{ "login": "searchivarius", "id": 825650, "node_id": "MDQ6VXNlcjgyNTY1MA==", "avatar_url": "https://avatars.githubusercontent.com/u/825650?v=4", "gravatar_id": "", "url": "https://api.github.com/users/searchivarius", "html_url": "https://github.com/searchivarius", "followers_url": "https://api.github.com/users/searchivarius/followers", "following_url": "https://api.github.com/users/searchivarius/following{/other_user}", "gists_url": "https://api.github.com/users/searchivarius/gists{/gist_id}", "starred_url": "https://api.github.com/users/searchivarius/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/searchivarius/subscriptions", "organizations_url": "https://api.github.com/users/searchivarius/orgs", "repos_url": "https://api.github.com/users/searchivarius/repos", "events_url": "https://api.github.com/users/searchivarius/events{/privacy}", "received_events_url": "https://api.github.com/users/searchivarius/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,651
1,651
1,651
CONTRIBUTOR
null
Thank you for the great library! This fixes a number of issues with the PyTorch QA examples. All numbers are either the same or went up. However, there are still some issues which I wasn't able to fix (in one example). Please see the notes and benchmark results below. # What does this PR do? 1. Fixes evaluation errors popping up when you train/eval on SQuAD v2 (one was newly encountered, and one was previously reported in "Running SQuAD 1.0 sample command raises IndexError" #15401 but not completely fixed). 2. Removes boolean arguments that don't use `store_true`. Please don't use these: **ANY** non-empty string is converted to `True` in this case. This is clearly **undesired** behavior, which creates a LOT of confusion. 3. All no-trainer test scripts now save metric values in the same way (with the right prefix `eval_`), which is consistent with the trainer-based versions. 4. Adds a forgotten `model.eval()` in the no-trainer versions. This improved some results, but not everything (see the discussion in the end). Please see the F1 scores and the discussion below. - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. This is a **reduced** PR [as discussed here](https://github.com/huggingface/transformers/pull/16926#issuecomment-1108479241). - [ ] Did you make sure to update the documentation with your changes? **I believe examples aren't covered by the documentation.** - [X] Did you write any new necessary tests? **I trained SQuAD and SQuAD v2 models and compared results (see the discussion below)**, but I am not sure if running more QA tests automatically will be feasible. Do note that the existing "unit test" is very crude and does not permit detecting small regressions in model quality. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Perhaps this will be of most interest to @sgugger, who reviewed a prior version of this PR. ## Comparing old and new performance + some potential issues Some remaining issues: 1. Despite the fixes & improvements, there's still a discrepancy between the no-trainer and original versions for SQuAD v2 and the beam-search variant. 2. In particular, for SQuAD v2 and the beam-search variant **without trainer**, both old and new numbers look very wrong to me. Please note that to be able to run SQuAD v2 tests, **I had to apply the utils_qa.py fixes to the old code as well**. Otherwise, it would have just failed. The metric is F1; the exact scores have the same pattern: | | previous | new | |-----------------------------------|:--------:|:----:| | squad v1 | 88.4 | 88.4 | | squad v1 (no trainer) | 86.7 | 88.5 | | squad v2 | N/A | 75.2 | | squad v2 (no trainer) | N/A | 77.1 | | squad v1 (beam search) | 92.1 | 92.1 | | squad v1 (beam search no trainer) | 90.2 | 91.0 | | squad v2 (beam search) | 83.2 | 83.2 | | squad v2 (beam search no trainer) | 4.9 | 50.1 |
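Point 2 above in miniature; this pitfall is standard argparse behavior:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--buggy_flag", type=bool, default=False)  # bool() of the raw string
parser.add_argument("--good_flag", action="store_true")

args = parser.parse_args(["--buggy_flag", "False"])
print(args.buggy_flag)  # True -- bool("False") is truthy, like ANY non-empty string

args = parser.parse_args(["--good_flag"])
print(args.good_flag)   # True only when the flag is actually passed
```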
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16958/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16958/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16958", "html_url": "https://github.com/huggingface/transformers/pull/16958", "diff_url": "https://github.com/huggingface/transformers/pull/16958.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16958.patch", "merged_at": 1651063899000 }
https://api.github.com/repos/huggingface/transformers/issues/16956
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16956/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16956/comments
https://api.github.com/repos/huggingface/transformers/issues/16956/events
https://github.com/huggingface/transformers/issues/16956
1,216,943,599
I_kwDOCUB6oc5IiRXv
16,956
How to train over VERY LARGE dataset?
{ "login": "CaoYiqingT", "id": 45160643, "node_id": "MDQ6VXNlcjQ1MTYwNjQz", "avatar_url": "https://avatars.githubusercontent.com/u/45160643?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CaoYiqingT", "html_url": "https://github.com/CaoYiqingT", "followers_url": "https://api.github.com/users/CaoYiqingT/followers", "following_url": "https://api.github.com/users/CaoYiqingT/following{/other_user}", "gists_url": "https://api.github.com/users/CaoYiqingT/gists{/gist_id}", "starred_url": "https://api.github.com/users/CaoYiqingT/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CaoYiqingT/subscriptions", "organizations_url": "https://api.github.com/users/CaoYiqingT/orgs", "repos_url": "https://api.github.com/users/CaoYiqingT/repos", "events_url": "https://api.github.com/users/CaoYiqingT/events{/privacy}", "received_events_url": "https://api.github.com/users/CaoYiqingT/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[]
1,651
1,651
1,651
NONE
null
### System Info ```shell I hit this issue while using the transformers Trainer. The Trainer takes a torch.utils.data.Dataset as input, which is typically loaded into memory all at once. Therefore, when the dataset is too large to load, there's nothing I can do except use an IterableDataset, which loads samples one at a time and results in low efficiency. I wonder if there are any tricks like sharding in the Hugging Face Trainer. Looking forward to your reply. @sgugger ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction None ### Expected behavior ```shell some tricks like fairseq Sharding very large datasets https://fairseq.readthedocs.io/en/latest/getting_started.html ```
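One sketch of a workaround (not an official Trainer feature): a torch `IterableDataset` that splits a list of shard files across distributed ranks and DataLoader workers, so no process ever holds the full dataset. File layout and parsing below are placeholders:

```python
from torch.utils.data import IterableDataset, get_worker_info

class ShardedTextDataset(IterableDataset):
    """Stream examples shard by shard instead of loading everything."""

    def __init__(self, shard_paths, rank=0, world_size=1):
        self.shard_paths = list(shard_paths)
        self.rank = rank
        self.world_size = world_size

    def __iter__(self):
        # Split shards first across distributed ranks, then across
        # DataLoader workers, so each example is read exactly once.
        shards = self.shard_paths[self.rank :: self.world_size]
        info = get_worker_info()
        if info is not None:
            shards = shards[info.id :: info.num_workers]
        for path in shards:
            with open(path) as f:
                for line in f:
                    yield line.strip()  # replace with real tokenization
```

A similar effect is also available through the `datasets` library's streaming mode (`load_dataset(..., streaming=True)`), which avoids loading the dataset into memory.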
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16956/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16956/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16955
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16955/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16955/comments
https://api.github.com/repos/huggingface/transformers/issues/16955/events
https://github.com/huggingface/transformers/issues/16955
1,216,885,736
I_kwDOCUB6oc5IiDPo
16,955
config.json not found!
{ "login": "beyondguo", "id": 37113676, "node_id": "MDQ6VXNlcjM3MTEzNjc2", "avatar_url": "https://avatars.githubusercontent.com/u/37113676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/beyondguo", "html_url": "https://github.com/beyondguo", "followers_url": "https://api.github.com/users/beyondguo/followers", "following_url": "https://api.github.com/users/beyondguo/following{/other_user}", "gists_url": "https://api.github.com/users/beyondguo/gists{/gist_id}", "starred_url": "https://api.github.com/users/beyondguo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/beyondguo/subscriptions", "organizations_url": "https://api.github.com/users/beyondguo/orgs", "repos_url": "https://api.github.com/users/beyondguo/repos", "events_url": "https://api.github.com/users/beyondguo/events{/privacy}", "received_events_url": "https://api.github.com/users/beyondguo/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Sorry to issue here, I have already asked this question in the forum but haven't receive any response.", "This means the folders were not created. Are you sure you point to locations where the Python script can write?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,651
1,654
1,654
NONE
null
### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-5.4.0-1063-azure-x86_64-with-glibc2.10 - Python version: 3.8.3 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.7.1+cu110 (True) - Tensorflow version (GPU?): 2.4.1 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ``` ### Who can help? @sgugger I am training a NER model following tutorial: ```python from transformers import TrainingArguments args = TrainingArguments( "saved_models_bert-finetuned-ner-100examples-with-aug", learning_rate=2e-5, num_train_epochs=100, weight_decay=0.01, per_device_train_batch_size = 32, per_device_eval_batch_size = 32, evaluation_strategy="epoch", save_strategy="epoch", load_best_model_at_end = True, metric_for_best_model = 'f1' ) from transformers import Trainer trainer = Trainer( model=model, args=args, train_dataset=new_training_dataset, eval_dataset=tokenized_datasets["validation"].select(range(100)), data_collator=data_collator, compute_metrics=compute_metrics, tokenizer=tokenizer, ) trainer.train() ``` Then I got this error: ```shell --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) Input In [28], in <cell line: 14>() 1 from transformers import Trainer 3 trainer = Trainer( 4 model=model, 5 args=args, (...) 11 tokenizer=tokenizer, 12 ) ---> 14 trainer.train() File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:1512, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1509 self.control.should_training_stop = True 1511 self.control = self.callback_handler.on_epoch_end(args, self.state, self.control) -> 1512 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) 1514 if DebugOption.TPU_METRICS_DEBUG in self.args.debug: 1515 if is_torch_tpu_available(): 1516 # tpu-comment: Logging debug metrics for PyTorch/XLA (compile, execute times, ops, etc.) File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:1628, in Trainer._maybe_log_save_evaluate(self, tr_loss, model, trial, epoch, ignore_keys_for_eval) 1625 self._report_to_hp_search(trial, epoch, metrics) 1627 if self.control.should_save: -> 1628 self._save_checkpoint(model, trial, metrics=metrics) 1629 self.control = self.callback_handler.on_save(self.args, self.state, self.control) File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:1700, in Trainer._save_checkpoint(self, model, trial, metrics) 1697 self.store_flos() 1699 output_dir = os.path.join(run_dir, checkpoint_folder) -> 1700 self.save_model(output_dir, _internal_call=True) 1701 if self.deepspeed: 1702 # under zero3 model file itself doesn't get saved since it's bogus! Unless deepspeed 1703 # config `stage3_gather_16bit_weights_on_model_save` is True 1704 self.deepspeed.save_checkpoint(output_dir) File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:2128, in Trainer.save_model(self, output_dir, _internal_call) 2125 self.deepspeed.save_checkpoint(output_dir) 2127 elif self.args.should_save: -> 2128 self._save(output_dir) 2130 # Push to the Hub when `save_model` is called by the user. 
2131 if self.args.push_to_hub and not _internal_call: File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:2180, in Trainer._save(self, output_dir, state_dict) 2178 torch.save(state_dict, os.path.join(output_dir, WEIGHTS_NAME)) 2179 else: -> 2180 self.model.save_pretrained(output_dir, state_dict=state_dict) 2181 if self.tokenizer is not None: 2182 self.tokenizer.save_pretrained(output_dir) File /opt/conda/lib/python3.8/site-packages/transformers/modeling_utils.py:1352, in PreTrainedModel.save_pretrained(self, save_directory, save_config, state_dict, save_function, push_to_hub, max_shard_size, **kwargs) 1350 # Save the config 1351 if save_config: -> 1352 model_to_save.config.save_pretrained(save_directory) 1354 # Save the model 1355 if state_dict is None: File /opt/conda/lib/python3.8/site-packages/transformers/configuration_utils.py:440, in PretrainedConfig.save_pretrained(self, save_directory, push_to_hub, **kwargs) 437 # If we save using the predefined names, we can load using `from_pretrained` 438 output_config_file = os.path.join(save_directory, CONFIG_NAME) --> 440 self.to_json_file(output_config_file, use_diff=True) 441 logger.info(f"Configuration saved in {output_config_file}") 443 if push_to_hub: File /opt/conda/lib/python3.8/site-packages/transformers/configuration_utils.py:805, in PretrainedConfig.to_json_file(self, json_file_path, use_diff) 794 def to_json_file(self, json_file_path: Union[str, os.PathLike], use_diff: bool = True): 795 """ 796 Save this instance to a JSON file. 797 (...) 803 is serialized to JSON file. 804 """ --> 805 with open(json_file_path, "w", encoding="utf-8") as writer: 806 writer.write(self.to_json_string(use_diff=use_diff)) FileNotFoundError: [Errno 2] No such file or directory: 'saved_models_bert-finetuned-ner-100examples-with-aug/checkpoint-6/config.json' ``` This is so weird! From my understanding, the config.json file should be written, so such an error shouldn't occur. ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am not sure if this can be reproduced on another machine. btw, I am using an A100. ### Expected behavior ```shell A normal training... ```
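Given the reply that the checkpoint folders were likely never created, a minimal defensive check before calling `trainer.train()` makes the failure explicit (the directory name is taken from the report above):

```python
import os

output_dir = "saved_models_bert-finetuned-ner-100examples-with-aug"
os.makedirs(output_dir, exist_ok=True)          # create the folder up front
assert os.access(output_dir, os.W_OK), f"cannot write to {output_dir!r}"
```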
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16955/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16955/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16954
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16954/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16954/comments
https://api.github.com/repos/huggingface/transformers/issues/16954/events
https://github.com/huggingface/transformers/pull/16954
1,216,719,811
PR_kwDOCUB6oc4213ZQ
16,954
Initialization
{ "login": "apd10", "id": 57877560, "node_id": "MDQ6VXNlcjU3ODc3NTYw", "avatar_url": "https://avatars.githubusercontent.com/u/57877560?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apd10", "html_url": "https://github.com/apd10", "followers_url": "https://api.github.com/users/apd10/followers", "following_url": "https://api.github.com/users/apd10/following{/other_user}", "gists_url": "https://api.github.com/users/apd10/gists{/gist_id}", "starred_url": "https://api.github.com/users/apd10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apd10/subscriptions", "organizations_url": "https://api.github.com/users/apd10/orgs", "repos_url": "https://api.github.com/users/apd10/repos", "events_url": "https://api.github.com/users/apd10/events{/privacy}", "received_events_url": "https://api.github.com/users/apd10/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,651
1,651
1,651
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16954/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16954/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16954", "html_url": "https://github.com/huggingface/transformers/pull/16954", "diff_url": "https://github.com/huggingface/transformers/pull/16954.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16954.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/16953
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16953/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16953/comments
https://api.github.com/repos/huggingface/transformers/issues/16953/events
https://github.com/huggingface/transformers/pull/16953
1,216,641,236
PR_kwDOCUB6oc421nCk
16,953
Add Information Gain Filtration algorithm
{ "login": "mraunak", "id": 83710963, "node_id": "MDQ6VXNlcjgzNzEwOTYz", "avatar_url": "https://avatars.githubusercontent.com/u/83710963?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mraunak", "html_url": "https://github.com/mraunak", "followers_url": "https://api.github.com/users/mraunak/followers", "following_url": "https://api.github.com/users/mraunak/following{/other_user}", "gists_url": "https://api.github.com/users/mraunak/gists{/gist_id}", "starred_url": "https://api.github.com/users/mraunak/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mraunak/subscriptions", "organizations_url": "https://api.github.com/users/mraunak/orgs", "repos_url": "https://api.github.com/users/mraunak/repos", "events_url": "https://api.github.com/users/mraunak/events{/privacy}", "received_events_url": "https://api.github.com/users/mraunak/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Could you just run the code quality tool to ensure that the code quality passes? You can install them with the following, from the root of your clone:\r\n```\r\npip install -e \".[quality]\"\r\n```\r\nAnd then run them with:\r\n```\r\nmake fixup\r\n```", "Running the command: make fixup, gives an error that does not include terms from my PR,\r\n\r\nThe output with error is shown below. Please guide me on it. Thanks \r\n\r\n(igfprnew) mraunak@bcl-main1:~/transformers$ make fixup\r\nNo library .py files were modified\r\npython utils/custom_init_isort.py\r\npython utils/style_doc.py src/transformers docs/source --max_len 119\r\nrunning deps_table_update\r\nupdating src/transformers/dependency_versions_table.py\r\npython utils/check_copies.py\r\npython utils/check_table.py\r\nNone of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.\r\npython utils/check_dummies.py\r\npython utils/check_repo.py\r\nNone of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.\r\nNone of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.\r\nChecking all models are included.\r\nChecking all models are public.\r\nChecking all models are properly tested.\r\nChecking all objects are properly documented.\r\nChecking all models are in at least one auto class.\r\nutils/check_repo.py:456: UserWarning: Full quality checks require all backends to be installed (with `pip install -e .[dev]` in the Transformers repo, the following are missing: PyTorch, TensorFlow, Flax. While it's probably fine as long as you didn't make any change in one of those backends modeling files, you should probably execute the command above to be on the safe side.\r\nwarnings.warn(\r\npython utils/check_inits.py\r\nNone of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.\r\nTraceback (most recent call last):\r\nFile \"utils/check_inits.py\", line 265, in <module>\r\ncheck_submodules()\r\nFile \"utils/check_inits.py\", line 256, in check_submodules\r\nraise ValueError(\r\nValueError: The following submodules are not properly registed in the main init of Transformers:\r\n- sagemaker\r\n- activations\r\n- activations_tf\r\n- convert_slow_tokenizer\r\n- deepspeed\r\n- generation_beam_constraints\r\n- generation_beam_search\r\n- generation_flax_logits_process\r\n- generation_flax_utils\r\n- generation_logits_process\r\n- generation_stopping_criteria\r\n- generation_tf_logits_process\r\n- generation_tf_utils\r\n- generation_utils\r\n- image_utils\r\n- keras_callbacks\r\n- modeling_flax_outputs\r\n- modeling_flax_utils\r\n- modeling_outputs\r\n- modeling_tf_outputs\r\n- modeling_tf_utils\r\n- modeling_utils\r\n- optimization\r\n- optimization_tf\r\n- pytorch_utils\r\n- tf_utils\r\n- trainer\r\n- trainer_pt_utils\r\n- trainer_seq2seq\r\n- trainer_tf\r\n- data.datasets\r\nMake sure they appear somewhere in the keys of `_import_structure` with an empty list as value.\r\nmake: *** [repo-consistency] Error 1", "@LysandreJik thanks for the suggestion! We were able to correct the quality check issues.\r\nLet us know if you need us to run/test anything else. 
Thank you!", "Thank you for your contributions!", "Thank you for accepting our work!" ]
1,651
1,652
1,652
CONTRIBUTOR
null
# What does this PR do? Adds a new feature for fine-tuning transformer models called Information Gain Filtration (IGF). ### Motivation The quality of a fine-tuned model depends on the quality of the data samples used for the first few batches. As the process is stochastic, the random seed influences the quality of the final fine-tuned model. We propose a novel and robust fine-tuning method, "Information Gain Filtration" (IGF), which filters for informative training samples before fine-tuning (training) and improves the overall training efficiency and final performance of the language-model fine-tuning step. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. This can be of interest to @sgugger, Models: - gpt2 Examples: - research_projects/information-gain-filtration: @Tuko, @mraunak
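A rough sketch of the idea as described above — not the PR's actual implementation; `secondary_model` (the small learned filter that scores candidate batches) and `threshold` are illustrative assumptions:

```python
def igf_finetune_step(model, optimizer, batch, secondary_model, threshold=0.0):
    """Fine-tune on a batch only if its predicted information gain is high.

    Sketch only: `secondary_model` stands in for the learned filter the
    method trains to score batches; `threshold` is a made-up cutoff.
    """
    predicted_gain = secondary_model(batch["input_ids"]).mean().item()
    if predicted_gain < threshold:
        return None  # batch filtered out: predicted to be uninformative
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```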
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16953/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16953/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16953", "html_url": "https://github.com/huggingface/transformers/pull/16953", "diff_url": "https://github.com/huggingface/transformers/pull/16953.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16953.patch", "merged_at": 1652884742000 }
https://api.github.com/repos/huggingface/transformers/issues/16952
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16952/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16952/comments
https://api.github.com/repos/huggingface/transformers/issues/16952/events
https://github.com/huggingface/transformers/issues/16952
1,216,563,414
I_kwDOCUB6oc5Ig0jW
16,952
cannot import name 'Data2VecForCTC' from 'transformers'
{ "login": "Sorrow321", "id": 20703486, "node_id": "MDQ6VXNlcjIwNzAzNDg2", "avatar_url": "https://avatars.githubusercontent.com/u/20703486?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sorrow321", "html_url": "https://github.com/Sorrow321", "followers_url": "https://api.github.com/users/Sorrow321/followers", "following_url": "https://api.github.com/users/Sorrow321/following{/other_user}", "gists_url": "https://api.github.com/users/Sorrow321/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sorrow321/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sorrow321/subscriptions", "organizations_url": "https://api.github.com/users/Sorrow321/orgs", "repos_url": "https://api.github.com/users/Sorrow321/repos", "events_url": "https://api.github.com/users/Sorrow321/events{/privacy}", "received_events_url": "https://api.github.com/users/Sorrow321/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi,\r\n\r\nData2Vec is only available on the main branch for now: pip install git+https://github.com/huggingface/transformers.git.", "Hi\r\n\r\nI can't find the name **Data2VecForCTC** in https://github.com/huggingface/transformers/blob/main/src/transformers/__init__.py either.", "Here it is: https://github.com/huggingface/transformers/blob/dced262409177586bb510b6b724c762fb89da0e8/src/transformers/__init__.py#L880\r\n\r\nNote that: \r\n\r\n```python\r\nfrom transformers import Data2VecAudioForCTC\r\n\r\nmodel = Data2VecAudioForCTC.from_pretrained(\"...\")\r\n```\r\n\r\nshould also already work on master", "Data2VecForCTC is not the same as Data2Vec**Audio**ForCTC ", "Good observation!\r\n\r\nThere is no `Data2VecForCTC` ;-)", "Yes, but there is on the [website](https://huggingface.co/facebook/data2vec-audio-large-960h)\r\n![image](https://user-images.githubusercontent.com/20703486/165815507-0fb4054f-f6e2-4f48-9a2c-ca21a497ad27.png)\r\n", "Soon we'll have a feature that allows you to report this directly on the model repo ;) stay tuned!", "But yes I'll fix it, thanks for reporting ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Opened a PR to fix it: https://huggingface.co/facebook/data2vec-audio-large-960h/discussions/1" ]
1,651
1,654
1,654
CONTRIBUTOR
null
### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - Huggingface_hub version: 0.2.1 - PyTorch version (GPU?): 1.11.0 (True) - Tensorflow version (GPU?): 2.7.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @patrickvonplaten ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Following the code sample in https://huggingface.co/facebook/data2vec-audio-large-960h I'm trying to import the name **Data2VecForCTC**, but without success. Possibly a typo: **Data2VecForCTC** -> **Data2VecAudioForCTC** ### Expected behavior ```shell A correct code sample is expected. ```
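For reference, the working import confirmed in the replies — the model card's `Data2VecForCTC` does not exist; the class name includes `Audio`:

```python
from transformers import Data2VecAudioForCTC

# At the time of the report this required the main branch:
# pip install git+https://github.com/huggingface/transformers.git
model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-large-960h")
```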
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16952/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16952/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16951
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16951/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16951/comments
https://api.github.com/repos/huggingface/transformers/issues/16951/events
https://github.com/huggingface/transformers/pull/16951
1,216,505,148
PR_kwDOCUB6oc421JSf
16,951
:tada: initial commit of scformer
{ "login": "subercui", "id": 11674033, "node_id": "MDQ6VXNlcjExNjc0MDMz", "avatar_url": "https://avatars.githubusercontent.com/u/11674033?v=4", "gravatar_id": "", "url": "https://api.github.com/users/subercui", "html_url": "https://github.com/subercui", "followers_url": "https://api.github.com/users/subercui/followers", "following_url": "https://api.github.com/users/subercui/following{/other_user}", "gists_url": "https://api.github.com/users/subercui/gists{/gist_id}", "starred_url": "https://api.github.com/users/subercui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/subercui/subscriptions", "organizations_url": "https://api.github.com/users/subercui/orgs", "repos_url": "https://api.github.com/users/subercui/repos", "events_url": "https://api.github.com/users/subercui/events{/privacy}", "received_events_url": "https://api.github.com/users/subercui/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,651
1,654
1,654
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16951/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16951/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16951", "html_url": "https://github.com/huggingface/transformers/pull/16951", "diff_url": "https://github.com/huggingface/transformers/pull/16951.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16951.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/16950
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16950/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16950/comments
https://api.github.com/repos/huggingface/transformers/issues/16950/events
https://github.com/huggingface/transformers/pull/16950
1,216,499,769
PR_kwDOCUB6oc421IHv
16,950
Revised partial checkpoint support for Sagemaker Model Parallel
{ "login": "cavdard", "id": 44590949, "node_id": "MDQ6VXNlcjQ0NTkwOTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/44590949?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cavdard", "html_url": "https://github.com/cavdard", "followers_url": "https://api.github.com/users/cavdard/followers", "following_url": "https://api.github.com/users/cavdard/following{/other_user}", "gists_url": "https://api.github.com/users/cavdard/gists{/gist_id}", "starred_url": "https://api.github.com/users/cavdard/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cavdard/subscriptions", "organizations_url": "https://api.github.com/users/cavdard/orgs", "repos_url": "https://api.github.com/users/cavdard/repos", "events_url": "https://api.github.com/users/cavdard/events{/privacy}", "received_events_url": "https://api.github.com/users/cavdard/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16950). All of your documentation changes will be reflected on that endpoint.", "> Thanks for your PR. Two comments on it:\r\n> \r\n> 1. This breaks the current behavior of the `Trainer` where each checkpoint can be loaded as a model. In particular, this will push to the Hub the partial checkpoints with no config during training when `push_to_hub=True` (whereas a regular training pushes models that can be used).\r\n> 2. The feature is always on. Maybe we should let the user decide if they want it or not?\r\n\r\nThanks for reviewing. \r\n\r\nIn order user to decide to save/load partial checkpoints or not, we need new training args. I[n my previous PR](https://github.com/huggingface/transformers/pull/16734/files#diff-bfceaff300c851b8e24fc50dc6638482abaec8f7d2a718e877c3828c166bcf79R426-R431), I got feedback not to introduce new HF training args. So we decided to support partial checkpointing as default. \r\n", "There are plenty of other ways to control whether a feature is on or off. For instance, you could use the environment variable `\"SM_HP_MP_PARAMETERS\"`. \r\n\r\nSince this partial checkpointing is completely incompatible with `from_pretrained`, thus won't work with the Hugging Face Hub and its inference widget, it should be turned off by default.", "@sgugger Thanks for you feedback. Based on your comments, we decided to enable partial checkpointing for optimizer state only where model weights will be saved in full. With this approach, model weights will be saved using `save_pretrained`. \r\n\r\nHere is the link for the new PR: https://github.com/huggingface/transformers/pull/17219", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,651
1,655
1,655
CONTRIBUTOR
null
# What does this PR do? - Uses `smp.rdp_rank()` instead of `smp.rank()` for partial checkpoint saving in `should_save`. - Uses `local_state_dict()` with partial checkpoint saving. - Uses `smp.save` for SMP. - Uses `smp.load` for SMP. Reorders partial checkpoint loading to happen after the model is wrapped, since `smp.load` can only load into an SMP model. - Updates checks for the existence of checkpoint files, since SMP partial checkpoints contain postfixes in addition to the filename (example: `filename_0_0` or `filename_0_0_0`). - Adds `load_best_model_at_end` support for SMP. An illustrative sketch follows below. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
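An illustrative sketch of the save/load pattern the PR describes, using only the SMP symbols named in the body; treat the exact call signatures and the rank gating as assumptions rather than the PR's exact code:

```python
import smdistributed.modelparallel.torch as smp

# Saving: gate on the reduced-data-parallel rank (as in `should_save`);
# SMP appends postfixes such as `_0_0` to the filename automatically.
if smp.rdp_rank() == 0:
    smp.save(model.local_state_dict(), "checkpoint.pt", partial=True)

# Loading: must happen only after the model has been wrapped, because
# `smp.load` can only load into an SMP model.
state_dict = smp.load("checkpoint.pt", partial=True)
model.load_state_dict(state_dict)
```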
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16950/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16950/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16950", "html_url": "https://github.com/huggingface/transformers/pull/16950", "diff_url": "https://github.com/huggingface/transformers/pull/16950.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16950.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/16949
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16949/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16949/comments
https://api.github.com/repos/huggingface/transformers/issues/16949/events
https://github.com/huggingface/transformers/issues/16949
1,216,425,978
I_kwDOCUB6oc5IgS_6
16,949
LayoutLMV3
{ "login": "logan-markewich", "id": 22285038, "node_id": "MDQ6VXNlcjIyMjg1MDM4", "avatar_url": "https://avatars.githubusercontent.com/u/22285038?v=4", "gravatar_id": "", "url": "https://api.github.com/users/logan-markewich", "html_url": "https://github.com/logan-markewich", "followers_url": "https://api.github.com/users/logan-markewich/followers", "following_url": "https://api.github.com/users/logan-markewich/following{/other_user}", "gists_url": "https://api.github.com/users/logan-markewich/gists{/gist_id}", "starred_url": "https://api.github.com/users/logan-markewich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/logan-markewich/subscriptions", "organizations_url": "https://api.github.com/users/logan-markewich/orgs", "repos_url": "https://api.github.com/users/logan-markewich/repos", "events_url": "https://api.github.com/users/logan-markewich/events{/privacy}", "received_events_url": "https://api.github.com/users/logan-markewich/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "Duplicate of #16914 ", "@[NielsRogge](https://github.com/NielsRogge)\r\nI have one question for layoutlmv3, does layoutlmv3 can support RE and SER task for xfund dataset now?" ]
1,651
1,652
1,651
NONE
null
### Model description LayoutLMv3 is a pre-trained multimodal Transformer for Document AI with unified text and image masking. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model. For example, LayoutLMv3 can be fine-tuned for both text-centric tasks, including form understanding, receipt understanding, and document visual question answering, and image-centric tasks such as document image classification and document layout analysis. LayoutLMv3 greatly simplifies training and reduces the number of parameters compared to v2, making it an important milestone in document understanding. [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei, Preprint 2022. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation [Huggingface Pretrained Download](https://huggingface.co/microsoft/layoutlmv3-base)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16949/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16949/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16948
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16948/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16948/comments
https://api.github.com/repos/huggingface/transformers/issues/16948/events
https://github.com/huggingface/transformers/pull/16948
1,216,073,527
PR_kwDOCUB6oc42ztGZ
16,948
Add ResNet to models exportable with ONNX
{ "login": "chamidullinr", "id": 17027085, "node_id": "MDQ6VXNlcjE3MDI3MDg1", "avatar_url": "https://avatars.githubusercontent.com/u/17027085?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chamidullinr", "html_url": "https://github.com/chamidullinr", "followers_url": "https://api.github.com/users/chamidullinr/followers", "following_url": "https://api.github.com/users/chamidullinr/following{/other_user}", "gists_url": "https://api.github.com/users/chamidullinr/gists{/gist_id}", "starred_url": "https://api.github.com/users/chamidullinr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chamidullinr/subscriptions", "organizations_url": "https://api.github.com/users/chamidullinr/orgs", "repos_url": "https://api.github.com/users/chamidullinr/repos", "events_url": "https://api.github.com/users/chamidullinr/events{/privacy}", "received_events_url": "https://api.github.com/users/chamidullinr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16948). All of your documentation changes will be reflected on that endpoint.", "Thanks for the suggestions. One test fails when running slow tests. The error is:\r\n> ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.000244140625\r\n\r\nShould I set --atol=1e-3 or is there some way to fix this?\r\n", "> Should I set --atol=1e-3 or is there some way to fix this?\r\n\r\nAh yes, some models require a lower tolerance due to their architectures. Setting `--atol=1e-3` as the default is fine!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Can't we rebase this branch and add the code for ResNet ?", "I'm taking care of this in #17585, I didn't see this PR before opening mine. @ChainYo\n\nIt should be done by the end of the week! ", "> It should be done by the end of the week!\r\n\r\nPretty cool!" ]
1,650
1,654
1,654
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> I added OnnxConfig to make ResNet model available for ONNX conversion. Issue [#16308](https://github.com/huggingface/transformers/issues/16308) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
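Roughly, adding ONNX support for a model means defining an `OnnxConfig` subclass that declares the model's inputs and, where needed, relaxes the validation tolerance. A minimal sketch for a vision model follows — the class body and its placement are illustrative, not the PR's exact code:

```python
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class ResNetOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        # Vision models take pixel_values rather than input_ids.
        return OrderedDict([("pixel_values", {0: "batch"})])

    @property
    def atol_for_validation(self) -> float:
        # ResNet needed a looser tolerance (see the review thread).
        return 1e-3
```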
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16948/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16948/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16948", "html_url": "https://github.com/huggingface/transformers/pull/16948", "diff_url": "https://github.com/huggingface/transformers/pull/16948.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16948.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/16947
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16947/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16947/comments
https://api.github.com/repos/huggingface/transformers/issues/16947/events
https://github.com/huggingface/transformers/pull/16947
1,216,053,482
PR_kwDOCUB6oc42zoy-
16,947
Fix multiple deletions of the same files in save_pretrained
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,651
1,651
COLLABORATOR
null
# What does this PR do? Checkpoint sharding introduced a bug in `save_pretrained` for distributed setups, where the function is called on every process (TPU training for instance, or the scripts with Accelerate; see [this issue](https://github.com/huggingface/accelerate/issues/325)). This changes the logic to only remove files when: - they are different from existing ones (which should handle almost all cases, since we can expect a save into a folder with existing weights to use the same model) - we are on process 0. Except `save_pretrained` does not know if we are on process zero or not, so I use `save_config` to detect that. It looks like that argument was not aptly named and should be `is_main_process` instead. I can go ahead and deprecate/rename in this PR if you agree.
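Concretely, a caller in a distributed Accelerate script can gate the destructive part through the existing `save_config` argument, as described above; a sketch (the surrounding `model` and `output_dir` are assumed to exist):

```python
from accelerate import Accelerator

accelerator = Accelerator()
unwrapped = accelerator.unwrap_model(model)

# Every process may reach this call, but only the main one should write
# the config and clean up stale weight files.
unwrapped.save_pretrained(
    output_dir,
    save_config=accelerator.is_main_process,
    save_function=accelerator.save,
)
```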
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16947/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16947/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16947", "html_url": "https://github.com/huggingface/transformers/pull/16947", "diff_url": "https://github.com/huggingface/transformers/pull/16947.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16947.patch", "merged_at": 1651076922000 }
https://api.github.com/repos/huggingface/transformers/issues/16946
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16946/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16946/comments
https://api.github.com/repos/huggingface/transformers/issues/16946/events
https://github.com/huggingface/transformers/pull/16946
1,216,021,744
PR_kwDOCUB6oc42zh-w
16,946
[HF Argparser] Fix parsing of optional boolean arguments
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,651
1,651
CONTRIBUTOR
null
# What does this PR do? This PR fixes a weird bug that made optional boolean arguments not be recognized properly in my virtual environment. Replacing `is` with `==` fixed the issue.
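For context, the underlying Python gotcha: `is` tests object identity, so it can fail for equal values that happen to be distinct objects, which is why value equality (`==`) is the right comparison here:

```python
runtime = "".join(["Tr", "ue"])  # built at runtime, so not interned
literal = "True"

print(runtime == literal)  # True  -- value equality, the right check
print(runtime is literal)  # False -- identity: different string objects
```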
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16946/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16946/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16946", "html_url": "https://github.com/huggingface/transformers/pull/16946", "diff_url": "https://github.com/huggingface/transformers/pull/16946.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16946.patch", "merged_at": 1651064445000 }
https://api.github.com/repos/huggingface/transformers/issues/16945
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16945/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16945/comments
https://api.github.com/repos/huggingface/transformers/issues/16945/events
https://github.com/huggingface/transformers/pull/16945
1,215,993,653
PR_kwDOCUB6oc42zb6X
16,945
Move test model folders
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,651
1,651
COLLABORATOR
null
# What does this PR do? Move test model folders
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16945/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16945/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16945", "html_url": "https://github.com/huggingface/transformers/pull/16945", "diff_url": "https://github.com/huggingface/transformers/pull/16945.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16945.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/16944
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16944/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16944/comments
https://api.github.com/repos/huggingface/transformers/issues/16944/events
https://github.com/huggingface/transformers/pull/16944
1,215,834,486
PR_kwDOCUB6oc42y51-
16,944
Update codeparrot data preprocessing
{ "login": "loubnabnl", "id": 44069155, "node_id": "MDQ6VXNlcjQ0MDY5MTU1", "avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loubnabnl", "html_url": "https://github.com/loubnabnl", "followers_url": "https://api.github.com/users/loubnabnl/followers", "following_url": "https://api.github.com/users/loubnabnl/following{/other_user}", "gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}", "starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions", "organizations_url": "https://api.github.com/users/loubnabnl/orgs", "repos_url": "https://api.github.com/users/loubnabnl/repos", "events_url": "https://api.github.com/users/loubnabnl/events{/privacy}", "received_events_url": "https://api.github.com/users/loubnabnl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,652
1,652
CONTRIBUTOR
null
This PR updates the preprocessing script of CodeParrot data (python files); we add new filters for: - config and test files - uncommon files (those without a mention of classic Python keywords: `def`, `for`..) - unusual files (that don't use the assignment `=` operator often) - files with a low ratio between the number of characters and the number of tokens after tokenization The impact of some of these filters is analyzed in this [tweet](https://twitter.com/LoubnaBenAllal1/status/1514300881419878403?s=20&t=IkodNO5Ma3X866-Yj-LvQw). cc @lvwerra
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16944/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16944/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16944", "html_url": "https://github.com/huggingface/transformers/pull/16944", "diff_url": "https://github.com/huggingface/transformers/pull/16944.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16944.patch", "merged_at": 1652705005000 }
https://api.github.com/repos/huggingface/transformers/issues/16943
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16943/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16943/comments
https://api.github.com/repos/huggingface/transformers/issues/16943/events
https://github.com/huggingface/transformers/pull/16943
1,215,720,896
PR_kwDOCUB6oc42yhQl
16,943
Fix `HubertRobustTest` PT/TF equivalence test on GPU
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Think it might be a great idea to add a check to avoid such situation occur in the future. Will do it in another PR." ]
1,650
1,651
1,651
COLLABORATOR
null
# What does this PR do? Fix `HubertRobustTest` PT/TF equivalence test on GPU. Note that `HubertRobustModelTest` has ```python def setUp(self): self.model_tester = HubertModelTester( self, conv_stride=(3, 3, 3), feat_extract_norm="layer", do_stable_layer_norm=True ) ``` but `get_config()` had no `do_stable_layer_norm=self.do_stable_layer_norm` ## To investigate further - Why is there no issue on CPU even without this PR - Why does using `conv_stride=(4, 4, 4)` (the default value) cause no issue on GPU, even without this PR (Does this suggest PT/TF Hubert behave differently with `do_stable_layer_norm=False` on GPU when `conv_stride=(3, 3, 3)`, etc.?) @patrickvonplaten You might have some ideas about these points?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16943/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16943/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16943", "html_url": "https://github.com/huggingface/transformers/pull/16943", "diff_url": "https://github.com/huggingface/transformers/pull/16943.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16943.patch", "merged_at": 1651049403000 }
https://api.github.com/repos/huggingface/transformers/issues/16942
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16942/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16942/comments
https://api.github.com/repos/huggingface/transformers/issues/16942/events
https://github.com/huggingface/transformers/issues/16942
1,215,703,405
I_kwDOCUB6oc5Idilt
16,942
Pretraining code of LayoutLMv2
{ "login": "TejasDuseja", "id": 20604911, "node_id": "MDQ6VXNlcjIwNjA0OTEx", "avatar_url": "https://avatars.githubusercontent.com/u/20604911?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TejasDuseja", "html_url": "https://github.com/TejasDuseja", "followers_url": "https://api.github.com/users/TejasDuseja/followers", "following_url": "https://api.github.com/users/TejasDuseja/following{/other_user}", "gists_url": "https://api.github.com/users/TejasDuseja/gists{/gist_id}", "starred_url": "https://api.github.com/users/TejasDuseja/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TejasDuseja/subscriptions", "organizations_url": "https://api.github.com/users/TejasDuseja/orgs", "repos_url": "https://api.github.com/users/TejasDuseja/repos", "events_url": "https://api.github.com/users/TejasDuseja/events{/privacy}", "received_events_url": "https://api.github.com/users/TejasDuseja/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, \r\n\r\nMicrosoft hasn't open-sourced any pretraining code, unfortunately. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,650
1,653
1,653
NONE
null
I was wondering whether the pre-training code of the LayoutLMv2 model is publicly available. @NielsRogge
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16942/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16941
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16941/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16941/comments
https://api.github.com/repos/huggingface/transformers/issues/16941/events
https://github.com/huggingface/transformers/pull/16941
1,215,697,430
PR_kwDOCUB6oc42ycU2
16,941
Update tokenization_bertweet.py
{ "login": "datquocnguyen", "id": 2412555, "node_id": "MDQ6VXNlcjI0MTI1NTU=", "avatar_url": "https://avatars.githubusercontent.com/u/2412555?v=4", "gravatar_id": "", "url": "https://api.github.com/users/datquocnguyen", "html_url": "https://github.com/datquocnguyen", "followers_url": "https://api.github.com/users/datquocnguyen/followers", "following_url": "https://api.github.com/users/datquocnguyen/following{/other_user}", "gists_url": "https://api.github.com/users/datquocnguyen/gists{/gist_id}", "starred_url": "https://api.github.com/users/datquocnguyen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/datquocnguyen/subscriptions", "organizations_url": "https://api.github.com/users/datquocnguyen/orgs", "repos_url": "https://api.github.com/users/datquocnguyen/repos", "events_url": "https://api.github.com/users/datquocnguyen/events{/privacy}", "received_events_url": "https://api.github.com/users/datquocnguyen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,651
1,651
CONTRIBUTOR
null
The emoji version must be either 0.5.4 or 0.6.0. Newer emoji versions have been updated to newer versions of the Emoji Charts, thus not consistent with the one used for pre-processing the pre-training Tweet corpus (i.e. not consistent with the vocab). # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16941/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16941/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16941", "html_url": "https://github.com/huggingface/transformers/pull/16941", "diff_url": "https://github.com/huggingface/transformers/pull/16941.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16941.patch", "merged_at": 1651092871000 }
https://api.github.com/repos/huggingface/transformers/issues/16940
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16940/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16940/comments
https://api.github.com/repos/huggingface/transformers/issues/16940/events
https://github.com/huggingface/transformers/issues/16940
1,215,608,063
I_kwDOCUB6oc5IdLT_
16,940
Labels shift in seq2seq example
{ "login": "markovalexander", "id": 22663468, "node_id": "MDQ6VXNlcjIyNjYzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/22663468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/markovalexander", "html_url": "https://github.com/markovalexander", "followers_url": "https://api.github.com/users/markovalexander/followers", "following_url": "https://api.github.com/users/markovalexander/following{/other_user}", "gists_url": "https://api.github.com/users/markovalexander/gists{/gist_id}", "starred_url": "https://api.github.com/users/markovalexander/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/markovalexander/subscriptions", "organizations_url": "https://api.github.com/users/markovalexander/orgs", "repos_url": "https://api.github.com/users/markovalexander/repos", "events_url": "https://api.github.com/users/markovalexander/events{/privacy}", "received_events_url": "https://api.github.com/users/markovalexander/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "I should read the documentation carefully..." ]
1,650
1,650
1,650
CONTRIBUTOR
null
### System Info ```shell - `transformers` version: 4.15.0 - Platform: Linux-4.15.0-54-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.8.1+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @sgugger, @patil-suraj ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Just run the https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation.py script ### Expected behavior For each token in a seq2seq target sequence we have to predict the next token. However, I did not find this shift by one token in the translation example, neither in the code nor in the sources (I checked tokenizer.as_target_tokenizer, for example). It can be added in this line https://github.com/huggingface/transformers/blob/fa322474060beb3673cf5a3e39ccd3c8ad57ecd3/examples/pytorch/translation/run_translation.py#L436 P.S. While opening this issue, I also checked the CLM script example and met the same problem there. https://github.com/huggingface/transformers/blob/fa322474060beb3673cf5a3e39ccd3c8ad57ecd3/examples/pytorch/language-modeling/run_clm.py#L437 Am I wrong and missing something, or is it really a problem that requires a small fix?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16940/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16940/timeline
completed
null
null
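The resolution of the record above ("I should read the documentation carefully...") is that the shift happens inside the models, not in the example scripts: causal LMs shift logits/labels internally, and seq2seq models build `decoder_input_ids` by shifting `labels` right. A minimal sketch of the causal-LM convention, using random tensors rather than a real model:

```python
# Causal LMs such as GPT-2 shift logits and labels inside forward(), so
# run_clm.py can pass labels = input_ids unshifted. This reproduces that step.
import torch
import torch.nn.functional as F

batch, seq_len, vocab = 2, 8, 50
logits = torch.randn(batch, seq_len, vocab)
labels = torch.randint(0, vocab, (batch, seq_len))

shift_logits = logits[..., :-1, :].contiguous()  # predictions for t+1 made at t
shift_labels = labels[..., 1:].contiguous()      # targets start at token 1
loss = F.cross_entropy(shift_logits.view(-1, vocab), shift_labels.view(-1))
print(loss.item())
```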
https://api.github.com/repos/huggingface/transformers/issues/16939
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16939/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16939/comments
https://api.github.com/repos/huggingface/transformers/issues/16939/events
https://github.com/huggingface/transformers/issues/16939
1,215,435,001
I_kwDOCUB6oc5IchD5
16,939
Segmentation fault whenever trying to load model
{ "login": "faysalhossain2007", "id": 1239654, "node_id": "MDQ6VXNlcjEyMzk2NTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1239654?v=4", "gravatar_id": "", "url": "https://api.github.com/users/faysalhossain2007", "html_url": "https://github.com/faysalhossain2007", "followers_url": "https://api.github.com/users/faysalhossain2007/followers", "following_url": "https://api.github.com/users/faysalhossain2007/following{/other_user}", "gists_url": "https://api.github.com/users/faysalhossain2007/gists{/gist_id}", "starred_url": "https://api.github.com/users/faysalhossain2007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/faysalhossain2007/subscriptions", "organizations_url": "https://api.github.com/users/faysalhossain2007/orgs", "repos_url": "https://api.github.com/users/faysalhossain2007/repos", "events_url": "https://api.github.com/users/faysalhossain2007/events{/privacy}", "received_events_url": "https://api.github.com/users/faysalhossain2007/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @faysalhossain2007,\r\n\r\nCould you please provide a short and reproducible code snippet? E.g. if only the model loading leads to a segmentation fault, \r\ncould you please just provide a 4-liner:\r\n\r\n```python\r\nfrom transformers import AutoModel\r\nmodel = AutoModel.from_pretrained(\"...\")\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Same error here: `Segmentation fault (core dumped)`.\r\n\r\nEnvironment:\r\n```\r\ntorch==1.11.0+cu113\r\ntransformers==4.30.2\r\nPython 3.7.11\r\n```\r\n\r\nI just ran the following testing code snippet\r\n\r\n`python -c \"from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))\"`" ]
1,650
1,687
1,654
NONE
null
### System Info ```shell `conda list # packages in environment at /home/faysal/anaconda3/envs/codexglue: # # Name Version Build Channel _libgcc_mutex 0.1 conda_forge conda-forge _openmp_mutex 4.5 1_llvm conda-forge ca-certificates 2021.10.8 ha878542_0 conda-forge certifi 2021.5.30 py36h5fab9bb_0 conda-forge cudatoolkit 10.1.243 h036e899_10 conda-forge ld_impl_linux-64 2.36.1 hea4e1c9_2 conda-forge libblas 3.9.0 14_linux64_openblas conda-forge libcblas 3.9.0 14_linux64_openblas conda-forge libffi 3.4.2 h7f98852_5 conda-forge libgcc-ng 11.2.0 h1d223b6_15 conda-forge libgfortran-ng 11.2.0 h69a702a_15 conda-forge libgfortran5 11.2.0 h5c6108e_15 conda-forge liblapack 3.9.0 14_linux64_openblas conda-forge libnsl 2.0.0 h7f98852_0 conda-forge libopenblas 0.3.20 pthreads_h78a6416_0 conda-forge libstdcxx-ng 11.2.0 he4da1e4_15 conda-forge libzlib 1.2.11 h166bdaf_1014 conda-forge llvm-openmp 13.0.1 he0ac6c6_1 conda-forge mkl 2022.0.1 h8d4b97c_803 conda-forge ncurses 6.3 h27087fc_1 conda-forge ninja 1.10.2 h4bd325d_1 conda-forge numpy 1.19.5 py36hfc0c790_2 conda-forge openssl 1.1.1n h166bdaf_0 conda-forge pip 21.3.1 pyhd8ed1ab_0 conda-forge python 3.6.15 hb7a2778_0_cpython conda-forge python_abi 3.6 2_cp36m conda-forge pytorch 1.4.0 py3.6_cuda10.1.243_cudnn7.6.3_0 pytorch readline 8.1 h46c0cb4_0 conda-forge setuptools 58.0.4 py36h5fab9bb_2 conda-forge sqlite 3.38.2 h4ff8645_0 conda-forge tbb 2021.5.0 h924138e_1 conda-forge tk 8.6.12 h27826a3_0 conda-forge tokenizers 0.5.0 pypi_0 pypi transformers 2.5.0 pypi_0 pypi tree-sitter 0.20.0 pypi_0 pypi wheel 0.37.1 pyhd8ed1ab_0 conda-forge xz 5.2.5 h516909a_1 conda-forge zlib 1.2.11 h166bdaf_1014 conda-forge` ``` ### Who can help? @LysandreJik @patrickvonplaten ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Whenever I try to run the following code from (https://github.com/microsoft/CodeXGLUE ), it ends in a segmentation fault `$ python run.py \ --do_train --do_eval --model_type roberta --model_name_or_path $pretrained_model --config_name roberta-base --tokenizer_name roberta-base --train_filename ../data/train.java-cs.txt.java,../data/train.java-cs.txt.cs --dev_filename ../data/valid.java-cs.txt.java,../data/valid.java-cs.txt.cs --output_dir $output_dir --max_source_length 512 --max_target_length 512 --beam_size 5 --train_batch_size 32 --eval_batch_size 32 --learning_rate 5e-5 --train_steps 100 --eval_steps 50 ### Expected behavior ```shell Original issue in codeXglue: https://github.com/microsoft/CodeXGLUE/issues/117 They ask me to post the issue here. So posting here for further assistance. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16939/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16939/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16938
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16938/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16938/comments
https://api.github.com/repos/huggingface/transformers/issues/16938/events
https://github.com/huggingface/transformers/issues/16938
1,215,271,722
I_kwDOCUB6oc5Ib5Mq
16,938
tokenizer return_special_tokens does not work correctly with custom special tokens
{ "login": "armancohan", "id": 6425112, "node_id": "MDQ6VXNlcjY0MjUxMTI=", "avatar_url": "https://avatars.githubusercontent.com/u/6425112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/armancohan", "html_url": "https://github.com/armancohan", "followers_url": "https://api.github.com/users/armancohan/followers", "following_url": "https://api.github.com/users/armancohan/following{/other_user}", "gists_url": "https://api.github.com/users/armancohan/gists{/gist_id}", "starred_url": "https://api.github.com/users/armancohan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/armancohan/subscriptions", "organizations_url": "https://api.github.com/users/armancohan/orgs", "repos_url": "https://api.github.com/users/armancohan/repos", "events_url": "https://api.github.com/users/armancohan/events{/privacy}", "received_events_url": "https://api.github.com/users/armancohan/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "I also found that - even when you add these specialized tokens it does not tokenise correctly.", "I got it working by using the `tokenizer.get_special_tokens_mask` and passing the `already_has_special_tokens=True`, instead of using the `__call__` function of the tokenizer. \r\nIn any case, the interface for `tokenizer(..., return_special_tokens_mask=True)` is confusing, and one has to look at the actual [source code](https://github.com/huggingface/transformers/blob/aaee4038c3c34faea58c84e04fc88297e2be6cb2/src/transformers/tokenization_utils_base.py#L3007) to figure out `return_special_tokens_mask=True` doesn't work for additional special tokens.", "This happens for both fast and slow tokenizers.\r\nThe problem for fast tokenizers seems to originate in the [rust implementation](https://github.com/huggingface/transformers/blob/31ec2cb2badfbdd4c1ac9c6c9b8a74e974984206/src/transformers/tokenization_utils_fast.py#L425) (?)\r\n\r\nThe problem for slow tokenizers seems to be as @armancohan pointed out that `already_has_special_tokens` is not set to True [here](https://github.com/huggingface/transformers/blob/aaee4038c3c34faea58c84e04fc88297e2be6cb2/src/transformers/tokenization_utils_base.py#L3009).\r\n\r\nThanks for this @armancohan, saved me quite some time.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi, I encountered the same problem here. Thanks @armancohan for the fast solutions. Should we open a MR to fix the issue?" ]
1,650
1,675
1,654
CONTRIBUTOR
null
The tokenizer's returned `special_tokens_mask` does not take into account the newly added `special_tokens`. ### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-5.4.0-92-generic-x86_64-with-glibc2.31 - Python version: 3.9.12 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0 (False) ``` ### Who can help? @LysandreJik ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python import transformers print(transformers.__version__) tokenizer = transformers.AutoTokenizer.from_pretrained('roberta-base') special_tokens_dict = {"additional_special_tokens": ["<test1>", "<test2>"]} tokenizer.add_special_tokens(special_tokens_dict) processed = tokenizer("this <test1> that <test2> this", return_special_tokens_mask=True) tokens = tokenizer.convert_ids_to_tokens(processed.input_ids) for i in range(len(processed.input_ids)): print(f"{processed.input_ids[i]}\t{tokens[i]}\t{processed.special_tokens_mask[i]}") ``` ### Expected behavior ```shell Returned output: 0 <s> 1 9226 this 0 1437 Ġ 0 50265 <test1> 0 14 Ġthat 0 1437 Ġ 0 50266 <test2> 0 42 Ġthis 0 2 </s> 1 Expected output: 0 <s> 1 9226 this 0 1437 Ġ 0 50265 <test1> 1 14 Ġthat 0 1437 Ġ 0 50266 <test2> 1 42 Ġthis 0 2 </s> 1
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16938/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16938/timeline
completed
null
null
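A sketch of the workaround described in the comments of the record above: instead of relying on `return_special_tokens_mask=True`, compute the mask after tokenization with `already_has_special_tokens=True`, which also covers added special tokens:

```python
# Workaround from the thread: get_special_tokens_mask with
# already_has_special_tokens=True checks ids against all_special_ids,
# which includes additional special tokens.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
tokenizer.add_special_tokens({"additional_special_tokens": ["<test1>", "<test2>"]})

ids = tokenizer("this <test1> that <test2> this").input_ids
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
for token, m in zip(tokenizer.convert_ids_to_tokens(ids), mask):
    print(token, m)
```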
https://api.github.com/repos/huggingface/transformers/issues/16937
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16937/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16937/comments
https://api.github.com/repos/huggingface/transformers/issues/16937/events
https://github.com/huggingface/transformers/pull/16937
1,215,149,102
PR_kwDOCUB6oc42wpRG
16,937
Fixed broken link on pipelines.mdx
{ "login": "JovaniTarnowski", "id": 49798215, "node_id": "MDQ6VXNlcjQ5Nzk4MjE1", "avatar_url": "https://avatars.githubusercontent.com/u/49798215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JovaniTarnowski", "html_url": "https://github.com/JovaniTarnowski", "followers_url": "https://api.github.com/users/JovaniTarnowski/followers", "following_url": "https://api.github.com/users/JovaniTarnowski/following{/other_user}", "gists_url": "https://api.github.com/users/JovaniTarnowski/gists{/gist_id}", "starred_url": "https://api.github.com/users/JovaniTarnowski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JovaniTarnowski/subscriptions", "organizations_url": "https://api.github.com/users/JovaniTarnowski/orgs", "repos_url": "https://api.github.com/users/JovaniTarnowski/repos", "events_url": "https://api.github.com/users/JovaniTarnowski/events{/privacy}", "received_events_url": "https://api.github.com/users/JovaniTarnowski/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16937). All of your documentation changes will be reflected on that endpoint.", "Clicking on it in chrome links to the correct page. What is your browser/setup?", "Hello @LysandreJik I'm using chrome version 101.0.4951.41 on Windows.\r\n\r\nThe url that I'm redirected when clicking the link:\r\nhttps://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/task_summary", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,650
1,654
1,654
NONE
null
# What does this PR do? When clicking on "task summary", the link redirected to a 404 Not Found page. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16937/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16937/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16937", "html_url": "https://github.com/huggingface/transformers/pull/16937", "diff_url": "https://github.com/huggingface/transformers/pull/16937.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16937.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/16936
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16936/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16936/comments
https://api.github.com/repos/huggingface/transformers/issues/16936/events
https://github.com/huggingface/transformers/issues/16936
1,215,026,066
I_kwDOCUB6oc5Ia9OS
16,936
Adding tokens to `RobertaTokenizer` is fast, but loading the extended tokenizer from disk takes tens of minutes
{ "login": "Witiko", "id": 603082, "node_id": "MDQ6VXNlcjYwMzA4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/603082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Witiko", "html_url": "https://github.com/Witiko", "followers_url": "https://api.github.com/users/Witiko/followers", "following_url": "https://api.github.com/users/Witiko/following{/other_user}", "gists_url": "https://api.github.com/users/Witiko/gists{/gist_id}", "starred_url": "https://api.github.com/users/Witiko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Witiko/subscriptions", "organizations_url": "https://api.github.com/users/Witiko/orgs", "repos_url": "https://api.github.com/users/Witiko/repos", "events_url": "https://api.github.com/users/Witiko/events{/privacy}", "received_events_url": "https://api.github.com/users/Witiko/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi, pretty sure this is because `add_tokens` and therefore the `trie` creation is done N times for all the N tokens, which is indeed excruciatingly slow (and completely uncessary).\r\n\r\nI think we can create the `trie` only once, wdyt @SaulLu ", "Hi @Witiko,\r\n\r\nThanks for sharing this issue!\r\n\r\nI share your analysis @Narsil ! When the `from_pretrained` method is called, the tokens are added 1 by 1 in this loop.\r\nhttps://github.com/huggingface/transformers/blob/fa322474060beb3673cf5a3e39ccd3c8ad57ecd3/src/transformers/tokenization_utils_base.py#L1948-L1974\r\n\r\nMy memory may be faulty, but I had the impression that I had already read in an issue / PR that there could be a difficulty to circumvent to achieve this type of change - I can't find it back unfortunately. For the moment, I think that it is necessary to see in this loop that the order of addition has its importance and that we can alternate between addition of normal and special tokens.", "We did it in tokenizers` since the `Trie` insertion order of added tokens should not be important (this is also currently the case in slow tokenizers)\r\n\r\nhttps://github.com/huggingface/tokenizers/blob/main/tokenizers/src/tokenizer/serialization.rs#L172\r\n\r\nThere might be other things to deal with in the python code, but the `Trie` itself doesn't care about insertion order, so we can create it only once.", "Yes I absolutely agree! It just seemed important to mention it because the code that generates the multiple `Trie` builds currently is code that is shared between the fast and python tokenizers. :smile: ", "Thank you for investigating. Should I try and open a PR, or are you planning to tackle this, @Narsil?", "Hi @Witiko ,\r\n\r\nI don't have a lot of bandwidth atm to handle this. If you can try and open a PR that would be awesome.\r\nFeel free to ping me if you want early feedback (doesn't matter if PR is not ready).\r\n\r\nCheers,\r\nNicolas", "Hello @Narsil,\r\n\r\nneither do I, but I can take a stab at it sometime the next month. It seems to me that a simple fix might be to add a boolean parameter `_postpone_optimization` to `add_tokens()`, so that we can prevent the trie from being constructed in `from_pretrained()`. However, this does not solve the problem for users who would manually call `add_tokens()` with many small batches of tokens in their code. A more robust fix would be to construct the trie lazily at the point where is is needed.", "> However, this does not solve the problem for users who would manually call add_tokens() with many small batches of tokens in their code.\r\n\r\n`add_tokens` already accepts lists, so sending the maximum possible amount of tokens in one go is the way to go, laziness is not the solution to this problem here I think.", "> `add_tokens` already accepts lists, so sending the maximum possible amount of tokens in one go is the way to go\r\n\r\nThe current code of `from_pretrained()` calls `add_tokens()` repeatedly with single tokens, so that it can persist the information about whether the token is special or not. Perhaps the way to go would be to first build a list of special and non-special tokens and then call `add_tokens()` once for special and once for non-special tokens?\r\n\r\n> laziness is not the solution to this problem here I think.\r\n\r\nI agree that laziness makes it more difficult to predict performance and reason about the code, especially in multiprocessing settings. 
Having `add_tokens()` that behaves optimally when you add tokens in bulk seems more straightforward.", "@SaulLu @Narsil I opened PR #17119 that fixes this issue. I would appreciate your review at your earliest convenience.", "Hey @Witiko I have seen your PR, but I am not sure I can do a proper review regarding the implications of this code, I did a small quality improvements suggestions in terms of pure code.\r\n\r\nPinging @SaulLu for visibility (If you don't have time I can look more into this btw, I just figured you would review faster than me)", "@Narsil Thanks to your suggestions, the diff in #17119 against the current code is now tiny. Furthermore, the code causes a net speedup in loading tokenizers, so hopefully we can have the PR merged soon. 🤞", "Thanks a lot for working on this fix.\r\n\r\nOn my side, I'm trying to look at your PR tomorrow. As this is a change that will impact all tokenizers, this is a contribution that requires a very attentive review on our part, that's why it can be a bit long. ", "@SaulLu Thank you. Your caution is appreciated. we wouldn't want to break tokenizers. 😓 " ]
1,650
1,654
1,654
CONTRIBUTOR
null
### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-5.10.0-0.bpo.9-amd64-x86_64-with-debian-10.12 - Python version: 3.7.3 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0+cu102 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: false - Using distributed or parallel set-up in script?: false ``` ### Who can help? @SaulLu @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I train a BPE tokenizer on a domain-specific dataset and save it as [`tokenizer-latex.json`](https://github.com/huggingface/transformers/files/8557562/tokenizer-latex.json.txt). ``` python >>> from tokenizers import Tokenizer, normalizers, pre_tokenizers >>> from tokenizers.models import BPE >>> from tokenizers.trainers import BpeTrainer >>> >>> latex_model = BPE(unk_token='[UNK]') >>> latex_tokenizer = Tokenizer(latex_model) >>> latex_tokenizer.pre_tokenizer = pre_tokenizers.WhitespaceSplit() >>> latex_tokenizer.normalizer = normalizers.Sequence([normalizers.Strip()]) >>> latex_tokenizer_trainer = BpeTrainer(special_tokens=['[UNK]']) >>> latex_tokenizer.train(['dataset-latex.txt'], latex_tokenizer_trainer) >>> latex_tokenizer.save('tokenizer-latex.json') ``` Then, I extend [the pre-trained `roberta-base` tokenizer][1] with 28,141 new tokens from the vocabulary of my BPE tokenizer and I save the result to the directory `./extended-roberta-base/`. This finishes in a matter of seconds: ``` python >>> from tokenizers import Tokenizer >>> from transformers import RobertaTokenizer >>> >>> latex_tokenizer = Tokenizer.from_file('tokenizer-latex.json') >>> >>> text_latex_tokenizer = RobertaTokenizer.from_pretrained('roberta-base', add_prefix_space=True) >>> text_latex_tokenizer.add_tokens(list(latex_tokenizer.get_vocab())) 28141 >>> text_latex_tokenizer.save_pretrained('./extended-roberta-base/') ('./extended-roberta-base/tokenizer_config.json', './extended-roberta-base/special_tokens_map.json', './extended-roberta-base/vocab.json', './extended-roberta-base/merges.txt', './extended-roberta-base/added_tokens.json', './extended-roberta-base/tokenizer.json') ``` However, when I load the extended `roberta-base` tokenizer from the directory `./extended-roberta-base/`, the library constructs a trie (see https://github.com/huggingface/transformers/pull/13220) over the course of ca 20 minutes: ``` python >>> from transformers import RobertaTokenizer >>> >>> text_latex_tokenizer = RobertaTokenizer.from_pretrained('./extended-roberta-base/') ^C Traceback (most recent call last): File "<stdin>", line 2, in <module> text_latex_tokenizer = RobertaTokenizer.from_pretrained('./extended-roberta-base/') File "***/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1787, in from_pretrained **kwargs, File "***/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1971, in _from_pretrained tokenizer.add_tokens(token, special_tokens=bool(token in special_tokens)) File "***/python3.7/site-packages/transformers/tokenization_utils_base.py", line 945, in add_tokens return self._add_tokens(new_tokens, special_tokens=special_tokens) File "***/python3.7/site-packages/transformers/tokenization_utils.py", line 444, in _add_tokens self._create_trie(self.unique_no_split_tokens) File "***/python3.7/site-packages/transformers/tokenization_utils.py", line 454, in _create_trie trie.add(token) File "***/python3.7/site-packages/transformers/tokenization_utils.py", line 87, in add ref = ref[char] KeyboardInterrupt ``` The time disparity leads me to believe that when `RobertaTokenizer.add_tokens()` is called, a trie is either not created or is created extremely fast, whereas when `RobertaTokenizer.from_pretrained()` is called, a trie is created (slowly). Using `RobertaTokenizerFast` instead of `RobertaTokenizer` produces similar results at a similar timescale. [1]: https://huggingface.co/roberta-base ### Expected behavior Both `add_tokens()` and `from_pretrained()` should take a comparable amount of time. Either building the trie is important and cannot be sped up, in which case `add_tokens()` should also take roughly 20 minutes, or building the trie is unimportant or can be sped up, in which case `from_pretrained()` should finish in a matter of seconds.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16936/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16936/timeline
completed
null
null
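A sketch of the batching point made in the thread above: `add_tokens` accepts a list, so one bulk call builds the token-matching trie once, whereas N single-token calls rebuild it N times. The placeholder vocabulary below is illustrative:

```python
# Bulk vs per-token addition: the fast path adds all tokens in one call.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
new_tokens = [f"<tok{i}>" for i in range(1000)]  # placeholder vocabulary

# Fast: one call, one trie build.
tokenizer.add_tokens(new_tokens)

# Slow (avoid): one trie rebuild per token.
# for token in new_tokens:
#     tokenizer.add_tokens(token)
```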
https://api.github.com/repos/huggingface/transformers/issues/16935
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16935/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16935/comments
https://api.github.com/repos/huggingface/transformers/issues/16935/events
https://github.com/huggingface/transformers/pull/16935
1,214,832,004
PR_kwDOCUB6oc42vk0D
16,935
Limit the use of PreTrainedModel.device
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,650
1,650
COLLABORATOR
null
# What does this PR do? I'm currently working on solutions to do model parallelism, offload weights to the CPU or the hard drive, and I've encountered some bugs linked to the way we use the `PreTrainedModel.device`: it grabs the first parameter of the model to infer a device for the whole model. This doesn't work when the model is: - split on several devices and the first parameter grabbed happens to be on the wrong one - not materialized because its parameters are offloaded on the CPU or the hard-drive. So whenever it's possible, it would be great to rely on something else if we can, for instance some device where the inputs are. This PR does this for every use of this `device` attribute in modeling_utils and generation_utils, with the exception of some code where there are no inputs passed so we generate them and have to use something for the device. If all works well, I plan to add all modeling files that make use of that attribute (when in the `dummy_inputs`, I'll leave the `self.device` but outside of it, will grab the device of any inputs we have).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16935/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16935/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16935", "html_url": "https://github.com/huggingface/transformers/pull/16935", "diff_url": "https://github.com/huggingface/transformers/pull/16935.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16935.patch", "merged_at": 1650934730000 }
https://api.github.com/repos/huggingface/transformers/issues/16934
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16934/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16934/comments
https://api.github.com/repos/huggingface/transformers/issues/16934/events
https://github.com/huggingface/transformers/pull/16934
1,214,798,725
PR_kwDOCUB6oc42vdwA
16,934
Fix Iterations for decoder
{ "login": "agemagician", "id": 6087313, "node_id": "MDQ6VXNlcjYwODczMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agemagician", "html_url": "https://github.com/agemagician", "followers_url": "https://api.github.com/users/agemagician/followers", "following_url": "https://api.github.com/users/agemagician/following{/other_user}", "gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}", "starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agemagician/subscriptions", "organizations_url": "https://api.github.com/users/agemagician/orgs", "repos_url": "https://api.github.com/users/agemagician/repos", "events_url": "https://api.github.com/users/agemagician/events{/privacy}", "received_events_url": "https://api.github.com/users/agemagician/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,650
1,650
CONTRIBUTOR
null
The current script works fine if the number of decoder layers equals the number of encoder layers. However, it will not work if the numbers of layers are not equal, as in t5-efficient models. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Models: - t5: @patrickvonplaten, @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16934/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16934/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16934", "html_url": "https://github.com/huggingface/transformers/pull/16934", "diff_url": "https://github.com/huggingface/transformers/pull/16934.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16934.patch", "merged_at": 1650970455000 }
https://api.github.com/repos/huggingface/transformers/issues/16933
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16933/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16933/comments
https://api.github.com/repos/huggingface/transformers/issues/16933/events
https://github.com/huggingface/transformers/pull/16933
1,214,792,058
PR_kwDOCUB6oc42vcVe
16,933
Fix RemBertTokenizerFast
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I could provide the full code sample to reproduce the issue without this PR if necessary.", "_The documentation is not available anymore as the PR was closed or merged._", "Great to know the test is there (in common) 😄 " ]
1,650
1,650
1,650
COLLABORATOR
null
# What does this PR do? `RemBertTokenizer(Fast)` are similar to `AlbertTokenizer(Fast)`; the slow versions are based on `SentencePiece`. Unlike `AlbertTokenizerFast`, the fast tokenizer `RemBertTokenizerFast` doesn't have ```python self.can_save_slow_tokenizer = False if not self.vocab_file else True ``` I got an error when calling `save_pretrained()` after doing something like ``` tokenizer_fast.train_new_from_iterator(training_ds["text"], 1024) ``` (while working on the task for creating tiny random models/processor) ### Error message without this PR ``` File "/home/yih_dar_huggingface_co/transformers/create_dummy_models.py", line 457, in convert_processors p.save_pretrained(output_folder) File "/home/yih_dar_huggingface_co/transformers/src/transformers/tokenization_utils_base.py", line 2101, in save_pretrained save_files = self._save_pretrained( File "/home/yih_dar_huggingface_co/transformers/src/transformers/tokenization_utils_fast.py", line 591, in _save_pretrained vocab_files = self.save_vocabulary(save_directory, filename_prefix=filename_prefix) File "/home/yih_dar_huggingface_co/transformers/src/transformers/models/rembert/tokenization_rembert_fast.py", line 237, in save_vocabulary if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file): File "/home/yih_dar_huggingface_co/miniconda3/envs/py-3-9/lib/python3.9/posixpath.py", line 375, in abspath path = os.fspath(path) TypeError: expected str, bytes or os.PathLike object, not NoneType ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16933/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16933/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16933", "html_url": "https://github.com/huggingface/transformers/pull/16933", "diff_url": "https://github.com/huggingface/transformers/pull/16933.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16933.patch", "merged_at": 1650909110000 }
https://api.github.com/repos/huggingface/transformers/issues/16932
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16932/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16932/comments
https://api.github.com/repos/huggingface/transformers/issues/16932/events
https://github.com/huggingface/transformers/pull/16932
1,214,737,880
PR_kwDOCUB6oc42vQvU
16,932
CodeParrot data pretokenization
{ "login": "loubnabnl", "id": 44069155, "node_id": "MDQ6VXNlcjQ0MDY5MTU1", "avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loubnabnl", "html_url": "https://github.com/loubnabnl", "followers_url": "https://api.github.com/users/loubnabnl/followers", "following_url": "https://api.github.com/users/loubnabnl/following{/other_user}", "gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}", "starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions", "organizations_url": "https://api.github.com/users/loubnabnl/orgs", "repos_url": "https://api.github.com/users/loubnabnl/repos", "events_url": "https://api.github.com/users/loubnabnl/events{/privacy}", "received_events_url": "https://api.github.com/users/loubnabnl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,652
1,652
CONTRIBUTOR
null
This PR adds code for data pretokenization of CodeParrot. In fact, tokenizing the data takes a long time, especially for small models, so having a pretokenized dataset might improve the training speed. We also fix an error in the `README.md` and `scripts/initialize_model.py` inside the codeparrot repo. cc @lvwerra
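For illustration, a minimal sketch of the pretokenization idea using the usual `datasets.map` pattern; the dataset name and text column below are assumptions for illustration, not necessarily what the PR uses:

```python
# Hedged sketch of offline pretokenization; "codeparrot-clean" and the
# "content" column are assumptions for illustration.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot")
ds = load_dataset("codeparrot/codeparrot-clean", split="train")

def tokenize(batch):
    # Tokenize once up front so the training loop no longer pays this cost.
    return tokenizer(batch["content"], truncation=False)

tokenized = ds.map(tokenize, batched=True, remove_columns=ds.column_names)
tokenized.save_to_disk("codeparrot-pretokenized")
```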
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16932/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16932/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16932", "html_url": "https://github.com/huggingface/transformers/pull/16932", "diff_url": "https://github.com/huggingface/transformers/pull/16932.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16932.patch", "merged_at": 1652707937000 }
https://api.github.com/repos/huggingface/transformers/issues/16931
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16931/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16931/comments
https://api.github.com/repos/huggingface/transformers/issues/16931/events
https://github.com/huggingface/transformers/issues/16931
1,214,700,023
I_kwDOCUB6oc5IZtn3
16,931
ZeroShotClassificationPipeline not using GPU
{ "login": "ierezell", "id": 30974685, "node_id": "MDQ6VXNlcjMwOTc0Njg1", "avatar_url": "https://avatars.githubusercontent.com/u/30974685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ierezell", "html_url": "https://github.com/ierezell", "followers_url": "https://api.github.com/users/ierezell/followers", "following_url": "https://api.github.com/users/ierezell/following{/other_user}", "gists_url": "https://api.github.com/users/ierezell/gists{/gist_id}", "starred_url": "https://api.github.com/users/ierezell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ierezell/subscriptions", "organizations_url": "https://api.github.com/users/ierezell/orgs", "repos_url": "https://api.github.com/users/ierezell/repos", "events_url": "https://api.github.com/users/ierezell/events{/privacy}", "received_events_url": "https://api.github.com/users/ierezell/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "@Ierezell ,\r\n\r\npreprocessing will always happen on CPU so it's not entirely surprising. There's no way to make preprocessing happen on GPU (tokenization) afaik.\r\n\r\nHere you're using 3k sentences X 3k labels so we're looking at 9M individual `input_ids` sequences that have to be generated.\r\n\r\nCan you try doing this:\r\n\r\n```python\r\nfrom transformers.modeling_utils import PreTrainedModel\r\nfrom transformers.models.auto.modeling_auto import AutoModelForSequenceClassification\r\nfrom transformers.models.auto.tokenization_auto import AutoTokenizer\r\nfrom transformers.pipelines.zero_shot_classification import ZeroShotClassificationPipeline\r\nfrom transformers.tokenization_utils import PreTrainedTokenizer\r\nimport torch\r\nimport itertools\r\n\r\nfew_show_classification_model: PreTrainedModel = AutoModelForSequenceClassification.from_pretrained(\"facebook/bart-large-mnli\")\r\nfew_show_classification_model = few_show_classification_model.to(torch.device(\"cuda\"))\r\nfew_show_classification_tokenizer: PreTrainedTokenizer = AutoTokenizer.from_pretrained(\"facebook/bart-large-mnli\")\r\n\r\nclassifier = ZeroShotClassificationPipeline(model=few_show_classification_model, tokenizer=few_show_classification_tokenizer, device=0, multi_label=False)\r\n\r\nwords = [\"hello\", \"you\", \"I\", \"am\", \"beautiful\", \"and\", \"we\", \"like\", \"sugar\"]\r\n\r\ndef utterances(words):\r\n for w in itertools.permutations(words):\r\n yield \" \".join(w)\r\ncontexts = [\" \".join(w) for w in list(itertools.permutations(words))[:3000]]\r\n\r\nclassifier(utterances(), contexts)\r\n```\r\n\r\nShould be easier on your RAM, please note that `list(itertools.permutations)` is still creating `8!` (40k) objects.", "Hello @Narsil,\r\n\r\nThanks for the fast reply :) \r\n\r\nIt was my guess but I'm happy to have the confirmation.\r\nI just didn't though that pre-processing could take that much memory (in the example it's too much for sure).\r\n\r\nAs it's utterances X labels the memory requirement can raise quite fast (in my case 10 labels vs 500).\r\nUsing a generator is indeed saving a good part of memory.\r\nMy fix was batching on the labels (contexts).\r\n\r\nThanks again for your time and help. \r\nHave a great day. \r\n\r\n" ]
1,650
1,650
1,650
CONTRIBUTOR
null
### System Info

```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.13.0-37-generic-x86_64-with-glibc2.31
- Python version: 3.9.10
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```

### Who can help?

Hello @Narsil, sorry to bother you once again. When using a ZeroShotClassificationPipeline, it seems that a lot of the preprocessing is done on CPU instead of GPU.

### Information

- [ ] The official example scripts
- [X] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

```python
from transformers.modeling_utils import PreTrainedModel
from transformers.models.auto.modeling_auto import AutoModelForSequenceClassification
from transformers.models.auto.tokenization_auto import AutoTokenizer
from transformers.pipelines.zero_shot_classification import ZeroShotClassificationPipeline
from transformers.tokenization_utils import PreTrainedTokenizer
import torch
import itertools

few_show_classification_model: PreTrainedModel = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")
few_show_classification_model = few_show_classification_model.to(torch.device("cuda"))
few_show_classification_tokenizer: PreTrainedTokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")

classifier = ZeroShotClassificationPipeline(model=few_show_classification_model, tokenizer=few_show_classification_tokenizer, device=0, multi_label=False)

words = ["hello", "you", "I", "am", "beautiful", "and", "we", "like", "sugar"]
utterances = [" ".join(w) for w in list(itertools.permutations(words))[:4]]
contexts = [" ".join(w) for w in list(itertools.permutations(words))[:3000]]

classifier(utterances, contexts)
```

The model takes 2.9 GB of GPU RAM (plus a few other small things). GPU RAM does not change at all during inference, while CPU RAM usage grows a lot (with a deliberately large number of contexts for the sake of the example): in my case, CPU RAM goes from 5 GB to 22.6 GB and GPU RAM stays the same.

### Expected behavior

I was expecting a CUDA out-of-memory error instead of a huge load on CPU RAM. Maybe the model is computing on CPU because of a bad initialization?

Thanks a lot in advance. Have a great day.
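A minimal sketch of the label-batching workaround mentioned in the thread; the helper name and chunk size are made up for illustration. Note that with `multi_label=False` the scores are normalized per call, so scores coming from different chunks are not directly comparable and merging them needs care:

```python
# Hypothetical helper, not part of transformers: run the pipeline over the
# candidate labels in chunks so that utterances x labels pairs are never all
# materialized at once during preprocessing.
def classify_in_label_chunks(classifier, utterances, labels, chunk_size=100):
    results = []
    for start in range(0, len(labels), chunk_size):
        chunk = labels[start:start + chunk_size]
        # Each call only tokenizes len(utterances) * len(chunk) pairs.
        results.append(classifier(utterances, chunk))
    return results
```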
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16931/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16931/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16930
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16930/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16930/comments
https://api.github.com/repos/huggingface/transformers/issues/16930/events
https://github.com/huggingface/transformers/issues/16930
1,214,565,211
I_kwDOCUB6oc5IZMtb
16,930
[Generation] `length_penalty` means `beam_alpha`
{ "login": "ShaneTian", "id": 42370681, "node_id": "MDQ6VXNlcjQyMzcwNjgx", "avatar_url": "https://avatars.githubusercontent.com/u/42370681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ShaneTian", "html_url": "https://github.com/ShaneTian", "followers_url": "https://api.github.com/users/ShaneTian/followers", "following_url": "https://api.github.com/users/ShaneTian/following{/other_user}", "gists_url": "https://api.github.com/users/ShaneTian/gists{/gist_id}", "starred_url": "https://api.github.com/users/ShaneTian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ShaneTian/subscriptions", "organizations_url": "https://api.github.com/users/ShaneTian/orgs", "repos_url": "https://api.github.com/users/ShaneTian/repos", "events_url": "https://api.github.com/users/ShaneTian/events{/privacy}", "received_events_url": "https://api.github.com/users/ShaneTian/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "See https://github.com/huggingface/transformers/issues/4915 https://github.com/huggingface/transformers/issues/4918#issuecomment-681985118 https://github.com/huggingface/transformers/issues/14768", "cc @patrickvonplaten \r\nhttps://github.com/huggingface/transformers/blame/508baf194313c397345af868202404e285494a28/src/transformers/generation_beam_search.py#L829", "Hey @ShaneTian,\r\n\r\nGood catch! I think that's exactly right - we should change the docs here to:\r\n\r\n```py\r\n length_penalty (`float`, *optional*, defaults to 1.0): \r\n Exponential penalty to the length. 1.0 means that the beam score is penalized by the sequence length. 0.0 means no penalty. Set to values < 0.0 in order to encourage the \r\n model to generate longer sequences, to a value > 0.0 in order to encourage the model to produce shorter \r\n sequences. \r\n```\r\n\r\nWould you like to open a pull request for this?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@ShaneTian note that the length penalty is different from the \"alpha\" penalty simply because we discovered it first ;-) Would be too difficult to change now though", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,650
1,656
1,656
NONE
null
https://github.com/huggingface/transformers/blob/508baf194313c397345af868202404e285494a28/src/transformers/generation_utils.py#L949-L952 https://github.com/huggingface/transformers/blob/508baf194313c397345af868202404e285494a28/src/transformers/generation_beam_search.py#L829

There are two issues with `length_penalty`:
1. `length_penalty=1` does **NOT** actually mean no penalty. There is no penalty when `length_penalty=0`, right?
2. Why is this different from the [`beam_alpha` paper](https://arxiv.org/pdf/1609.08144.pdf)?

<img width="632" alt="image" src="https://user-images.githubusercontent.com/42370681/165103163-b5946237-b03e-4a24-bb3e-2122a9dacec2.png">
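To make the mismatch concrete, a small sketch contrasting the two normalizations, based on the beam-scorer line linked above and the GNMT paper's formula (an illustration, not library code):

```python
# transformers beam scorer (as linked above): divide by length ** length_penalty.
# With length_penalty == 0 the divisor is 1, i.e. no penalty -- hence point 1.
def transformers_score(sum_logprobs: float, length: int, length_penalty: float) -> float:
    return sum_logprobs / (length ** length_penalty)

# GNMT paper (arXiv 1609.08144): lp(Y) = (5 + |Y|)^alpha / (5 + 1)^alpha -- hence point 2.
def gnmt_score(sum_logprobs: float, length: int, alpha: float) -> float:
    lp = ((5 + length) ** alpha) / ((5 + 1) ** alpha)
    return sum_logprobs / lp
```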
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16930/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16930/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16929
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16929/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16929/comments
https://api.github.com/repos/huggingface/transformers/issues/16929/events
https://github.com/huggingface/transformers/pull/16929
1,214,457,990
PR_kwDOCUB6oc42uUgo
16,929
Fix doc test quicktour dataset
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,650
1,650
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Makes sure that `path` of dataset is not returned. See discussion: https://huggingface.slack.com/archives/C02CH2YP4EQ/p1650887822286239?thread_ts=1650816548.265789&cid=C02CH2YP4EQ ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16929/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16929/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16929", "html_url": "https://github.com/huggingface/transformers/pull/16929", "diff_url": "https://github.com/huggingface/transformers/pull/16929.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16929.patch", "merged_at": 1650896819000 }
https://api.github.com/repos/huggingface/transformers/issues/16928
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16928/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16928/comments
https://api.github.com/repos/huggingface/transformers/issues/16928/events
https://github.com/huggingface/transformers/issues/16928
1,214,457,055
I_kwDOCUB6oc5IYyTf
16,928
A bug in modeling_ibert.py
{ "login": "ZhangYunchenY", "id": 55646223, "node_id": "MDQ6VXNlcjU1NjQ2MjIz", "avatar_url": "https://avatars.githubusercontent.com/u/55646223?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZhangYunchenY", "html_url": "https://github.com/ZhangYunchenY", "followers_url": "https://api.github.com/users/ZhangYunchenY/followers", "following_url": "https://api.github.com/users/ZhangYunchenY/following{/other_user}", "gists_url": "https://api.github.com/users/ZhangYunchenY/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZhangYunchenY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZhangYunchenY/subscriptions", "organizations_url": "https://api.github.com/users/ZhangYunchenY/orgs", "repos_url": "https://api.github.com/users/ZhangYunchenY/repos", "events_url": "https://api.github.com/users/ZhangYunchenY/events{/privacy}", "received_events_url": "https://api.github.com/users/ZhangYunchenY/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,650
1,654
1,654
NONE
null
```python
class IBertEmbeddings(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.quant_mode = config.quant_mode
        self.embedding_bit = 8
        self.embedding_act_bit = 16
        self.act_bit = 8
        self.ln_input_bit = 22
        self.ln_output_bit = 32

        self.word_embeddings = QuantEmbedding(
            config.vocab_size,
            config.hidden_size,
            padding_idx=config.pad_token_id,
            weight_bit=self.embedding_bit,
            quant_mode=self.quant_mode,
        )
        self.token_type_embeddings = QuantEmbedding(
            config.type_vocab_size, config.hidden_size, weight_bit=self.embedding_bit, quant_mode=self.quant_mode
        )

        # position_ids (1, len position emb) is contiguous in memory and exported when serialized
        self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1)))
        self.position_embedding_type = getattr(config, "position_embedding_type", "absolute")
        # End copy
        self.padding_idx = config.pad_token_id
        self.position_embeddings = QuantEmbedding(
            config.max_position_embeddings,
            config.hidden_size,
            padding_idx=self.padding_idx,
            weight_bit=self.embedding_bit,
            quant_mode=self.quant_mode,
        )

        # Integer-only addition between embeddings
        self.embeddings_act1 = QuantAct(self.embedding_act_bit, quant_mode=self.quant_mode)
        self.embeddings_act2 = QuantAct(self.embedding_act_bit, quant_mode=self.quant_mode)

        # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
        # any TensorFlow checkpoint file
        self.LayerNorm = IntLayerNorm(
            config.hidden_size,
            eps=config.layer_norm_eps,
            output_bit=self.ln_output_bit,
            quant_mode=self.quant_mode,
            force_dequant=config.force_dequant,
        )
        self.output_activation = QuantAct(self.act_bit, quant_mode=self.quant_mode)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)

    def forward(
        self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None, past_key_values_length=0
    ):
        if position_ids is None:
            if input_ids is not None:
                # Create the position ids from the input token ids. Any padded tokens remain padded.
                position_ids = create_position_ids_from_input_ids(
                    input_ids, self.padding_idx, past_key_values_length
                ).to(input_ids.device)
            else:
                position_ids = self.create_position_ids_from_inputs_embeds(inputs_embeds)

        if input_ids is not None:
            input_shape = input_ids.size()
        else:
            input_shape = inputs_embeds.size()[:-1]

        if token_type_ids is None:
            token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device)

        if inputs_embeds is None:
            inputs_embeds, inputs_embeds_scaling_factor = self.word_embeddings(input_ids)
        else:
            inputs_embeds_scaling_factor = None
        token_type_embeddings, token_type_embeddings_scaling_factor = self.token_type_embeddings(token_type_ids)

        embeddings, embeddings_scaling_factor = self.embeddings_act1(
            inputs_embeds,
            inputs_embeds_scaling_factor,
            identity=token_type_embeddings,
            identity_scaling_factor=token_type_embeddings_scaling_factor,
        )

        if self.position_embedding_type == "absolute":
            position_embeddings, position_embeddings_scaling_factor = self.position_embeddings(position_ids)
            embeddings, embeddings_scaling_factor = self.embeddings_act1(
                embeddings,
                embeddings_scaling_factor,
                identity=position_embeddings,
                identity_scaling_factor=position_embeddings_scaling_factor,
            )

        embeddings, embeddings_scaling_factor = self.LayerNorm(embeddings, embeddings_scaling_factor)
        embeddings = self.dropout(embeddings)
        embeddings, embeddings_scaling_factor = self.output_activation(embeddings, embeddings_scaling_factor)
        return embeddings, embeddings_scaling_factor

    def create_position_ids_from_inputs_embeds(self, inputs_embeds):
        """
        We are provided embeddings directly. We cannot infer which are padded so just generate sequential position
        ids.

        Args:
            inputs_embeds: torch.Tensor

        Returns: torch.Tensor
        """
        input_shape = inputs_embeds.size()[:-1]
        sequence_length = input_shape[1]

        position_ids = torch.arange(
            self.padding_idx + 1, sequence_length + self.padding_idx + 1, dtype=torch.long, device=inputs_embeds.device
        )
        return position_ids.unsqueeze(0).expand(input_shape)
```

In _modeling_ibert.py_, line 147 may be using the wrong quantizer; it should be `self.embeddings_act2`.
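A sketch of the one-line fix the report implies, using the attribute names from the snippet above: the second integer-only addition (embeddings plus position embeddings) would go through the second quantizer instead of reusing the first.

```python
# Proposed change at the second integer-only addition: use embeddings_act2
# rather than reusing embeddings_act1 (which already quantized the first add).
embeddings, embeddings_scaling_factor = self.embeddings_act2(  # was: self.embeddings_act1
    embeddings,
    embeddings_scaling_factor,
    identity=position_embeddings,
    identity_scaling_factor=position_embeddings_scaling_factor,
)
```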
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16928/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16928/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16927
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16927/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16927/comments
https://api.github.com/repos/huggingface/transformers/issues/16927/events
https://github.com/huggingface/transformers/pull/16927
1,214,316,888
PR_kwDOCUB6oc42t23H
16,927
Fix wrong image conditional checking
{ "login": "Charlyo", "id": 7512047, "node_id": "MDQ6VXNlcjc1MTIwNDc=", "avatar_url": "https://avatars.githubusercontent.com/u/7512047?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Charlyo", "html_url": "https://github.com/Charlyo", "followers_url": "https://api.github.com/users/Charlyo/followers", "following_url": "https://api.github.com/users/Charlyo/following{/other_user}", "gists_url": "https://api.github.com/users/Charlyo/gists{/gist_id}", "starred_url": "https://api.github.com/users/Charlyo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Charlyo/subscriptions", "organizations_url": "https://api.github.com/users/Charlyo/orgs", "repos_url": "https://api.github.com/users/Charlyo/repos", "events_url": "https://api.github.com/users/Charlyo/events{/privacy}", "received_events_url": "https://api.github.com/users/Charlyo/received_events", "type": "User", "site_admin": false }
[ { "id": 4235521865, "node_id": "LA_kwDOCUB6oc78dO9J", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20extractors", "name": "Feature extractors", "color": "c2e0c6", "default": false, "description": "" } ]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16927). All of your documentation changes will be reflected on that endpoint.", "Hi,\r\n\r\nThanks for your PR. If we go ahead with this, then it should also be fixed for all other feature extractors, like ViT, BEiT, DeiT, etc. cc @patil-suraj ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Gently pinging @Charlyo, let me know if you want to finish this, otherwise happy to take over :) ", "Sorry guys! Feel free to take over @patil-suraj :)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,650
1,656
1,656
NONE
null
Apparently, if an empty batch comes in, it is considered valid. However, line 138 then breaks, since it tries to access element 0 of an empty batch.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16927/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16927/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16927", "html_url": "https://github.com/huggingface/transformers/pull/16927", "diff_url": "https://github.com/huggingface/transformers/pull/16927.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16927.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/16926
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16926/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16926/comments
https://api.github.com/repos/huggingface/transformers/issues/16926/events
https://github.com/huggingface/transformers/pull/16926
1,214,308,233
PR_kwDOCUB6oc42t1FG
16,926
Pytorch QA examples fix & clean-up (code dedup)
{ "login": "searchivarius", "id": 825650, "node_id": "MDQ6VXNlcjgyNTY1MA==", "avatar_url": "https://avatars.githubusercontent.com/u/825650?v=4", "gravatar_id": "", "url": "https://api.github.com/users/searchivarius", "html_url": "https://github.com/searchivarius", "followers_url": "https://api.github.com/users/searchivarius/followers", "following_url": "https://api.github.com/users/searchivarius/following{/other_user}", "gists_url": "https://api.github.com/users/searchivarius/gists{/gist_id}", "starred_url": "https://api.github.com/users/searchivarius/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/searchivarius/subscriptions", "organizations_url": "https://api.github.com/users/searchivarius/orgs", "repos_url": "https://api.github.com/users/searchivarius/repos", "events_url": "https://api.github.com/users/searchivarius/events{/privacy}", "received_events_url": "https://api.github.com/users/searchivarius/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for your PR! The first point is not something we want, as we would like the user to see all the code in one file for the data preparation/basic postprocessing. The functions put in the `utils` module are a bit different in the sense we don't expect users to change that code, but adapting `prepare_train_features` to one's dataset for instance, is something we can expect. It comes with duplicate code, so some additional work on our side for maintenance, but that's okay since it's for the end user's benefit :-)\r\n\r\nChanges 2 to 5 are very much welcome however. Would you mind just removing change 1 from your PR?", "Hi @sgugger thank for a quick review. I can certainly revert these and retest/re-pull request.\r\n\r\nBTW, do you have an idea why there's such a gap in performance for SQuAD v2 in the case of the beam-search version. There's also a small difference for v1. I would really love to know what's wrong here. Trainers are cool, but for some custom cases modifying a PyTorch training loop is so much easier.", "There is probably another bug in the beam search version for squad V2. As this example is not used by a lot of people, we just didn't notice until now 😬 " ]
1,650
1,650
1,650
CONTRIBUTOR
null
Thank you for the great library! I had to clean up the QA examples because of duplicated pre- and post-processing code. While doing so, I encountered a number of issues that I had to fix. Please see the details below.

# What does this PR do?

1. Refactoring: consolidates duplicate post-processing functions in a helper file (now shared between the regular and no-trainer versions).
2. Fixes evaluation errors popping up when you train/eval on SQuAD v2 (one newly encountered and one previously reported in #15401 but not completely fixed).
3. Removes boolean arguments that don't use `store_true`. Please don't use these: **ANY** non-empty string is converted to True in this case, which clearly is not the desired behavior (and it creates a **LOT** of confusion).
4. All **no-trainer** test scripts now save metric values in the same way (with the right prefix ``eval_``), which is consistent with the **trainer**-based versions.
5. Adds the forgotten `model.eval()` in the **no-trainer** versions. This improved some results, but not everything (see the discussion at the end).

Please see the F1 scores and the discussion below.

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? **I believe examples aren't covered by the documentation**
- [X] Did you write any new necessary tests? **I trained squad and squad v2 models and compared results (see the discussion below)**, but I am not sure if running more QA tests automatically will be feasible. Do note that the existing "unit test" is very crude and does not permit detecting small regressions in model quality.

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Perhaps this is of most interest to @sgugger, @patil-suraj.

## Comparing old and new performance + some potential issues

Some remaining issues:
1. Despite the fixes & improvements, there's still a discrepancy between the no-trainer and original versions for SQuAD v2 and the beam-search version.
2. In particular, for SQuAD v2 and the beam-search variant **without trainer**, both the old and new numbers look very wrong to me.

Please note that to be able to run the SQuAD v2 tests, **I had to apply the utils_qa.py fixes to the old code as well**. Otherwise, it would have just failed.

The metric is F1; the exact-match scores follow the same pattern.

| | previous | fixed |
|-----------------------------------|----------|-------|
| squad v1 | 88.4 | 88.4 |
| squad v1 (no trainer) | 86.7 | 88.5 |
| squad v1 (beam search) | 92.1 | 92.1 |
| squad v1 (beam search no trainer) | 90.2 | 91.0 |
| squad v2 (beam search) | 83.2 | 83.2 |
| squad v2 (beam search no trainer) | 4.9 | 50.1 |
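To illustrate item 3, a self-contained sketch of why `type=bool` flags misbehave; the flag name follows the QA scripts, but any flag shows the same behavior:

```python
import argparse

# bool("False") is True: argparse applies bool() to the raw string, so any
# non-empty value -- including "False" -- parses as True.
bad = argparse.ArgumentParser()
bad.add_argument("--version_2_with_negative", type=bool, default=False)
print(bad.parse_args(["--version_2_with_negative", "False"]).version_2_with_negative)  # True!

# With store_true, the flag is False unless explicitly passed.
good = argparse.ArgumentParser()
good.add_argument("--version_2_with_negative", action="store_true")
print(good.parse_args([]).version_2_with_negative)  # False
print(good.parse_args(["--version_2_with_negative"]).version_2_with_negative)  # True
```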
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16926/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16926/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16926", "html_url": "https://github.com/huggingface/transformers/pull/16926", "diff_url": "https://github.com/huggingface/transformers/pull/16926.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16926.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/16925
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16925/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16925/comments
https://api.github.com/repos/huggingface/transformers/issues/16925/events
https://github.com/huggingface/transformers/pull/16925
1,214,286,477
PR_kwDOCUB6oc42twmb
16,925
Update build_pr_documentation.yml
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,650
1,650
CONTRIBUTOR
null
Testing https://github.com/huggingface/doc-builder/pull/197 [don't merge]
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16925/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16925/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16925", "html_url": "https://github.com/huggingface/transformers/pull/16925", "diff_url": "https://github.com/huggingface/transformers/pull/16925.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16925.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/16924
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16924/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16924/comments
https://api.github.com/repos/huggingface/transformers/issues/16924/events
https://github.com/huggingface/transformers/issues/16924
1,214,265,611
I_kwDOCUB6oc5IYDkL
16,924
[BigScience176B] Model conversion from Megatron-LM to transformers
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I will start by linking this thread to this PR. A possible next step and improvement could be to get some ideas of the equivalency tests that has been made by this PR bigscience-workshop/Megatron-DeepSpeed#121", "Also I think the big picture is we want to generate ASAP. (perhaps even before checking exactitude of conversion :S)\r\n", "As a side note, I'm working on a solution to do model parallelism/offload while maximizing the GPU(s) memory/RAM available which should be useful to run this model on all kinds of setups (albeit more slowly). Should land in Accelerate in the coming weeks :-) ", "After running several tests on JZ using a small model (but way larger than the debug model):\r\n+ I fixed issues regarding some operations that were not considered in the previous version of the model a4fa70c1a5042fdca7d0fbf26b0aad6ca99fdadc \r\n+ There are still small discrepancies most likely due to `MixedFusedLayerNorm` used in Meg-DS. A workaround is to use the `apex.normalization.FusedLayerNorm` but needs apex to be installed. \r\n\r\nBut overall the model outputs the same logits with little discrepancies (`torch.test.assert_all_close` pass for eg with `rtol=1e-7, atol=1e-02`), exact same argmax for the same input as well as some metrics (min max mean) which are the same. I still need to dig more to understand what causes this very small differences.\r\n\r\nI will move soon to trying these tests with the large model and see how it will impact everything (maybe we will be able to figure out what causes this little changes)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Closing this issue since the whole discussion around the model conversion has been done in the WIP PR: #17202 " ]
1,650
1,653
1,653
CONTRIBUTOR
null
### Feature request

Creating a thread here for the model conversion of the BigScience-176B model from Megatron-LM to the transformers library. I will summarize what I have done so far, the current status of the conversion procedure, and the 'discoveries' I have made, together with the small details we have to take care of during the conversion. I did my work by forking @thomwolf 's fork. The tests have been run on the DGX machine (4 NVIDIA A100s).

## :cherry_blossom: Big picture

- Generate some samples with a recent checkpoint
- Test the exactness of the logits / hidden-state values obtained for the same input, between the Megatron-LM model and the converted model. We use a small GPT2 trained on a dummy dataset (2 sentences). This model has been pushed to the hub and is being used for integration tests.
- Apply these tests to a recent checkpoint of the 176B model to make sure the tests are robust

## :paperclip: Main links:

- First PR: thomwolf/transformers#1
- WIP PR: thomwolf/transformers#2
- Final PR: #16514
- [The small debug-GPT2 model](https://huggingface.co/bigscience/bigscience-small-testing)

## :hammer: Current status

- For now, all tests pass on the DGX's GPU (using different conda environments for the Megatron-LM model & the transformers model) with `assertEqual`.
- The tests do not pass with `assertEqual` when run on the CPU, but they pass with `assertAlmostEqual`, with a tolerance of 0.05 for the logits after the `LayerNorm` on the embedding layer and a tolerance of `1e-06` on the final logits. Check the tests [here](https://github.com/thomwolf/transformers/blob/bigscience176b/tests/bigscience176b/test_embeddings_bigscience176b.py). This *non-exactness* seems to be expected, and we cannot do much about it according to pytorch/pytorch#76052.
- Added simple reconstruction and encoding tests on the BigScience tokenizer

## :pushpin: Tips for conversion

+ Explicitly specifying the dtype of your modules when initializing them seems to help ensure exact reproducibility; a `dtype` argument was added to the config file
+ Concatenating row-parallelized weights does not seem to return identical results. I made a reproducible script and raised pytorch/pytorch#76232; the solution for now is to [manually aggregate the results across each TP rank](https://github.com/younesbelkada/transformers/blob/c83999f991137bd475d409a4b70f4903b256e608/src/transformers/models/bigscience176b/modeling_bigscience176b.py#L251). Needs further investigation for possible improvement of the conversion.

## :white_check_mark: Next steps

- [x] Fix integration tests on the PR thomwolf/transformers#2
- [x] Define which checkpoint to use for the next tests
- [x] Convert the model with the selected checkpoints and compare the hidden-state values between the 2 models. -> fixed some issues in this new commit a4fa70c1a5042fdca7d0fbf26b0aad6ca99fdadc
- [x] `MixedFusedLayerNorm` and `FusedScaledSoftmax` seem to be replaceable by `LayerNorm` and `Softmax` from `torch.nn`, respectively. Verify this assumption on the new checkpoints.
- [ ] Convert a sharded version of the large model and try the tests on that

cc @thomwolf @suzana-ilic @thomasw21 @stas00

### Motivation

The feature request is related to the [BigScience workshop](https://bigscience.huggingface.co/), where a large language model is currently being trained using [Megatron-LM](https://github.com/NVIDIA/Megatron-LM).

### Your contribution

Ultimately, submitting a PR to add the BigScience-176B model to the transformers library, ensuring the exactness of the operations between the converted model and the original model trained with Megatron-LM.
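As background for the row/column-parallel point in the conversion tips, a hedged, generic sketch of how Megatron-LM tensor-parallel shards are usually merged during conversion; this is an illustration, not the actual conversion script:

```python
import torch

def merge_tp_shards(shards, parallel_mode):
    """Concatenate tensor-parallel weight shards into a single tensor.

    Column-parallel layers split the weight on dim 0, row-parallel layers on
    dim 1. The issue above notes that for row-parallel weights this naive
    concatenation did not reproduce results bit-exactly, hence the per-rank
    aggregation workaround linked in the "Tips for conversion" section.
    """
    dim = 0 if parallel_mode == "column" else 1
    return torch.cat([shard.cpu() for shard in shards], dim=dim)
```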
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16924/reactions", "total_count": 3, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16924/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16923
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16923/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16923/comments
https://api.github.com/repos/huggingface/transformers/issues/16923/events
https://github.com/huggingface/transformers/pull/16923
1,214,252,837
PR_kwDOCUB6oc42tpzo
16,923
QA examples fixing & clean-up:
{ "login": "searchivarius", "id": 825650, "node_id": "MDQ6VXNlcjgyNTY1MA==", "avatar_url": "https://avatars.githubusercontent.com/u/825650?v=4", "gravatar_id": "", "url": "https://api.github.com/users/searchivarius", "html_url": "https://github.com/searchivarius", "followers_url": "https://api.github.com/users/searchivarius/followers", "following_url": "https://api.github.com/users/searchivarius/following{/other_user}", "gists_url": "https://api.github.com/users/searchivarius/gists{/gist_id}", "starred_url": "https://api.github.com/users/searchivarius/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/searchivarius/subscriptions", "organizations_url": "https://api.github.com/users/searchivarius/orgs", "repos_url": "https://api.github.com/users/searchivarius/repos", "events_url": "https://api.github.com/users/searchivarius/events{/privacy}", "received_events_url": "https://api.github.com/users/searchivarius/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,650
1,650
1,650
CONTRIBUTOR
null
Thank you for the great library! I had to clean up the QA examples because of duplicated pre- and post-processing code. While doing so, I encountered a number of issues that I had to fix. Please see the details below.

# What does this PR do?

1. Refactoring: consolidates duplicate post-processing functions in a helper file (now shared between the regular and no-trainer versions).
2. Fixes evaluation errors in `utils_qa.py` that pop up when you train/eval on SQuAD v2 (one newly encountered and one previously reported in #15401).
3. Fixes the SQuAD unit tests and ensures all boolean arguments use `store_true`. Please don't use boolean arguments that don't use `store_true`: "False", "false", or any other non-empty string is converted to True in that case, which clearly is not the desired behavior.
4. All **no-trainer** test scripts now save metric values in the same way (with the right prefix ``eval_``). Previously, because of the bug described in item 3, the unit test was using SQuAD v2 metrics instead of SQuAD v1 metrics; v2 uses a different metric name for the exact match. This was previously "fixed" at the level of ``run_qa_no_trainer.py``; however, such a fix isn't necessary anymore.
5. Adds the forgotten `model.eval()` in the **no-trainer** versions. This fully fixed training of the **no-trainer** variant of the regular SQuAD QA model **without** beam search. When **using beam search**, the gap decreased but did not close fully; still, it is small. Unfortunately, the beam-search SQuAD v2 version produces strange numbers (the older code is even worse), so this requires extra investigation IMHO.

Please see the F1 scores and the discussion below.

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? **I believe examples aren't covered by the documentation**
- [X] Did you write any new necessary tests? **I trained squad and squad v2 models and compared results (see the discussion below)**, but I am not sure if any new tests are needed. I also **fixed** a unit test so it uses the proper SQuAD (v1) metric.

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Perhaps this is of most interest to @sgugger, @patil-suraj.

## Comparing old and new performance

Some remaining issues:
1. Despite the fixes & improvements, there's still a discrepancy between the no-trainer and original versions for SQuAD v2 and the beam-search version.
2. In particular, for SQuAD v2 and the beam-search variant **without trainer**, both the old and new numbers look very wrong to me.

Please note that to be able to run the SQuAD v2 tests, **I had to apply the utils_qa.py fixes to the old code as well**. Otherwise, it would have just failed.

The metric is F1; the exact-match scores follow the same pattern.

| | previous | fixed |
|-----------------------------------|----------|-------|
| squad v1 | 88.4 | 88.4 |
| squad v1 (no trainer) | 86.7 | 88.5 |
| squad v1 (beam search) | 92.1 | 92.1 |
| squad v1 (beam search no trainer) | 90.2 | 91.0 |
| squad v2 (beam search) | 83.2 | 83.2 |
| squad v2 (beam search no trainer) | 4.9 | 50.1 |
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16923/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16923/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16923", "html_url": "https://github.com/huggingface/transformers/pull/16923", "diff_url": "https://github.com/huggingface/transformers/pull/16923.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16923.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/16922
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16922/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16922/comments
https://api.github.com/repos/huggingface/transformers/issues/16922/events
https://github.com/huggingface/transformers/pull/16922
1,214,236,083
PR_kwDOCUB6oc42tmZ0
16,922
Spanish translation of the file philosophy.mdx
{ "login": "jkmg", "id": 13305243, "node_id": "MDQ6VXNlcjEzMzA1MjQz", "avatar_url": "https://avatars.githubusercontent.com/u/13305243?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jkmg", "html_url": "https://github.com/jkmg", "followers_url": "https://api.github.com/users/jkmg/followers", "following_url": "https://api.github.com/users/jkmg/following{/other_user}", "gists_url": "https://api.github.com/users/jkmg/gists{/gist_id}", "starred_url": "https://api.github.com/users/jkmg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jkmg/subscriptions", "organizations_url": "https://api.github.com/users/jkmg/orgs", "repos_url": "https://api.github.com/users/jkmg/repos", "events_url": "https://api.github.com/users/jkmg/events{/privacy}", "received_events_url": "https://api.github.com/users/jkmg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Spanish translation of the file philosophy.mdx #15947", "Thank you @jkmg! Could you please add `philosophy` to [`transformers/docs/source/es/_toctree.yml`](https://github.com/huggingface/transformers/blob/main/docs/source/es/_toctree.yml)? As a reference, you can use the [new Translation](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md) guide (section \"✍️ Start translating\"). This would allow the tests to pass.", "_The documentation is not available anymore as the PR was closed or merged._", "@jkmg Thank you very much! Merged 🤗. Please let me know, through the #15947, if you wish to translate another part of the docs. \r\n" ]
1,650
1,652
1,652
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16922/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16922/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16922", "html_url": "https://github.com/huggingface/transformers/pull/16922", "diff_url": "https://github.com/huggingface/transformers/pull/16922.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16922.patch", "merged_at": 1652320070000 }
https://api.github.com/repos/huggingface/transformers/issues/16921
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16921/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16921/comments
https://api.github.com/repos/huggingface/transformers/issues/16921/events
https://github.com/huggingface/transformers/pull/16921
1,214,209,011
PR_kwDOCUB6oc42tg5t
16,921
QA examples fixing & clean-up:
{ "login": "searchivarius", "id": 825650, "node_id": "MDQ6VXNlcjgyNTY1MA==", "avatar_url": "https://avatars.githubusercontent.com/u/825650?v=4", "gravatar_id": "", "url": "https://api.github.com/users/searchivarius", "html_url": "https://github.com/searchivarius", "followers_url": "https://api.github.com/users/searchivarius/followers", "following_url": "https://api.github.com/users/searchivarius/following{/other_user}", "gists_url": "https://api.github.com/users/searchivarius/gists{/gist_id}", "starred_url": "https://api.github.com/users/searchivarius/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/searchivarius/subscriptions", "organizations_url": "https://api.github.com/users/searchivarius/orgs", "repos_url": "https://api.github.com/users/searchivarius/repos", "events_url": "https://api.github.com/users/searchivarius/events{/privacy}", "received_events_url": "https://api.github.com/users/searchivarius/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,650
1,650
CONTRIBUTOR
null
Thank you for the great library! I had to clean up the QA examples because of duplicated pre- and post-processing code. While doing so, I encountered a number of issues that I had to fix. Please see the details below.

# What does this PR do?

1. Refactoring: consolidates duplicated post-processing functions in a helper file (now shared between the regular and no-trainer versions).
2. Fixes evaluation errors in utils_qa.py that pop up when you train/evaluate on SQuAD v2 (one newly encountered and one previously reported in #15401).
3. Fixes SQuAD unit tests and ensures all boolean arguments use store_true. Please don't use boolean arguments that don't use store_true: `False`, `false`, or any other non-empty string is converted to `True` in that case, which is clearly not the desired behavior (see the short sketch below).
4. All **no-trainer** test scripts now save metric values in the same way (with the right prefix ``eval_``). Previously, because of the bug described in item 3, the unit test was using SQuAD v2 metrics instead of SQuAD v1 metrics (v2 uses a different metric name for the exact match). This was previously "fixed" at the level of ``run_qa_no_trainer.py``; such a fix is no longer necessary.
5. Adds the forgotten model.eval() call in the **no-trainer** versions. This fully fixed training of the **no-trainer** variant for the regular SQuAD QA model **without** beam search. With **beam search**, the gap decreased but did not close fully; still, it is small. Unfortunately, the beam-search SQuAD v2 version produces strange numbers (the older code is even worse), so this requires extra investigation IMHO. Please see the F1 scores and the discussion below.

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? **I believe examples aren't covered by the documentation.**
- [X] Did you write any new necessary tests? **I trained SQuAD and SQuAD v2 models and compared results (see the discussion below)**, but I am not sure whether any new tests are needed. I also **fixed** a unit test so that it uses the proper SQuAD (v1) metric.

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Perhaps this can be of most interest to @sgugger, @patil-suraj.

## Comparing old and new performance

Some remaining issues:

1. Despite the fixes & improvements, there's still a discrepancy between the no-trainer and original versions for SQuAD v2 and the beam-search variant.
2. In particular, for SQuAD v2 with the beam-search variant **without trainer**, both the old and the new numbers look very wrong to me.

Please note that to be able to run the SQuAD v2 tests, **I had to apply the utils_qa.py fixes to the old code as well**; otherwise, it would have just failed. The metric is F1; the exact-match scores show the same pattern.

|                                   | previous | fixed |
|-----------------------------------|----------|-------|
| squad v1                          | 88.4     | 88.4  |
| squad v1 (no trainer)             | 86.7     | 88.5  |
| squad v1 (beam search)            | 92.1     | 92.1  |
| squad v1 (beam search no trainer) | 90.2     | 91.0  |
| squad v2 (beam search)            | 83.2     | 83.2  |
| squad v2 (beam search no trainer) | 4.9      | 50.1  |
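A minimal sketch of the `argparse` pitfall called out in item 3 above — the flag names here are illustrative, not necessarily the scripts' actual arguments:

```python
import argparse

parser = argparse.ArgumentParser()
# Anti-pattern: a bool-typed argument. argparse applies bool() to the raw
# string, and any non-empty string (including "False") is truthy.
parser.add_argument("--use_beam_search", type=bool, default=False)
# Correct pattern: with store_true, the flag's mere presence means True.
parser.add_argument("--version_2_with_negative", action="store_true")

args = parser.parse_args(["--use_beam_search", "False"])
print(args.use_beam_search)  # True -- the opposite of what the caller intended
```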
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16921/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16921/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16921", "html_url": "https://github.com/huggingface/transformers/pull/16921", "diff_url": "https://github.com/huggingface/transformers/pull/16921.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16921.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/16920
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16920/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16920/comments
https://api.github.com/repos/huggingface/transformers/issues/16920/events
https://github.com/huggingface/transformers/pull/16920
1,214,150,782
PR_kwDOCUB6oc42tVRT
16,920
Fix `KeyError` when initialize the model with `ignore_mismatched_sizes=True`
{ "login": "tricktreat", "id": 25740077, "node_id": "MDQ6VXNlcjI1NzQwMDc3", "avatar_url": "https://avatars.githubusercontent.com/u/25740077?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tricktreat", "html_url": "https://github.com/tricktreat", "followers_url": "https://api.github.com/users/tricktreat/followers", "following_url": "https://api.github.com/users/tricktreat/following{/other_user}", "gists_url": "https://api.github.com/users/tricktreat/gists{/gist_id}", "starred_url": "https://api.github.com/users/tricktreat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tricktreat/subscriptions", "organizations_url": "https://api.github.com/users/tricktreat/orgs", "repos_url": "https://api.github.com/users/tricktreat/repos", "events_url": "https://api.github.com/users/tricktreat/events{/privacy}", "received_events_url": "https://api.github.com/users/tricktreat/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,651
1,651
CONTRIBUTOR
null
# What does this PR do?

<!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. -->

<!-- Remove if not applicable -->

A `KeyError` is thrown when the model is initialized from pre-trained weights with `ignore_mismatched_sizes=True`. Reproduced as follows:

```python
>>> import transformers
>>> transformers.BertModel.from_pretrained("bert-base-cased", ignore_mismatched_sizes=True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/shenyl/miniconda/envs/cuda111/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1882, in from_pretrained
    model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._load_pretrained_model(
  File "/home/shenyl/miniconda/envs/cuda111/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2003, in _load_pretrained_model
    and state_dict[checkpoint_key].shape != model_state_dict[model_key].shape
KeyError: 'bert.embeddings.LayerNorm.weight'
```

The cause is that a key modified by the `_fix_key` function is not found in `state_dict`. The fix is to use the originally loaded keys when looking for mismatched keys.
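A self-contained sketch of the failure mode described above; the renaming helper below is a stand-in for `_fix_key`, not the library's actual implementation:

```python
# Checkpoint state dict still using the legacy "gamma" naming, which the
# loading code normalizes to "weight" before comparing shapes.
state_dict = {"bert.embeddings.LayerNorm.gamma": [0.0]}


def fix_key(key: str) -> str:
    # Stand-in for the normalization applied to checkpoint keys.
    return key.replace("gamma", "weight")


fixed_keys = [fix_key(k) for k in state_dict]

# Bug: indexing state_dict with the *fixed* key, which it never contained.
for key in fixed_keys:
    state_dict[key]  # KeyError: 'bert.embeddings.LayerNorm.weight'
```

The fix keeps the originally loaded keys around and uses those for the lookup instead.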
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16920/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16920/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16920", "html_url": "https://github.com/huggingface/transformers/pull/16920", "diff_url": "https://github.com/huggingface/transformers/pull/16920.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16920.patch", "merged_at": 1651008592000 }
https://api.github.com/repos/huggingface/transformers/issues/16919
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16919/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16919/comments
https://api.github.com/repos/huggingface/transformers/issues/16919/events
https://github.com/huggingface/transformers/pull/16919
1,214,124,042
PR_kwDOCUB6oc42tPxV
16,919
[TESTING] Update build_pr_documentation.yml
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,650
1,650
CONTRIBUTOR
null
Important: not to be merged. Testing https://github.com/huggingface/doc-builder/pull/197
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16919/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16919/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16919", "html_url": "https://github.com/huggingface/transformers/pull/16919", "diff_url": "https://github.com/huggingface/transformers/pull/16919.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16919.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/16918
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16918/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16918/comments
https://api.github.com/repos/huggingface/transformers/issues/16918/events
https://github.com/huggingface/transformers/pull/16918
1,214,005,204
PR_kwDOCUB6oc42s3jk
16,918
refactoring & fixing Pytorch QA examples
{ "login": "searchivarius", "id": 825650, "node_id": "MDQ6VXNlcjgyNTY1MA==", "avatar_url": "https://avatars.githubusercontent.com/u/825650?v=4", "gravatar_id": "", "url": "https://api.github.com/users/searchivarius", "html_url": "https://github.com/searchivarius", "followers_url": "https://api.github.com/users/searchivarius/followers", "following_url": "https://api.github.com/users/searchivarius/following{/other_user}", "gists_url": "https://api.github.com/users/searchivarius/gists{/gist_id}", "starred_url": "https://api.github.com/users/searchivarius/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/searchivarius/subscriptions", "organizations_url": "https://api.github.com/users/searchivarius/orgs", "repos_url": "https://api.github.com/users/searchivarius/repos", "events_url": "https://api.github.com/users/searchivarius/events{/privacy}", "received_events_url": "https://api.github.com/users/searchivarius/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,650
1,650
CONTRIBUTOR
null
# What does this PR do?

This PR improves/fixes the PyTorch QA examples in several ways:

1. extracts duplicated post-processing functions into a helper file (now shared between the regular and no-trainer versions)
2. fixes SQuAD v2 evaluation errors in utils_qa.py (one newly encountered and one previously reported)
3. adds the forgotten model.eval() call in the "no-trainer" versions. This fully fixed training of the **no-trainer** variant for the regular SQuAD QA model. There might still be a small gap left for regular SQuAD (it might just be an unlucky seed) and a big one for SQuAD v2. Please see the numbers and the discussion below.

<!-- Remove if not applicable -->

Fixes #15401 (which was only partially fixed)

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? **I believe examples aren't covered by the documentation.**
- [ ] Did you write any new necessary tests? **I trained SQuAD and SQuAD v2 models and compared results (see the discussion below)**, but I am not sure whether any new tests are needed.

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Perhaps this can be of most interest to @sgugger, @patil-suraj.

## Comparing old and new performance

Some remaining issues:

1. Despite the fixes & improvements, there's still a discrepancy between the no-trainer and original versions for SQuAD v2 and the beam-search variant.
2. In particular, for SQuAD v2 with the beam-search variant **without trainer**, both the old and the new numbers look very wrong to me.

Please note that to be able to run the SQuAD v2 tests, **I had to apply the utils_qa.py fixes to the old code as well**; otherwise, it would have just failed. The metric is F1; the exact-match scores show the same pattern.

|                                   | previous | fixed |
|-----------------------------------|----------|-------|
| squad v1                          | 88.4     | 88.4  |
| squad v1 (no trainer)             | 86.7     | 88.5  |
| squad v1 (beam search)            | 92.1     | 92.1  |
| squad v1 (beam search no trainer) | 90.2     | 91.0  |
| squad v2 (beam search)            | 83.2     | 83.2  |
| squad v2 (beam search no trainer) | 4.9      | 50.1  |
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16918/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16918/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16918", "html_url": "https://github.com/huggingface/transformers/pull/16918", "diff_url": "https://github.com/huggingface/transformers/pull/16918.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16918.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/16917
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16917/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16917/comments
https://api.github.com/repos/huggingface/transformers/issues/16917/events
https://github.com/huggingface/transformers/pull/16917
1,213,975,757
PR_kwDOCUB6oc42sxWi
16,917
refactoring & fixing Pytorch QA examples
{ "login": "searchivarius", "id": 825650, "node_id": "MDQ6VXNlcjgyNTY1MA==", "avatar_url": "https://avatars.githubusercontent.com/u/825650?v=4", "gravatar_id": "", "url": "https://api.github.com/users/searchivarius", "html_url": "https://github.com/searchivarius", "followers_url": "https://api.github.com/users/searchivarius/followers", "following_url": "https://api.github.com/users/searchivarius/following{/other_user}", "gists_url": "https://api.github.com/users/searchivarius/gists{/gist_id}", "starred_url": "https://api.github.com/users/searchivarius/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/searchivarius/subscriptions", "organizations_url": "https://api.github.com/users/searchivarius/orgs", "repos_url": "https://api.github.com/users/searchivarius/repos", "events_url": "https://api.github.com/users/searchivarius/events{/privacy}", "received_events_url": "https://api.github.com/users/searchivarius/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,650
1,650
CONTRIBUTOR
null
# What does this PR do?

This PR improves/fixes the PyTorch QA examples in several ways:

1. extracts duplicated post-processing functions into a helper file (now shared between the regular and no-trainer versions)
2. adds/fixes code to save statistics in some no-trainer versions (one was even buggy)
3. fixes SQuAD v2 evaluation errors in utils_qa.py (one newly encountered and one previously reported)
4. adds the forgotten model.eval() call in the "no-trainer" versions. This fully fixed training of the **no-trainer** variant for the regular SQuAD QA model. There might still be a small gap left for regular SQuAD (it might just be an unlucky seed) and a big one for SQuAD v2. Please see the numbers and the discussion below.

<!-- Remove if not applicable -->

Fixes #15401 (which was only partially fixed)

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? **I believe examples aren't covered by the documentation.** Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? **I trained SQuAD and SQuAD v2 models and compared results (see the discussion below)**, but I am not sure whether any new tests are needed.

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Perhaps this can be of most interest to @sgugger, @patil-suraj.

## Comparing old and new performance

Some remaining issues:

1. Despite the fixes & improvements, there's still a discrepancy between the no-trainer and original versions for SQuAD v2 and the beam-search variant.
2. In particular, for SQuAD v2 with the beam-search variant **without trainer**, both the old and the new numbers look very wrong to me.

Please note that to be able to run the SQuAD v2 tests, **I had to apply the utils_qa.py fixes to the old code as well**; otherwise, it would have just failed. The metric is F1; the exact-match scores show the same pattern.

|                                   | previous | fixed |
|-----------------------------------|----------|-------|
| squad v1                          | 88.4     | 88.4  |
| squad v1 (no trainer)             | 86.7     | 88.5  |
| squad v1 (beam search)            | 92.1     | 92.1  |
| squad v1 (beam search no trainer) | 90.2     | 91.0  |
| squad v2 (beam search)            | 83.2     | 83.2  |
| squad v2 (beam search no trainer) | 4.9      | 50.1  |
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16917/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16917/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16917", "html_url": "https://github.com/huggingface/transformers/pull/16917", "diff_url": "https://github.com/huggingface/transformers/pull/16917.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16917.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/16916
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16916/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16916/comments
https://api.github.com/repos/huggingface/transformers/issues/16916/events
https://github.com/huggingface/transformers/pull/16916
1,213,926,795
PR_kwDOCUB6oc42sm35
16,916
refactoring & fixing Pytorch QA examples
{ "login": "searchivarius", "id": 825650, "node_id": "MDQ6VXNlcjgyNTY1MA==", "avatar_url": "https://avatars.githubusercontent.com/u/825650?v=4", "gravatar_id": "", "url": "https://api.github.com/users/searchivarius", "html_url": "https://github.com/searchivarius", "followers_url": "https://api.github.com/users/searchivarius/followers", "following_url": "https://api.github.com/users/searchivarius/following{/other_user}", "gists_url": "https://api.github.com/users/searchivarius/gists{/gist_id}", "starred_url": "https://api.github.com/users/searchivarius/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/searchivarius/subscriptions", "organizations_url": "https://api.github.com/users/searchivarius/orgs", "repos_url": "https://api.github.com/users/searchivarius/repos", "events_url": "https://api.github.com/users/searchivarius/events{/privacy}", "received_events_url": "https://api.github.com/users/searchivarius/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,650
1,650
CONTRIBUTOR
null
# What does this PR do?

This PR improves/fixes the PyTorch QA examples in several ways:

1. extracts duplicated post-processing functions into a helper file (now shared between the regular and no-trainer versions)
2. adds/fixes code to save statistics in some no-trainer versions (one was even buggy)
3. fixes SQuAD v2 evaluation errors in utils_qa.py (one newly encountered and one previously reported; it was fixed for the regular SQuAD QA model, but not for the model that uses beam search: #15401)
4. adds the forgotten model.eval() call in the "no-trainer" versions.

<!-- Remove if not applicable -->

Fixes #15401 (which was only partially fixed)

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? **I believe examples aren't covered by the documentation.** Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests? **Examples do not have tests; however, I trained SQuAD and SQuAD v2 models and compared results (see the discussion below).**

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Perhaps this can be of interest to @sgugger, @patil-suraj.

## Comparing old and new performance

Let me, for simplicity, just dump all the numbers here. "Old" means the old example code; "new" means the new one (refactored and with model.eval() added). As you can see, in all the cases the new code produces the same or better scores. **The only weird case is SQuAD v2** in the no-trainer mode. Some remaining issues:

1. Frankly speaking, both the old and the new numbers look weird to me.
2. Despite the fixes & improvements, there's still a discrepancy between the no-trainer and original versions.

Also note that to be able to run SQuAD v2, **I had to apply the utils_qa.py fixes to the old code as well**. Otherwise, it would have just failed:

```
./old/run_qa_beam_search/1/eval_results.json: "eval_f1": 92.13525592586892,
./old/run_qa_beam_search_no_trainer/1/eval_results.json: "f1": 90.18714704272388
./old/run_qa_beam_search_squad_v2/1/eval_results.json: "eval_HasAns_f1": 85.72401519644451,
./old/run_qa_beam_search_squad_v2/1/eval_results.json: "eval_NoAns_f1": 80.7569386038688,
./old/run_qa_beam_search_squad_v2/1/eval_results.json: "eval_best_f1": 84.39435986584854,
./old/run_qa_beam_search_squad_v2/1/eval_results.json: "eval_best_f1_thresh": -15.705571174621582,
./old/run_qa_beam_search_squad_v2/1/eval_results.json: "eval_f1": 83.23692092011478,
./old/run_qa_beam_search_no_trainer_squad_v2/1/eval_results.json: "f1": 4.879995879559425,
./old/run_qa_beam_search_no_trainer_squad_v2/1/eval_results.json: "HasAns_f1": 5.23619957456295,
./old/run_qa_beam_search_no_trainer_squad_v2/1/eval_results.json: "NoAns_f1": 4.524810765349033,
./old/run_qa_beam_search_no_trainer_squad_v2/1/eval_results.json: "best_f1": 50.07346266505704,
./old/run_qa_beam_search_no_trainer_squad_v2/1/eval_results.json: "best_f1_thresh": -18.9681339263916
./old/run_qa/1/eval_results.json: "eval_f1": 88.3974945885421,
./old/run_qa_notrainer/1/eval_results.json: "eval_f1": 86.7048555821845,
./new/run_qa_beam_search/1/eval_results.json: "eval_f1": 92.13525592586892,
./new/run_qa_beam_search_no_trainer/1/eval_results.json: "f1": 91.04994418518018
./new/run_qa_beam_search_squad_v2/1/eval_results.json: "eval_HasAns_f1": 85.72401519644451,
./new/run_qa_beam_search_squad_v2/1/eval_results.json: "eval_NoAns_f1": 80.7569386038688,
./new/run_qa_beam_search_squad_v2/1/eval_results.json: "eval_best_f1": 84.39435986584854,
./new/run_qa_beam_search_squad_v2/1/eval_results.json: "eval_best_f1_thresh": -15.705571174621582,
./new/run_qa_beam_search_squad_v2/1/eval_results.json: "eval_f1": 83.23692092011478,
./new/run_qa_beam_search_no_trainer_squad_v2/1/eval_results.json: "f1": 50.07159100480081,
./new/run_qa_beam_search_no_trainer_squad_v2/1/eval_results.json: "HasAns_f1": 0.0,
./new/run_qa_beam_search_no_trainer_squad_v2/1/eval_results.json: "NoAns_f1": 100.0,
./new/run_qa_beam_search_no_trainer_squad_v2/1/eval_results.json: "best_f1": 50.07159100480081,
./new/run_qa_beam_search_no_trainer_squad_v2/1/eval_results.json: "best_f1_thresh": 0.0
./new/run_qa/1/eval_results.json: "eval_f1": 88.3974945885421,
./new/run_qa_notrainer/1/eval_results.json: "f1": 88.45989105569917
```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16916/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16916/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16916", "html_url": "https://github.com/huggingface/transformers/pull/16916", "diff_url": "https://github.com/huggingface/transformers/pull/16916.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16916.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/16915
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16915/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16915/comments
https://api.github.com/repos/huggingface/transformers/issues/16915/events
https://github.com/huggingface/transformers/pull/16915
1,213,919,368
PR_kwDOCUB6oc42slPZ
16,915
Option to return output object from Trainer.evaluate
{ "login": "vishalsrao", "id": 36671559, "node_id": "MDQ6VXNlcjM2NjcxNTU5", "avatar_url": "https://avatars.githubusercontent.com/u/36671559?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vishalsrao", "html_url": "https://github.com/vishalsrao", "followers_url": "https://api.github.com/users/vishalsrao/followers", "following_url": "https://api.github.com/users/vishalsrao/following{/other_user}", "gists_url": "https://api.github.com/users/vishalsrao/gists{/gist_id}", "starred_url": "https://api.github.com/users/vishalsrao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vishalsrao/subscriptions", "organizations_url": "https://api.github.com/users/vishalsrao/orgs", "repos_url": "https://api.github.com/users/vishalsrao/repos", "events_url": "https://api.github.com/users/vishalsrao/events{/privacy}", "received_events_url": "https://api.github.com/users/vishalsrao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Why not use the `predict` method in this case?", "Thanks @sgugger. I didn't know predict can return metrics too." ]
1,650
1,651
1,651
NONE
null
Added an option to return the output object from `Trainer.evaluate` along with the metrics. This can help in analyses that use the predicted results of the evaluation dataset.
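As the reviewer comment above points out, `Trainer.predict` already returns predictions alongside metrics, so the same analysis is possible without a new option; a short sketch (the model and dataset choices here are placeholders):

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

dataset = load_dataset("glue", "sst2", split="validation[:64]")
dataset = dataset.map(lambda ex: tokenizer(ex["sentence"], truncation=True), batched=True)

trainer = Trainer(model=model, args=TrainingArguments(output_dir="tmp"), tokenizer=tokenizer)

# predict() returns a PredictionOutput with .predictions, .label_ids and
# .metrics, so evaluation metrics and raw predictions come back together.
output = trainer.predict(dataset)
print(output.metrics)            # includes test_loss since labels are present
print(output.predictions.shape)  # raw model outputs for further analysis
```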
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16915/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16915/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16915", "html_url": "https://github.com/huggingface/transformers/pull/16915", "diff_url": "https://github.com/huggingface/transformers/pull/16915.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16915.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/16914
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16914/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16914/comments
https://api.github.com/repos/huggingface/transformers/issues/16914/events
https://github.com/huggingface/transformers/issues/16914
1,213,858,275
I_kwDOCUB6oc5IWgHj
16,914
LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking
{ "login": "seanbenhur", "id": 43300345, "node_id": "MDQ6VXNlcjQzMzAwMzQ1", "avatar_url": "https://avatars.githubusercontent.com/u/43300345?v=4", "gravatar_id": "", "url": "https://api.github.com/users/seanbenhur", "html_url": "https://github.com/seanbenhur", "followers_url": "https://api.github.com/users/seanbenhur/followers", "following_url": "https://api.github.com/users/seanbenhur/following{/other_user}", "gists_url": "https://api.github.com/users/seanbenhur/gists{/gist_id}", "starred_url": "https://api.github.com/users/seanbenhur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/seanbenhur/subscriptions", "organizations_url": "https://api.github.com/users/seanbenhur/orgs", "repos_url": "https://api.github.com/users/seanbenhur/repos", "events_url": "https://api.github.com/users/seanbenhur/events{/privacy}", "received_events_url": "https://api.github.com/users/seanbenhur/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "cc @NielsRogge :)" ]
1,650
1,653
1,653
NONE
null
### Model description

LayoutLMv3 is the successor of the LayoutLM models. The models are specialized in multimodal document analysis tasks and achieve SOTA results on them. The current [code](https://github.com/microsoft/unilm/blob/master/layoutlmv3/layoutlmft/models/layoutlmv3/modeling_layoutlmv3.py) for LayoutLMv3 is already implemented in the Hugging Face format. Because of that, I am not sure whether integrating this model into this repo is strictly necessary, but having it available in the library itself would make it easier to use.

### Open source status

- [X] The model implementation is available
- [X] The model weights are available

### Provide useful links for the implementation

Model implementation: https://github.com/microsoft/unilm/tree/master/layoutlmv3
Model weights: https://huggingface.co/microsoft/layoutlmv3-base
Authors: Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16914/reactions", "total_count": 13, "+1": 11, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16914/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16913
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16913/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16913/comments
https://api.github.com/repos/huggingface/transformers/issues/16913/events
https://github.com/huggingface/transformers/pull/16913
1,213,773,314
PR_kwDOCUB6oc42sGiK
16,913
Missing `f` prefix on f-strings fix
{ "login": "code-review-doctor", "id": 72647856, "node_id": "MDQ6VXNlcjcyNjQ3ODU2", "avatar_url": "https://avatars.githubusercontent.com/u/72647856?v=4", "gravatar_id": "", "url": "https://api.github.com/users/code-review-doctor", "html_url": "https://github.com/code-review-doctor", "followers_url": "https://api.github.com/users/code-review-doctor/followers", "following_url": "https://api.github.com/users/code-review-doctor/following{/other_user}", "gists_url": "https://api.github.com/users/code-review-doctor/gists{/gist_id}", "starred_url": "https://api.github.com/users/code-review-doctor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/code-review-doctor/subscriptions", "organizations_url": "https://api.github.com/users/code-review-doctor/orgs", "repos_url": "https://api.github.com/users/code-review-doctor/repos", "events_url": "https://api.github.com/users/code-review-doctor/events{/privacy}", "received_events_url": "https://api.github.com/users/code-review-doctor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,650
1,650
CONTRIBUTOR
null
Fixes #16911
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16913/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16913/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16913", "html_url": "https://github.com/huggingface/transformers/pull/16913", "diff_url": "https://github.com/huggingface/transformers/pull/16913.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16913.patch", "merged_at": 1650914100000 }
https://api.github.com/repos/huggingface/transformers/issues/16912
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16912/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16912/comments
https://api.github.com/repos/huggingface/transformers/issues/16912/events
https://github.com/huggingface/transformers/pull/16912
1,213,709,848
PR_kwDOCUB6oc42r6LE
16,912
TF: XLA logits processors - minimum length, forced eos, and forced bos
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,650
1,650
MEMBER
null
# What does this PR do? (Review after https://github.com/huggingface/transformers/pull/16899) A few more XLA-compatible logits processors -- minimum length, forced eos, and forced bos. Only the first one needed changes, mostly to avoid needless retracing (it actually compiled without changes, but would trigger a retrace at every iteration, which would be super slow; a short sketch of the idea follows below). After this PR, the only remaining processors are the bad words and ngrams ones.
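For illustration, a minimal sketch of the retracing concern (not the library's actual processor code): branching in Python on a tensor value bakes the condition into the trace, so the masking has to stay inside the graph.

```python
import tensorflow as tf


def min_length_scores(scores, cur_len, min_length, eos_token_id):
    # A Python-level `if cur_len < min_length:` would force a retrace at
    # every generation step; a tensor-valued mask keeps the graph static.
    vocab_size = scores.shape[-1]
    eos_mask = tf.one_hot(eos_token_id, vocab_size, dtype=scores.dtype)
    apply = tf.cast(cur_len < min_length, scores.dtype)
    # Push the EOS logit to the dtype minimum while the sequence is too short.
    return scores + apply * eos_mask * scores.dtype.min
```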
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16912/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16912/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16912", "html_url": "https://github.com/huggingface/transformers/pull/16912", "diff_url": "https://github.com/huggingface/transformers/pull/16912.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16912.patch", "merged_at": 1650911273000 }
https://api.github.com/repos/huggingface/transformers/issues/16911
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16911/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16911/comments
https://api.github.com/repos/huggingface/transformers/issues/16911/events
https://github.com/huggingface/transformers/issues/16911
1,213,466,411
I_kwDOCUB6oc5IVAcr
16,911
Missing `f` prefix on f-strings
{ "login": "code-review-doctor", "id": 72647856, "node_id": "MDQ6VXNlcjcyNjQ3ODU2", "avatar_url": "https://avatars.githubusercontent.com/u/72647856?v=4", "gravatar_id": "", "url": "https://api.github.com/users/code-review-doctor", "html_url": "https://github.com/code-review-doctor", "followers_url": "https://api.github.com/users/code-review-doctor/followers", "following_url": "https://api.github.com/users/code-review-doctor/following{/other_user}", "gists_url": "https://api.github.com/users/code-review-doctor/gists{/gist_id}", "starred_url": "https://api.github.com/users/code-review-doctor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/code-review-doctor/subscriptions", "organizations_url": "https://api.github.com/users/code-review-doctor/orgs", "repos_url": "https://api.github.com/users/code-review-doctor/repos", "events_url": "https://api.github.com/users/code-review-doctor/events{/privacy}", "received_events_url": "https://api.github.com/users/code-review-doctor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,650
1,650
1,650
CONTRIBUTOR
null
Some strings look like they're meant to be f-strings but are missing the `f` prefix, meaning variable interpolation won't happen.

https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/src/transformers/utils/hub.py#L751
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/src/transformers/configuration_utils.py#L637
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/src/transformers/pipelines/audio_utils.py#L58
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/src/transformers/pipelines/audio_utils.py#L147
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/src/transformers/models/auto/feature_extraction_auto.py#L310
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/src/transformers/models/xglm/modeling_flax_xglm.py#L156
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/src/transformers/models/plbart/modeling_plbart.py#L1025
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/src/transformers/models/bert/convert_bert_original_tf2_checkpoint_to_pytorch.py#L132
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/src/transformers/models/bart/modeling_bart.py#L1053
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/src/transformers/models/prophetnet/modeling_prophetnet.py#L760
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/tests/extended/test_trainer_ext.py#L269
https://github.com/huggingface/transformers/blob/d91841315aab55cf1347f4eb59332858525fad0f/examples/research_projects/onnx/summarization/bart_onnx/generation_onnx.py#L642

I found this issue automatically. I'm a bot. Beep Boop 🦊. See other issues I found in your repo [here](https://codereview.doctor/huggingface/transformers)
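A minimal illustration of the bug class flagged above:

```python
version = "4.18.0"

message = "Using transformers {version}"   # missing f prefix: printed literally
fixed = f"Using transformers {version}"    # interpolates as intended

print(message)  # Using transformers {version}
print(fixed)    # Using transformers 4.18.0
```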
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16911/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16911/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16910
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16910/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16910/comments
https://api.github.com/repos/huggingface/transformers/issues/16910/events
https://github.com/huggingface/transformers/issues/16910
1,213,442,109
I_kwDOCUB6oc5IU6g9
16,910
push_to_hub on custom tokenizer causing a flood of "deadlock" messages
{ "login": "cakiki", "id": 3664563, "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cakiki", "html_url": "https://github.com/cakiki", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "organizations_url": "https://api.github.com/users/cakiki/orgs", "repos_url": "https://api.github.com/users/cakiki/repos", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "received_events_url": "https://api.github.com/users/cakiki/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,650
1,654
1,654
CONTRIBUTOR
null
### System Info

```shell
- `transformers` version: 4.18.0
- Platform: Linux-4.15.0-176-generic-x86_64-with-glibc2.31
- Python version: 3.9.10
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```

### Who can help?

@SaulLu @Narsil @n1t0

### Information

- [ ] The official example scripts
- [X] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

Reproduction code is available here: [slides.ipynb](https://github.com/cakiki/huggingface-intro/blob/11db121d492762362bc5e1637950e2e269571a0d/slides.ipynb) if you scroll down to `🤗 Tokenizers`.

**TL;DR:** I'm training a custom Rust `WordPiece` tokenizer, wrapping it into a `PreTrainedTokenizerFast`, and calling `.push_to_hub` on that. The push succeeds, but I get the `The current process just got forked, after parallelism has already been used.` message, even when I don't use the tokenizer. I've tried:

- setting `TOKENIZERS_PARALLELISM` to `false` -> all Jupyter notebook cells that need the internet hang (including `push_to_hub`)
- setting `TOKENIZERS_PARALLELISM` to `true` -> only `push_to_hub` hangs

### Expected behavior

```shell
N/A
```
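For context, a sketch of the usual way to set `TOKENIZERS_PARALLELISM` — before the tokenizer is first used rather than mid-session; whether this also resolves the hangs reported above is exactly what is in question here (the file path and repo id below are hypothetical):

```python
import os

# Must be set before tokenizers spins up its thread pool, ideally in the
# very first cell of the notebook.
os.environ["TOKENIZERS_PARALLELISM"] = "false"

from transformers import PreTrainedTokenizerFast

tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json")  # hypothetical path
tokenizer.push_to_hub("my-user/my-wordpiece-tokenizer")               # hypothetical repo id
```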
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16910/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16910/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16909
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16909/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16909/comments
https://api.github.com/repos/huggingface/transformers/issues/16909/events
https://github.com/huggingface/transformers/issues/16909
1,213,405,039
I_kwDOCUB6oc5IUxdv
16,909
Mask Token Spacing in RobertaTokenizer
{ "login": "mpoemsl", "id": 37959974, "node_id": "MDQ6VXNlcjM3OTU5OTc0", "avatar_url": "https://avatars.githubusercontent.com/u/37959974?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mpoemsl", "html_url": "https://github.com/mpoemsl", "followers_url": "https://api.github.com/users/mpoemsl/followers", "following_url": "https://api.github.com/users/mpoemsl/following{/other_user}", "gists_url": "https://api.github.com/users/mpoemsl/gists{/gist_id}", "starred_url": "https://api.github.com/users/mpoemsl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mpoemsl/subscriptions", "organizations_url": "https://api.github.com/users/mpoemsl/orgs", "repos_url": "https://api.github.com/users/mpoemsl/repos", "events_url": "https://api.github.com/users/mpoemsl/events{/privacy}", "received_events_url": "https://api.github.com/users/mpoemsl/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hello @mpoemsl ,\r\n\r\nSorry for the late reply. I don't think it's really a bug but I'd be interested to know why you used the decoding feature with Roberta (and Bert). :smile: ", "Hi @SaulLu, thanks for your reply! \r\n\r\nI have been working on a model that uses representations derived from Transformer language models (e.g. RoBERTa, BERT) and that is trained on annotations on word, token, and character level, so I had to do a lot of mapping between those granularities and was inspecting them with `decode`.\r\n\r\nThe `RobertaTokenizer` has shown a lot of behavior that to me was unexpected, so I am mostly sticking with BERT for now. Another example is the `add_prefix_space=True` behavior:\r\n\r\n```python\r\nfrom transformers import RobertaTokenizer\r\n\r\ntk = RobertaTokenizer.from_pretrained(\"roberta-base\", add_prefix_space=True)\r\nenc = tk([\"Testing\", \"spacing\", \"lorem\", \"ipsum\", \"abc\"], return_tensors=\"pt\", is_split_into_words=True)\r\ndec = tk.batch_decode(enc.input_ids.T)\r\n\r\nprint(dec)\r\n```\r\n\r\nOutput is:\r\n\r\n```\r\n['<s>', ' Testing', ' spacing', ' lore', 'm', ' ', 'ips', 'um', ' ab', 'c', '</s>']\r\n```\r\n\r\nWhat is surprising to me here is that it inserted a literal space token before `'ips'` and not a space-prefixed `' ips'`. It probably has to with whether the space-prefixed token is in the vocab or not.\r\n\r\nI agree that those are just quirks of RoBERTa and not really bugs, but should this behavior perhaps be noted in the documentation somewhere? The [current docs on RobertaTokenizer](https://huggingface.co/docs/transformers/model_doc/roberta#transformers.RobertaTokenizer) do not mention the distinction of whether the space-prefixed version of a token is in the vocab and they do not mention the spacing behavior of special tokens.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,650
1,656
1,656
CONTRIBUTOR
null
### System Info

- `transformers` version: 4.18.0
- Platform: Linux-5.16.13-200.fc35.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.2
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no

### Who can help?

@SaulLu

### Information

- [ ] The official example scripts
- [X] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

```python
from transformers import RobertaTokenizer, BertTokenizer

bert_tk = BertTokenizer.from_pretrained("bert-base-cased")
roberta_tk = RobertaTokenizer.from_pretrained("roberta-base")

enc_bert = bert_tk("Testing the spacing of " + bert_tk.pad_token + " and " + bert_tk.mask_token + " tokens.")
enc_roberta = roberta_tk("Testing the spacing of " + roberta_tk.pad_token + " and " + roberta_tk.mask_token + " tokens.")

dec_bert = bert_tk.decode(enc_bert.input_ids, skip_special_tokens=False)
dec_roberta = roberta_tk.decode(enc_roberta.input_ids, skip_special_tokens=False)

print("bert", dec_bert)
print("roberta", dec_roberta)
```

Output is:

```
bert [CLS] Testing the spacing of [PAD] and [MASK] tokens. [SEP]
roberta <s>Testing the spacing of <pad> and<mask> tokens.</s>
```

### Expected behavior

I expected Roberta's `<mask>` token to be surrounded by spaces as it is done with the `<pad>` token. In BERT, this is done the same way for both special tokens - why not in Roberta? If someone confirms that this is indeed a bug and not expected behavior, I would be happy to try to get to the root cause myself.
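One plausible explanation, offered here as an assumption to verify rather than a confirmed diagnosis: RoBERTa's mask token is registered as an `AddedToken` with `lstrip=True`, so the space before `<mask>` is consumed as part of the token and is not restored on decode. A sketch of how that property could be overridden:

```python
from tokenizers import AddedToken
from transformers import RobertaTokenizer

# Assumption: lstrip=True on the default mask token absorbs the preceding
# space. Overriding it (which may affect mask-filling behavior) should keep
# the space in the decoded text.
tk = RobertaTokenizer.from_pretrained(
    "roberta-base", mask_token=AddedToken("<mask>", lstrip=False)
)
enc = tk("Testing the spacing of " + tk.mask_token + " tokens.")
print(tk.decode(enc.input_ids))
```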
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16909/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16909/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16908
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16908/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16908/comments
https://api.github.com/repos/huggingface/transformers/issues/16908/events
https://github.com/huggingface/transformers/issues/16908
1,213,333,845
I_kwDOCUB6oc5IUgFV
16,908
Name of LayerNorm parameter in RobertaLMHead.
{ "login": "silencio94", "id": 40610160, "node_id": "MDQ6VXNlcjQwNjEwMTYw", "avatar_url": "https://avatars.githubusercontent.com/u/40610160?v=4", "gravatar_id": "", "url": "https://api.github.com/users/silencio94", "html_url": "https://github.com/silencio94", "followers_url": "https://api.github.com/users/silencio94/followers", "following_url": "https://api.github.com/users/silencio94/following{/other_user}", "gists_url": "https://api.github.com/users/silencio94/gists{/gist_id}", "starred_url": "https://api.github.com/users/silencio94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/silencio94/subscriptions", "organizations_url": "https://api.github.com/users/silencio94/orgs", "repos_url": "https://api.github.com/users/silencio94/repos", "events_url": "https://api.github.com/users/silencio94/events{/privacy}", "received_events_url": "https://api.github.com/users/silencio94/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,650
1,654
1,654
NONE
null
As you know, current training code using `weight decay` usually detects target parameters based on their names, e.g.:

```python
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
    {"params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], "weight_decay": args.weight_decay},
    {"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], "weight_decay": 0.0},
]
```

In the code below (https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/models/roberta/modeling_roberta.py#L696), the LayerNorm module is defined as `layer_norm`. Because of the `layer_norm` naming, weight decay will be applied to `RobertaLMHead.layer_norm`, contrary to the intent.

```python
class RobertaLMHead(nn.Module):
    """Roberta Head for masked language modeling."""

    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
        self.decoder = nn.Linear(config.hidden_size, config.vocab_size)
        self.bias = nn.Parameter(torch.zeros(config.vocab_size))
        self.decoder.bias = self.bias

    def forward(self, features, **kwargs):
        x = self.dense(features)
        x = gelu(x)
        x = self.layer_norm(x)

        # project back to size of vocabulary with bias
        x = self.decoder(x)

        return x

    def _tie_weights(self):
        # To tie those two weights if they get disconnected (on TPU or when the bias is resized)
        self.bias = self.decoder.bias
```

Is there any special reason? I don't think it will have a significant impact on training the model, but I wonder what the intention was.
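A sketch of a name-based filter that covers both spellings so that `RobertaLMHead.layer_norm` is excluded from weight decay as well (`model` and `args` are assumed to exist, as in the snippet above):

```python
# Include both naming conventions in the no-decay list.
no_decay = ["bias", "LayerNorm.weight", "layer_norm.weight"]
optimizer_grouped_parameters = [
    {
        "params": [p for n, p in model.named_parameters()
                   if not any(nd in n for nd in no_decay)],
        "weight_decay": args.weight_decay,
    },
    {
        "params": [p for n, p in model.named_parameters()
                   if any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,
    },
]
```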
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16908/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16908/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16907
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16907/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16907/comments
https://api.github.com/repos/huggingface/transformers/issues/16907/events
https://github.com/huggingface/transformers/pull/16907
1,213,328,832
PR_kwDOCUB6oc42q5nY
16,907
[WIP] Enable reproducibility for distributed trainings
{ "login": "hasansalimkanmaz", "id": 49716619, "node_id": "MDQ6VXNlcjQ5NzE2NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/49716619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hasansalimkanmaz", "html_url": "https://github.com/hasansalimkanmaz", "followers_url": "https://api.github.com/users/hasansalimkanmaz/followers", "following_url": "https://api.github.com/users/hasansalimkanmaz/following{/other_user}", "gists_url": "https://api.github.com/users/hasansalimkanmaz/gists{/gist_id}", "starred_url": "https://api.github.com/users/hasansalimkanmaz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hasansalimkanmaz/subscriptions", "organizations_url": "https://api.github.com/users/hasansalimkanmaz/orgs", "repos_url": "https://api.github.com/users/hasansalimkanmaz/repos", "events_url": "https://api.github.com/users/hasansalimkanmaz/events{/privacy}", "received_events_url": "https://api.github.com/users/hasansalimkanmaz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Thanks for working on this, that's an important feature! So as to not introduce a breaking change, and for clarity of the API, I'd personally vouch for not adding the `enable_determinism` flag to the `set_seed` method.\r\n> \r\n> From the title of the method I understand it should set the seed, and that's it. I don't think it should do anything else. However, the `enable_determinism_for_distributed_training` method likely needs the seed to be set in order to benefit from full determinism, so I'd even push to have the `set_seed` method called inside the `enable_determinism_for_distributed_training`, adding a `seed` argument to that last method.\r\n> \r\n> What do you think?\r\n\r\nI like this idea. I can implement it after we reach a conclusion on it, however, it is not clear to me how to implement it. Could you point me to which parts of the code I need to change/pay attention not to break anything if we decide to go for this idea? ", "@sgugger Thanks for the pointers and sorry for not being so clear. I would like to know in which places `enable_full_determinism` should be called. Currently, `set_seed` is called several places in the codebase. I don't think these calls will be replaced with `enable_full_determinism`. \r\n\r\nWith the latest commits, I have already addressed your pointers. Now I am waiting your feedback for where to call `enable_full_determinism` in the codebase. It is not called any place in the codebase right now.", "There can be an added flag in the `TrainingArguments` and we can call this function instead of `set_seed` in the `Trainer`. Otherwise it will be for the users to use this one instead of `set_seed` in their own scripts (you should make it accessible in the main init by the way!)", "@sgugger I think I have addressed all your comments. Is there anything that needs to be done for this PR?", "Is it normal that 3 tests fail suddenly after a commit in a docstring? I couldn't understand why tests are failing.", "Those are just flaky, no link to your PR. Thanks again for all your work on this!", "@sgugger @hasansalimkanmaz I had a question about this PR - why is it necessary to set `CUDA_LAUNCH_BLOCKING`? This disables asynchronous execution of CUDA programs, but the cuda/pytorch docs don't mention it necessary for deterministic training? I do use it to get the \"true\" stack trace when there are device-side asserts but was wondering what role it plays in deterministic training. Many thanks!\r\n", "@alexcoca It's required to make some CUDA algorithms deterministic if the CUDA version is older than 10.2. I suppose it could be replaced by a CUDA version check somehow, and only using it if it's an old version?", "@saattrupdan I would go for this approach, because running the CUDA programs in asynchronous mode will definitely slow things down beyond belief. I implemented this PR myself without the `CUDA_LAUNCH_BLOCKING` setting and will report if I manage to preserve determinism.", "I experimented with training a dialogue state tracking model on the SGD corpus starting from Google's v1.1 T5 (220M) paramaters. I allowed the model to train for roughly two epochs and evaluated task oriented performance every 2k steps (max train steps was 12k). \r\n\r\nRan 4 experiments: 2 in which I set the seed, and an additional 2 where I do roughly the same as `ensure_determinism` except setting `CUDA_LAUNCH_BLOCKING`. I also set `CUBLAS_WORKSPACE_CONFIG=':4096:8'`. 
Each experiment was trained on 2 A100-80GB with `cuda/11.4 openmpi/4.1.1/gcc-9.4.0-epagguv`, `pytorch 1.10` and transformers `4.19.2`. You can see below that I was able to reproduce the metrics in all runs and with no major performance hits. I guess that convolution benchmarking and non-det ops are less relevant for T5. With `4.18.0` the performance was wreaking havoc on the same seed, sign that the data ordering was the culprit. \r\n\r\n![image](https://user-images.githubusercontent.com/30216068/169503692-e85d8e0d-a592-422f-97c9-e1f0d26f1d25.png)\r\n![image](https://user-images.githubusercontent.com/30216068/169505839-580ddbe7-f6b5-4fa4-93e7-826d71fa359a.png)\r\n\r\nI guess the moral of the story here is that one could:\r\n\r\n- Check CUDA version to avoid running in blocking mode when not necessary\r\n- potentially allow the user to specify which `CUBLAS_WORKSPACE_CONFIG` as `:16:8` may impact performance (see [here](https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility))#16907 \r\n\r\n@sgugger ?", "Agreed for the first one. For the second one, we could avoid overriding an existing `CUBLAS_WORKSPACE_CONFIG` if it's already in the env? In all cases, it should be clearly stated in the doc of the flag that triggers the full reproducibility that it comes at a performance price.", "Yes, I agree with the above! I'm at ACL next week but I'll try and open a small PR to address this the week after!\r\n", "Thanks, @alexcoca for noticing this and for your time." ]
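The version-aware approach discussed in the thread above can be sketched as follows. This is only a sketch under stated assumptions, not the merged implementation: the CUDA 10.2 cutoff follows the PyTorch reproducibility notes, and a `CUBLAS_WORKSPACE_CONFIG` already exported by the user is respected.

```python
import os

import torch


def enable_full_determinism_sketch(seed: int):
    # Sketch only: mirrors the ideas discussed above, not the merged code.
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.use_deterministic_algorithms(True)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    # Respect a CUBLAS_WORKSPACE_CONFIG the user may have set already.
    os.environ.setdefault("CUBLAS_WORKSPACE_CONFIG", ":4096:8")
    # Only CUDA versions older than 10.2 need blocking launches for determinism.
    if torch.version.cuda is not None and tuple(map(int, torch.version.cuda.split("."))) < (10, 2):
        os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
```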
1,650
1,653
1,652
CONTRIBUTOR
null
# What does this PR do? This PR ensures reproducibility for distributed training by setting a seed for each dataloader worker and setting environment variables for CUDA. This PR is motivated by [this issue](https://github.com/huggingface/transformers/issues/16549#). ## Who can review? @saattrupdan @sgugger I am looking forward to your feedback
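Per-worker seeding in PyTorch follows the pattern below. This is a generic sketch of the idea (the `TensorDataset` here is a placeholder, not the PR's exact code):

```python
import random

import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset


def seed_worker(worker_id):
    # Derive each worker's seed from the base seed set in the main process,
    # so numpy/random inside workers are reproducible across runs.
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)


dataset = TensorDataset(torch.arange(100))  # placeholder dataset
generator = torch.Generator()
generator.manual_seed(0)
loader = DataLoader(dataset, batch_size=8, num_workers=2, worker_init_fn=seed_worker, generator=generator)
```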
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16907/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16907/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16907", "html_url": "https://github.com/huggingface/transformers/pull/16907", "diff_url": "https://github.com/huggingface/transformers/pull/16907.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16907.patch", "merged_at": 1652276234000 }
https://api.github.com/repos/huggingface/transformers/issues/16906
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16906/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16906/comments
https://api.github.com/repos/huggingface/transformers/issues/16906/events
https://github.com/huggingface/transformers/pull/16906
1,213,271,025
PR_kwDOCUB6oc42qwex
16,906
Add missing whitespaces in RuntimeError message
{ "login": "ftnext", "id": 21273221, "node_id": "MDQ6VXNlcjIxMjczMjIx", "avatar_url": "https://avatars.githubusercontent.com/u/21273221?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ftnext", "html_url": "https://github.com/ftnext", "followers_url": "https://api.github.com/users/ftnext/followers", "following_url": "https://api.github.com/users/ftnext/following{/other_user}", "gists_url": "https://api.github.com/users/ftnext/gists{/gist_id}", "starred_url": "https://api.github.com/users/ftnext/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ftnext/subscriptions", "organizations_url": "https://api.github.com/users/ftnext/orgs", "repos_url": "https://api.github.com/users/ftnext/repos", "events_url": "https://api.github.com/users/ftnext/events{/privacy}", "received_events_url": "https://api.github.com/users/ftnext/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,651
1,651
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #16905 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> --- Add whitespace in the message ```python >>> from transformers import pipeline >>> pipeline(task=None, model=None) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/.../venv/lib/python3.9/site-packages/transformers/pipelines/__init__.py", line 495, in pipeline raise RuntimeError( RuntimeError: Impossible to instantiate a pipeline without either a task or a model being specified. Please provide a task class or a model ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16906/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16906/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16906", "html_url": "https://github.com/huggingface/transformers/pull/16906", "diff_url": "https://github.com/huggingface/transformers/pull/16906.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16906.patch", "merged_at": 1651007308000 }
https://api.github.com/repos/huggingface/transformers/issues/16905
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16905/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16905/comments
https://api.github.com/repos/huggingface/transformers/issues/16905/events
https://github.com/huggingface/transformers/issues/16905
1,213,269,668
I_kwDOCUB6oc5IUQak
16,905
Missing whitespaces at RuntimeError message
{ "login": "ftnext", "id": 21273221, "node_id": "MDQ6VXNlcjIxMjczMjIx", "avatar_url": "https://avatars.githubusercontent.com/u/21273221?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ftnext", "html_url": "https://github.com/ftnext", "followers_url": "https://api.github.com/users/ftnext/followers", "following_url": "https://api.github.com/users/ftnext/following{/other_user}", "gists_url": "https://api.github.com/users/ftnext/gists{/gist_id}", "starred_url": "https://api.github.com/users/ftnext/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ftnext/subscriptions", "organizations_url": "https://api.github.com/users/ftnext/orgs", "repos_url": "https://api.github.com/users/ftnext/repos", "events_url": "https://api.github.com/users/ftnext/events{/privacy}", "received_events_url": "https://api.github.com/users/ftnext/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,650
1,651
1,651
CONTRIBUTOR
null
Trailing whitespaces are missing, https://github.com/huggingface/transformers/blob/31ec2cb2badfbdd4c1ac9c6c9b8a74e974984206/src/transformers/pipelines/__init__.py#L494-L499 so the error message is a little hard to read. ```python >>> from transformers import pipeline >>> pipeline(task=None, model=None) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/.../venv/lib/python3.9/site-packages/transformers/pipelines/__init__.py", line 495, in pipeline raise RuntimeError( RuntimeError: Impossible to instantiate a pipeline without either a task or a modelbeing specified.Please provide a task class or a model ``` - Python 3.9.4 - transformers 4.18.0
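For reference, the root cause is Python's implicit concatenation of adjacent string literals, which inserts no separator; each fragment needs its own trailing space:

```python
# Without trailing spaces the fragments run together ("...modelbeing specified...").
message = (
    "Impossible to instantiate a pipeline without either a task or a model "
    "being specified. "
    "Please provide a task class or a model"
)
print(message)
```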
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16905/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16905/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16904
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16904/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16904/comments
https://api.github.com/repos/huggingface/transformers/issues/16904/events
https://github.com/huggingface/transformers/issues/16904
1,213,266,807
I_kwDOCUB6oc5IUPt3
16,904
Source link of transformers.pipeline is broken
{ "login": "ftnext", "id": 21273221, "node_id": "MDQ6VXNlcjIxMjczMjIx", "avatar_url": "https://avatars.githubusercontent.com/u/21273221?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ftnext", "html_url": "https://github.com/ftnext", "followers_url": "https://api.github.com/users/ftnext/followers", "following_url": "https://api.github.com/users/ftnext/following{/other_user}", "gists_url": "https://api.github.com/users/ftnext/gists{/gist_id}", "starred_url": "https://api.github.com/users/ftnext/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ftnext/subscriptions", "organizations_url": "https://api.github.com/users/ftnext/orgs", "repos_url": "https://api.github.com/users/ftnext/repos", "events_url": "https://api.github.com/users/ftnext/events{/privacy}", "received_events_url": "https://api.github.com/users/ftnext/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sgugger @mishig25 ", "`doc-builder` doesn't like objects defined in `__init__`, will send a fix.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,650
1,654
1,654
CONTRIBUTOR
null
This is about a **documentation link error**. There is an issue on the page https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline <img width="550" alt="image" src="https://user-images.githubusercontent.com/21273221/164890499-54a79a1a-12c7-4776-bcf7-c39bf6f3efc1.png"> **Procedure** Click the `<source>` link; it goes to https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/pipelines.py#L372, which returns a 404 Page not found on GitHub. **Correct link** I found that the correct link is https://github.com/huggingface/transformers/blob/v4.18.0/src/transformers/pipelines/__init__.py#L372 --- It seems that the content is generated by https://github.com/huggingface/transformers/blob/31ec2cb2badfbdd4c1ac9c6c9b8a74e974984206/docs/source/en/main_classes/pipelines.mdx#L123
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16904/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16903
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16903/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16903/comments
https://api.github.com/repos/huggingface/transformers/issues/16903/events
https://github.com/huggingface/transformers/issues/16903
1,213,259,353
I_kwDOCUB6oc5IUN5Z
16,903
T5-base memory usage for inference
{ "login": "Oxi84", "id": 25420033, "node_id": "MDQ6VXNlcjI1NDIwMDMz", "avatar_url": "https://avatars.githubusercontent.com/u/25420033?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Oxi84", "html_url": "https://github.com/Oxi84", "followers_url": "https://api.github.com/users/Oxi84/followers", "following_url": "https://api.github.com/users/Oxi84/following{/other_user}", "gists_url": "https://api.github.com/users/Oxi84/gists{/gist_id}", "starred_url": "https://api.github.com/users/Oxi84/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Oxi84/subscriptions", "organizations_url": "https://api.github.com/users/Oxi84/orgs", "repos_url": "https://api.github.com/users/Oxi84/repos", "events_url": "https://api.github.com/users/Oxi84/events{/privacy}", "received_events_url": "https://api.github.com/users/Oxi84/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "@Oxi84,\r\n\r\nCould you please provide us with a codesnippet that shows the error?", "@patrickvonplaten \r\n\r\nHere is it - it takes 1300 mb of GPU memory and the model size is just 850 mb. I tried all combinations, and 1300 mb is minimum. For example T5 large takes around 2000 MB, and ithe model size is 2.7 GB so it is more than 3X larger. \r\n\r\nI expected T5 base would not use more than 600 MB of GPU memory.\r\n\r\n\r\n from transformers import T5ForConditionalGeneration,T5Tokenizer,T5TokenizerFast \r\n model1b = T5ForConditionalGeneration.from_pretrained(\"iarfmoose/t5-base-question-generator\",cache_dir=\"/root/Desktop/model_cache_tmp1/\")\r\n model1b.eval()\r\n model1b.half()\r\n model1b.to(\"cuda\") \r\n\r\n\r\n", "Good thing is that when i load base and large model together it takes 2.7Gb of memory, while only base is 1.3Gb and only large is 2.3GB. Also adding additional base models seem to take just 200 MB. I suppose there is some kind of general cache you have implemented for all T5 models together.\r\n\r\n\r\n from transformers import T5ForConditionalGeneration,T5Tokenizer,T5TokenizerFast \r\n model1ba = T5ForConditionalGeneration.from_pretrained(\"t5-base\",cache_dir=\"/root/Desktop/model_cache_tmp1/\")\r\n model1ba.eval()\r\n model1ba.half()\r\n model1ba.to(\"cuda\") \r\n\r\n\r\n import torch\r\n from transformers import T5ForConditionalGeneration,T5Tokenizer,T5TokenizerFast \r\n model1b = T5ForConditionalGeneration.from_pretrained(\"t5-large\",cache_dir=\"/root/Desktop/model_cache_tmp1/\")\r\n model1b.eval()\r\n model1b.half()\r\n model1b.to(\"cuda\") \r\n input(11)\r\n", "Hey @Oxi84,\r\n\r\nNote that some GPU memory is always taken up by PyTorch itself. See thread here: https://github.com/huggingface/transformers/pull/16881#issuecomment-1106576937", "Thanks.\r\nIt is awesome idea to merge all the models on one flask file/server, this saved me 1GB of memory.\r\ni use gc collect and cuda empty cache to remove things from memory after each interface and a certain number of batches if not after each..", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,650
1,654
1,654
NONE
null
### System Info ```shell Ubuntu 18.04 RTX 2080 ``` ### Who can help? @patrickvonplaten ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Just load T5-base from any example. ### Expected behavior Is it normal that T5-base uses around 1000 MB of GPU memory in half precision (FP-16) for inference? Many other models use roughly 50 percent of the FP-32 model size in FP-16, which here should be only 400-500 MB. I read that the model https://huggingface.co/google/t5-efficient-small-el16 is 350 MB in size, so it uses roughly 50% of that (175 MB) when used for inference. Is there anything I can do to reduce the memory usage? T5-large, for example, also in FP-16 uses around 2 GB, which is much better. I expected memory usage to be 50% of the model size for FP-16.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16903/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16903/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16902
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16902/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16902/comments
https://api.github.com/repos/huggingface/transformers/issues/16902/events
https://github.com/huggingface/transformers/pull/16902
1,213,201,654
PR_kwDOCUB6oc42qk8f
16,902
Migrate Azure blob storage for BEiT checkpoints
{ "login": "donglixp", "id": 1070872, "node_id": "MDQ6VXNlcjEwNzA4NzI=", "avatar_url": "https://avatars.githubusercontent.com/u/1070872?v=4", "gravatar_id": "", "url": "https://api.github.com/users/donglixp", "html_url": "https://github.com/donglixp", "followers_url": "https://api.github.com/users/donglixp/followers", "following_url": "https://api.github.com/users/donglixp/following{/other_user}", "gists_url": "https://api.github.com/users/donglixp/gists{/gist_id}", "starred_url": "https://api.github.com/users/donglixp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/donglixp/subscriptions", "organizations_url": "https://api.github.com/users/donglixp/orgs", "repos_url": "https://api.github.com/users/donglixp/repos", "events_url": "https://api.github.com/users/donglixp/events{/privacy}", "received_events_url": "https://api.github.com/users/donglixp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,652
1,652
CONTRIBUTOR
null
## Motivation We are going to use a new blob account to store the checkpoints. ## Modification Modify the azure blob storage URLs for BEiT checkpoints. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16902/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16902/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16902", "html_url": "https://github.com/huggingface/transformers/pull/16902", "diff_url": "https://github.com/huggingface/transformers/pull/16902.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16902.patch", "merged_at": 1652353695000 }
https://api.github.com/repos/huggingface/transformers/issues/16901
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16901/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16901/comments
https://api.github.com/repos/huggingface/transformers/issues/16901/events
https://github.com/huggingface/transformers/pull/16901
1,213,071,759
PR_kwDOCUB6oc42qNex
16,901
[WIP] Prevent BERT `attention_mask` from promoting dtype in self-attention `forward()`
{ "login": "awgu", "id": 31054793, "node_id": "MDQ6VXNlcjMxMDU0Nzkz", "avatar_url": "https://avatars.githubusercontent.com/u/31054793?v=4", "gravatar_id": "", "url": "https://api.github.com/users/awgu", "html_url": "https://github.com/awgu", "followers_url": "https://api.github.com/users/awgu/followers", "following_url": "https://api.github.com/users/awgu/following{/other_user}", "gists_url": "https://api.github.com/users/awgu/gists{/gist_id}", "starred_url": "https://api.github.com/users/awgu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/awgu/subscriptions", "organizations_url": "https://api.github.com/users/awgu/orgs", "repos_url": "https://api.github.com/users/awgu/repos", "events_url": "https://api.github.com/users/awgu/events{/privacy}", "received_events_url": "https://api.github.com/users/awgu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,650
1,650
NONE
null
### Overview ~The `attention_mask` passed into `BertSelfAttention.forward()` is taken from the BERT model's first parameter dtype at **construction time**.~ <details> <summary>Code Pointers</summary> https://github.com/huggingface/transformers/blob/22fc93c4d9608fa9cd171b4f3044f8c756f86773/src/transformers/models/bert/modeling_bert.py#L985 https://github.com/huggingface/transformers/blob/22fc93c4d9608fa9cd171b4f3044f8c756f86773/src/transformers/modeling_utils.py#L655 https://github.com/huggingface/transformers/blob/22fc93c4d9608fa9cd171b4f3044f8c756f86773/src/transformers/modeling_utils.py#L556-L561 https://github.com/huggingface/transformers/blob/22fc93c4d9608fa9cd171b4f3044f8c756f86773/src/transformers/modeling_utils.py#L123-L125 </details> However, if a user only casts down the input or parameter dtype(s) (e.g. to `torch.float16`) **after** construction time and before running `forward()`, the `extended_attention_mask` will still have dtype `torch.float32` rather than the reduced precision dtype. Due to PyTorch [type promotion semantics](https://pytorch.org/docs/stable/tensor_attributes.html#type-promotion-doc), `attention_scores = attention_scores + attention_mask` will promote `attention_scores` to `torch.float32` and propagate through the remainder of the forward pass, defeating the intention of the reduced precision. https://github.com/huggingface/transformers/blob/22fc93c4d9608fa9cd171b4f3044f8c756f86773/src/transformers/models/bert/modeling_bert.py#L343 In order to avoid this behavior, the model parameters must be cast to the reduced precision at construction time, which is restrictive. This PR ensures that the addition does not change the dtype. **Before** `attention_scores`: FP16 `attention_mask`: FP32 ==> FP32 `attention_scores`: FP32 `attention_mask`: FP16 ==> FP32 **After** `attention_scores`: FP16 `attention_mask`: FP32 -> FP16 ==> FP16 `attention_scores`: FP32 `attention_mask`: FP16 -> FP32 ==> FP32 (Hence, we see that the reverse setting has no change in behavior, while for the desired setting, we prevent the unwanted type promotion.) ### Test Plan I ran `tests/bert/test_modeling_bert.py` locally with 4 GPUs and 68 passed and 11 skipped. Since this is a small change, direct inspection may be more valuable.
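The behavioral change boils down to casting the mask to the scores' dtype before the addition. A minimal standalone sketch (the tensor shapes are illustrative, not BERT's actual ones):

```python
import torch

attention_scores = torch.randn(1, 12, 8, 8).half()  # fp16 scores
attention_mask = torch.zeros(1, 1, 1, 8)             # fp32 mask, as built at construction time

# Cast the mask so the addition does not promote the scores back to float32.
attention_scores = attention_scores + attention_mask.to(attention_scores.dtype)
assert attention_scores.dtype == torch.float16
```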
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16901/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16901", "html_url": "https://github.com/huggingface/transformers/pull/16901", "diff_url": "https://github.com/huggingface/transformers/pull/16901.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16901.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/16900
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16900/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16900/comments
https://api.github.com/repos/huggingface/transformers/issues/16900/events
https://github.com/huggingface/transformers/pull/16900
1,212,762,040
PR_kwDOCUB6oc42pTS8
16,900
Add missing ckpt in config docs
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Thanks for fixing all of those! It would be awesome to have some kind of quality script to check we don't introduce new faulty checkpoints.\r\n\r\nYes, I do have some (draft) check locally. I plan to add it in another PR (unless it's necessary to do so in this PR).", "Thank you @NielsRogge I should try to use the correct names, as defined in `MODEL_NAMES_MAPPING`.", "> Thanks a lot for this PR, awesome that this gets improved.\r\n> \r\n> Left some comments, just for consistency, I would always use the template:\r\n> \r\n> > \"will yield a similar configuration of that of the - snake-cased model name - [checkpoint name](link) architecture\".\r\n\r\nI will add this to the check I currently have (locally, but will push to another PR), thanks!", "Merge now. Thanks for the review.\r\n\r\nWith this PR, all configs are good except the following (which are expected, since those composite models don't have full default config arguments - they rely on the encoder and decoder configs.)\r\n\r\n- DecisionTransformerConfig\r\n- VisionEncoderDecoderConfig\r\n- VisionTextDualEncoderConfig\r\n- CLIPConfig\r\n- SpeechEncoderDecoderConfig\r\n- EncoderDecoderConfig\r\n- RagConfig\r\n\r\n" ]
1,650
1,650
1,650
COLLABORATOR
null
# What does this PR do? As discussed on Slack, I worked on the `Config` files to add missing information about checkpoints or to correct existing entries. - I tried to check that the mentioned checkpoints are actually on the Hub - also tried to make sure the checkpoints are for the target architecture - I didn't verify the statement `Instantiating a configuration with the defaults will yield a similar configuration to that of the Speech2Text2 [mentioned checkpoint]` - in particular, the hyperparameters like `hidden_dim`, `num_layers` might be different - it says `similar`, so I think it is fine (..?) @patrickvonplaten Could you take a look at the speech models? @NielsRogge Could you take a look at the vision models?
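A quality check like the one mentioned here could verify that each checkpoint referenced in a config docstring exists on the Hub. A sketch using `huggingface_hub` (the exception type for a missing repo varies across hub versions, hence the broad catch):

```python
from huggingface_hub import model_info


def checkpoint_exists(checkpoint: str) -> bool:
    # Query the Hub for repo metadata; any failure is treated as "missing".
    try:
        model_info(checkpoint)
        return True
    except Exception:
        return False


assert checkpoint_exists("bert-base-uncased")
```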
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16900/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16900/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16900", "html_url": "https://github.com/huggingface/transformers/pull/16900", "diff_url": "https://github.com/huggingface/transformers/pull/16900.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16900.patch", "merged_at": 1650900705000 }
https://api.github.com/repos/huggingface/transformers/issues/16899
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16899/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16899/comments
https://api.github.com/repos/huggingface/transformers/issues/16899/events
https://github.com/huggingface/transformers/pull/16899
1,212,659,459
PR_kwDOCUB6oc42o9Z3
16,899
TF: XLA Logits Warpers
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@patrickvonplaten Sorry, I know you've already reviewed this, but I'm going to re-request your review. I realized the tests were much easier to understand (and with fewer lines) if they were parametrized, instead of having two tests (one for XLA, another for non-XLA) with shared code 😅 " ]
1,650
1,657
1,650
MEMBER
null
# What does this PR do? This PR enables XLA on the logits warpers... which actually needed no changes. In essence, it adds XLA tests to ensure we don't regress.
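A self-contained sketch of the parametrized eager/XLA test pattern described here; the warper below is a toy top-k filter written for illustration, not the library's own class:

```python
import tensorflow as tf


def top_k_warp(scores, top_k=2, filter_value=-1e9):
    # Keep the top_k logits per row and push the rest to filter_value,
    # the core operation of a top-k logits warper.
    kth = tf.math.top_k(scores, k=top_k).values[..., -1, None]
    return tf.where(scores < kth, tf.fill(tf.shape(scores), filter_value), scores)


class TopKWarpTest(tf.test.TestCase):
    def test_top_k_eager_and_xla(self):
        scores = tf.constant([[0.1, 0.4, 0.2, 0.3]])
        for use_xla in (False, True):  # run the same check eagerly and under XLA
            with self.subTest(use_xla=use_xla):
                fn = tf.function(top_k_warp, jit_compile=True) if use_xla else top_k_warp
                out = fn(scores)
                self.assertAllEqual(out[0] > -1e8, [False, True, False, True])


if __name__ == "__main__":
    tf.test.main()
```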
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16899/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16899/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16899", "html_url": "https://github.com/huggingface/transformers/pull/16899", "diff_url": "https://github.com/huggingface/transformers/pull/16899.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16899.patch", "merged_at": 1650912488000 }
https://api.github.com/repos/huggingface/transformers/issues/16898
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16898/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16898/comments
https://api.github.com/repos/huggingface/transformers/issues/16898/events
https://github.com/huggingface/transformers/issues/16898
1,212,658,220
I_kwDOCUB6oc5IR7Is
16,898
ValueError: cannot find context for 'fork' when calling processor_with_lm.batch_decode(logits)
{ "login": "elsheikh21", "id": 26064109, "node_id": "MDQ6VXNlcjI2MDY0MTA5", "avatar_url": "https://avatars.githubusercontent.com/u/26064109?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elsheikh21", "html_url": "https://github.com/elsheikh21", "followers_url": "https://api.github.com/users/elsheikh21/followers", "following_url": "https://api.github.com/users/elsheikh21/following{/other_user}", "gists_url": "https://api.github.com/users/elsheikh21/gists{/gist_id}", "starred_url": "https://api.github.com/users/elsheikh21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elsheikh21/subscriptions", "organizations_url": "https://api.github.com/users/elsheikh21/orgs", "repos_url": "https://api.github.com/users/elsheikh21/repos", "events_url": "https://api.github.com/users/elsheikh21/events{/privacy}", "received_events_url": "https://api.github.com/users/elsheikh21/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Related https://github.com/woven-planet/l5kit/issues/129", "Hey @elsheikh21,\r\n\r\nLet's try to narrow the bug further down :-) \r\n\r\nDoes the following work for you: \r\n\r\n```python\r\nfrom multiprocessing import get_context\r\npool = get_context(\"fork\").Pool(num_processes)\r\npool.close()\r\n```\r\n\r\n?\r\n", "Hello @patrickvonplaten \r\n\r\nI have tried to run \r\n```python\r\nfrom multiprocessing import get_context\r\nnum_processes = 8\r\npool = get_context(\"fork\").Pool(num_processes)\r\npool.close()\r\n```\r\n\r\nand got the following traceback\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"C:\\Users\\AhmedElSheikh\\AppData\\Local\\Programs\\Python\\Python38\\lib\\multiprocessing\\context.py\", line 239, in get_context\r\n return super().get_context(method)\r\n File \"C:\\Users\\AhmedElSheikh\\AppData\\Local\\Programs\\Python\\Python38\\lib\\multiprocessing\\context.py\", line 193, in get_context\r\n raise ValueError('cannot find context for %r' % method) from None\r\nValueError: cannot find context for 'fork'\r\n```\r\n\r\nSystem Information\r\n`Windows 11`\r\n`Python 3.8.10`", "> Related [woven-planet/l5kit#129](https://github.com/woven-planet/l5kit/issues/129)\r\n\r\nI have read this thread, yet the error itself occurs when I call processor.batch_decode and I am working on the project not just to be used on my local device only", "> Hello @patrickvonplaten\r\n> \r\n> I have tried to run\r\n> \r\n> ```python\r\n> from multiprocessing import get_context\r\n> num_processes = 8\r\n> pool = get_context(\"fork\").Pool(num_processes)\r\n> pool.close()\r\n> ```\r\n> \r\n> and got the following traceback\r\n> \r\n> ```\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"C:\\Users\\AhmedElSheikh\\AppData\\Local\\Programs\\Python\\Python38\\lib\\multiprocessing\\context.py\", line 239, in get_context\r\n> return super().get_context(method)\r\n> File \"C:\\Users\\AhmedElSheikh\\AppData\\Local\\Programs\\Python\\Python38\\lib\\multiprocessing\\context.py\", line 193, in get_context\r\n> raise ValueError('cannot find context for %r' % method) from None\r\n> ValueError: cannot find context for 'fork'\r\n> ```\r\n> \r\n> System Information `Windows 11` `Python 3.8.10`\r\n\r\nThis seems to be the error then.\r\n\r\nCould you try to replace `\"fork\"` with `\"spawn\"`? ", "If `\"spawn\"` works then it might make most sense to just update `\"fork\"` to `\"spawn\"` ", "I have tried to run with `\"spawn\"` and it works fine, but in that case I will need to change the file `transformers.models.wav2vec2_with_lm.processing_wav2vec2_with_lm.py` and I guess that wont work when I run the same code on another machine, is there a way to force `\"spawn\"` when `\"fork\"` does not work?", "Think we can just replace `\"fork\"` with `\"spawn\"` - do you want to open a PR to fix it? :-)", "> Think we can just replace `\"fork\"` with `\"spawn\"` - do you want to open a PR to fix it? :-)\r\n\r\nYes, I would happily do that, I guess it would be something along those lines? please feel free to modify my approach. 
Otherwise I will start reading about collaborating and how to open PR \r\n\r\n```python\r\ntry:\r\n pool = get_context(\"fork\").Pool(num_processes)\r\nexcept ValueError as exc:\r\n if \"cannot find context for 'fork'\" in exc:\r\n pool = get_context(\"spawn\").Pool(num_processes)\r\n logging.info(\"Switching to \\\"spawn\\\" as \\\"fork\\\" context is not found\")\r\n```", "I think we can actually just change `\"fork\"` to `\"spawn\"` (no need for a try, ... expect IMO). According to https://stackoverflow.com/questions/64095876/multiprocessing-fork-vs-spawn and some other docs, `\"spawn\"` is safe and given that the child process is LM-boosted decoding (which is always slow), doing the switch should be fine", "> I think we can actually just change `\"fork\"` to `\"spawn\"` (no need for a try, ... expect IMO). According to https://stackoverflow.com/questions/64095876/multiprocessing-fork-vs-spawn and some other docs, `\"spawn\"` is safe and given that the child process is LM-boosted decoding (which is always slow), doing the switch should be fine\r\n\r\n\r\nOkay let us do it your way then, I have also created a custom dataset loader (from flac/wav audio files) and model finetuner, evaluator if those can be helpful for the community I would love to share them as well\r\n\r\nFor now I will open a PR for `spawn` and `fork`", "Exactly same problem here, also trying to run this under Windows 10 and getting the same error, when in processing_wav2vec2_with_lm.py, line 316, gets \"fork\" from context.\r\nBut since I see it's already being fixed, I'll just thank and wait 👍 ", "> Exactly same problem here, also trying to run this under Windows 10 and getting the same error, when in processing_wav2vec2_with_lm.py, line 316, gets \"fork\" from context. But since I see it's already being fixed, I'll just thank and wait 👍\r\n\r\nas a quick fix you can replace \"fork\" with \"spawn\" in the line ` pool = get_context(\"fork\").Pool(num_processes)`, file `transformers.models.wav2vec2_with_lm.processing_wav2vec2_with_lm.py`\r\n", "@ADD-eNavarro @elsheikh21 sorry I don't work with Windows usually and am a bit buried with other issues. Regarding the PR please lemme know if anything isn't clear, happy trying to be more precise - in short I think we should try to apply the exact same solution that was applied in `pyctcdecode`", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
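A platform-aware variant of the fix discussed above can be sketched as follows; this is only a sketch of the idea, not the code that was merged:

```python
import multiprocessing


def get_pool(num_processes):
    # Prefer "fork" where the platform provides it (Linux, macOS); fall back
    # to "spawn" on platforms such as Windows that do not.
    methods = multiprocessing.get_all_start_methods()
    method = "fork" if "fork" in methods else "spawn"
    return multiprocessing.get_context(method).Pool(num_processes)


if __name__ == "__main__":  # required under "spawn" so workers can re-import safely
    pool = get_pool(4)
    pool.close()
    pool.join()
```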
1,650
1,658
1,658
NONE
null
### System Info ```shell ## Environment info - `transformers` version: 4.17.0 - Platform: Windows-10-10.0.22000-SP0 - Python version: 3.8.13 - PyTorch version (GPU?): 1.9.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ``` ### Who can help? @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ## To reproduce - The model I am using (Wav2Vec2.0 Large XLS-R 53 English): - Steps to reproduce the behavior: 1. I am [fine-tuning Wav2Vec with LM Head](https://huggingface.co/blog/fine-tune-wav2vec2-english) using WikiText to produce a 5-gram LM. I downloaded the fine-tuned model dir locally and was able to perform inference on my audio `.wav` file(s) 2. Please find [here](https://drive.google.com/drive/folders/1IBUTglXLw4IX8uKC0qmGKKhkoCvc3s94?usp=sharing) the model files, a test audio file, and requirements.txt, if needed to reproduce the problem ### Code snippet ```python import torch from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM from datasets import load_dataset import soundfile as sf from os import getcwd from os.path import join as path_join from pprint import pprint model_name = "jonatasgrosman/wav2vec2-large-xlsr-53-english" model = Wav2Vec2ForCTC.from_pretrained(model_name) processor_path = path_join(getcwd(), "stt_assets", "stt_model") processor = Wav2Vec2ProcessorWithLM.from_pretrained(processor_path) dataset = load_dataset("timit_asr", split="test").shuffle().shuffle().select(range(100)) char_translations = str.maketrans({"-": " ", ",": "", ".": "", "?": ""}) def prepare_example(example): example["speech"], _ = sf.read(example["file"]) example["text"] = example["text"].translate(char_translations) example["text"] = " ".join(example["text"].split()) # clean up whitespace example["text"] = example["text"].lower() return example dataset = dataset.map(prepare_example, remove_columns=["file"]) pprint(dataset) speech = dataset["speech"] features = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(**features).logits # logits shape is torch.Size([100, 304, 33]) transcription = processor.batch_decode(logits) # EXCEPTION IS RAISED in `processor.batch_decode()` ValueError: cannot find context for 'fork' print(transcription) ``` ### Expected behavior ``` What I am expecting is that I get a list of transcriptions from `processor.batch_decode()`, but instead I get this `ValueError: cannot find context for 'fork'` exception. I am using Windows 11; I have tried to research it and I suspect it is related to multiprocessing, but I could not figure out how to solve it yet ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16898/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16898/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16897
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16897/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16897/comments
https://api.github.com/repos/huggingface/transformers/issues/16897/events
https://github.com/huggingface/transformers/pull/16897
1,212,657,899
PR_kwDOCUB6oc42o9EW
16,897
Fix typos BigBird ONNX conversion
{ "login": "chainyo", "id": 50595514, "node_id": "MDQ6VXNlcjUwNTk1NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/50595514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chainyo", "html_url": "https://github.com/chainyo", "followers_url": "https://api.github.com/users/chainyo/followers", "following_url": "https://api.github.com/users/chainyo/following{/other_user}", "gists_url": "https://api.github.com/users/chainyo/gists{/gist_id}", "starred_url": "https://api.github.com/users/chainyo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chainyo/subscriptions", "organizations_url": "https://api.github.com/users/chainyo/orgs", "repos_url": "https://api.github.com/users/chainyo/repos", "events_url": "https://api.github.com/users/chainyo/events{/privacy}", "received_events_url": "https://api.github.com/users/chainyo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,650
1,650
CONTRIBUTOR
null
# What does this PR do? I tried to convert one `BigBird` model to ONNX with the recent PR merged #16427 But it seems that there is a typo in the `src/transformers/onnx/features.py` file. I also fixed the typo in the test_v2 file. Here is the error I get while trying to convert `google/bigbird-roberta-base`. ```bash $ python -m transformers.onnx --model=google/bigbird-roberta-base onnx/ > KeyError: "big-bird is not supported yet. > Only ['albert', 'bart', 'mbart', 'bert', 'bigbird', > 'ibert', 'camembert', 'distilbert', 'flaubert', 'marian', 'm2m-100', 'roberta', 't5', > 'xlm-roberta', 'gpt2', 'gptj', 'gpt-neo', 'layoutlm', 'electra', 'vit', 'beit', 'blenderbot', > 'blenderbot-small', 'data2vec-text'] are supported. > If you want to support big-bird please propose a PR or open up an issue." ``` As you can see in the error `bigbird` should be `big-bird`. ping @lewtun @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16897/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16897/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16897", "html_url": "https://github.com/huggingface/transformers/pull/16897", "diff_url": "https://github.com/huggingface/transformers/pull/16897.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16897.patch", "merged_at": 1650879126000 }
https://api.github.com/repos/huggingface/transformers/issues/16896
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16896/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16896/comments
https://api.github.com/repos/huggingface/transformers/issues/16896/events
https://github.com/huggingface/transformers/pull/16896
1,212,574,829
PR_kwDOCUB6oc42osXC
16,896
MobileBERT tokenizer tests
{ "login": "leondz", "id": 121934, "node_id": "MDQ6VXNlcjEyMTkzNA==", "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leondz", "html_url": "https://github.com/leondz", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "organizations_url": "https://api.github.com/users/leondz/orgs", "repos_url": "https://api.github.com/users/leondz/repos", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "received_events_url": "https://api.github.com/users/leondz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Obviously - thanks!", "Hi, @leondz \r\n\r\nThe main branch has recently merged a PR that changes test folders, like \r\n```\r\ntests/mobilebert -> tests/models/mobilebert\r\n```\r\nCould you follow the ideas shown in the instructions in [this](https://github.com/huggingface/transformers/pull/17008#issuecomment-1116059265) to incorporate the changes into your working branch. Thank you. (You might need to fix a few import places)\r\n", "> Hi, @leondz\r\n> \r\n> The main branch has recently merged a PR that changes test folders, like\r\n> \r\n> ```\r\n> tests/mobilebert -> tests/models/mobilebert\r\n> ```\r\n> \r\n> Could you follow the ideas shown in the instructions in [this](https://github.com/huggingface/transformers/pull/17008#issuecomment-1116059265) to incorporate the changes into your working branch. Thank you. (You might need to fix a few import places)\r\n\r\nThanks for this, it makes sense. By the way, `make fixup` seems to adjust content in /examples and /docs in a way that looks mistaken - out of scope for this PR but is that something to be looked at?\r\n\r\ne.g.\r\n```\r\n\r\n--- a/docs/source/en/model_doc/bert-generation.mdx\r\n+++ b/docs/source/en/model_doc/bert-generation.mdx\r\n@@ -49,7 +49,7 @@ Usage:\r\n \r\n >>> input_ids = tokenizer(\r\n ... \"This is a long article to summarize\", add_special_tokens=False, return_tensors=\"pt\"\r\n-... ).input_ids\r\n+>>> ).input_ids\r\n >>> labels = tokenizer(\"This is a short summary\", return_tensors=\"pt\").input_ids\r\n \r\n\r\n```", "Could you check this comment\r\n\r\nhttps://github.com/huggingface/transformers/pull/17008#issuecomment-1115007653\r\n\r\nand see if it works well? That's my first thought :-)", "> By the way, `make fixup` seems to adjust content in /examples and /docs in a way that looks mistaken - out of scope for this PR but is that something to be looked at?\r\n> \r\n> e.g.\r\n> \r\n> ```\r\n> \r\n> --- a/docs/source/en/model_doc/bert-generation.mdx\r\n> +++ b/docs/source/en/model_doc/bert-generation.mdx\r\n> @@ -49,7 +49,7 @@ Usage:\r\n> \r\n> >>> input_ids = tokenizer(\r\n> ... \"This is a long article to summarize\", add_special_tokens=False, return_tensors=\"pt\"\r\n> -... ).input_ids\r\n> +>>> ).input_ids\r\n> >>> labels = tokenizer(\"This is a short summary\", return_tensors=\"pt\").input_ids\r\n> \r\n> ```\r\n\r\nOK this was fixed in https://github.com/huggingface/doc-builder/pull/207 :)\r\n\r\n\r\n\r\n\r\n> Could you check this comment\r\n> \r\n> [#17008 (comment)](https://github.com/huggingface/transformers/pull/17008#issuecomment-1115007653)\r\n> \r\n> and see if it works well? That's my first thought :-)\r\n\r\nYes! All done :)" ]
1,650
1,652
1,652
CONTRIBUTOR
null
# What does this PR do? This PR implements tests for MobileBERT. As MobileBERT uses a copy of the BERT Tokenizer, the test inherits from BertTokenizationTest, and also checks that the merge & vocab files for these two models are identical. This contributes fixes to issue #16627 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? cc. @LysandreJik @SaulLu
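The inheritance pattern described above can be sketched as below; the class names and the relative import path are written from memory of the library layout and should be treated as assumptions rather than the PR's exact code:

```python
from transformers import MobileBertTokenizer, MobileBertTokenizerFast

# Hypothetical relative import; the BERT test module path may differ per release.
from ..bert.test_tokenization_bert import BertTokenizationTest


class MobileBERTTokenizationTest(BertTokenizationTest):
    """Inherit every BERT tokenizer test and swap in the MobileBERT classes."""

    tokenizer_class = MobileBertTokenizer
    rust_tokenizer_class = MobileBertTokenizerFast
    pre_trained_model_path = "google/mobilebert-uncased"  # hypothetical attribute name
```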
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16896/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16896/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16896", "html_url": "https://github.com/huggingface/transformers/pull/16896", "diff_url": "https://github.com/huggingface/transformers/pull/16896.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16896.patch", "merged_at": 1652215198000 }
https://api.github.com/repos/huggingface/transformers/issues/16895
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16895/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16895/comments
https://api.github.com/repos/huggingface/transformers/issues/16895/events
https://github.com/huggingface/transformers/issues/16895
1,212,572,854
I_kwDOCUB6oc5IRmS2
16,895
Datasets: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'
{ "login": "loretoparisi", "id": 163333, "node_id": "MDQ6VXNlcjE2MzMzMw==", "avatar_url": "https://avatars.githubusercontent.com/u/163333?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loretoparisi", "html_url": "https://github.com/loretoparisi", "followers_url": "https://api.github.com/users/loretoparisi/followers", "following_url": "https://api.github.com/users/loretoparisi/following{/other_user}", "gists_url": "https://api.github.com/users/loretoparisi/gists{/gist_id}", "starred_url": "https://api.github.com/users/loretoparisi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loretoparisi/subscriptions", "organizations_url": "https://api.github.com/users/loretoparisi/orgs", "repos_url": "https://api.github.com/users/loretoparisi/repos", "events_url": "https://api.github.com/users/loretoparisi/events{/privacy}", "received_events_url": "https://api.github.com/users/loretoparisi/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi @loretoparisi 👋 this seems to be a `datasets` issue, I'd suggest opening an issue there 👉 https://github.com/huggingface/datasets", "Thank you opened!\n\nhttps://github.com/huggingface/datasets/issues/4210" ]
1,650
1,650
1,650
CONTRIBUTOR
null
### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.10.0+cu111 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from datasets import load_dataset,Features,Value,ClassLabel class_names = ["cmn","deu","rus","fra","eng","jpn","spa","ita","kor","vie","nld","epo","por","tur","heb","hun","ell","ind","ara","arz","fin","bul","yue","swe","ukr","bel","que","ces","swh","nno","wuu","nob","zsm","est","kat","pol","lat","urd","sqi","isl","fry","afr","ron","fao","san","bre","tat","yid","uig","uzb","srp","qya","dan","pes","slk","eus","cycl","acm","tgl","lvs","kaz","hye","hin","lit","ben","cat","bos","hrv","tha","orv","cha","mon","lzh","scn","gle","mkd","slv","frm","glg","vol","ain","jbo","tok","ina","nds","mal","tlh","roh","ltz","oss","ido","gla","mlt","sco","ast","jav","oci","ile","ota","xal","tel","sjn","nov","khm","tpi","ang","aze","tgk","tuk","chv","hsb","dsb","bod","sme","cym","mri","ksh","kmr","ewe","kab","ber","tpw","udm","lld","pms","lad","grn","mlg","xho","pnb","grc","hat","lao","npi","cor","nah","avk","mar","guj","pan","kir","myv","prg","sux","crs","ckt","bak","zlm","hil","cbk","chr","nav","lkt","enm","arq","lin","abk","pcd","rom","gsw","tam","zul","awa","wln","amh","bar","hbo","mhr","bho","mrj","ckb","osx","pfl","mgm","sna","mah","hau","kan","nog","sin","glv","dng","kal","liv","vro","apc","jdt","fur","che","haw","yor","crh","pdc","ppl","kin","shs","mnw","tet","sah","kum","ngt","nya","pus","hif","mya","moh","wol","tir","ton","lzz","oar","lug","brx","non","mww","hak","nlv","ngu","bua","aym","vec","ibo","tkl","bam","kha","ceb","lou","fuc","smo","gag","lfn","arg","umb","tyv","kjh","oji","cyo","urh","kzj","pam","srd","lmo","swg","mdf","gil","snd","tso","sot","zza","tsn","pau","som","egl","ady","asm","ori","dtp","cho","max","kam","niu","sag","ilo","kaa","fuv","nch","hoc","iba","gbm","sun","war","mvv","pap","ary","kxi","csb","pag","cos","rif","kek","krc","aii","ban","ssw","tvl","mfe","tah","bvy","bcl","hnj","nau","nst","afb","quc","min","tmw","mad","bjn","mai","cjy","got","hsn","gan","tzl","dws","ldn","afh","sgs","krl","vep","rue","tly","mic","ext","izh","sma","jam","cmo","mwl","kpv","koi","bis","ike","run","evn","ryu","mnc","aoz","otk","kas","aln","akl","yua","shy","fkv","gos","fij","thv","zgh","gcf","cay","xmf","tig","div","lij","rap","hrx","cpi","tts","gaa","tmr","iii","ltg","bzt","syc","emx","gom","chg","osp","stq","frr","fro","nys","toi","new","phn","jpa","rel","drt","chn","pli","laa","bal","hdn","hax","mik","ajp","xqa","pal","crk","mni","lut","ayl","ood","sdh","ofs","nus","kiu","diq","qxq","alt","bfz","klj","mus","srn","guc","lim","zea","shi","mnr","bom","sat","szl"] features = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')}) num_labels = features['label'].num_classes data_files = { "train": "train.csv", "test": "test.csv" } sentences = load_dataset("loretoparisi/tatoeba-sentences", data_files=data_files, delimiter='\t', 
column_names=['label', 'text'], features = features ``` ERROR: ``` ClassLabel(num_classes=403, names=['cmn', 'deu', 'rus', 'fra', 'eng', 'jpn', 'spa', 'ita', 'kor', 'vie', 'nld', 'epo', 'por', 'tur', 'heb', 'hun', 'ell', 'ind', 'ara', 'arz', 'fin', 'bul', 'yue', 'swe', 'ukr', 'bel', 'que', 'ces', 'swh', 'nno', 'wuu', 'nob', 'zsm', 'est', 'kat', 'pol', 'lat', 'urd', 'sqi', 'isl', 'fry', 'afr', 'ron', 'fao', 'san', 'bre', 'tat', 'yid', 'uig', 'uzb', 'srp', 'qya', 'dan', 'pes', 'slk', 'eus', 'cycl', 'acm', 'tgl', 'lvs', 'kaz', 'hye', 'hin', 'lit', 'ben', 'cat', 'bos', 'hrv', 'tha', 'orv', 'cha', 'mon', 'lzh', 'scn', 'gle', 'mkd', 'slv', 'frm', 'glg', 'vol', 'ain', 'jbo', 'tok', 'ina', 'nds', 'mal', 'tlh', 'roh', 'ltz', 'oss', 'ido', 'gla', 'mlt', 'sco', 'ast', 'jav', 'oci', 'ile', 'ota', 'xal', 'tel', 'sjn', 'nov', 'khm', 'tpi', 'ang', 'aze', 'tgk', 'tuk', 'chv', 'hsb', 'dsb', 'bod', 'sme', 'cym', 'mri', 'ksh', 'kmr', 'ewe', 'kab', 'ber', 'tpw', 'udm', 'lld', 'pms', 'lad', 'grn', 'mlg', 'xho', 'pnb', 'grc', 'hat', 'lao', 'npi', 'cor', 'nah', 'avk', 'mar', 'guj', 'pan', 'kir', 'myv', 'prg', 'sux', 'crs', 'ckt', 'bak', 'zlm', 'hil', 'cbk', 'chr', 'nav', 'lkt', 'enm', 'arq', 'lin', 'abk', 'pcd', 'rom', 'gsw', 'tam', 'zul', 'awa', 'wln', 'amh', 'bar', 'hbo', 'mhr', 'bho', 'mrj', 'ckb', 'osx', 'pfl', 'mgm', 'sna', 'mah', 'hau', 'kan', 'nog', 'sin', 'glv', 'dng', 'kal', 'liv', 'vro', 'apc', 'jdt', 'fur', 'che', 'haw', 'yor', 'crh', 'pdc', 'ppl', 'kin', 'shs', 'mnw', 'tet', 'sah', 'kum', 'ngt', 'nya', 'pus', 'hif', 'mya', 'moh', 'wol', 'tir', 'ton', 'lzz', 'oar', 'lug', 'brx', 'non', 'mww', 'hak', 'nlv', 'ngu', 'bua', 'aym', 'vec', 'ibo', 'tkl', 'bam', 'kha', 'ceb', 'lou', 'fuc', 'smo', 'gag', 'lfn', 'arg', 'umb', 'tyv', 'kjh', 'oji', 'cyo', 'urh', 'kzj', 'pam', 'srd', 'lmo', 'swg', 'mdf', 'gil', 'snd', 'tso', 'sot', 'zza', 'tsn', 'pau', 'som', 'egl', 'ady', 'asm', 'ori', 'dtp', 'cho', 'max', 'kam', 'niu', 'sag', 'ilo', 'kaa', 'fuv', 'nch', 'hoc', 'iba', 'gbm', 'sun', 'war', 'mvv', 'pap', 'ary', 'kxi', 'csb', 'pag', 'cos', 'rif', 'kek', 'krc', 'aii', 'ban', 'ssw', 'tvl', 'mfe', 'tah', 'bvy', 'bcl', 'hnj', 'nau', 'nst', 'afb', 'quc', 'min', 'tmw', 'mad', 'bjn', 'mai', 'cjy', 'got', 'hsn', 'gan', 'tzl', 'dws', 'ldn', 'afh', 'sgs', 'krl', 'vep', 'rue', 'tly', 'mic', 'ext', 'izh', 'sma', 'jam', 'cmo', 'mwl', 'kpv', 'koi', 'bis', 'ike', 'run', 'evn', 'ryu', 'mnc', 'aoz', 'otk', 'kas', 'aln', 'akl', 'yua', 'shy', 'fkv', 'gos', 'fij', 'thv', 'zgh', 'gcf', 'cay', 'xmf', 'tig', 'div', 'lij', 'rap', 'hrx', 'cpi', 'tts', 'gaa', 'tmr', 'iii', 'ltg', 'bzt', 'syc', 'emx', 'gom', 'chg', 'osp', 'stq', 'frr', 'fro', 'nys', 'toi', 'new', 'phn', 'jpa', 'rel', 'drt', 'chn', 'pli', 'laa', 'bal', 'hdn', 'hax', 'mik', 'ajp', 'xqa', 'pal', 'crk', 'mni', 'lut', 'ayl', 'ood', 'sdh', 'ofs', 'nus', 'kiu', 'diq', 'qxq', 'alt', 'bfz', 'klj', 'mus', 'srn', 'guc', 'lim', 'zea', 'shi', 'mnr', 'bom', 'sat', 'szl'], id=None) Value(dtype='string', id=None) Using custom data configuration loretoparisi--tatoeba-sentences-7b2c5e991f398f39 Downloading and preparing dataset csv/loretoparisi--tatoeba-sentences to /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-7b2c5e991f398f39/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519... 
Downloading data files: 100% 2/2 [00:18<00:00, 8.06s/it] Downloading data: 100% 391M/391M [00:13<00:00, 35.3MB/s] Downloading data: 100% 92.4M/92.4M [00:02<00:00, 36.5MB/s] Failed to read file '/root/.cache/huggingface/datasets/downloads/933132df9905194ea9faeb30cabca8c49318795612f6495fcb941a290191dd5d' with error <class 'ValueError'>: invalid literal for int() with base 10: 'cmn' --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens() TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe' During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) 15 frames /usr/local/lib/python3.7/dist-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens() ValueError: invalid literal for int() with base 10: 'cmn' ``` while loading without `features` it loads without errors ``` sentences = load_dataset("loretoparisi/tatoeba-sentences", data_files=data_files, delimiter='\t', column_names=['label', 'text'] ) ``` but the `label` col seems to be wrong (without the `ClassLabel` object): ``` sentences['train'].features {'label': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None)} ``` The dataset was https://huggingface.co/datasets/loretoparisi/tatoeba-sentences Dataset format is: ``` ces Nechci vědět, co je tam uvnitř. ces Kdo o tom chce slyšet? deu Tom sagte, er fühle sich nicht wohl. ber Mel-iyi-d anida-t tura ? hun Gondom lesz rá rögtön. ber Mel-iyi-d anida-tt tura ? deu Ich will dich nicht reden hören. ``` ### Expected behavior ```shell correctly load train and test files. ```
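As a hedged sketch of a workaround for the error above (note also that the reproduction snippet is missing its closing parenthesis): let the CSV loader read the labels as plain strings, then cast the column to a `ClassLabel` afterwards. The abbreviated `class_names` below stands in for the full list from the report.

```python
# Hedged workaround sketch: load `label` as a string, then cast it.
# `class_names` is abbreviated here; use the full list from the report.
from datasets import ClassLabel, load_dataset

class_names = ["cmn", "deu", "rus"]  # ...plus the remaining ~400 codes

sentences = load_dataset(
    "loretoparisi/tatoeba-sentences",
    data_files={"train": "train.csv", "test": "test.csv"},
    delimiter="\t",
    column_names=["label", "text"],
)  # note: the snippet above is missing this closing parenthesis

# cast the string column to a ClassLabel after loading
sentences = sentences.cast_column("label", ClassLabel(names=class_names))
```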
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16895/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16895/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16894
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16894/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16894/comments
https://api.github.com/repos/huggingface/transformers/issues/16894/events
https://github.com/huggingface/transformers/pull/16894
1,212,469,956
PR_kwDOCUB6oc42oV6E
16,894
Remove device parameter from create_extended_attention_mask_for_decoder
{ "login": "pbelevich", "id": 1160355, "node_id": "MDQ6VXNlcjExNjAzNTU=", "avatar_url": "https://avatars.githubusercontent.com/u/1160355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pbelevich", "html_url": "https://github.com/pbelevich", "followers_url": "https://api.github.com/users/pbelevich/followers", "following_url": "https://api.github.com/users/pbelevich/following{/other_user}", "gists_url": "https://api.github.com/users/pbelevich/gists{/gist_id}", "starred_url": "https://api.github.com/users/pbelevich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pbelevich/subscriptions", "organizations_url": "https://api.github.com/users/pbelevich/orgs", "repos_url": "https://api.github.com/users/pbelevich/repos", "events_url": "https://api.github.com/users/pbelevich/events{/privacy}", "received_events_url": "https://api.github.com/users/pbelevich/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "This seems legit for me, pinging @LysandreJik, @sgugger and @ydshieh to comment on this.\r\n", "LGTM, as it uses the device from the argument `attention_mask`.\r\n\r\nhttps://github.com/huggingface/transformers/blob/5d59df5e880cba38b1f2aa69acb8e5db0d84841f/src/transformers/modeling_utils.py#L592-L594\r\n\r\nThank you for reducing the potential issue!\r\n\r\n(Please wait the approvals from sgugger or LysandreJik before merge 🙏 )", "@sgugger thanks for the code review! all comments have been addressed", "Thanks! Pinging @LysandreJik for final review :-)" ]
1,650
1,651
1,651
CONTRIBUTOR
null
# What does this PR do? This PR removes the redundant `device` parameter from `create_extended_attention_mask_for_decoder`, which may cause issues if the passed `device` is not equal to `attention_mask.device`; see line [`modeling_utils.py#L610`](https://github.com/huggingface/transformers/blob/6d90d76f5db344e333ec184cc6f414abe2aa6559/src/transformers/modeling_utils.py#L610). Explanation: tracing the logic from line 610 back to the method signature gives: `causal_mask.device` == `attention_mask.device` => `seq_ids.device` == `attention_mask.device` => `device` == `attention_mask.device` @michaelbenayoun Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
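As a hedged illustration of the behavior this PR relies on, here is a simplified version of the function with the device taken from `attention_mask` itself; the library version also handles a causal mask shorter than the attention mask (past key values), which is omitted here.

```python
# Simplified sketch of create_extended_attention_mask_for_decoder after
# the change: the device is derived from `attention_mask`, so a separate
# `device` argument can no longer disagree with it.
import torch

def create_extended_attention_mask_for_decoder(input_shape, attention_mask):
    device = attention_mask.device  # single source of truth for placement
    batch_size, seq_length = input_shape
    seq_ids = torch.arange(seq_length, device=device)
    causal_mask = seq_ids[None, None, :].repeat(batch_size, seq_length, 1) <= seq_ids[None, :, None]
    causal_mask = causal_mask.to(attention_mask.dtype)
    # broadcast to (batch, 1, seq, seq) and combine with the padding mask
    return causal_mask[:, None, :, :] * attention_mask[:, None, None, :]
```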
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16894/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16894/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16894", "html_url": "https://github.com/huggingface/transformers/pull/16894", "diff_url": "https://github.com/huggingface/transformers/pull/16894.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16894.patch", "merged_at": 1651590371000 }
https://api.github.com/repos/huggingface/transformers/issues/16893
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16893/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16893/comments
https://api.github.com/repos/huggingface/transformers/issues/16893/events
https://github.com/huggingface/transformers/pull/16893
1,212,434,281
PR_kwDOCUB6oc42oOWZ
16,893
Make create_extended_attention_mask_for_decoder static method
{ "login": "pbelevich", "id": 1160355, "node_id": "MDQ6VXNlcjExNjAzNTU=", "avatar_url": "https://avatars.githubusercontent.com/u/1160355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pbelevich", "html_url": "https://github.com/pbelevich", "followers_url": "https://api.github.com/users/pbelevich/followers", "following_url": "https://api.github.com/users/pbelevich/following{/other_user}", "gists_url": "https://api.github.com/users/pbelevich/gists{/gist_id}", "starred_url": "https://api.github.com/users/pbelevich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pbelevich/subscriptions", "organizations_url": "https://api.github.com/users/pbelevich/orgs", "repos_url": "https://api.github.com/users/pbelevich/repos", "events_url": "https://api.github.com/users/pbelevich/events{/privacy}", "received_events_url": "https://api.github.com/users/pbelevich/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "(just a comment)\r\n\r\nWould it be possible to provide the code sample for the issue that occurs without this PR, or a link to the issue page?", "Looks good to me once all the tests pass.\r\nPinging @sgugger for review!", "> Would it be possible to provide the code sample for the issue that occurs without this PR, or a link to the issue page?\r\n\r\nThe project will be released and the repo will be opened soon", "> Looks good to me once all the tests pass.\r\n\r\n@michaelbenayoun @sgugger all tests passed", "Thanks again for your contribution!" ]
1,650
1,651
1,651
CONTRIBUTOR
null
# What does this PR do? `create_extended_attention_mask_for_decoder` doesn't access `self` and can be a `@staticmethod`. This resolves some issues with fx tracing for a PyTorch pipeline-parallelism project. cc @michaelbenayoun Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
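A hedged usage sketch of what the change enables: once the method is static, it can be called without instantiating a model, which is what symbolic tracers prefer. The call below assumes a recent transformers version in which the `device` argument has also been dropped (see the companion PR above).

```python
# Sketch: with the method static, no model instance is needed to build
# the extended decoder mask.
import torch
from transformers.modeling_utils import ModuleUtilsMixin

attention_mask = torch.ones(2, 5, dtype=torch.long)
extended = ModuleUtilsMixin.create_extended_attention_mask_for_decoder(
    (2, 5), attention_mask
)
print(extended.shape)  # torch.Size([2, 1, 5, 5])
```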
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16893/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16893", "html_url": "https://github.com/huggingface/transformers/pull/16893", "diff_url": "https://github.com/huggingface/transformers/pull/16893.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16893.patch", "merged_at": 1651244229000 }
https://api.github.com/repos/huggingface/transformers/issues/16892
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16892/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16892/comments
https://api.github.com/repos/huggingface/transformers/issues/16892/events
https://github.com/huggingface/transformers/pull/16892
1,212,413,861
PR_kwDOCUB6oc42oJ83
16,892
TF: XLA stable softmax
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "This looks good to me! Do you think it would be better to change `stable_softmax` to only add the offset if we're running on CPU? It makes very little difference either way, but we could hide the complexity of that inside `stable_softmax` and keep our code paths entirely unchanged on GPU. I'm not certain, though - since it's such a small change maybe we can just do it everywhere.", "> This looks good to me! Do you think it would be better to change `stable_softmax` to only add the offset if we're running on CPU? It makes very little difference either way, but we could hide the complexity of that inside `stable_softmax` and keep our code paths entirely unchanged on GPU. I'm not certain, though - since it's such a small change maybe we can just do it everywhere.\r\n\r\nGood point! Hope this won't affect tests on GPU (at least not for PT/TF equivalence which use `1e-5`). Let's see!", "@Rocketknight1 @ydshieh if you run the test and print the difference between `stable_softmax` and `tf.nn.softmax`, the difference is exactly `0.0` -- I don't think we need to worry about that :D", "@gante With this, do we still have issues regarding sampling in `generate()`. Sorry, I didn't really follow that issue about sampling, but would like to know a bit more 😄 ", "@ydshieh after this fix, the errors related to `generate()` are gone -- they were caused by the forward pass in the models, which in turn were caused by the issue this PR solves", "(I might be completely wrong below)\r\n\r\nI could imagine that we (will) have tests like:\r\n\r\n- testing non-XLA and XLA `generte()` that use sampling\r\n - even with this PR, the differences of output logits between these two might still be as large as, say, `1e-3`?\r\n - if so, the sampling might give different sampling results ..?\r\n - if not, what's the magnitude of the diff we get after this PR?\r\n - testing PT and TF `generte()` that use sampling\r\n - so same potential issue as above ..? \r\n\r\nThanks 🙏 ", "OK, I saw your previous comment\r\n\r\n```\r\nI've spun up an Nvidia T4 ( = no tf32 format) and got an error < 1e-5 for all cases\r\n```", "Based on the testing results, I'm happy for this to be merged now! If this is an XLA bug, though, we should make sure to revert our changes once none of the TF versions we support are affected by it anymore.\r\n\r\nShould we add a TODO to the `masked_softmax` function or a reminder somewhere to make sure that we document why this change is here, and when it can be removed?", "@Rocketknight1 added a TODO with instructions related to when to deprecate 👍 " ]
1,650
1,650
1,650
MEMBER
null
# What does this PR do? As discussed in the thread about XLA problems (https://github.com/huggingface/transformers/issues/16838), this PR adds a stable wrapper for the softmax operation, and replaces `tf.nn.softmax` by the wrapped function. This PR: - Adds the wrapped softmax, named `stable_softmax`, in `tf_utils.py`. Its docstring includes why it is needed and why the new operation is valid; - Adds tests to the wrapped softmax, including XLA tests; - Replaces `tf.nn.softmax` by `stable_softmax` everywhere except in the doctests (I think it overcomplicates the examples, and no XLA should be needed there); - Removes the `skipIf` for XLA tests, as they can now be successfully executed in a CPU. Closes #16838
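For reference, a hedged sketch of the wrapper being added, assuming the tiny-offset approach discussed in the linked issue; the exact constant and signature in the merged version may differ. Shifting every logit by the same constant leaves the softmax output mathematically unchanged, but it steers XLA on CPU away from the problematic constant-folding path.

```python
# Hedged sketch of the stable softmax wrapper; the 1e-9 offset is the
# value discussed in issue #16838 and is an assumption here.
import tensorflow as tf

def stable_softmax(logits: tf.Tensor, axis: int = -1, name: str = None) -> tf.Tensor:
    # softmax(x + c) == softmax(x) for any scalar c, so correctness is
    # preserved while the XLA-on-CPU bug is avoided
    return tf.nn.softmax(logits=logits + 1e-9, axis=axis, name=name)
```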
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16892/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16892/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16892", "html_url": "https://github.com/huggingface/transformers/pull/16892", "diff_url": "https://github.com/huggingface/transformers/pull/16892.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16892.patch", "merged_at": 1650913851000 }
https://api.github.com/repos/huggingface/transformers/issues/16891
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16891/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16891/comments
https://api.github.com/repos/huggingface/transformers/issues/16891/events
https://github.com/huggingface/transformers/pull/16891
1,212,407,947
PR_kwDOCUB6oc42oIrb
16,891
Minor fixes/improvements in `convert_file_size_to_int`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,650
1,650
CONTRIBUTOR
null
Fix `convert_file_size_to_int`'s docstring example and the GB str to int conversion (per [this comment](https://github.com/huggingface/datasets/pull/4190#discussion_r856022534) by @lhoestq).
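For readers without the diff at hand, a hedged sketch of what such a size-string converter looks like after the fix, using decimal multipliers for KB/MB/GB and binary multipliers for the `*iB` units; the exact unit set and error message are assumptions.

```python
# Hedged sketch of a size-string parser; unit handling mirrors the fix
# described above (decimal for GB/MB/KB, binary for GiB/MiB/KiB).
def convert_file_size_to_int(size):
    if isinstance(size, int):
        return size
    units = {
        "KIB": 2**10, "MIB": 2**20, "GIB": 2**30,  # binary units
        "KB": 10**3, "MB": 10**6, "GB": 10**9,     # decimal units
    }
    upper = size.upper()
    for suffix, multiplier in units.items():
        if upper.endswith(suffix):
            return int(size[: -len(suffix)]) * multiplier
    raise ValueError(f"`size={size}` is not in a valid format. Use an integer followed by a unit, e.g. '5GB'.")
```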
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16891/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16891/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16891", "html_url": "https://github.com/huggingface/transformers/pull/16891", "diff_url": "https://github.com/huggingface/transformers/pull/16891.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16891.patch", "merged_at": 1650639260000 }
https://api.github.com/repos/huggingface/transformers/issues/16890
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16890/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16890/comments
https://api.github.com/repos/huggingface/transformers/issues/16890/events
https://github.com/huggingface/transformers/issues/16890
1,212,362,452
I_kwDOCUB6oc5IQy7U
16,890
LED Model returns AlgorithmError when using SageMaker SMP training
{ "login": "kanwari3", "id": 62451944, "node_id": "MDQ6VXNlcjYyNDUxOTQ0", "avatar_url": "https://avatars.githubusercontent.com/u/62451944?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kanwari3", "html_url": "https://github.com/kanwari3", "followers_url": "https://api.github.com/users/kanwari3/followers", "following_url": "https://api.github.com/users/kanwari3/following{/other_user}", "gists_url": "https://api.github.com/users/kanwari3/gists{/gist_id}", "starred_url": "https://api.github.com/users/kanwari3/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kanwari3/subscriptions", "organizations_url": "https://api.github.com/users/kanwari3/orgs", "repos_url": "https://api.github.com/users/kanwari3/repos", "events_url": "https://api.github.com/users/kanwari3/events{/privacy}", "received_events_url": "https://api.github.com/users/kanwari3/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "cc @philschmid ", "I would also suggest @kanwari3 to\r\n- try to use the same Python/PyTorch/transformers versions (and other libraries) on SageMaker that work locally (if possible)\r\n- if the above doesn't work, try to use on local machine the same versions as those used on SageMaker, and see if you still get errors\r\n\r\nSo we have a better idea about if this is indeed a SageMaker issue or libraries issue", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "cc @philschmid , cc @ydshieh , cc @sgugger \r\nHi, \r\n\r\nThis is a follow up on this post with the same title. We are trying to fix the issue and are still getting the same error after trying out several fixes including matching the python, transformers, and pytorch versions according to the recommendations (3.8, 4.16.2, and 1.10.2, respectively):\r\n\r\n-ValueError: not enough values to unpack (expected 2, got 1)\r\n\r\nThe error is in the “modeling_led” within the transformers module expecting a different input_ids shape. We tried unsqueezing the input_ids and attention_masks but it didn’t fix the error.\r\n\r\nNew Update is we tried below to unsqueeze input tensors to the \"modeling_led\" to solve the above error:\r\ndef unsqueeze_col(example):\r\n return {\"input_ids\": torch.unsqueeze(example[\"input_ids\"], 0)}\r\npubmed_train = pubmed_train.map(unsqueeze_col)\r\n\r\nI’d greatly appreciate your feedback. Please let me know if you need any further information about the project." ]
1,650
1,656
1,653
NONE
null
### System Info ```shell using sagemaker mpi_options = { "enabled" : True, "processes_per_host" : 8 } smp_options = { "enabled":True, "parameters": { "microbatches": 1, "placement_strategy": "spread", "pipeline": "interleaved", "optimize": "memory", "partitions": 2, "ddp": True, } } distribution={ "smdistributed": {"modelparallel": smp_options}, "mpi": mpi_options } hyperparameters={'epochs': 1, 'train_batch_size': 1, 'eval_batch_size': 1, 'model_name': 'HHousen/distil-led-large-cnn-16384', 'output_dir': 'bucket', 'warmup_steps': 25, 'checkpoint_s3_uri': 'bucket', 'logging_steps':100, 'evaluation_strategy':"steps", 'gradient_accumulation_steps':10 } huggingface_estimator = HuggingFace(entry_point='trainer.py', source_dir='./scripts', instance_type='ml.p3.16xlarge', instance_count=1, role=role, volume=100, transformers_version='4.6.1', pytorch_version='1.8.1', py_version='py36', hyperparameters=hyperparameters, distribution=distribution) ``` ### Who can help? @ydshieh @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Create the HuggingFace estimator 2. training_args = Seq2SeqTrainingArguments( predict_with_generate=True, evaluation_strategy="steps", per_device_train_batch_size=1, per_device_eval_batch_size=1, fp16=True, fp16_backend="apex", output_dir=s3_bucket, logging_steps=50, warmup_steps=25, gradient_accumulation_steps=10, ) The error I get: [1,0]<stderr>: File "/opt/conda/lib/python3.6/site-packages/smdistributed/modelparallel/torch/patches/tracing.py", line 68, in trace_forward [1,0]<stderr>: raise e [1,0]<stderr>: File "/opt/conda/lib/python3.6/site-packages/smdistributed/modelparallel/torch/patches/tracing.py", line 51, in trace_forward [1,0]<stderr>: output = original_forward(self, *args, **kwargs) [1,0]<stderr>: File "/opt/conda/lib/python3.6/site-packages/transformers/models/led/modeling_led.py", line 125, in forward [1,0]<stderr>: return super().forward(positions) [1,0]<stderr>: File "/opt/conda/lib/python3.6/site-packages/smdistributed/modelparallel/torch/patches/tracing.py", line 68, in trace_forward [1,0]<stderr>: raise e [1,0]<stderr>: File "/opt/conda/lib/python3.6/site-packages/smdistributed/modelparallel/torch/patches/tracing.py", line 51, in trace_forward [1,0]<stderr>: output = original_forward(self, *args, **kwargs) [1,0]<stderr>: File "/opt/conda/lib/python3.6/site-packages/transformers/models/led/modeling_led.py", line 121, in forward [1,0]<stderr>: bsz, seq_len = input_ids_shape[:2] [1,0]<stderr>:ValueError: not enough values to unpack (expected 2, got 1) -------------------------------------------------------------------------- Primary job terminated normally, but 1 process returned a non-zero exit code. Per user-direction, the job has been aborted. -------------------------------------------------------------------------- -------------------------------------------------------------------------- mpirun.real detected that one or more processes exited with non-zero status, thus causing the job to be terminated. 
The first process to do so was: Process name: [[41156,1],0] Exit code: 1 -------------------------------------------------------------------------- ### Expected behavior ```shell Training on a SageMaker notebook p3dn.24xlarge using fairscale `simple` and these versions: transformers-4.16.2, torch-1.10.2, fairscale-0.4.5, py37. I can successfully train the LED model with my training data. Trying to get it to work with the HuggingFace estimator and SageMaker SMP, I would assume the same outcome. ```
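For context on the `not enough values to unpack` error in the traceback, a minimal illustration (not SageMaker-specific): LED's learned positional embedding unpacks the first two dimensions of the input shape, so tracing must feed the model a 2D `(batch, seq_len)` tensor; a 1D tensor reproduces the same failure.

```python
# Minimal reproduction sketch of the unpack that fails in the traceback.
import torch

input_ids_2d = torch.ones(1, 16384, dtype=torch.long)
bsz, seq_len = input_ids_2d.shape[:2]  # works: two dims to unpack

input_ids_1d = torch.ones(16384, dtype=torch.long)
try:
    bsz, seq_len = input_ids_1d.shape[:2]
except ValueError as e:
    print(e)  # not enough values to unpack (expected 2, got 1)
```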
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16890/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16890/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16889
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16889/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16889/comments
https://api.github.com/repos/huggingface/transformers/issues/16889/events
https://github.com/huggingface/transformers/pull/16889
1,212,057,956
PR_kwDOCUB6oc42nC8k
16,889
[DocTests] Fix some doc tests
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,650
1,650
MEMBER
null
# What does this PR do? Fixes a typo in t5.mdx docs and data2vec_vision ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16889/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16889/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16889", "html_url": "https://github.com/huggingface/transformers/pull/16889", "diff_url": "https://github.com/huggingface/transformers/pull/16889.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16889.patch", "merged_at": 1650696014000 }
https://api.github.com/repos/huggingface/transformers/issues/16888
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16888/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16888/comments
https://api.github.com/repos/huggingface/transformers/issues/16888/events
https://github.com/huggingface/transformers/issues/16888
1,211,804,426
I_kwDOCUB6oc5IOqsK
16,888
KeyError: loss when pretraining using BertForPreTraining
{ "login": "bobbyhaliwela", "id": 31110143, "node_id": "MDQ6VXNlcjMxMTEwMTQz", "avatar_url": "https://avatars.githubusercontent.com/u/31110143?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bobbyhaliwela", "html_url": "https://github.com/bobbyhaliwela", "followers_url": "https://api.github.com/users/bobbyhaliwela/followers", "following_url": "https://api.github.com/users/bobbyhaliwela/following{/other_user}", "gists_url": "https://api.github.com/users/bobbyhaliwela/gists{/gist_id}", "starred_url": "https://api.github.com/users/bobbyhaliwela/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bobbyhaliwela/subscriptions", "organizations_url": "https://api.github.com/users/bobbyhaliwela/orgs", "repos_url": "https://api.github.com/users/bobbyhaliwela/repos", "events_url": "https://api.github.com/users/bobbyhaliwela/events{/privacy}", "received_events_url": "https://api.github.com/users/bobbyhaliwela/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Please use the [forums](https://discuss.huggingface.co/) to debug your code. In this instance, you are not providing the model with the `next_sentence_label` it needs to compute the loss." ]
1,650
1,650
1,650
NONE
null
### System Info ```shell - `transformers` version: 4.19.0.dev0 - Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0+cu102 (False) - Tensorflow version (GPU?): 2.7.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ========================================== My dataset is in `.txt` format, and it looks like this: ``` Sentence 1 Sentence 2 Sentence 3 Sentence 4 Sentence A Sentence B Sentence C Sentence D Sentence E Sentence a Sentence b Sentence c ``` ### Reproduction 1. Tokenize my own dataset using the WordLevel tokenizer 2. Do post-processing 3. Train the tokenizer 4. Load the dataset using LineByLineTextDataset 5. Define the configuration of the BERT model using BertConfig 6. Create the BertForPreTraining model 7. Define the data collator using DataCollatorForLanguageModeling 8. Initialize the Trainer 9. Do pre-training ======================================================= Below are the code snippets from steps 1 to 9. ### 1) Tokenize Dataset Using WordLevel Tokenizer ``` from pathlib import Path from tokenizers import Tokenizer from tokenizers.models import WordLevel from tokenizers.trainers import WordLevelTrainer from tokenizers.pre_tokenizers import Whitespace from tokenizers.processors import TemplateProcessing from transformers import ( LineByLineTextDataset, BertConfig, BertForPreTraining, DataCollatorForLanguageModeling, Trainer, TrainingArguments, ) tokenizer = Tokenizer(WordLevel(unk_token="[UNK]")) trainer = WordLevelTrainer(vocab_size=52_000, min_frequency=1, special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]) tokenizer.pre_tokenizer = Whitespace() ``` ### 2) Do Post-Processing ``` tokenizer.post_processor = TemplateProcessing( single="[CLS] $A [SEP]", pair="[CLS] $A [SEP] $B:1 [SEP]:1", special_tokens=[ ("[CLS]", 1), ("[SEP]", 2), ], ) ``` ### 3) Train Tokenizer ``` path = [str(x) for x in Path("path_to_text_corpus").glob("**/*.txt")] tokenizer.train(path, trainer) tokenizer.save("path_to_trained_tokenizer.json") ``` ### 4) Load Dataset ``` dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path="path_to_corpus.txt", block_size=128, ) ``` ### 5) Define BERT Configuration ``` config = BertConfig( vocab_size=50000, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, hidden_act='gelu', hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=512, type_vocab_size=1, initializer_range=0.02, layer_norm_eps=1e-12, pad_token_id=3, gradient_checkpointing=False, ) ``` ### 6) Create BERT Model for Pretraining ``` model = BertForPreTraining(config=config) ``` ### 7) Define Data Collator ``` data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15, ) ``` ### 8) Initialize Trainer ``` training_args = TrainingArguments( output_dir="path_to_pretrained_model", overwrite_output_dir=True, num_train_epochs=1, per_device_train_batch_size=4, save_steps=10000, save_total_limit=2, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, ) ``` ### 9) Do Pre-Training ``` trainer.train() ``` ### Expected behavior I want to pretrain a BERT-like model with NSP and MLM, but when I run `trainer.train()`, I get this error: ```shell --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <timed eval> in <module> ~/env/lib/python3.8/site-packages/transformers/trainer.py in 
train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1421 tr_loss_step = self.training_step(model, inputs) 1422 else: -> 1423 tr_loss_step = self.training_step(model, inputs) 1424 1425 if ( ~/env/lib/python3.8/site-packages/transformers/trainer.py in training_step(self, model, inputs) 2010 2011 with self.autocast_smart_context_manager(): -> 2012 loss = self.compute_loss(model, inputs) 2013 2014 if self.args.n_gpu > 1: ~/env/lib/python3.8/site-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs) 2052 else: 2053 # We don't use .loss here since the model may return tuples instead of ModelOutput. -> 2054 loss = outputs["loss"] if isinstance(outputs, dict) else outputs[0] 2055 2056 return (loss, outputs) if return_outputs else loss ~/env/lib/python3.8/site-packages/transformers/utils/generic.py in __getitem__(self, k) 217 if isinstance(k, str): 218 inner_dict = {k: v for (k, v) in self.items()} --> 219 return inner_dict[k] 220 else: 221 return self.to_tuple()[k] KeyError: 'loss' ``` ```
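Per the maintainer reply above, `BertForPreTraining` only returns a `loss` when both `labels` and `next_sentence_label` are supplied, and `DataCollatorForLanguageModeling` produces only the MLM `labels`. A hedged sketch of the two usual fixes follows; the config values mirror the report.

```python
# Option 1 (sketch): train MLM only. BertForMaskedLM computes its loss
# from the `labels` that DataCollatorForLanguageModeling provides.
from transformers import BertConfig, BertForMaskedLM

config = BertConfig(vocab_size=50000)  # same config values as above
model = BertForMaskedLM(config=config)

# Option 2 (sketch): keep BertForPreTraining, but build a dataset that
# also emits `next_sentence_label`, e.g. via the (legacy) NSP helper:
# from transformers import TextDatasetForNextSentencePrediction
```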
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16888/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16887
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16887/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16887/comments
https://api.github.com/repos/huggingface/transformers/issues/16887/events
https://github.com/huggingface/transformers/pull/16887
1,211,713,240
PR_kwDOCUB6oc42l-Of
16,887
added deit onnx config
{ "login": "0xrushi", "id": 6279035, "node_id": "MDQ6VXNlcjYyNzkwMzU=", "avatar_url": "https://avatars.githubusercontent.com/u/6279035?v=4", "gravatar_id": "", "url": "https://api.github.com/users/0xrushi", "html_url": "https://github.com/0xrushi", "followers_url": "https://api.github.com/users/0xrushi/followers", "following_url": "https://api.github.com/users/0xrushi/following{/other_user}", "gists_url": "https://api.github.com/users/0xrushi/gists{/gist_id}", "starred_url": "https://api.github.com/users/0xrushi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/0xrushi/subscriptions", "organizations_url": "https://api.github.com/users/0xrushi/orgs", "repos_url": "https://api.github.com/users/0xrushi/repos", "events_url": "https://api.github.com/users/0xrushi/events{/privacy}", "received_events_url": "https://api.github.com/users/0xrushi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> # What does this PR do?\r\n> Added DeiT OnnxConfig to make this model available for conversion\r\n> \r\n> @ChainYo\r\n\r\n> ## Who can review?\r\n> Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.\r\n\r\n\r\nAlso pinging @LysandreJik, @lewtun and @NielsRogge because he did the implementation of DeiT.\r\n\r\n\r\nBtw Did you try to convert one `DeiT` model with your add ?\r\nIf so you could add the converted model to the [ONNXConfig for all](https://huggingface.co/OWG) organization, it would be awesome!", "Thanks @lewtun , I just added those.\r\n\r\nI'm trying to add a README to [ONNXConfigForAll](https://huggingface.co/OWG),\r\n\r\nHow does onnx work in ViT? \r\nI tried the code below, but its not working\r\n\r\n```\r\nfrom transformers import ViTFeatureExtractor, ViTModel\r\nimport torch\r\nfrom datasets import load_dataset\r\nfrom onnxruntime import InferenceSession\r\n\r\n\r\ndataset = load_dataset(\"huggingface/cats-image\")\r\nimage = dataset[\"test\"][\"image\"][0]\r\n\r\nfeature_extractor = ViTFeatureExtractor.from_pretrained(\"google/vit-base-patch16-224-in21k\")\r\nmodel = ViTModel.from_pretrained(\"google/vit-base-patch16-224-in21k\")\r\n\r\ninputs = feature_extractor(image, return_tensors=\"pt\")\r\n\r\nsession = InferenceSession(\"onnx/model.onnx\")\r\n\r\n# ONNX Runtime expects NumPy arrays as input\r\noutputs = session.run(output_names=[\"last_hidden_state\"], input_feed=list(inputs))\r\n```", "> How does onnx work in ViT?\r\n> I tried the code below, but its not working\r\n\r\nCould you check on [netron.app](https://netron.app) what are the inputs of the onnx converted model and then check if the `input_feed` is correct. Btw it should be a `Dict` not a `List`.\r\n\r\n\r\n", "> > How does onnx work in ViT?\r\n> > I tried the code below, but its not working\r\n> \r\n> Could you check on [netron.app](https://netron.app) what are the inputs of the onnx converted model and then check if the `input_feed` is correct. Btw it should be a `Dict` not a `List`.\r\n\r\nAnd inputs itself is a dictionary\r\n![image](https://user-images.githubusercontent.com/6279035/164775874-cfb14338-9893-4a28-b617-69358ff18aa3.png)\r\n\r\n\r\nThe input as seen from netron is pixel_values, I did try dict(inputs), inputs, list(inputs)\r\n\r\n![image](https://user-images.githubusercontent.com/6279035/164775617-7381603a-4675-4ee8-88c6-91ceaff73ab3.png)\r\n", "I just figured it out. You can find it here [https://huggingface.co/OWG/DeiT](https://huggingface.co/OWG/DeiT) :-)", "> I just figured it out. You can find it here https://huggingface.co/OWG/DeiT :-)\r\n\r\nIt seems really good! :hugs: ", "Thanks for the fix!" ]
1,650
1,650
1,650
CONTRIBUTOR
null
# What does this PR do? Added DeiT OnnxConfig to make this model available for conversion @ChainYo https://github.com/huggingface/transformers/issues/16308 Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
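For context, a hedged sketch of what a vision-model ONNX config like the one added here typically declares; the dynamic axes and validation tolerance below are assumptions, and the merged implementation may differ.

```python
# Hedged sketch of a DeiT-style OnnxConfig; attribute values are assumed.
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class DeiTOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        # vision models take a single pixel_values tensor; only the batch
        # dimension is assumed dynamic here
        return OrderedDict([("pixel_values", {0: "batch"})])

    @property
    def atol_for_validation(self) -> float:
        # assumed tolerance for comparing PyTorch vs. ONNX outputs
        return 1e-4
```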
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16887/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16887/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16887", "html_url": "https://github.com/huggingface/transformers/pull/16887", "diff_url": "https://github.com/huggingface/transformers/pull/16887.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16887.patch", "merged_at": 1650912645000 }
https://api.github.com/repos/huggingface/transformers/issues/16886
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16886/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16886/comments
https://api.github.com/repos/huggingface/transformers/issues/16886/events
https://github.com/huggingface/transformers/pull/16886
1,211,677,766
PR_kwDOCUB6oc42l2vF
16,886
Allow saved_model export of TFCLIPModel in save_pretrained
{ "login": "seanmor5", "id": 14100120, "node_id": "MDQ6VXNlcjE0MTAwMTIw", "avatar_url": "https://avatars.githubusercontent.com/u/14100120?v=4", "gravatar_id": "", "url": "https://api.github.com/users/seanmor5", "html_url": "https://github.com/seanmor5", "followers_url": "https://api.github.com/users/seanmor5/followers", "following_url": "https://api.github.com/users/seanmor5/following{/other_user}", "gists_url": "https://api.github.com/users/seanmor5/gists{/gist_id}", "starred_url": "https://api.github.com/users/seanmor5/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/seanmor5/subscriptions", "organizations_url": "https://api.github.com/users/seanmor5/orgs", "repos_url": "https://api.github.com/users/seanmor5/repos", "events_url": "https://api.github.com/users/seanmor5/events{/privacy}", "received_events_url": "https://api.github.com/users/seanmor5/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Tagging @ydshieh -- this PR adds several tests", "@gante No problem, glad to help now and if you have plans to improve graph/saved_model serialization in the future I will be glad to help then as well :D", "@seanmor5 Thank you for this PR 🚀 !\r\n\r\n@gante Let me take a look before merge. \r\n\r\nI haven't checked yet, but from the author @seanmor5 's description (regarding `tf.constant` and dynamic shapes), it looks (almost) all the model won't be able to use `saved_model` and in graph model. However, I don't think this is the case as @gante is able to work with XLA.\r\n\r\nTherefore I would like to check a bit more on my side 😄 ", "> I haven't checked yet, but from the author @seanmor5 's description (regarding tf.constant and dynamic shapes), it looks (almost) all the model won't be able to use saved_model and in graph model. However, I don't think this is the case as @gante is able to work with XLA.\r\n\r\nI think we will have to touch a significant number of models for XLA/Saved Model, to be honest 😅 ", "Hi, I could confirm the issue from `tf.constant()` with `symbolic tensor as shape`.\r\n\r\nIf I change the signature of `serving` to fixed shape, it works.\r\n```\r\n @tf.function(\r\n input_signature=[\r\n {\r\n \"input_ids\": tf.TensorSpec((3, 5), tf.int32, name=\"input_ids\"),\r\n \"attention_mask\": tf.TensorSpec((3, 5), tf.int32, name=\"attention_mask\"),\r\n }\r\n ]\r\n )\r\n def serving(self, inputs):\r\n```\r\n\r\nI will check running the model in `tf.function` too.", "Regarding `tf.function`, I am able to make the following code working. It works even with `jit_compile=True`.\r\n@seanmor5 Could you elaborate a bit more your concern regarding `tf.function`?\r\n\r\n\r\n### Code snippet\r\n```python\r\nfrom transformers import TFCLIPTextModel, TFCLIPVisionModel, TFCLIPModel, CLIPConfig\r\nimport os\r\nimport tensorflow as tf\r\n\r\nckpt = \"openai/clip-vit-base-patch32\"\r\nconfig = CLIPConfig.from_pretrained(ckpt)\r\ntext_config = config.text_config\r\n\r\nmodel = TFCLIPTextModel.from_config(text_config)\r\n\r\n\r\ndef get_inputs(batch_size, seq_len):\r\n input_ids = tf.constant(1, shape=[batch_size, seq_len], dtype=tf.int32)\r\n attention_mask = tf.constant(1, shape= [batch_size, seq_len], dtype=tf.int32)\r\n inputs = {\"input_ids\": input_ids, \"attention_mask\": attention_mask}\r\n return inputs\r\n\r\n\r\ninputs_1 = get_inputs(3, 5)\r\ninputs_2 = get_inputs(4, 7)\r\n\r\noutputs = model(**inputs_1)\r\n# print(outputs)\r\n\r\n\r\n@tf.function\r\ndef foo(inputs):\r\n outputs = model(**inputs)\r\n return outputs\r\n\r\noutputs = foo(inputs_1)\r\noutputs = foo(inputs_2)\r\n```", "Sorry for being a bit picky, but I would prefer to get better context in order to decide such changes. In particular, if this issue only occurs during saving to saved_model, I think we can do some more research first to see if there is better solution.", "@ydshieh No problem! You are right, I was assuming there might be issues with `tf.function`, but because the input shapes are static and known at trace time then it makes sense that it works. I think the issue is exclusive to `saved_model` because the input shapes might not be known and so the shape could be symoblic.\r\n\r\nEDIT: This is a failing case for `tf.function` assuming a non-static sequence length. This is probably not really desirable behavior though, because of the limitations of dynamic shapes in XLA. 
So it's probably okay to ignore, but I'm just pointing it out for due diligence :)\r\n\r\n```\r\n@tf.function(\r\n    input_signature=[tf.TensorSpec((3, None), dtype=tf.int32), tf.TensorSpec((3, None), dtype=tf.int32)],\r\n    jit_compile=True,\r\n    experimental_relax_shapes=True\r\n)\r\ndef foo(input_ids, attn_mask):\r\n    outputs = model(input_ids=input_ids, attention_mask=attn_mask)\r\n    return outputs\r\n\r\ninputs = [(tf.constant(1, shape=[x, y], dtype=tf.int32),\r\n           tf.constant(1, shape=[x, y], dtype=tf.int32))\r\n           for x, y in zip([3, 3, 3, 3, 3], [1, 2, 3, 4, 5])]\r\n\r\nfor inp in inputs:\r\n    print(foo(*inp))\r\n```\r\n\r\n\r\nI am open to exploring whatever other options you think might be better", "OK. Maybe doing some more research is good. I will find time to get some more ideas.\r\n\r\nI always feel that these limitations are not easy to handle, but so far (my own) use cases could use a fixed shape (other than the batch dim).", "For context, we have been suggesting to users to pad and set the second dimension of the shape to the model's maximum sequence length, in this sort of situation. However, it's unoptimized, a manual process, and it doesn't work well in all situations (e.g. in auto-regressive text generation with models like GPT-2, the defined padded input length has to be smaller than the max sequence length, to allow new tokens to come in, but big enough to handle all prompts).", "I understand, and that's what I will do, although it's not ideal.\r\n\r\nOne question is: if we use `tf.fill` as suggested by @seanmor5 , are we able to run the failing case provided above?\r\nWe know from the PR that it will work for saved_model, but I would like to verify it also works for the above example.\r\n(I have a feeling that it won't work even with `tf.fill`, but need to verify)\r\n", "@ydshieh So I applied the patch with `tf.fill` and the function does run with an `input_signature=[tf.TensorSpec(3, None, dtype=tf.int32), tf.TensorSpec((3, None), dtype=tf.int32)`\r\n\r\nOne thing to note is that without the input signature to relax the sequence length constraint, the function is retraced, which can be a performance hit. 
With `tf.fill`, I can verify the following is not retraced with the relaxed input signature:\r\n\r\n```\r\nis_not_retraced = True\r\n\r\n@tf.function(\r\n    input_signature=[tf.TensorSpec((3, None), dtype=tf.int32), tf.TensorSpec((3, None), dtype=tf.int32)],\r\n    jit_compile=True,\r\n    experimental_relax_shapes=True\r\n)\r\ndef foo(input_ids, attn_mask):\r\n    global compiles\r\n    compiles += 1\r\n    outputs = model(input_ids=input_ids, attention_mask=attn_mask)\r\n    return outputs\r\n\r\ninputs = [(tf.constant(1, shape=[x, y], dtype=tf.int32),\r\n           tf.constant(1, shape=[x, y], dtype=tf.int32))\r\n           for x, y in zip([3, 3, 3, 3, 3], [1, 2, 3, 4, 5])]\r\n\r\nprev_concrete_f = foo.get_concrete_function(*inp)\r\n\r\nfor inp in inputs:\r\n    concrete_f = foo.get_concrete_function(*inp)\r\n    is_not_retraced = is_not_retraced and concrete_f is prev_concrete_f\r\n\r\nassert is_not_retraced\r\n```\r\n\r\nBut this version is retraced without an input signature:\r\n\r\n```\r\nis_not_retraced = True\r\n\r\n@tf.function\r\ndef foo(input_ids, attn_mask):\r\n    global compiles\r\n    compiles += 1\r\n    outputs = model(input_ids=input_ids, attention_mask=attn_mask)\r\n    return outputs\r\n\r\ninputs = [(tf.constant(1, shape=[x, y], dtype=tf.int32),\r\n           tf.constant(1, shape=[x, y], dtype=tf.int32))\r\n           for x, y in zip([3, 3, 3, 3, 3], [1, 2, 3, 4, 5])]\r\n\r\nprev_concrete_f = foo.get_concrete_function(*inp)\r\n\r\nfor inp in inputs:\r\n    concrete_f = foo.get_concrete_function(*inp)\r\n    is_not_retraced = is_not_retraced and concrete_f is prev_concrete_f\r\n\r\nassert is_not_retraced\r\n```\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nAssertionError Traceback (most recent call last)\r\n/var/folders/57/yg31bn915kg_s_tzht3by3r80000gp/T/ipykernel_24110/149582172.py in <module>\r\n 18     is_not_retraced = is_not_retraced and concrete_f is prev_concrete_f\r\n 19 \r\n---> 20 assert is_not_retraced\r\n\r\nAssertionError: \r\n```", "@seanmor5 Thank you for all the effort in providing the information. I will take a look (you know a lot about the TF graph thing 🚀!)", "I can confirm everything @seanmor5 states, and also found the TF doc [here](https://www.tensorflow.org/api_docs/python/tf/fill)\r\n\r\n<img width=\"741\" alt=\"Screenshot 2022-04-27 231948\" src=\"https://user-images.githubusercontent.com/2521628/165633061-7b73272b-d1ac-43cb-9890-e145a982c645.png\">\r\n\r\nI'm starting to think that's by design. Not sure why TF decided to do so; maybe there are reasons to separate static constants and dynamic constants (performance consideration?).\r\n\r\nI am in favor of approving. I will take some time to check the added tests, and think about it a bit more in the meantime.", "@ydshieh Thank you! I've had to debug way too much TF code in my life so I've gotten used to it :)\r\n\r\nSo unfortunately the last thing that needs to be addressed is the failing `serving_output` test for the joint `TFCLIPModel`, and I'm not quite sure what the fix might be. 
Here is the stack trace of the failing test (cc @gante):\r\n\r\n```\r\nE ValueError: in user code:\r\nE\r\nE\r\nE ValueError: Got a non-Tensor value TFBaseModelOutputWithPooling(last_hidden_state=<tf.Tensor 'StatefulPartitionedCall:6' shape=(None, None, 32) dtype=float32>, pooler_output=<tf.Tensor 'StatefulPartitionedCall:7' shape=(None, 32) dtype=float32>, hidden_states=<tf.Tensor 'StatefulPartitionedCall:5' shape=(6, None, None, 32) dtype=float32>, attentions=<tf.Tensor 'StatefulPartitionedCall:4' shape=(5, None, 4, None, None) dtype=float32>) for key 'text_model_output' in the output of the function __inference_serving_217811 used to generate the SavedModel signature 'serving_default'. Outputs for functions used as signatures must be a single Tensor, a sequence of Tensors, or a dictionary from string to Tensor.\r\n```", "@seanmor5 that exception is raised [here](https://github.com/tensorflow/tensorflow/blob/3848851f009efcc742d6dbd0f49510d1f3f78b13/tensorflow/python/saved_model/signature_serialization.py#L218) -- i.e. on the TF serving side, so outside our control. \r\n\r\nIt means we have to change something about our API if we want to support serving for all outputs. I'm calling in for second opinions: @sgugger, as he's more experienced in these situations, and @Rocketknight1, my fellow TF MLE.\r\n\r\n___________________________\r\n\r\n@sgugger @Rocketknight1 some context:\r\n1. This PR attempts to fix serving for TF CLIP which, as you know, has an image and a text component;\r\n2. If we want to output everything (i.e. attention and hidden layers for the vision and the text models), we have the existing `TFCLIPOutput` ([here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/modeling_tf_clip.py#L92)), which contains `tf.Tensor` and `TFBaseModelOutputWithPooling` members. Note: contrary to other `ModelOutput` child classes, `TFCLIPOutput` can contain non-`tf.Tensor` members;\r\n3. Our `serving_output` functions return classes inherited from `ModelOutput`, like `TFBaseModelOutputWithPooling` (e.g. [gpt2](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_tf_gpt2.py#L777), [bert](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_tf_bert.py#L1127)). We would like to do the same here;\r\n4. TF raises an exception whenever we attempt to serve structures that do not contain tensors, sequences of tensors, or dictionaries of tensors (see first link in this comment)... which is the case here, `TFCLIPOutput` does not fit those criteria (see why below);\r\n5. @seanmor5 originally proposed to return `.to_tuple()` ([this one](https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/modeling_tf_clip.py#L122)) instead, and it works. However, in that case, the API would be different for this model (and across frameworks), and we would lose the ability to access fields by name.\r\n\r\nA few additional notes:\r\n1. Happy to move this to a separate issue, as it may warrant further discussion;\r\n2. Any decision here would set a precedent for multimodal TF models;\r\n3. More specifically, the exception will not be raised if we are serving a structure containing `CompositeTensor` ([code](https://github.com/tensorflow/tensorflow/blob/3848851f009efcc742d6dbd0f49510d1f3f78b13/tensorflow/python/framework/composite_tensor.py#L33)) members. According to its docstring, it can expand whatever `tf.nest` can expand, and if the leaves are `tf.Tensor`, we are all good. 
Looking at the docs of `tf.nest` ([here](https://www.tensorflow.org/api_docs/python/tf/nest)), we can see that it treats `@dataclass`-decorated structures as an atom. So while we can serve most `@dataclass`-decorated `ModelOutput` (its members are `tf.Tensor`), we cannot serve a `@dataclass`-decorated `ModelOutput` containing other `@dataclass`-decorated `ModelOutput`, which would likely be our desired case for multimodal models.\r\n\r\nPotential solutions:\r\n1. Consider using `namedtuples`? It has a few [drawbacks](https://peps.python.org/pep-0557/#why-not-just-use-namedtuple)\r\n2. Use dictionaries and, if needed, add syntactic sugar to access fields by name and by index?\r\n3. Expand nested fields -- instead of `TFCLIPOutput` holding two `TFBaseModelOutputWithPooling`, it holds a tuple for each `TFBaseModelOutputWithPooling` or expands their attributes directly into `TFCLIPOutput` (e.g. `text_model_output` -> `text_model_output_attentions`, `text_model_output_hidden_states`, ...)\r\n4. ???", "For more information, is it possible to convert the nested fields (here `text_model_output` and `vision_model_output` that are `TFBaseModelOutputWithPooling` and not tensors) to tuples inside the `serve` part only? Or would that change need to be done all the time?", "I haven't tried TF serving yet (probably just once). For HF models, while `serving_output` returns things like `TFBaseModelOutputWithPooling`, what happens when we use the converted TF serving models? For example, if we call those TF serving models, is the output still `TFBaseModelOutputWithPooling`? Or is it just a dictionary?\r\n\r\n\r\n\r\n", "> For more information, is it possible to convert the nested fields (here `text_model_output` and `vision_model_output` that are `TFBaseModelOutputWithPooling` and not tensors) to tuples inside the `serve` part only? Or would that change need to be done all the time?\r\n\r\n@sgugger Yes, it is possible, and it should solve the problem (`tuple` is supported by `tf.nest`). The only difference (to PT) is that we wouldn't be able to access the fields by name, but that seems like a small price to pay for a simple solution.", "> > For more information, is it possible to convert the nested fields (here `text_model_output` and `vision_model_output` that are `TFBaseModelOutputWithPooling` and not tensors) to tuples inside the `serve` part only? Or would that change need to be done all the time?\r\n> \r\n> @sgugger Yes, it is possible, and it should solve the problem (`tuple` is supported by `tf.nest`). The only difference (to PT) is that we wouldn't be able to access the fields by name, but that seems like a small price to pay for a simple solution.\r\n\r\nI **guess** @sgugger doesn't necessarily mean we should use `tuple` instead of `dict`, but is just asking where we should do the conversion (?).\r\n\r\nI would much prefer using a dictionary though. Maybe let us check what it really gives as output when we use HF's TF serving model to make the decision?", "> I haven't tried TF serving yet (probably just once). For HF models, while `serving_output` returns things like `TFBaseModelOutputWithPooling`, what happens when we use the converted TF serving models? For example, if we call those TF serving models, is the output still `TFBaseModelOutputWithPooling`? 
Or is it just a dictionary?\r\n\r\n@ydshieh Can confirm that the output of a loaded model using `tf.keras.models.load_model` is a `dict`, not a subclass of `ModelOutput`\r\n\r\n(output types of a reloaded `TFCLIPTextModel`, loaded back with `tf.keras.models.load_model`)\r\n<img width=\"578\" alt=\"Screenshot 2022-04-28 at 17 15 15\" src=\"https://user-images.githubusercontent.com/12240844/165797819-7fcf28c2-c27f-44d3-896a-90794fec0c03.png\">\r\n\r\n______________________________\r\n\r\nI experimented with converting the problematic variables to multiple formats:\r\n- to `dict` with `dict()` and with `dataclass.asdict()`\r\n- to `tuple` with `tuple()`, `.to_tuple()`\r\n- using `tf.nest.flatten()` as the `CompositeTensor` docstring suggests\r\n\r\nAll of them result in the same exception. I also tried to look into documentation on how to cast into `CompositeTensor`, but there is none we can use (there is an [experimental function](https://www.tensorflow.org/probability/api_docs/python/tfp/experimental/as_composite) in `tensorflow-probability`, which is not our dependency atm, but it throws an exception related to the expected input object).\r\n\r\nThe only thing that seems to work is a flat serving structure, without nested components. \r\n\r\n@seanmor5, I got the exact same exception when attempting your original solution, with `return output.to_tuple()`. The command I ran was `RUN_SLOW=1 py.test -vv tests/clip/test_modeling_tf_clip.py::TFCLIPModelTest::test_saved_model_creation_extended` -- were you also running this command?", "Are those composite outputs really useful for serving? Can't we just remove them entirely?", "Probably. We can leave them as a TODO and wait for an issue :) @seanmor5 would you be okay with that? (I'm afraid we are hitting a hard problem for a feature we probably don't need)", "Thank you for the work, @gante ! Surprised that `dict` is not working 😢 . Adding a TODO is good for me. Meanwhile, I think the users might customize the format if they want to serve the model. We can even add a comment in the code around `serving_output`.\r\n\r\nCould you share the code you use 🙏 ?", "> Could you share the code you use 🙏 ?\r\n\r\nI was using the code as is in this PR, and `RUN_SLOW=1 py.test -vv tests/clip/test_modeling_tf_clip.py::TFCLIPModelTest::test_saved_model_creation_extended` to debug :)", "@gante Sorry for the confusion, `to_tuple` was not working for me either unfortunately, I had mentioned that in my original message... As you said, I think the only solution is a flat structure. I'm okay with forgoing this part of the PR as a TODO for later. Do you want me to revert back to the original implementation, or add an exception to use one of the individual `TFCLIPTextModel` or `TFCLIPVisionModel` instead? Both of those work fine", "@seanmor5 no worries :) I'd say to revert the changes to `serving_output`, and add a TODO pointing to this PR (so our future selves can refer to this discussion) ", "@gante @ydshieh @sgugger I've gone ahead and reverted the change and added a TODO referencing this PR. 
Thanks for the feedback/discussions, I will continue to research to see if there are any better workarounds for the issue", "Hi @seanmor5 @gante , I played a bit more, and so far I have not been able to get it to work either.\r\n\r\nDuring the process, I found [Extension types](https://www.tensorflow.org/guide/extension_type), but with it I still get errors like\r\n\r\n```\r\nE tensorflow.python.saved_model.nested_structure_coder.NotEncodableError: No encoder for object MaskedTensor.Spec(values=TensorSpec(shape=(2, 3), dtype=tf.int32, name=None), mask=TensorSpec(shape=(2, 3), dtype=tf.bool, name=None)) of type <class 'transformers.models.clip.modeling_tf_clip.MaskedTensor.Spec'>.\r\n```\r\n\r\nIn addition to the experimentation, I got the chance to look a bit at the added tests, and found something to correct; for example, the following block won't work (even if we are able to save/load the model) anyway, because the outputs don't have these keys\r\n\r\nhttps://github.com/huggingface/transformers/blob/a4abfa6ba3b7875a13538dbc2ddc4eb17dfcca8d/tests/clip/test_modeling_tf_clip.py#L602-L607\r\n\r\nConsidering the `saved_model` for `TFCLIPModel` is not working for now, maybe we can just remove this test (in the final version - once we really don't have a way to make it work).\r\n\r\nAlso, there is an issue with `loss` being `None` when I tried to save with `saved_model`. I had to manually remove the related lines to do the experimentation. Don't you have this issue?\r\n\r\nI will do code review more carefully later.\r\n" ]
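Piecing the thread above together, the workable recipe that emerges combines ydshieh's finding (a fixed-shape serving signature avoids the `tf.constant`-with-symbolic-shape failure) with gante's finding (only flat tensor structures can be served). A minimal sketch of the two together might look as follows; the `(3, 5)` shapes are purely illustrative, and returning a flat dict of tensors deliberately sidesteps the nested `TFCLIPOutput` problem discussed above.

```python
import tensorflow as tf
from transformers import TFCLIPTextModel

model = TFCLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")

@tf.function(
    input_signature=[
        {
            "input_ids": tf.TensorSpec((3, 5), tf.int32, name="input_ids"),
            "attention_mask": tf.TensorSpec((3, 5), tf.int32, name="attention_mask"),
        }
    ]
)
def serve(inputs):
    output = model(inputs)
    # Return a flat dict of tensors: nested ModelOutput structures are
    # rejected by the SavedModel signature serialization, as seen above.
    return {
        "last_hidden_state": output.last_hidden_state,
        "pooler_output": output.pooler_output,
    }

tf.saved_model.save(model, "clip_text_saved_model", signatures={"serving_default": serve})
```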
1,650
1,651
1,651
CONTRIBUTOR
null
# What does this PR do? I apologize if this is out of scope. There were a few bugs in TFCLIPModel which prevented the model from being exported using the TensorFlow SavedModel format: 1) `_build_causal_attention_mask` makes use of `tf.constant` with a runtime dynamic value. It seems `shape_list` makes use of `tf.shape` which returns a symbolic tensor (inside autograph), which prevents the graph from being fully traced. `tf.constant` does not allow runtime dynamic values, but `tf.fill` does, so I replaced `tf.constant` with a ` tf.cast` and `tf.fill` combo. I don't even think `TFCLIPModel` would run inside a `tf.function` without this change because the autograph trace fails. 2) `TFCLIPTextModel` needs to override the `serving` default implementation. The default implementation expects `token_type_ids` which is not a valid input here. 3) `serving_output` for TFCLIPModel has some issue with tracing through nested dataclasses, which I can't seem to get right just quite yet. Ideally it should be as easy as calling `serving_output` on `text_model_output` and `vision_model_output` (since there is some `convert_to_tensor` stuff going on in each output). I was having problems with TensorFlow saying `TFBaseModelOutputWithPooling` not being a tensor, so I figured the tuple conversion would work, but it doesn't seem to be the fix. I added tests for exporting each of `TFCLIPModel`, `TFTextModel` and `TFVisionModel` as saved models to verify individual components work and that it's the integration of both that's failing cc @LysandreJik for TensorFlow changes
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16886/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16886/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16886", "html_url": "https://github.com/huggingface/transformers/pull/16886", "diff_url": "https://github.com/huggingface/transformers/pull/16886.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16886.patch", "merged_at": 1651675078000 }
https://api.github.com/repos/huggingface/transformers/issues/16885
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16885/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16885/comments
https://api.github.com/repos/huggingface/transformers/issues/16885/events
https://github.com/huggingface/transformers/issues/16885
1,211,673,727
I_kwDOCUB6oc5IOKx_
16,885
Tensorflow to Onnx change batch and sequence size
{ "login": "nyoungstudios", "id": 16254116, "node_id": "MDQ6VXNlcjE2MjU0MTE2", "avatar_url": "https://avatars.githubusercontent.com/u/16254116?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nyoungstudios", "html_url": "https://github.com/nyoungstudios", "followers_url": "https://api.github.com/users/nyoungstudios/followers", "following_url": "https://api.github.com/users/nyoungstudios/following{/other_user}", "gists_url": "https://api.github.com/users/nyoungstudios/gists{/gist_id}", "starred_url": "https://api.github.com/users/nyoungstudios/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nyoungstudios/subscriptions", "organizations_url": "https://api.github.com/users/nyoungstudios/orgs", "repos_url": "https://api.github.com/users/nyoungstudios/repos", "events_url": "https://api.github.com/users/nyoungstudios/events{/privacy}", "received_events_url": "https://api.github.com/users/nyoungstudios/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @nyoungstudios , I don't get the goal of this feature because when you convert a model to onnx you want to set the `batch_size` to dynamic and the `sequence_length` needs to fit the model config. This way people can use the model with any `batch_size` and the right model `sequence_length`. \r\n\r\nThe `batch_size=2` and `sequence_length=8` are here for creating a dummy input which is required by ONNX for conversion. It allows onnx to understand the flow though the model layers.\r\n\r\nThe conversion is also automated with the `python -m transformers.onnx --model=distilbert-base-uncased feature=sequence-classification onnx/`.", "@ChainYo maybe I am missing something, but the code sample I have calls the same functions as in the transformers.onnx package cli command. And trying to run an inference without matching the dimensions will return an error like this:\r\n\r\nwith this input\r\n```python\r\nonnx_inputs = tokenizer([\"test string\"],\r\n truncation=True,\r\n return_tensors=\"np\")\r\nonnx_inputs = {k: v.astype(np.int32) for k, v in onnx_inputs.items()}\r\nprint(onnx_inputs)\r\n# {'attention_mask': array([[1, 1, 1, 1]], dtype=int32), 'input_ids': array([[ 101, 3231, 5164, 102]], dtype=int32)}\r\n```\r\nReturns this traceback\r\n```python-traceback\r\nInvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: input_ids for the following indices\r\n index: 0 Got: 1 Expected: 2\r\n index: 1 Got: 4 Expected: 8\r\n Please fix either the inputs or the model.\r\n```", "> @ChainYo maybe I am missing something, but the code sample I have calls the same functions as in the transformers.onnx package cli command.\r\n\r\nIt doesn't run the onnx checker function and the validating function to validate that the outputs are the same as the pytorch model.\r\n\r\nOh and another thing I just noticed, you are using a `base` model but with `sequence-classification` task. You need to finetune it or get an already fine-tuned model to be able to use it for sequence-classification.\r\n\r\n", "yes, I did also run this Onnx output model checker\r\n```python\r\nfrom transformers.utils import logging\r\nlogger = logging.get_logger(\"transformers.onnx\") # pylint: disable=invalid-name\r\nlogger.setLevel(logging.INFO)\r\nvalidate_model_outputs(onnx_config, tokenizer, model, onnx_model_path, onnx_output_names, onnx_config.atol_for_validation)\r\n# Validating ONNX model...\r\n#\t-[✓] ONNX model output names match reference model ({'logits'})\r\n#\t- Validating ONNX Model output \"logits\":\r\n#\t\t-[✓] (2, 2) matches (2, 2)\r\n#\t\t-[✓] all values close (atol: 1e-05)\r\n```\r\n\r\nBut that isn't the problem. Also, I am using Tensorflow not PyTorch. And I am just using the base model here as an example. I have fine tuned the model to make predictions, but that doesn't seem to be relevant to the problem here.\r\n\r\nThe traceback shows that it is expecting a batch size of 2 and a sequence size of 8. Here I provided a batch size of 1 and a sequence size of 4.", "> The traceback shows that it is expecting a batch size of 2 and a sequence size of 8. Here I provided a batch size of 1 and a sequence size of 4.\r\n\r\nOh man, I didn't see it was a feature request, I'm so sorry! I though you were facing a bug while converting your model to onnx :weary:. \r\n\r\nThat could be nice to add this yes! Even if doing it via the command line is easier and works like a breeze.", "No problem, I can work on a pr for this in my free time. 
I did try running this example but with PyTorch, and was able to use the ONNX model without a fixed sequence length or batch size. Curious to know whether TensorFlow-to-ONNX supports a dynamic sequence length and batch size like PyTorch-to-ONNX does? If not, I think I might explore whether that is an option before writing code to have TensorFlow-to-ONNX support different fixed sizes.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
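For context on the dynamic-shape behaviour discussed in this thread: which axes stay dynamic in an export is governed by the `inputs` mapping of the model's `OnnxConfig`, where each input maps axis indices to symbolic names. A quick way to inspect it is sketched below; the printed mapping is indicative and may vary by library version.

```python
from transformers import AutoConfig
from transformers.models.distilbert import DistilBertOnnxConfig

config = AutoConfig.from_pretrained("distilbert-base-uncased")
onnx_config = DistilBertOnnxConfig(config, task="sequence-classification")

# Axes given a symbolic name here are declared dynamic at export time.
print(onnx_config.inputs)
# e.g. OrderedDict([('input_ids', {0: 'batch', 1: 'sequence'}),
#                   ('attention_mask', {0: 'batch', 1: 'sequence'})])
```

The observation in the thread, namely that the PyTorch export honours these dynamic axes while the TF export appeared to bake in the dummy shapes, would be consistent with this mechanism being applied in the export backend rather than in the config itself.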
1,650
1,653
1,653
NONE
null
### Feature request  Add the ability to set the batch size and sequence length when converting a model to Onnx rather than it always defaulting to a batch size of 2 and sequence length of 8.  Here is the example code that I have by importing `OnnxConfig` and setting its `default_fixed_batch` and `default_fixed_sequence` before exporting the model.  ```python import tensorflow as tf from transformers import (     DistilBertTokenizerFast,     TFAutoModelForSequenceClassification, ) from transformers.onnx import export, OnnxConfig from transformers.models.distilbert import DistilBertOnnxConfig import onnxruntime import numpy as np  from pathlib import Path import tempfile  checkpoint = "distilbert-base-uncased" tokenizer = DistilBertTokenizerFast.from_pretrained(checkpoint)  # you would load your trained model from the folder path here, but for demo purposes, we will load the transformer checkpoint model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint)  onnx_config = DistilBertOnnxConfig(model.config, task="sequence-classification")  # current solution for setting the Onnx model's batch size and sequence length OnnxConfig.default_fixed_batch = 1 OnnxConfig.default_fixed_sequence = 128  # create output folder/filename save_folder = tempfile.mkdtemp() onnx_model_path = Path(save_folder).joinpath("model.onnx")  # convert to onnx onnx_input_names, onnx_output_names = export(tokenizer, model, onnx_config, onnx_config.default_onnx_opset, onnx_model_path)  # predict session = onnxruntime.InferenceSession(str(onnx_model_path), providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'])  onnx_inputs = tokenizer(["test string"],     max_length=128,     truncation=True,     padding="max_length",     return_tensors="np") onnx_inputs = {k: v.astype(np.int32) for k, v in onnx_inputs.items()}  outputs = session.run(None, input_feed=onnx_inputs) ```  ### Motivation  To make it easier to set the batch size and sequence length when converting a model to Onnx. Rather than importing `OnnxConfig` and setting its `default_fixed_batch` and `default_fixed_sequence` before exporting the model, I think the OnnxConfig class could have kwargs for batch size and sequence length.  ### Your contribution  I could create a PR to change the `OnnxConfig` class signature to something like this: ```python def __init__(self, config: "PretrainedConfig", task: str = "default", patching_specs: List[PatchingSpec] = None, batch_size: int = 2, sequence_length: int = 8): ```  And update the `default_sequence_length` and `default_batch_size` property functions to return the variable set from the `__init__` function
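Under the proposal above, the call site would look something like the following. This is a hypothetical API: the `batch_size` and `sequence_length` kwargs do not exist at the time of the request, which is exactly what the issue asks for.

```python
# Hypothetical: batch_size / sequence_length kwargs as proposed in this issue,
# replacing the current workaround of mutating OnnxConfig class attributes.
onnx_config = DistilBertOnnxConfig(
    model.config,
    task="sequence-classification",
    batch_size=1,
    sequence_length=128,
)
```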
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16885/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16885/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16884
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16884/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16884/comments
https://api.github.com/repos/huggingface/transformers/issues/16884/events
https://github.com/huggingface/transformers/issues/16884
1,211,567,141
I_kwDOCUB6oc5INwwl
16,884
[tracker] Sharding huge models process and current status
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Both of your commits (https://huggingface.co/bigscience/T0/commit/858cd92e88c9548d194f61259af965d1d1e916b7 and https://huggingface.co/t5-11b/commit/82929bfe90cbfc4e9a3dedf38bb967650ddb6ac2) looks good to me, @stas00.\r\n\r\nOne thing I realize just now is that the `pytorch_model.bin.index.json` index files are LFS-tracked even though they're small JSON files, because there is the `*.bin.*` pattern in .gitattributes (in all repos)\r\nThat's no huge deal though, cc @sgugger @LysandreJik @Pierrci, the only drawback is that we won't get nice diffs on them", "OK, I will wait for a decision on `*.bin.*` as it is being discussed on slack and then adjust accordingly.", "Why do you use the threshold size of >11GB?\r\n\r\nI think we can do this only for >30GB models (30GB is the newly updated Cloudfront file size)", "Because our default shard size is 10GB, and there are quite a few models that are 10.5GB, so no need to bother with those. That's why it's >11GB and not >10GB.\r\n\r\nWe need models to be sharded to smaller chunks not just due to Cloudfront limitations, but primarily because it's very expensive to load these large models cpu memory wise, especially in the DDP situation.\r\n\r\ne.g. if you have 8 gpus and an unsharded model is 30GB you will need at least 480GB of CPU RAM to load it with the normal setup. (`2*30*8`)\r\n\r\nSo here is the breakdown for HF Transformers `from_pretrained` model loading with DDP. The example in each case uses a model of 30GB and 8 DDP processes:\r\n\r\n- non-sharded model: `2 * model size * number of processes`. Example: `2*30*8=480GB`\r\n- non-sharded model + `low_cpu_mem_usage=True`: `model size * number of processes`. Example: `30*8=240GB` (but it's slower)\r\n- sharded model: `(size_of_largest_shard + model size) * number of processes`. Example: `(10+30)*8=320GB`\r\n- sharded model + deepspeed zero 3: `size_of_largest_shard * number of processes`. Example: `10*8=80GB`\r\n\r\nDoes my math make sense?\r\n\r\nWe already have open Issues where users have difficulties loading the models because they don't have an insane amount of CPU memory available.\r\n\r\nNote that even on JeanZay the A100 80GB nodes have *only* 512GB, so it'd be impossible to load huge 60GB+ models on those nodes using HF Transformers models w/o sharding, even though the GPUs are huge. There will be not enough CPU memory to do that.\r\n\r\n\r\n", "so here is what I did to move index files out of LFS:\r\n```\r\ngit lfs untrack '*.bin.*'\r\ngit add --renormalize .\r\ngit commit -am 'Restore file contents that were previously in LFS'\r\n```\r\n\r\ncourtesy of https://stackoverflow.com/a/54119191/9201239", "OK, all 19 models on the list have been sharded and pushed - if there are more let me know.", "Examples of commits restoring index files to non-LFS:\r\n\r\n- https://huggingface.co/bigscience/T0/commit/6a981956e2d0601aff8b9b8caf76bdebdfedff29\r\n- https://huggingface.co/t5-11b/commit/17eca4b4fcdc56f878d0518e928a17fa6d71ab8b\r\n\r\nExample of commit on Eleuther-owned model:\r\n- https://huggingface.co/EleutherAI/gpt-j-6B/commit/d2ea4ce5253728dd5541727d8ef209cf9b48530d", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "sorry to post it here..\r\n\r\nHow could I shard a checkpoint myself and load it?\r\n\r\nWould this work?\r\n```\r\npython -c 'from transformers import AutoModelForSeq2SeqLM; \\\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"save_dir/mT5_finetuned/\"); \\\r\nmodel.save_pretrained(\"save_dir/mT5_finetuned_sharded/\", max_shard_size=\"9GB\")\r\n\r\nmodel2 = AutoModelForSeq2SeqLM.from_pretrained(\"save_dir/mT5_finetuned_sharded/\") # load directly from sharded file folder\r\n\r\n# move all config files to sharded file folder as well\r\n```\r\n\r\n", "that looks mostly correct, @edchengg \r\n\r\njust drop `revision=\"sharded\"` - you'd only use that if you upload the saved sharded model checkpoint to a hub, into a branch called \"sharded\" (instead of \"main\").\r\n\r\nfor the local filesystem, there is no revision.", "> * (size_of_largest_shard + model size) * number of processes\r\n\r\nHi @stas00, thanks so much for this amazing work. Apologies for the following naive question, I'm trying to learn :). You mention `number_of_processes` in your calculation, and I had the curiosity to briefly skim through the `from_pretrained` call https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/modeling_utils.py#L2544, yet I could not see anything to indicate that the loading happens in multiple processes? So I presume `number_of_processes` here refers to multiple processes where a `from_pretrained` call may be made (such as when doing DDP)? ", "Yes, of course, @alexcoca. As you proposed:\r\n\r\nWhen you do DDP you run `n_procs == n_gpus`, so each of these processes calls `from_pretrained` and thus each of them needs to load a copy of the model. Hence you need `model_size_in_bytes * n_procs` cpu memory to load a model.\r\n\r\nThe only exception to this is Deepspeed ZeRO stage3, which has a feature called `zero.Init` that immediately shards the model across gpus and frees up the memory on cpu. So it uses much less cpu memory during the loading process.\r\n\r\nSometimes one has a lot of gpu memory but little cpu memory; in that case you could work around the issue by staggering the loading, so that, say, only one rank loads at a time, then moves the model onto the gpu and frees up the memory for other ranks to load next.", "That's fantastic, thanks so much for your reply. I assume that this feature gets called whenever we use the Deepspeed ZeRO integration (stage 3) for training with the `Trainer`? ", "With HF Trainer it's automatic, but if you want to use it w/ your own Trainer, it's 2 lines of code:\r\nhttps://huggingface.co/docs/transformers/main/main_classes/deepspeed#nontrainer-deepspeed-integration\r\n", "Hi @stas00 Thank you for this work. Is there a sharded version of [Flan-T5-XL](https://huggingface.co/google/flan-t5-xl)? I see [this](https://huggingface.co/ybelkada/flan-t5-xl-sharded-bf16) but I'm unsure what the original source model was.", "as you can see https://huggingface.co/google/flan-t5-xl is already sharded: https://huggingface.co/google/flan-t5-xl/tree/main\r\n\r\nall new models that are being added are automatically sharded (unless the user overrides `save_pretrained`'s defaults)\r\n", "Thank you @stas00. 
Interestingly, Flan-T5-XL, as is, cannot load into a free Colab T4 GPU (out of RAM), while [this sharded variant does](https://huggingface.co/ybelkada/flan-t5-xl-sharded-bf16).\r\n\r\nComparing https://huggingface.co/google/flan-t5-xl/blob/main/pytorch_model.bin.index.json and https://huggingface.co/ybelkada/flan-t5-xl-sharded-bf16/blob/main/pytorch_model.bin.index.json,\r\n\r\nthe former has\r\n```\r\n\"total_size\": 11925413888\r\n```\r\nwhile the latter has\r\n```\r\n\"total_size\": 5962706944\r\n```\r\n\r\nDoes https://huggingface.co/google/flan-t5-xl/tree/main contain the correct \"official\" model weights from the original Flan-T5 checkpoints? \r\n\r\nAnd could one shard it so that it's loadable on a Colab T4?", "As you can probably see, one of them is saved in bf16 and the other in fp32, so the former is half the size of the latter.\r\n\r\n> Does https://huggingface.co/google/flan-t5-xl/tree/main contain the correct \"official\" model weights from the original Flan-T5 checkpoints?\r\n\r\nI'd say open a new issue to discuss the specifics of this model. This issue is not the place to discuss it, I think.", "Ok, thanks." ]
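The CPU-RAM formulas from the discussion above can be captured in a tiny helper for quick what-if checks. This is purely illustrative and gives lower bounds; real loading adds framework overhead on top of these numbers.

```python
def cpu_ram_needed_gb(model_gb, n_procs, largest_shard_gb=None, strategy="plain"):
    """Lower-bound CPU RAM for loading a model across n_procs DDP processes."""
    if strategy == "plain":          # non-sharded checkpoint
        return 2 * model_gb * n_procs
    if strategy == "low_cpu_mem":    # non-sharded + low_cpu_mem_usage=True
        return model_gb * n_procs
    if strategy == "sharded":        # sharded checkpoint
        return (largest_shard_gb + model_gb) * n_procs
    if strategy == "zero3":          # sharded + DeepSpeed ZeRO stage 3
        return largest_shard_gb * n_procs
    raise ValueError(f"unknown strategy: {strategy}")

print(cpu_ram_needed_gb(30, 8))                                           # 480
print(cpu_ram_needed_gb(30, 8, strategy="low_cpu_mem"))                   # 240
print(cpu_ram_needed_gb(30, 8, largest_shard_gb=10, strategy="sharded"))  # 320
print(cpu_ram_needed_gb(30, 8, largest_shard_gb=10, strategy="zero3"))    # 80
```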
1,650
1,685
1,653
CONTRIBUTOR
null
this is an Issue to track which pre-existing huge models (>11GB) need sharding, which have been completed and the code to do that. ### Why shard huge checkpoints? Because it takes much less CPU memory to load a huge model of say 42GB, especially if you're loading concurrently in multiple processes. Here is the breakdown for HF Transformers `from_pretrained` model loading with DDP. The example in each case uses a model of 30GB and 8 DDP processes: - non-sharded model: `2 * model size * number of processes`. Example: `2*30*8=480GB` - non-sharded model + `low_cpu_mem_usage=True`: `model size * number of processes`. Example: `30*8=240GB` (but it's slower) - sharded model: `(size_of_largest_shard + model size) * number of processes`. Example: `(10+30)*8=320GB` - sharded model + deepspeed zero 3: `size_of_largest_shard * number of processes`. Example: `10*8=80GB` ### Using sharded models Here is an example of how to get the 42GB T0 model via multi-part sharded branch (about 9GB per shard here): * Directly: ``` AutoModel.from_pretrained("bigscience/T0", revision="sharded") ``` * Via HF Trainer example scripts: ``` examples/pytorch/translation/run_translation.py \ --model_name_or_path bigscience/T0 --model_revision sharded ... ``` do note that I called these branches "sharded" but other users may call them anything they want, so check the model's available branches on the hub, e.g. Here is the sharded branch of T0 https://huggingface.co/bigscience/T0/tree/sharded And you can further re-shard them to an even smaller shards, e.g. to 5GB shards: ``` python -c 'from transformers import AutoModelForSeq2SeqLM; \ model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B"); \ model.save_pretrained("t0-sharded", max_shard_size="5GB")' ``` ### Infrastructure decisions - [ ] need to decide how to tell the user about all these different branches. I proposed an automatic extraction in the "Use in Transformers" pop-up, e.g. it could say: ``` Other available branches: sharded, bf16, fp16 ``` ### Sharding progress - [x] bigscience/T0 - [x] bigscience/T0_single_prompt - [x] bigscience/T0p - [x] bigscience/T0pp - [x] t5-11b - [x] google/byt5-xxl - [x] google/mt5-xxl - [x] google/t5-v1_1-xxl - [x] allenai/unifiedqa-t5-11b - [x] allenai/unifiedqa-v2-t5-11b-1363200 - [x] allenai/unifiedqa-v2-t5-11b-1251000 - [x] allenai/macaw-answer-11b - [x] allenai/macaw-11b - [x] EleutherAI/gpt-j-6B - [x] facebook/xglm-7.5B - [x] facebook/incoder-6B - [x] facebook/m2m100-12B-last-ckpt - [x] facebook/m2m100-12B-avg-10-ckpt - [x] facebook/m2m100-12B-avg-5-ckpt XXX: fill in more? ----------------- Here is how each was sharded, `bigscience/T0` here and the rest below. ``` git lfs install git clone https://huggingface.co/bigscience/T0 python -c "from transformers import AutoModelForSeq2SeqLM; \ model = AutoModelForSeq2SeqLM.from_pretrained('./T0'); \ model.save_pretrained('T0-sharded')" mv T0-sharded/pytorch_model* T0 mv T0-sharded/config.json T0 cd T0 huggingface-cli lfs-enable-largefiles . 
git checkout -b sharded git rm pytorch_model.bin git add pytorch_model* git commit -am "add sharded checkpoint" git push --set-upstream origin sharded cd - ``` Verified that it downloaded the right version and the example evals just fine: Using `--model_name_or_path bigscience/T0 --model_revision sharded` ``` export BS=1; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 deepspeed \ --num_gpus=1 examples/pytorch/translation/run_translation.py \ --model_name_or_path bigscience/T0 --model_revision sharded --output_dir \ output_dir --adam_eps 1e-06 --evaluation_strategy=steps --do_eval \ --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step \ --logging_steps 500 --max_source_length 128 --max_target_length 128 \ --overwrite_output_dir --per_device_eval_batch_size 1 --predict_with_generate \ --sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 \ --dataset_config ro-en --source_prefix 'translate English to Romanian: ' \ --val_max_target_length 128 --warmup_steps 50 --max_eval_samples 50 \ --deepspeed tests/deepspeed/ds_config_zero3.json --fp16 --skip_memory_metrics 0 ``` The rest of the command line instructions follow: <details> <summary>Click to expand!</summary> ``` git clone https://huggingface.co/t5-11b python -c "from transformers import AutoModelForSeq2SeqLM; \ model = AutoModelForSeq2SeqLM.from_pretrained('./t5-11b'); \ model.save_pretrained('t5-11b-sharded')" mv t5-11b-sharded/pytorch_model* t5-11b mv t5-11b-sharded/config.json t5-11b cd t5-11b huggingface-cli lfs-enable-largefiles . git checkout -b sharded git rm pytorch_model.bin git rm tf_model.h5 git add pytorch_model* git commit -am "add sharded checkpoint" git push --set-upstream origin sharded cd - ------------------ git clone https://huggingface.co/bigscience/T0_single_prompt python -c "from transformers import AutoModelForSeq2SeqLM; \ model = AutoModelForSeq2SeqLM.from_pretrained('./T0_single_prompt'); \ model.save_pretrained('T0_single_prompt-sharded')" mv T0_single_prompt-sharded/pytorch_model* T0_single_prompt mv T0_single_prompt-sharded/config.json T0_single_prompt cd T0_single_prompt huggingface-cli lfs-enable-largefiles . git checkout -b sharded git rm pytorch_model.bin git add pytorch_model* git commit -am "add sharded checkpoint" git push --set-upstream origin sharded cd - ------------------ git clone https://huggingface.co/bigscience/T0p python -c "from transformers import AutoModelForSeq2SeqLM; \ model = AutoModelForSeq2SeqLM.from_pretrained('./T0p'); \ model.save_pretrained('T0p-sharded')" mv T0p-sharded/pytorch_model* T0p mv T0p-sharded/config.json T0p cd T0p huggingface-cli lfs-enable-largefiles . git checkout -b sharded git rm pytorch_model.bin git add pytorch_model* git commit -am "add sharded checkpoint" git push --set-upstream origin sharded cd - ------------------ git clone https://huggingface.co/bigscience/T0pp python -c "from transformers import AutoModelForSeq2SeqLM; \ model = AutoModelForSeq2SeqLM.from_pretrained('./T0pp'); \ model.save_pretrained('T0pp-sharded')" mv T0pp-sharded/pytorch_model* T0pp mv T0pp-sharded/config.json T0pp cd T0pp huggingface-cli lfs-enable-largefiles . 
git checkout -b sharded git rm pytorch_model.bin git add pytorch_model* git commit -am "add sharded checkpoint" git push --set-upstream origin sharded cd - ``` git clone https://huggingface.co/allenai/unifiedqa-t5-11b git clone https://huggingface.co/allenai/unifiedqa-v2-t5-11b-1363200 git clone https://huggingface.co/allenai/unifiedqa-v2-t5-11b-1251000 git clone https://huggingface.co/allenai/macaw-answer-11b git clone https://huggingface.co/allenai/macaw-11b git clone https://huggingface.co/facebook/xglm-7.5B git clone https://huggingface.co/facebook/incoder-6B git clone https://huggingface.co/facebook/m2m100-12B-last-ckpt git clone https://huggingface.co/facebook/m2m100-12B-avg-10-ckpt git clone https://huggingface.co/facebook/m2m100-12B-avg-5-ckpt ### autogenerate the code for above models ### perl -le '$q=chr(39); print qq[ python -c "from transformers import AutoModelForSeq2SeqLM; \\ model = AutoModelForSeq2SeqLM.from_pretrained($q./$_$q); \\ model.save_pretrained($q$_-sharded$q)" mv $_-sharded/pytorch_model* $_ mv $_-sharded/config.json $_ cd $_ huggingface-cli lfs-enable-largefiles . git lfs untrack '*.bin.*' git checkout -b sharded git rm pytorch_model.bin git add pytorch_model* git commit -am "add sharded checkpoint" git push --set-upstream origin sharded cd - ------------------ ] for @ARGV' unifiedqa-t5-11b unifiedqa-v2-t5-11b-1363200 unifiedqa-v2-t5-11b-1251000 macaw-answer-11b macaw-11b xglm-7.5B incoder-6B m2m100-12B-last-ckpt m2m100-12B-avg-10-ckpt m2m100-12B-avg-5-ckpt ------------ python -c "from transformers import AutoModelForSeq2SeqLM; \ model = AutoModelForSeq2SeqLM.from_pretrained('./unifiedqa-t5-11b'); \ model.save_pretrained('unifiedqa-t5-11b-sharded')" mv unifiedqa-t5-11b-sharded/pytorch_model* unifiedqa-t5-11b mv unifiedqa-t5-11b-sharded/config.json unifiedqa-t5-11b cd unifiedqa-t5-11b huggingface-cli lfs-enable-largefiles . git lfs untrack *.bin.* git checkout -b sharded git rm pytorch_model.bin git add pytorch_model* git commit -am "add sharded checkpoint" git push --set-upstream origin sharded cd - ------------------ python -c "from transformers import AutoModelForSeq2SeqLM; \ model = AutoModelForSeq2SeqLM.from_pretrained('./unifiedqa-v2-t5-11b-1363200'); \ model.save_pretrained('unifiedqa-v2-t5-11b-1363200-sharded')" mv unifiedqa-v2-t5-11b-1363200-sharded/pytorch_model* unifiedqa-v2-t5-11b-1363200 mv unifiedqa-v2-t5-11b-1363200-sharded/config.json unifiedqa-v2-t5-11b-1363200 cd unifiedqa-v2-t5-11b-1363200 huggingface-cli lfs-enable-largefiles . git lfs untrack *.bin.* git checkout -b sharded git rm pytorch_model.bin git add pytorch_model* git commit -am "add sharded checkpoint" git push --set-upstream origin sharded cd - ------------------ python -c "from transformers import AutoModelForSeq2SeqLM; \ model = AutoModelForSeq2SeqLM.from_pretrained('./unifiedqa-v2-t5-11b-1251000'); \ model.save_pretrained('unifiedqa-v2-t5-11b-1251000-sharded')" mv unifiedqa-v2-t5-11b-1251000-sharded/pytorch_model* unifiedqa-v2-t5-11b-1251000 mv unifiedqa-v2-t5-11b-1251000-sharded/config.json unifiedqa-v2-t5-11b-1251000 cd unifiedqa-v2-t5-11b-1251000 huggingface-cli lfs-enable-largefiles . 
git lfs untrack *.bin.* git checkout -b sharded git rm pytorch_model.bin git add pytorch_model* git commit -am "add sharded checkpoint" git push --set-upstream origin sharded cd - ------------------ python -c "from transformers import AutoModelForSeq2SeqLM; \ model = AutoModelForSeq2SeqLM.from_pretrained('./macaw-answer-11b'); \ model.save_pretrained('macaw-answer-11b-sharded')" mv macaw-answer-11b-sharded/pytorch_model* macaw-answer-11b mv macaw-answer-11b-sharded/config.json macaw-answer-11b cd macaw-answer-11b huggingface-cli lfs-enable-largefiles . git lfs untrack *.bin.* git checkout -b sharded git rm pytorch_model.bin git add pytorch_model* git commit -am "add sharded checkpoint" git push --set-upstream origin sharded cd - ------------------ python -c "from transformers import AutoModelForSeq2SeqLM; \ model = AutoModelForSeq2SeqLM.from_pretrained('./macaw-11b'); \ model.save_pretrained('macaw-11b-sharded')" mv macaw-11b-sharded/pytorch_model* macaw-11b mv macaw-11b-sharded/config.json macaw-11b cd macaw-11b huggingface-cli lfs-enable-largefiles . git lfs untrack *.bin.* git checkout -b sharded git rm pytorch_model.bin git add pytorch_model* git commit -am "add sharded checkpoint" git push --set-upstream origin sharded cd - ------------------ python -c "from transformers import AutoModelForCausalLM; \ model = AutoModelForCausalLM.from_pretrained('./xglm-7.5B'); \ model.save_pretrained('xglm-7.5B-sharded')" mv xglm-7.5B-sharded/pytorch_model* xglm-7.5B mv xglm-7.5B-sharded/config.json xglm-7.5B cd xglm-7.5B huggingface-cli lfs-enable-largefiles . git lfs untrack *.bin.* git checkout -b sharded git rm pytorch_model.bin git add pytorch_model* git commit -am "add sharded checkpoint" git push --set-upstream origin sharded cd - ------------------ python -c "from transformers import AutoModelForCausalLM; \ model = AutoModelForCausalLM.from_pretrained('./incoder-6B'); \ model.save_pretrained('incoder-6B-sharded')" mv incoder-6B-sharded/pytorch_model* incoder-6B mv incoder-6B-sharded/config.json incoder-6B cd incoder-6B huggingface-cli lfs-enable-largefiles . git lfs untrack *.bin.* git checkout -b sharded git rm pytorch_model.bin git add pytorch_model* git commit -am "add sharded checkpoint" git push --set-upstream origin sharded cd - ------------------ python -c "from transformers import AutoModelForSeq2SeqLM; \ model = AutoModelForSeq2SeqLM.from_pretrained('./m2m100-12B-last-ckpt'); \ model.save_pretrained('m2m100-12B-last-ckpt-sharded')" mv m2m100-12B-last-ckpt-sharded/pytorch_model* m2m100-12B-last-ckpt mv m2m100-12B-last-ckpt-sharded/config.json m2m100-12B-last-ckpt cd m2m100-12B-last-ckpt huggingface-cli lfs-enable-largefiles . git lfs untrack *.bin.* git checkout -b sharded git rm pytorch_model.bin git add pytorch_model* git commit -am "add sharded checkpoint" git push --set-upstream origin sharded cd - ------------------ python -c "from transformers import AutoModelForSeq2SeqLM; \ model = AutoModelForSeq2SeqLM.from_pretrained('./m2m100-12B-avg-10-ckpt'); \ model.save_pretrained('m2m100-12B-avg-10-ckpt-sharded')" mv m2m100-12B-avg-10-ckpt-sharded/pytorch_model* m2m100-12B-avg-10-ckpt mv m2m100-12B-avg-10-ckpt-sharded/config.json m2m100-12B-avg-10-ckpt cd m2m100-12B-avg-10-ckpt huggingface-cli lfs-enable-largefiles . 
git lfs untrack *.bin.* git checkout -b sharded git rm pytorch_model.bin git add pytorch_model* git commit -am "add sharded checkpoint" git push --set-upstream origin sharded cd - ------------------ python -c "from transformers import AutoModelForSeq2SeqLM; \ model = AutoModelForSeq2SeqLM.from_pretrained('./m2m100-12B-avg-5-ckpt'); \ model.save_pretrained('m2m100-12B-avg-5-ckpt-sharded')" mv m2m100-12B-avg-5-ckpt-sharded/pytorch_model* m2m100-12B-avg-5-ckpt mv m2m100-12B-avg-5-ckpt-sharded/config.json m2m100-12B-avg-5-ckpt cd m2m100-12B-avg-5-ckpt huggingface-cli lfs-enable-largefiles . git lfs untrack *.bin.* git checkout -b sharded git rm pytorch_model.bin git add pytorch_model* git commit -am "add sharded checkpoint" git push --set-upstream origin sharded cd - </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16884/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16884/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16883
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16883/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16883/comments
https://api.github.com/repos/huggingface/transformers/issues/16883/events
https://github.com/huggingface/transformers/pull/16883
1,211,532,196
PR_kwDOCUB6oc42lZFc
16,883
Integrating R3M Models into Transformers
{ "login": "suraj-nair-1", "id": 12943366, "node_id": "MDQ6VXNlcjEyOTQzMzY2", "avatar_url": "https://avatars.githubusercontent.com/u/12943366?v=4", "gravatar_id": "", "url": "https://api.github.com/users/suraj-nair-1", "html_url": "https://github.com/suraj-nair-1", "followers_url": "https://api.github.com/users/suraj-nair-1/followers", "following_url": "https://api.github.com/users/suraj-nair-1/following{/other_user}", "gists_url": "https://api.github.com/users/suraj-nair-1/gists{/gist_id}", "starred_url": "https://api.github.com/users/suraj-nair-1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/suraj-nair-1/subscriptions", "organizations_url": "https://api.github.com/users/suraj-nair-1/orgs", "repos_url": "https://api.github.com/users/suraj-nair-1/repos", "events_url": "https://api.github.com/users/suraj-nair-1/events{/privacy}", "received_events_url": "https://api.github.com/users/suraj-nair-1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks a lot for your contribution!\r\n\r\n@NielsRogge, could you do a first pass on this model?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,650
1,653
1,653
NONE
null
# What does this PR do?

This PR integrates the new R3M visual encoder model into transformers. See this [issue](https://github.com/huggingface/transformers/issues/16403). In its current state, the new model/config can be loaded and trained, and the pre-trained models located [here](https://huggingface.co/surajnair) can be loaded using `AutoModel.from_pretrained` and behave as expected. However, the model is simply a visual encoder on a ResNet backbone, taking a batch of images, passing them through a pre-trained ResNet-18/34/50, and returning a batch of embeddings. Therefore most of the testing infrastructure fails (as this is not an NLP model). There are also probably boilerplate versions of the model (e.g. CausalLM) which are not applicable.

Fixes #16403

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? - still a WIP.

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @edbeeching @LysandreJik have been helping me so far.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16883/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16883/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16883", "html_url": "https://github.com/huggingface/transformers/pull/16883", "diff_url": "https://github.com/huggingface/transformers/pull/16883.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16883.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/16882
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16882/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16882/comments
https://api.github.com/repos/huggingface/transformers/issues/16882/events
https://github.com/huggingface/transformers/pull/16882
1,211,468,787
PR_kwDOCUB6oc42lMeJ
16,882
Documentation: Spanish translation of fast_tokenizers.mdx
{ "login": "jloayza10", "id": 62972713, "node_id": "MDQ6VXNlcjYyOTcyNzEz", "avatar_url": "https://avatars.githubusercontent.com/u/62972713?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jloayza10", "html_url": "https://github.com/jloayza10", "followers_url": "https://api.github.com/users/jloayza10/followers", "following_url": "https://api.github.com/users/jloayza10/following{/other_user}", "gists_url": "https://api.github.com/users/jloayza10/gists{/gist_id}", "starred_url": "https://api.github.com/users/jloayza10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jloayza10/subscriptions", "organizations_url": "https://api.github.com/users/jloayza10/orgs", "repos_url": "https://api.github.com/users/jloayza10/repos", "events_url": "https://api.github.com/users/jloayza10/events{/privacy}", "received_events_url": "https://api.github.com/users/jloayza10/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you @jloayza10! Could you please add `fast_tokenizers` to [`transformers/docs/source/es/_toctree.yml`](https://github.com/huggingface/transformers/blob/main/docs/source/es/_toctree.yml)? As a reference, you can use the [new Translation](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md) guide (section \"✍️ Start translating\"). This would allow the tests to pass.", "_The documentation is not available anymore as the PR was closed or merged._", "Hi @omarespejel , I added `fast_tokenizers` to the _toctree.yml file.\r\nThere is still a `check_code_quality` check that is failing, are there further modifications on my part to perform to address this or is this ok?", "Thank you very much @jloayza10! This was great. Please let me know if you would like to translate another file 🤗" ]
1,650
1,652
1,652
CONTRIBUTOR
null
# What does this PR do?

Fixes #15947

Translation to Spanish of fast_tokenizers.mdx

## Before submitting

- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? #15947
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16882/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16882/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16882", "html_url": "https://github.com/huggingface/transformers/pull/16882", "diff_url": "https://github.com/huggingface/transformers/pull/16882.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16882.patch", "merged_at": 1652325944000 }
https://api.github.com/repos/huggingface/transformers/issues/16881
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16881/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16881/comments
https://api.github.com/repos/huggingface/transformers/issues/16881/events
https://github.com/huggingface/transformers/pull/16881
1,211,414,671
PR_kwDOCUB6oc42lBjY
16,881
Fix PyTorch RAG tests GPU OOM
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Also cc @patil-suraj and @stas00 to see if they have suggestions", "Good for me!", "> What cache is this emptying since the model is not deleted? I think this is a symptom there is a memory leak in the model, which would need to be fixed.\r\n\r\nFrom the documentations [torch.cuda.empty_cache](https://pytorch.org/docs/stable/generated/torch.cuda.empty_cache.html) and [Memory management](https://pytorch.org/docs/stable/notes/cuda.html#memory-management), in particular\r\n\r\n```\r\nPyTorch uses a caching memory allocator to speed up memory allocations.\r\nThis allows fast memory deallocation without device synchronizations.\r\nHowever, the unused memory managed by the allocator will still show as if used in nvidia-smi. \r\n```\r\nand \r\n```\r\nCalling empty_cache() releases all unused cached memory from PyTorch so that those can be used by other GPU applications.\r\n```\r\n\r\nmy understanding is: `PyTorch` will keep the allocated GPU memory for later use in order to avoid to reduce the number of times of memory allocation - the goal is to speed up some (memory) operations.\r\n\r\nThis doesn't mean those GPU memory are leaked - PyTorch still controls them. But for other applications (say `TensorFlow` or `nvidia-smi`), it means those GPU memory are not available. Use `empty_cache()` will release them.\r\n\r\nOf course, the memory occupied by current available tensors won't be release.\r\n", "if you're trying to use multiple programs concurrently accessing the same GPU (regardless if they all are pytorch, or mixed framework), `torch.cuda.empty_cache` is a definite help and is OK to use as long as it's inside the test only and not in `transformers`\r\n\r\nBut why when the pytorch test finishes the torch still has allocated tensors? Why not free them?\r\n\r\nOften `import gc; gc.collect()` is needed to force garbage collection immediately. I'm not sure if this is the case.\r\n\r\n\r\n\r\n", "@stas00 \r\n\r\nMy words in the previous comment might be a bit confusing: we don't have the issue of `the pytorch test finishes the torch still has allocated tensors`. I am just mentioning a general fact (which is quite trivial) that is mentioned in the PyTorch docs I linked.\r\n\r\n`empty_cache()` helps, but not completely. There are still some GPU memory occupied even we leave the testing methods, but still in the same Python process (for example, entering the TF testing module which is launched together with the PT testing).\r\n\r\nThere are some discussions, like this one\r\n\r\nhttps://discuss.pytorch.org/t/pytorch-do-not-clear-gpu-memory-when-return-to-another-function/125944/4", "when you do `torch.ones(1)` it allocates 1-2GB of cuda kernels on the gpu and they remain allocated unless the program is shutdown.\r\n\r\nIn such a case the solution is not to run the program inside `pytest` but to use an external process. Once an external process finishes 100% of gpu memory is returned. 
(Except the tests are then much slower because it has to launch an external program)\r\n\r\nI created a special framework for running external programs\r\n\r\nhttps://github.com/huggingface/transformers/blob/6d90d76f5db344e333ec184cc6f414abe2aa6559/src/transformers/testing_utils.py#L1536\r\n\r\nYou can see it extensively used in the deepspeed and extended tests.", "Yeah, I know this approach, but wasn't very sure how not use it in a good way with testing.\r\nMaybe we can discuss this!\r\n\r\nBy the way: `torch.ones(1) it allocates 1-2GB of cuda kernels` --> I tried it and it seems a correct statement.\r\n\r\nI am really surprised (and not very happy) that there is no way to free these kinds of memory allocation. ", "Chances are is that there was no need for that until now and nobody asked for it. If I may propose you could create a feature request at pytorch asking for a feature that releases the cuda env completely. It's very possible that there is a C++ API to do that and it just wasn't made available in python.\r\n\r\nThe use case can be for example this exact situation, where the same program needs to alternate between different frameworks in the same run and needs to be able to access all of gpu's memory.\r\n\r\nDoes tf give the memory fully back when it's done and the process is still running?", "If the goal is to recover as much memory as possible, shouldn't we delete the model before calling the `empty_cache` function?", "There have been some requests in torch GH page without response\r\n\r\nhttps://github.com/pytorch/pytorch/issues/28829 \r\n(this one is on 2019/10)\r\n\r\nSame situation for TF: not fully giving back GPU memory, and the requests are always without response", "> If the goal is to recover as much memory as possible, shouldn't we delete the model before calling the empty_cache function?\r\n\r\n@sgugger You are right! I tried it and just like you said.\r\n\r\nMaybe I can just implement `tearDownModule()` which calls `empty_cache()`, so we don't need to `del models` + `empty_cache()` in all testing methods ..?\r\n\r\nI am going to try this and see if how it goes.\r\n(tried with toy examples, and works as expected)", "Note that the `tearDown` is only called at the end of all tests of the module, so won't do the same thing you implemented (clean up at the end of each test).", "Implement `tearDown` for `unittest.TestCase` subclass (and make sure to call its `super`) - this one will be called at the end of each test.\r\n\r\nand before `empty_cache` it often helps to call `gc.collect()` to make it deterministic.", "OK, I can do that.\r\n\r\nBut I feel that while we are in PyTorch test itself, we don't need to call `empty_cache()` --> because the occupied cache will be managed by `torch` and will be assigned to subsequential torch operations if they require GPU.\r\n\r\nThis `empty_cache()` is mainly for other applications to use `GPU`, like TF for example, in the same process.\r\n\r\nAnd since the TF tests are in other modules, `tearDownModule()` in PT test module should be enough.\r\n\r\nBut again, I can go for `tearDown()`", "Of course, we are discussing this particular test. I wasn't suggesting to do it to all tests.\r\n\r\nThe reason I suggested `gc.collect` before `empty_cache` is because when you free the model it's not guaranteed it'll be immediately freed due to how python's GC works. 
So if you want a reliable deterministic memory release inside a long running `pytest` process, `gc.collect` followed by `empty_cache` is how you make things deterministic.", "@sgugger @stas00 \r\n\r\nWith the suggestions, all TF RAG tests pass now on GPU! 🔥 Thank you!", "Unrelated to this PR, but since you work a lot with tests (thank you!), in case you're not aware of it, awhile ago I have developed:\r\n\r\nhttps://github.com/huggingface/transformers/blob/6d90d76f5db344e333ec184cc6f414abe2aa6559/src/transformers/testing_utils.py#L998\r\n\r\nwhich extends `unittest.TestCase` with various handy features - like automatic removal of temp dirs, accessors to file paths and many others. It's extensively documented in the module itself and also in https://huggingface.co/docs/transformers/testing\r\n\r\nYou don't need to do anything about it, other than perhaps I hope it'll save you time in the future.", "Thank you, @stas00 . Maybe I can play with it, and at some point have a discussion with other team members to see if to use it by default!", "And of course please feel free to extend it if there are other features that can be re-used.", "Would like to have @sgugger and/or @LysandreJik opinion before merge :-) ", "Merge now - we should have 13 test failure fewer (if this PR also works well on multiple GPUs too)" ]
1,650
1,650
1,650
COLLABORATOR
null
# What does this PR do?

Fix PyTorch RAG tests GPU OOM. The GPU OOM error

```
E tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[32,5,16,300,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:GatherV2]
```

can be seen in https://github.com/huggingface/transformers/runs/6100697349?check_suite_focus=true

## Results

- Without this PR, after the PyTorch RAG tests run, torch still occupies about `9.5 GB` of GPU memory, and 10 TF RAG tests fail.
- With this PR, all PT/TF RAG tests pass without GPU OOM.
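A minimal sketch of the cleanup pattern converged on in the review thread above — delete the model, force garbage collection, then release PyTorch's CUDA cache so other frameworks in the same process can reuse the memory. The class name is illustrative, not the exact test code merged:

```python
import gc
import unittest

import torch


class RagIntegrationTest(unittest.TestCase):  # illustrative name
    def tearDown(self):
        super().tearDown()
        # Make sure any model still referenced by the finished test is actually freed...
        gc.collect()
        # ...then hand PyTorch's cached CUDA blocks back to the driver so that,
        # e.g., the TF tests launched in the same process can allocate them.
        torch.cuda.empty_cache()
```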
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16881/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16881/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16881", "html_url": "https://github.com/huggingface/transformers/pull/16881", "diff_url": "https://github.com/huggingface/transformers/pull/16881.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16881.patch", "merged_at": 1650900837000 }
https://api.github.com/repos/huggingface/transformers/issues/16880
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16880/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16880/comments
https://api.github.com/repos/huggingface/transformers/issues/16880/events
https://github.com/huggingface/transformers/pull/16880
1,211,336,420
PR_kwDOCUB6oc42kxCH
16,880
Changes in create_optimizer to support tensor parallelism with SMP
{ "login": "cavdard", "id": 44590949, "node_id": "MDQ6VXNlcjQ0NTkwOTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/44590949?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cavdard", "html_url": "https://github.com/cavdard", "followers_url": "https://api.github.com/users/cavdard/followers", "following_url": "https://api.github.com/users/cavdard/following{/other_user}", "gists_url": "https://api.github.com/users/cavdard/gists{/gist_id}", "starred_url": "https://api.github.com/users/cavdard/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cavdard/subscriptions", "organizations_url": "https://api.github.com/users/cavdard/orgs", "repos_url": "https://api.github.com/users/cavdard/repos", "events_url": "https://api.github.com/users/cavdard/events{/privacy}", "received_events_url": "https://api.github.com/users/cavdard/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,650
1,650
CONTRIBUTOR
null
# What does this PR do?

- Changes in `create_optimizer` to support tensor parallelism with SMP. For SMP, the optimizer is created from the wrapped (SMP) model.

Fixes # (issue)

## Before submitting

- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16880/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16880/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16880", "html_url": "https://github.com/huggingface/transformers/pull/16880", "diff_url": "https://github.com/huggingface/transformers/pull/16880.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16880.patch", "merged_at": 1650655479000 }
https://api.github.com/repos/huggingface/transformers/issues/16879
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16879/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16879/comments
https://api.github.com/repos/huggingface/transformers/issues/16879/events
https://github.com/huggingface/transformers/pull/16879
1,211,237,237
PR_kwDOCUB6oc42kbZ-
16,879
TF: XLA repetition penalty
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thinking about it more, a multiplicative logit penalty really doesn't work, right? Even if we use the reciprocal when the logit is negative, the scale of the penalty depends on the logit's distance from 0. For example, a logit in the range -0.1 to +0.1 will barely be moved by the penalty term, but such logits usually have quite a high probability of being chosen, because most logits are large and negative.", "(merging as the main goal was to port to XLA but, by all means, continue the discussion :) )" ]
1,650
1,650
1,650
MEMBER
null
# What does this PR do?

This PR adds our first XLA-compatible TF logit processor, as well as corresponding tests. Since this is the first of a series of small (but similar) PRs, I'd like to request a more thorough review, so the remaining ones are quick.

More specifically, this PR makes three changes:
1. Rewrites the TF repetition penalty processor so as to be XLA-compatible;
2. Adds XLA tests for the processor;
3. Since the test mentioned in 2. was a near copy/paste of the non-XLA test, I've decided to split the test into three parts to improve code reuse and reduce errors from ad hoc edits (as the first and last part can be reused in the two versions of the test, XLA and non-XLA):
   - get inputs
   - run the processor
   - check the output
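For readers who only skim the diff, a rough illustration of what "XLA-compatible" means here: fixed-shape gather/scatter tensor ops instead of Python-side indexing. This is a sketch of the general technique, not the exact implementation merged in this PR:

```python
import tensorflow as tf


def repetition_penalty_sketch(input_ids: tf.Tensor, logits: tf.Tensor, penalty: float) -> tf.Tensor:
    # input_ids: (batch, seq_len) previously generated tokens
    # logits:    (batch, vocab_size) scores for the next token
    input_ids = tf.cast(input_ids, tf.int32)
    prev_logits = tf.gather(logits, input_ids, axis=-1, batch_dims=1)  # (batch, seq_len)
    # CTRL-style rule: shrink positive logits, grow negative ones
    penalized = tf.where(prev_logits > 0, prev_logits / penalty, prev_logits * penalty)

    # Scatter the penalized scores back without any dynamic-shape ops.
    batch, seq_len = tf.shape(input_ids)[0], tf.shape(input_ids)[1]
    batch_idx = tf.repeat(tf.range(batch), seq_len)
    indices = tf.stack([batch_idx, tf.reshape(input_ids, [-1])], axis=1)
    # Duplicate token indices write identical values, so write order is irrelevant.
    return tf.tensor_scatter_nd_update(logits, indices, tf.reshape(penalized, [-1]))
```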
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16879/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16879/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16879", "html_url": "https://github.com/huggingface/transformers/pull/16879", "diff_url": "https://github.com/huggingface/transformers/pull/16879.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16879.patch", "merged_at": 1650648572000 }
https://api.github.com/repos/huggingface/transformers/issues/16878
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16878/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16878/comments
https://api.github.com/repos/huggingface/transformers/issues/16878/events
https://github.com/huggingface/transformers/pull/16878
1,211,190,871
PR_kwDOCUB6oc42kRcN
16,878
Fix doctest list
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,650
1,650
COLLABORATOR
null
# What does this PR do?

We have `docs/source/en/model_doc/t5v1_1.mdx` in `documentation_tests.txt`, but the file is actually `docs/source/en/model_doc/t5v1.1.mdx`. This causes the doctest run to fail with:

```
ERROR: file or directory not found: docs/source/en/model_doc/t5v1_1.mdx
```

(https://github.com/huggingface/transformers/runs/6104280137?check_suite_focus=true)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16878/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16878/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16878", "html_url": "https://github.com/huggingface/transformers/pull/16878", "diff_url": "https://github.com/huggingface/transformers/pull/16878.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16878.patch", "merged_at": 1650557534000 }
https://api.github.com/repos/huggingface/transformers/issues/16877
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16877/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16877/comments
https://api.github.com/repos/huggingface/transformers/issues/16877/events
https://github.com/huggingface/transformers/issues/16877
1,211,166,345
I_kwDOCUB6oc5IMO6J
16,877
Quick tour_ AutoModel Introduction
{ "login": "Clearloveyuan", "id": 21138840, "node_id": "MDQ6VXNlcjIxMTM4ODQw", "avatar_url": "https://avatars.githubusercontent.com/u/21138840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Clearloveyuan", "html_url": "https://github.com/Clearloveyuan", "followers_url": "https://api.github.com/users/Clearloveyuan/followers", "following_url": "https://api.github.com/users/Clearloveyuan/following{/other_user}", "gists_url": "https://api.github.com/users/Clearloveyuan/gists{/gist_id}", "starred_url": "https://api.github.com/users/Clearloveyuan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Clearloveyuan/subscriptions", "organizations_url": "https://api.github.com/users/Clearloveyuan/orgs", "repos_url": "https://api.github.com/users/Clearloveyuan/repos", "events_url": "https://api.github.com/users/Clearloveyuan/events{/privacy}", "received_events_url": "https://api.github.com/users/Clearloveyuan/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,650
1,653
1,653
NONE
null
### System Info

```shell
- `transformers` version: 3.4.0
- Platform: Linux-4.15.0-144-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```

### Who can help?

_No response_

### Information

- [x] The official example scripts
- [ ] My own modified scripts

### Tasks

- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

from transformers import AutoModelForSequenceClassification
model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
pt_model = AutoModelForSequenceClassification.from_pretrained(model_name, return_dict=True)
pt_outputs = pt_model(**pt_batch)
from torch import nn
pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1)
print(pt_predictions)

### Expected behavior

```shell
from transformers import AutoModelForSequenceClassification
model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
pt_model = AutoModelForSequenceClassification.from_pretrained(model_name)
pt_outputs = pt_model(**pt_batch)
from torch import nn
pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1)
print(pt_predictions)
```

I think `pt_outputs.logits` is wrong here, since `from_pretrained(model_name)` is called without `return_dict=True`.
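For context on the report above: in transformers v3.x the output type of a forward pass depends on `return_dict`. A minimal illustration of the difference, assuming `pt_batch` is a tokenized batch as in the quickstart:

```python
# With return_dict=True the forward pass returns a ModelOutput,
# so attribute access works:
outputs = pt_model(**pt_batch, return_dict=True)
logits = outputs.logits

# Without it (the default in v3.4), the forward pass returns a plain tuple,
# so the logits must be indexed positionally:
outputs = pt_model(**pt_batch)
logits = outputs[0]
```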
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16877/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16877/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16876
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16876/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16876/comments
https://api.github.com/repos/huggingface/transformers/issues/16876/events
https://github.com/huggingface/transformers/pull/16876
1,211,099,165
PR_kwDOCUB6oc42j9zO
16,876
Replace deprecated logger.warn with warning
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,650
1,650
CONTRIBUTOR
null
From [Python docs](https://docs.python.org/3/library/logging.html#logging.Logger.warning): _There is an obsolete method `warn` which is functionally identical to `warning`. As `warn` is deprecated, please do not use it - use `warning` instead._
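As a one-line illustration of the swap performed throughout the codebase (the logger name is illustrative):

```python
import logging

logger = logging.getLogger("transformers.example")  # illustrative name

logger.warning("`max_length` is deprecated")  # preferred spelling
# logger.warn("`max_length` is deprecated")   # obsolete alias; emits a DeprecationWarning
```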
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16876/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16876/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16876", "html_url": "https://github.com/huggingface/transformers/pull/16876", "diff_url": "https://github.com/huggingface/transformers/pull/16876.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16876.patch", "merged_at": 1650913971000 }
https://api.github.com/repos/huggingface/transformers/issues/16875
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16875/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16875/comments
https://api.github.com/repos/huggingface/transformers/issues/16875/events
https://github.com/huggingface/transformers/pull/16875
1,211,053,100
PR_kwDOCUB6oc42j0ID
16,875
[WIP] Add Jukebox model
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "Tokenizer and corresponding test should be done. Lacking some detailed description and also probably something about the arguments in the init that are not data but I don't remember if I should create setters (@patrickvonplaten would love to have your review)", "Cool nice to see much progress here! \r\n\r\nFeel free to also add a file that shows how you compare OpenAI's original to the current (HF) implementation", "What happened to the git commit history here?", "I rebased instead of merging 🤕 Will create a new PR to replace that one ", "See followup in #17826 " ]
1,650
1,655
1,655
COLLABORATOR
null
This is a draft pull request.

# What does this PR do?

This PR will progressively add the [Jukebox](https://openai.com/blog/jukebox/) model to the hub. It is linked to [#16870](https://github.com/huggingface/transformers/issues/16870).

# Currently planned steps (WIP)

- [x] Create template files with `transformers-cli add-new-model-like`
- [x] `src/transformers/tokenization_jukebox.py`
- [x] `src/transformers/test_tokenization_jukebox.py`
- [x] `src/transformers/configuration_jukebox.py`
- [x] `src/transformers/modeling_jukebox.py`
- [ ] `src/transformers/configuration_jukebox.py`
- [ ] `docs/source/model_doc/jukebox.rst`
- [ ] `src/transformers/tokenization_jukebox_fast.py` (will most probably use a WordLevel tokenizer). Also requires implementing a converter function `class JukeboxConverter(Converter):`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16875/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 3, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16875/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16875", "html_url": "https://github.com/huggingface/transformers/pull/16875", "diff_url": "https://github.com/huggingface/transformers/pull/16875.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16875.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/16874
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16874/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16874/comments
https://api.github.com/repos/huggingface/transformers/issues/16874/events
https://github.com/huggingface/transformers/pull/16874
1,211,031,586
PR_kwDOCUB6oc42jwYP
16,874
Use ACT2FN to fetch ReLU activation in the T5 model
{ "login": "eldarkurtic", "id": 8884008, "node_id": "MDQ6VXNlcjg4ODQwMDg=", "avatar_url": "https://avatars.githubusercontent.com/u/8884008?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eldarkurtic", "html_url": "https://github.com/eldarkurtic", "followers_url": "https://api.github.com/users/eldarkurtic/followers", "following_url": "https://api.github.com/users/eldarkurtic/following{/other_user}", "gists_url": "https://api.github.com/users/eldarkurtic/gists{/gist_id}", "starred_url": "https://api.github.com/users/eldarkurtic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eldarkurtic/subscriptions", "organizations_url": "https://api.github.com/users/eldarkurtic/orgs", "repos_url": "https://api.github.com/users/eldarkurtic/repos", "events_url": "https://api.github.com/users/eldarkurtic/events{/privacy}", "received_events_url": "https://api.github.com/users/eldarkurtic/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,650
1,650
CONTRIBUTOR
null
- all activations should be fetched through ACT2FN
- for ReLU, it returns an `nn.Module`, which allows attaching hooks to the activation function and makes it show up in the module tree when calling `print(model)`
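A small sketch of the difference this makes (the `ACT2FN` mapping and import path exist in transformers; the hook itself is just an example):

```python
import torch
from transformers.activations import ACT2FN

act = ACT2FN["relu"]  # an nn.ReLU() module rather than the bare functional
print(act)            # -> ReLU(), which is also what now shows up in print(model)

# Because it is a module, forward hooks can be attached to it:
handle = act.register_forward_hook(lambda mod, inp, out: print("relu out:", out.shape))
act(torch.randn(2, 3))
handle.remove()
```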
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16874/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16874", "html_url": "https://github.com/huggingface/transformers/pull/16874", "diff_url": "https://github.com/huggingface/transformers/pull/16874.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16874.patch", "merged_at": 1650548009000 }
https://api.github.com/repos/huggingface/transformers/issues/16873
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16873/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16873/comments
https://api.github.com/repos/huggingface/transformers/issues/16873/events
https://github.com/huggingface/transformers/issues/16873
1,211,020,648
I_kwDOCUB6oc5ILrVo
16,873
The bare ViLT Model not working with CUDA ?
{ "login": "PrithivirajDamodaran", "id": 7071019, "node_id": "MDQ6VXNlcjcwNzEwMTk=", "avatar_url": "https://avatars.githubusercontent.com/u/7071019?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PrithivirajDamodaran", "html_url": "https://github.com/PrithivirajDamodaran", "followers_url": "https://api.github.com/users/PrithivirajDamodaran/followers", "following_url": "https://api.github.com/users/PrithivirajDamodaran/following{/other_user}", "gists_url": "https://api.github.com/users/PrithivirajDamodaran/gists{/gist_id}", "starred_url": "https://api.github.com/users/PrithivirajDamodaran/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PrithivirajDamodaran/subscriptions", "organizations_url": "https://api.github.com/users/PrithivirajDamodaran/orgs", "repos_url": "https://api.github.com/users/PrithivirajDamodaran/repos", "events_url": "https://api.github.com/users/PrithivirajDamodaran/events{/privacy}", "received_events_url": "https://api.github.com/users/PrithivirajDamodaran/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! You're getting this error because your model is on GPU while your inputs are not. Could you try casting your inputs to GPU?\r\n\r\n```py\r\ninputs = {k:v.to('cuda') for k,v in inputs.items()}\r\n```", "Thanks, I was assuming ViltProcessor will do it for me. Closing" ]
1,650
1,650
1,650
NONE
null
@NielsRogge - Tried in the stock Colab notebook with the latest version of Transformers. Probably `to(device)` is not handled uniformly for the text embeddings or image embeddings?

What I am trying to do: I need to extract the raw `hidden_state` or `pooler_output` for large batches of image + text pairs, and use it as a feature for training my custom model.

```python
from transformers import ViltProcessor, ViltModel
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
vilt_model = ViltModel.from_pretrained("dandelin/vilt-b32-mlm")
vilt_processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm")
vilt_model.to(device)  # Added this, as it is slow on CPU

from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "hello world"

inputs = vilt_processor(image, text, return_tensors="pt")
outputs = vilt_model(**inputs)
```

Throws

```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)
```
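As the resolution in the comments above points out, the processor returns CPU tensors regardless of where the model lives, so the inputs have to be moved explicitly. A minimal fix for the snippet above:

```python
inputs = vilt_processor(image, text, return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}  # move every input tensor to the model's device
outputs = vilt_model(**inputs)
```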
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16873/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16873/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/16872
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16872/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16872/comments
https://api.github.com/repos/huggingface/transformers/issues/16872/events
https://github.com/huggingface/transformers/pull/16872
1,210,985,228
PR_kwDOCUB6oc42jonS
16,872
Return input_ids in ImageGPT feature extractor
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650
1,650
1,650
COLLABORATOR
null
# What does this PR do? This goes along with the deprecation of `pixel_values` to the profit of `input_ids` in the `ImageGPT` models.
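A short sketch of what the change means for callers — the checkpoint id is real, but the `image` variable is assumed to be a PIL image, and on older versions the key is still emitted as `pixel_values`:

```python
from transformers import ImageGPTFeatureExtractor

feature_extractor = ImageGPTFeatureExtractor.from_pretrained("openai/imagegpt-small")
encoding = feature_extractor(images=image, return_tensors="pt")

# The color-cluster indices now come back under `input_ids`,
# matching the argument name the ImageGPT models expect:
input_ids = encoding["input_ids"]
```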
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16872/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16872/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16872", "html_url": "https://github.com/huggingface/transformers/pull/16872", "diff_url": "https://github.com/huggingface/transformers/pull/16872.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16872.patch", "merged_at": 1650546540000 }
https://api.github.com/repos/huggingface/transformers/issues/16871
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16871/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16871/comments
https://api.github.com/repos/huggingface/transformers/issues/16871/events
https://github.com/huggingface/transformers/pull/16871
1,210,848,034
PR_kwDOCUB6oc42jLb5
16,871
Enabling `imageGPT` auto feature extractor.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger For `ImageGPT`, we don't want to perform padding (otherwise, it fails with `None` as padding value).\r\n\r\nAs you mentioned, `ImageGPTFeatureExactor` outputs `pixel_values` (despite it is not a very good name). So @Narsil 's change\r\n\r\nhttps://github.com/huggingface/transformers/blob/104ee5b3fe1b164d71dec6efd8ddd00a381d13ba/src/transformers/pipelines/base.py#L78-L81\r\n\r\nwill skip padding. (Although the comment could be more specific regarding the exceptional case)", "\r\n> \r\n> will skip padding. (Although the comment could be more specific regarding the exceptional case)\r\n\r\n I added this `# ImageGPT actually will use B, SEQ_LEN not tensor of shape 4` to specify there's an edge case, do you think we could phrase this better ?", "> > will skip padding. (Although the comment could be more specific regarding the exceptional case)\r\n> \r\n> I added this `# ImageGPT actually will use B, SEQ_LEN not tensor of shape 4` to specify there's an edge case, do you think we could phrase this better ?\r\n\r\nThis is good enough for me 😄 ", "Merging #16872 so you can rebase on it and make sure any fix works with that change.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi @sgugger ,\r\n\r\nLost track of this one, is it ok to merge ?", "Good for me as long as the tests passing are on a rebased main. Maybe @NielsRogge can have a look too to confirm it's good to go?" ]
1,650
1,653
1,653
CONTRIBUTOR
null
# What does this PR do?

Attempts to supersede https://github.com/huggingface/transformers/pull/16869 with less specific overrides.

Thanks @ydshieh, you definitely couldn't guess about the options I modified here :)

Fixes # (issue)

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16871/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16871/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16871", "html_url": "https://github.com/huggingface/transformers/pull/16871", "diff_url": "https://github.com/huggingface/transformers/pull/16871.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16871.patch", "merged_at": 1653388246000 }