column               dtype          stats
url                  stringlengths  62–66
repository_url       stringclasses  1 value
labels_url           stringlengths  76–80
comments_url         stringlengths  71–75
events_url           stringlengths  69–73
html_url             stringlengths  50–56
id                   int64          377M–2.15B
node_id              stringlengths  18–32
number               int64          1–29.2k
title                stringlengths  1–487
user                 dict
labels               list
state                stringclasses  2 values
locked               bool           2 classes
assignee             dict
assignees            list
comments             list
created_at           int64          1.54k–1.71k
updated_at           int64          1.54k–1.71k
closed_at            int64          1.54k–1.71k
author_association   stringclasses  4 values
active_lock_reason   stringclasses  2 values
body                 stringlengths  0–234k
reactions            dict
timeline_url         stringlengths  71–75
state_reason         stringclasses  3 values
draft                bool           2 classes
pull_request         dict
https://api.github.com/repos/huggingface/transformers/issues/19883
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19883/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19883/comments
https://api.github.com/repos/huggingface/transformers/issues/19883/events
https://github.com/huggingface/transformers/pull/19883
1,423,266,069
PR_kwDOCUB6oc5BitfF
19,883
Correct README image text
{ "login": "KayleeDavisGitHub", "id": 26849634, "node_id": "MDQ6VXNlcjI2ODQ5NjM0", "avatar_url": "https://avatars.githubusercontent.com/u/26849634?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KayleeDavisGitHub", "html_url": "https://github.com/KayleeDavisGitHub", "followers_url": "https://api.github.com/users/KayleeDavisGitHub/followers", "following_url": "https://api.github.com/users/KayleeDavisGitHub/following{/other_user}", "gists_url": "https://api.github.com/users/KayleeDavisGitHub/gists{/gist_id}", "starred_url": "https://api.github.com/users/KayleeDavisGitHub/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KayleeDavisGitHub/subscriptions", "organizations_url": "https://api.github.com/users/KayleeDavisGitHub/orgs", "repos_url": "https://api.github.com/users/KayleeDavisGitHub/repos", "events_url": "https://api.github.com/users/KayleeDavisGitHub/events{/privacy}", "received_events_url": "https://api.github.com/users/KayleeDavisGitHub/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? Fixes README typo involving the location of a cat and remote predictions. It reverses the "left" and "right" references so it is correct when looking at the images. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Documentation: @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19883/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19883/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19883", "html_url": "https://github.com/huggingface/transformers/pull/19883", "diff_url": "https://github.com/huggingface/transformers/pull/19883.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19883.patch", "merged_at": 1666787881000 }
https://api.github.com/repos/huggingface/transformers/issues/19882
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19882/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19882/comments
https://api.github.com/repos/huggingface/transformers/issues/19882/events
https://github.com/huggingface/transformers/issues/19882
1,423,221,762
I_kwDOCUB6oc5U1KQC
19,882
Allow tuples in fast tokenizer
{ "login": "xhluca", "id": 21180505, "node_id": "MDQ6VXNlcjIxMTgwNTA1", "avatar_url": "https://avatars.githubusercontent.com/u/21180505?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xhluca", "html_url": "https://github.com/xhluca", "followers_url": "https://api.github.com/users/xhluca/followers", "following_url": "https://api.github.com/users/xhluca/following{/other_user}", "gists_url": "https://api.github.com/users/xhluca/gists{/gist_id}", "starred_url": "https://api.github.com/users/xhluca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xhluca/subscriptions", "organizations_url": "https://api.github.com/users/xhluca/orgs", "repos_url": "https://api.github.com/users/xhluca/repos", "events_url": "https://api.github.com/users/xhluca/events{/privacy}", "received_events_url": "https://api.github.com/users/xhluca/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Not sure if there is a specific reason for this or if it was just a mistake when it was introduced. In any case the PR above should fix it.", "Thank you!" ]
1,666
1,666
1,666
CONTRIBUTOR
null
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.15.0-48-generic-x86_64-with-debian-buster-sid - Python version: 3.7.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Jupyter Notebook ### Who can help? @SaulLu ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python import transformers as hft tokenizer = hft.AutoTokenizer.from_pretrained('bert-base-uncased') tokenizer # PreTrainedTokenizerFast(name_or_path='bert-base-uncased', vocab_size=30522, model_max_len=512, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'mask_token': '[MASK]'}) tokenizer( ('hello world', 'foo bar') ) ``` Will give this error: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) ~/ipykernel_29616/3046903860.py in <module> 1 tokenizer( ----> 2 ('hello world', 'foo bar') 3 ) /opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in __call__(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs) 2510 return_length=return_length, 2511 verbose=verbose, -> 2512 **kwargs, 2513 ) 2514 else: /opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs) 2701 return_length=return_length, 2702 verbose=verbose, -> 2703 **kwargs, 2704 ) 2705 /opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py in _batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose) 413 414 if not isinstance(batch_text_or_text_pairs, list): --> 415 raise TypeError(f"batch_text_or_text_pairs has to be a list (got {type(batch_text_or_text_pairs)})") 416 417 # Set the truncation and padding strategy and restore the initial configuration TypeError: batch_text_or_text_pairs has to be a list (got <class 'tuple'>) ``` ### Expected behavior Tuples of str should be supported just like lists of str, as it is the case with non-fast tokenizers. For example: ```python import transformers as hft tokenizer = hft.BertTokenizer.from_pretrained('bert-base-uncased') tokenizer( ('hello world', 'how are you?') ) # {'input_ids': [[101, 7592, 2088, 102], [101, 2129, 2024, 2017, 1029, 102]], 'token_type_ids': [[0, 0, 0, 0], [0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1], [1, 1, 1, 1, 1, 1]]} ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19882/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19882/timeline
completed
null
null
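The traceback in the record above fails on a strict `isinstance(batch_text_or_text_pairs, list)` check, so tuples are rejected even though lists of the same strings work. A minimal sketch of the kind of coercion that would make the two input shapes equivalent; `normalize_batch` is a hypothetical helper for illustration, not part of the transformers API:

```python
from typing import List, Tuple, Union


def normalize_batch(batch: Union[List[str], Tuple[str, ...]]) -> List[str]:
    """Coerce any list or tuple of strings to a plain list, so a later
    strict list check passes either way (illustrative sketch only)."""
    if isinstance(batch, (list, tuple)):
        return list(batch)
    raise TypeError(
        f"batch_text_or_text_pairs has to be a list or tuple (got {type(batch)})"
    )


# Both input shapes now pass the check and produce the same value:
print(normalize_batch(["hello world", "foo bar"]))  # ['hello world', 'foo bar']
print(normalize_batch(("hello world", "foo bar")))  # ['hello world', 'foo bar']
```

A bare string still raises, which preserves the original guard against passing a single un-batched text where a batch is expected.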
https://api.github.com/repos/huggingface/transformers/issues/19881
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19881/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19881/comments
https://api.github.com/repos/huggingface/transformers/issues/19881/events
https://github.com/huggingface/transformers/pull/19881
1,423,177,511
PR_kwDOCUB6oc5Bia5l
19,881
Add BLOOM resources
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
MEMBER
null
From #19848, this PR adds resources for BLOOM.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19881/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19881/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19881", "html_url": "https://github.com/huggingface/transformers/pull/19881", "diff_url": "https://github.com/huggingface/transformers/pull/19881.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19881.patch", "merged_at": 1666895632000 }
https://api.github.com/repos/huggingface/transformers/issues/19880
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19880/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19880/comments
https://api.github.com/repos/huggingface/transformers/issues/19880/events
https://github.com/huggingface/transformers/pull/19880
1,423,171,231
PR_kwDOCUB6oc5BiZjn
19,880
Convert None logits processor/stopping criteria to empty list.
{ "login": "ccmaymay", "id": 457238, "node_id": "MDQ6VXNlcjQ1NzIzOA==", "avatar_url": "https://avatars.githubusercontent.com/u/457238?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ccmaymay", "html_url": "https://github.com/ccmaymay", "followers_url": "https://api.github.com/users/ccmaymay/followers", "following_url": "https://api.github.com/users/ccmaymay/following{/other_user}", "gists_url": "https://api.github.com/users/ccmaymay/gists{/gist_id}", "starred_url": "https://api.github.com/users/ccmaymay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ccmaymay/subscriptions", "organizations_url": "https://api.github.com/users/ccmaymay/orgs", "repos_url": "https://api.github.com/users/ccmaymay/repos", "events_url": "https://api.github.com/users/ccmaymay/events{/privacy}", "received_events_url": "https://api.github.com/users/ccmaymay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thank you for the feedback @gante! I've made the requested changes. I was going to add tests, but then I realized that by changing the defaults to `None`, many existing tests implicitly check that `None` is allowable. 😊" ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #19876 (TypeError from GenerationMixin.generate() when stopping_criteria is None) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gante (?) Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19880/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19880/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19880", "html_url": "https://github.com/huggingface/transformers/pull/19880", "diff_url": "https://github.com/huggingface/transformers/pull/19880.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19880.patch", "merged_at": 1666872018000 }
https://api.github.com/repos/huggingface/transformers/issues/19879
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19879/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19879/comments
https://api.github.com/repos/huggingface/transformers/issues/19879/events
https://github.com/huggingface/transformers/pull/19879
1,423,164,288
PR_kwDOCUB6oc5BiYFe
19,879
Add GPT2 resources
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
MEMBER
null
From #19848, this PR adds resources for GPT2
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19879/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19879/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19879", "html_url": "https://github.com/huggingface/transformers/pull/19879", "diff_url": "https://github.com/huggingface/transformers/pull/19879.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19879.patch", "merged_at": 1666895640000 }
https://api.github.com/repos/huggingface/transformers/issues/19878
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19878/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19878/comments
https://api.github.com/repos/huggingface/transformers/issues/19878/events
https://github.com/huggingface/transformers/pull/19878
1,423,080,809
PR_kwDOCUB6oc5BiFcD
19,878
Add T5 resources
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
MEMBER
null
From #19848, this PR adds resources for T5
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19878/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19878/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19878", "html_url": "https://github.com/huggingface/transformers/pull/19878", "diff_url": "https://github.com/huggingface/transformers/pull/19878.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19878.patch", "merged_at": 1666895617000 }
https://api.github.com/repos/huggingface/transformers/issues/19877
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19877/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19877/comments
https://api.github.com/repos/huggingface/transformers/issues/19877/events
https://github.com/huggingface/transformers/issues/19877
1,422,986,565
I_kwDOCUB6oc5U0Q1F
19,877
Support fairseq encoder-normalize-before in RoBERTa
{ "login": "AndreasMadsen", "id": 505333, "node_id": "MDQ6VXNlcjUwNTMzMw==", "avatar_url": "https://avatars.githubusercontent.com/u/505333?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AndreasMadsen", "html_url": "https://github.com/AndreasMadsen", "followers_url": "https://api.github.com/users/AndreasMadsen/followers", "following_url": "https://api.github.com/users/AndreasMadsen/following{/other_user}", "gists_url": "https://api.github.com/users/AndreasMadsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/AndreasMadsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AndreasMadsen/subscriptions", "organizations_url": "https://api.github.com/users/AndreasMadsen/orgs", "repos_url": "https://api.github.com/users/AndreasMadsen/repos", "events_url": "https://api.github.com/users/AndreasMadsen/events{/privacy}", "received_events_url": "https://api.github.com/users/AndreasMadsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Transformers is very opinionated in that regard and is not a building block library like fairseq. The only way to add support for those models in Transformers would be to add a new modeling file which adapts the code of RoBERTa to only include the code path corresponding to `--encoder-normalize-before`, as putting both in the same modeling file hurts readability.\r\n\r\nYou can learn more about our philosophy in this regard in [this blog post](https://huggingface.co/blog/transformers-design-philosophy) and if you're interested in making a PR to add this new model, we're looking forward to it!\r\n", "Thanks, I will look at adding the new model. Do you have a policy regarding its name? In this case, the authors did not give it a dedicated name as they see it as just RoBERTa.", "Please see the implementation provided in https://github.com/huggingface/transformers/pull/20305", "That's pretty cool! We had this kind of \"problem\" when adding support for XLM-R XL models:\r\n\r\nhttps://github.com/huggingface/transformers/pull/12082#issue-665786049", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,666
1,671
1,671
CONTRIBUTOR
null
### Feature request Support Fairseq's `--encoder-normalize-before` variation of transformer-based models, (particularly RoBERTa, not sure how many it applies to) that "apply layernorm before each encoder block". See: https://fairseq.readthedocs.io/en/v0.7.0/models.html ### Motivation There currently exist unofficial hacks of Huggingface's transformer model at https://github.com/princeton-nlp/dinkytrain which uses `--encoder-normalize-before` with fairseq and then applies a series of hacks to make it work with Huggingface's transformer library. See in particular [their custom version of RoBERTa](https://github.com/princeton-nlp/DinkyTrain/blob/main/huggingface/modeling_roberta_prelayernorm.py) which is a hack of Huggingface's RoBERTa implementation and depends on several internal components. The weights for these hacked models are currently distributed on https://huggingface.co/princeton-nlp but are not actually compatible with the official transformer library. (https://arxiv.org/abs/2202.08005 is the related paper). It would be great to implement feature parity for `--encoder-normalize-before` in Huggingface's transformer such that these hacks can be prevented. Note that I have no affiliation with fairseq, the https://github.com/princeton-nlp/DinkyTrain authors, nor the https://arxiv.org/abs/2202.08005 authors. ### Your contribution I'm happy to contribute a PR for RoBERTa that adds feature parity for the `--encoder-normalize-before` fairseq flag. Note that I have no affiliation with fairseq nor the https://github.com/princeton-nlp/DinkyTrain authors.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19877/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19877/timeline
completed
null
null
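For context on the `--encoder-normalize-before` flag in the record above: it moves LayerNorm from after each residual connection (post-norm, the original transformer ordering) to before each sublayer (pre-norm). A toy sketch of the two orderings; the sublayer and norm here are arbitrary stand-in functions chosen to make the difference visible, not real model code:

```python
def post_norm_block(x, sublayer, norm):
    # Original transformer ordering: residual add first, then LayerNorm.
    return norm(x + sublayer(x))


def pre_norm_block(x, sublayer, norm):
    # --encoder-normalize-before ordering: normalize first, residual after.
    return x + sublayer(norm(x))


# Stand-ins: doubling as the "sublayer", scaling down as the "norm".
sublayer = lambda v: 2 * v
norm = lambda v: v / 10

print(post_norm_block(5.0, sublayer, norm))  # norm(5 + 10) = 1.5
print(pre_norm_block(5.0, sublayer, norm))   # 5 + 2 * norm(5) = 6.0
```

The two blocks compute genuinely different functions of the same weights, which is why checkpoints trained with one ordering cannot simply be loaded into a model that implements the other.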
https://api.github.com/repos/huggingface/transformers/issues/19876
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19876/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19876/comments
https://api.github.com/repos/huggingface/transformers/issues/19876/events
https://github.com/huggingface/transformers/issues/19876
1,422,828,810
I_kwDOCUB6oc5UzqUK
19,876
TypeError from GenerationMixin.generate() when stopping_criteria is None
{ "login": "ccmaymay", "id": 457238, "node_id": "MDQ6VXNlcjQ1NzIzOA==", "avatar_url": "https://avatars.githubusercontent.com/u/457238?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ccmaymay", "html_url": "https://github.com/ccmaymay", "followers_url": "https://api.github.com/users/ccmaymay/followers", "following_url": "https://api.github.com/users/ccmaymay/following{/other_user}", "gists_url": "https://api.github.com/users/ccmaymay/gists{/gist_id}", "starred_url": "https://api.github.com/users/ccmaymay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ccmaymay/subscriptions", "organizations_url": "https://api.github.com/users/ccmaymay/orgs", "repos_url": "https://api.github.com/users/ccmaymay/repos", "events_url": "https://api.github.com/users/ccmaymay/events{/privacy}", "received_events_url": "https://api.github.com/users/ccmaymay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante ", "@ccmaymay Agreed with your assessment! I see you're working on a fix, so I'll move further discussion there :)" ]
1,666
1,666
1,666
CONTRIBUTOR
null
### System Info transformers 4.23.1 Anaconda Python 3.9.13 Linux ### Who can help? *(Sorry, I think I botched filling in the template)* I get an error from GenerationMixin.generate() when passing `stopping_criteria=None` explicitly, even though the type is annotated as Optional: ``` Traceback (most recent call last): File "/home/cmay/anaconda3/envs/sandle/lib/python3.9/site-packages/flask/app.py", line 2525, in wsgi_app response = self.full_dispatch_request() File "/home/cmay/anaconda3/envs/sandle/lib/python3.9/site-packages/flask/app.py", line 1822, in full_dispatch_request rv = self.handle_user_exception(e) File "/home/cmay/anaconda3/envs/sandle/lib/python3.9/site-packages/flask/app.py", line 1820, in full_dispatch_request rv = self.dispatch_request() File "/home/cmay/anaconda3/envs/sandle/lib/python3.9/site-packages/flask/app.py", line 1796, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) File "/home/cmay/sandle/backend-hf/serve-backend-hf.py", line 444, in post_completions return jsonify(make_api_completions(response_id, created, model_id, lm.complete( File "/home/cmay/sandle/backend-hf/serve-backend-hf.py", line 158, in complete for (i, raw_completion) in enumerate(self._complete( File "/home/cmay/sandle/backend-hf/serve-backend-hf.py", line 247, in _complete output_token_ids = cast(torch.Tensor, model.generate( File "/home/cmay/anaconda3/envs/sandle/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_c ontext return func(*args, **kwargs) File "/home/cmay/anaconda3/envs/sandle/lib/python3.9/site-packages/transformers/generation_utils.py", line 1379, in gen erate stopping_criteria = self._get_stopping_criteria( File "/home/cmay/anaconda3/envs/sandle/lib/python3.9/site-packages/transformers/generation_utils.py", line 801, in _get _stopping_criteria criteria = self._merge_criteria_processor_list(criteria, stopping_criteria) File 
"/home/cmay/anaconda3/envs/sandle/lib/python3.9/site-packages/transformers/generation_utils.py", line 809, in _mer ge_criteria_processor_list if len(custom_list) == 0: TypeError: object of type 'NoneType' has no len() ``` The error comes from `_get_stopping_criteria` calling `_merge_criteria_processor_list` with `custom_list=None`: ```python def _get_stopping_criteria( self, max_length: Optional[int], max_time: Optional[float], stopping_criteria: Optional[StoppingCriteriaList] ) -> StoppingCriteriaList: criteria = StoppingCriteriaList() if max_length is not None: criteria.append(MaxLengthCriteria(max_length=max_length)) if max_time is not None: criteria.append(MaxTimeCriteria(max_time=max_time)) criteria = self._merge_criteria_processor_list(criteria, stopping_criteria) return criteria def _merge_criteria_processor_list( self, default_list: Union[LogitsProcessorList, StoppingCriteriaList], custom_list: Union[LogitsProcessorList, StoppingCriteriaList], ) -> Union[LogitsProcessorList, StoppingCriteriaList]: ... ``` @patrickvonplaten ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python def _complete(self, text: str, tokenizer: PreTrainedTokenizer, model: PreTrainedModel, stop_strings: List[str]) -> List[RawCompletion]: input_token_ids = tokenizer(text, return_tensors='pt')['input_ids'] output_token_ids = model.generate( input_token_ids, stopping_criteria=StoppingCriteriaList( SubstringMatchStoppingCriteria(stop_string, text, tokenizer) for stop_string in stop_strings ) if stop_strings else None, ) ``` Incidentally, I wrote this expecting `None` to be a safe default (given the type annotation of `Optional[StoppingCriteriaList]`) and an empty `StoppingCriteriaList` to be more risky (I wasn't sure if StoppingCriteriaList was designed to handle empty lists). 
I was a little surprised when the opposite was true~ ### Expected behavior `GenerationMixin.generate()` should behave the same when `stopping_criteria` is `None` or an empty `StoppingCriteriaList` (the current default).
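The requested behavior can be sketched in isolation (a minimal stand-in, not the actual `transformers` implementation — the real `StoppingCriteriaList` holds criterion objects, not strings): normalize a `None` `custom_list` to an empty list before calling `len()` on it.

```python
class StoppingCriteriaList(list):
    """Stand-in for transformers' StoppingCriteriaList (just a list here)."""


def merge_criteria_lists(default_list, custom_list=None):
    # Hypothetical fix: treat None the same as an empty list, so the
    # len() check below no longer raises TypeError for custom_list=None.
    if custom_list is None:
        custom_list = StoppingCriteriaList()
    if len(custom_list) == 0:
        return default_list
    default_list.extend(custom_list)
    return default_list


defaults = StoppingCriteriaList(["max_length_criterion"])
print(merge_criteria_lists(defaults, None))  # ['max_length_criterion']
```

With this normalization, `None` and an empty list produce identical results, which is what the expected-behavior section asks for.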
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19876/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19876/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19875
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19875/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19875/comments
https://api.github.com/repos/huggingface/transformers/issues/19875/events
https://github.com/huggingface/transformers/pull/19875
1,422,812,417
PR_kwDOCUB6oc5BhLsh
19,875
Fix the learning rate in an audio-classification example
{ "login": "regisss", "id": 15324346, "node_id": "MDQ6VXNlcjE1MzI0MzQ2", "avatar_url": "https://avatars.githubusercontent.com/u/15324346?v=4", "gravatar_id": "", "url": "https://api.github.com/users/regisss", "html_url": "https://github.com/regisss", "followers_url": "https://api.github.com/users/regisss/followers", "following_url": "https://api.github.com/users/regisss/following{/other_user}", "gists_url": "https://api.github.com/users/regisss/gists{/gist_id}", "starred_url": "https://api.github.com/users/regisss/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/regisss/subscriptions", "organizations_url": "https://api.github.com/users/regisss/orgs", "repos_url": "https://api.github.com/users/regisss/repos", "events_url": "https://api.github.com/users/regisss/events{/privacy}", "received_events_url": "https://api.github.com/users/regisss/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Gently pinging @sgugger for final approval", "Thanks for the fix!" ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> The learning rate should be `3e-4` in one of the PyTorch audio classification examples according to the link to the corresponding run: https://huggingface.co/anton-l/wav2vec2-base-lang-id ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. 
Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19875/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19875", "html_url": "https://github.com/huggingface/transformers/pull/19875", "diff_url": "https://github.com/huggingface/transformers/pull/19875.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19875.patch", "merged_at": 1666787754000 }
https://api.github.com/repos/huggingface/transformers/issues/19874
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19874/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19874/comments
https://api.github.com/repos/huggingface/transformers/issues/19874/events
https://github.com/huggingface/transformers/pull/19874
1,422,769,565
PR_kwDOCUB6oc5BhCdU
19,874
Use self._trial to generate trial_name for Trainer.
{ "login": "reyoung", "id": 728699, "node_id": "MDQ6VXNlcjcyODY5OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/728699?v=4", "gravatar_id": "", "url": "https://api.github.com/users/reyoung", "html_url": "https://github.com/reyoung", "followers_url": "https://api.github.com/users/reyoung/followers", "following_url": "https://api.github.com/users/reyoung/following{/other_user}", "gists_url": "https://api.github.com/users/reyoung/gists{/gist_id}", "starred_url": "https://api.github.com/users/reyoung/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/reyoung/subscriptions", "organizations_url": "https://api.github.com/users/reyoung/orgs", "repos_url": "https://api.github.com/users/reyoung/repos", "events_url": "https://api.github.com/users/reyoung/events{/privacy}", "received_events_url": "https://api.github.com/users/reyoung/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger Please take a review" ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? Generate the trial name from `(self._trial or trial)` so that it is still available when the `trial` argument is None, because [currently the `optuna` backend gives a None trial when using DDP and rank != 0](https://github.com/huggingface/transformers/blob/v4.23.1/src/transformers/integrations.py#L193) Related code: https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/integrations.py#L160-L208 Or maybe the documentation should be changed. https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/trainer.py#L2318-L2319 Who can review: * Trainer: @sgugger * optuna HPO: @sywangyi
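The fallback described above can be sketched as follows (a simplified stand-in, not the actual `Trainer` code; `get_trial_name` and the lambda passed as `hp_name` are illustrative names, not the library's API):

```python
def get_trial_name(hp_name, self_trial, trial):
    # Prefer the stored self._trial and fall back to the argument: with
    # optuna + DDP, ranks other than 0 receive trial=None even though a
    # trial was stored on the Trainer earlier.
    active_trial = self_trial if self_trial is not None else trial
    if active_trial is None or hp_name is None:
        return None
    return hp_name(active_trial)


# Rank != 0 under DDP: the trial argument is None but self._trial was set.
name = get_trial_name(lambda t: f"run-{t}", self_trial=7, trial=None)
print(name)  # run-7
```

Without the fallback, non-zero ranks would compute no trial name at all, which is the behavior the PR fixes.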
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19874/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19874", "html_url": "https://github.com/huggingface/transformers/pull/19874", "diff_url": "https://github.com/huggingface/transformers/pull/19874.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19874.patch", "merged_at": 1666961268000 }
https://api.github.com/repos/huggingface/transformers/issues/19873
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19873/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19873/comments
https://api.github.com/repos/huggingface/transformers/issues/19873/events
https://github.com/huggingface/transformers/issues/19873
1,422,654,219
I_kwDOCUB6oc5Uy_sL
19,873
ByT5Tokenizer ignores spaces around added tokens
{ "login": "djstrong", "id": 1849959, "node_id": "MDQ6VXNlcjE4NDk5NTk=", "avatar_url": "https://avatars.githubusercontent.com/u/1849959?v=4", "gravatar_id": "", "url": "https://api.github.com/users/djstrong", "html_url": "https://github.com/djstrong", "followers_url": "https://api.github.com/users/djstrong/followers", "following_url": "https://api.github.com/users/djstrong/following{/other_user}", "gists_url": "https://api.github.com/users/djstrong/gists{/gist_id}", "starred_url": "https://api.github.com/users/djstrong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/djstrong/subscriptions", "organizations_url": "https://api.github.com/users/djstrong/orgs", "repos_url": "https://api.github.com/users/djstrong/repos", "events_url": "https://api.github.com/users/djstrong/events{/privacy}", "received_events_url": "https://api.github.com/users/djstrong/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "May also be of interest to @ArthurZucker ", "Also cc @Narsil - any ideas here? ", "> Also cc @Narsil - any ideas here?\r\n\r\nYes, by default added tokens always use `lstrip/rstrip=True` which swallows prefix/suffix spaces (it's a convenience for <special> so you don't have to worry how to add in within some text.)\r\nSince ByT5 is pure bytes, it doesn't have `tokenizers` support (doesn't make sense speedwise). and using the \"slow\" class (it's not slow though).\r\n\r\n\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(\"google/byt5-base\")\r\n# tokenizer.add_tokens(\"<x>\", special_tokens=True)\r\nnew_token = AddedToken(\"<x>\", lstrip=False, rstrip=False)\r\ntokenizer.add_tokens(new_token, special_tokens=True)\r\ntokenizer._additional_special_tokens.append(new_token)\r\n```\r\n\r\nThis change will fix it, however it require changing internals which is not great. Definitely looks like a bug.\r\n\r\nPinging @ydshieh which was looking at this recently and trying to figure out some tokenizer stuff.\r\n\r\nI \"think\" this qualifies as a bug. (Well the original shared code is not OK, the defaults are to strip left and right, but if you do `add_tokens(AddedToken(.., lstrip=False, rstrip=False))` then it should honor that. And the workaround I had to look at a few different internal variables to set it appropriately so that the `Trie` class could do it's job correctly (otherwise it just couldn't see the `AddedToken` values.", "Sorry for being late here. So as @Narsil pointed out, \r\n\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(\"google/byt5-base\")\r\nnew_token = AddedToken(\"<x>\", lstrip=False, rstrip=False)\r\ntokenizer.add_tokens(new_token, special_tokens=True)\r\n```\r\nshould work (which is not the case for now) without the need of `tokenizer._additional_special_tokens.append(new_token)`.\r\nAnd the goal is to make the above code snippet do it job correctly. Is this right?", "Hey! 
I'll take this one on as part of #23909, since it is an issue with `rstrip` and `lstrip` being ignored (as the default behaviour if a token is not special is to always strip)", "As mentioned, this will take a bit more time, a big refactoring is coming! 🔥 ", "Should be merged this week!" ]
1,666
1,695
1,695
NONE
null
### System Info transformers 4.23.1 ### Who can help? @patrickvonplaten @SaulLu ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('google/byt5-base') tokenizer.add_tokens('<x>', special_tokens=True) print(tokenizer('<x> <x> <x><x>')) {'input_ids': [384, 384, 384, 384, 1], 'attention_mask': [1, 1, 1, 1, 1]} ``` in comparison to: ```python print(tokenizer('a a aa')) {'input_ids': [100, 35, 100, 35, 100, 100, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]} ``` ### Expected behavior In my task, the presence of spaces around added tokens is important. Regardless of that, I think the ByT5 tokenizer should not ignore any characters (bytes).
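For reference, ByT5 maps each UTF-8 byte to `byte + 3` (ids 0-2 are reserved for pad/EOS/UNK), which is why no byte — spaces included — should be dropped. A pure-Python sketch of that space-preserving encoding (for ordinary text only; it does not model added tokens like `<x>`):

```python
def byt5_byte_ids(text: str) -> list[int]:
    # Each UTF-8 byte is shifted by 3 to skip the reserved special ids
    # (0=pad, 1=eos, 2=unk); every byte, including spaces, is kept.
    return [b + 3 for b in text.encode("utf-8")]


print(byt5_byte_ids("a a aa"))  # [100, 35, 100, 35, 100, 100]
```

This matches the second output in the reproduction above (minus the trailing EOS id 1): `a` is byte 97 → id 100, and the space is byte 32 → id 35.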
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19873/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19873/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19872
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19872/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19872/comments
https://api.github.com/repos/huggingface/transformers/issues/19872/events
https://github.com/huggingface/transformers/pull/19872
1,422,379,145
PR_kwDOCUB6oc5Bft-p
19,872
Fix somehow incorrect model - tokenizer mapping in tokenization testing
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
COLLABORATOR
null
# What does this PR do? In this method https://github.com/huggingface/transformers/blob/371337a95b5d82cc9376c2595ed2022a5eb2ee6e/tests/test_tokenization_common.py#L107 we update the mapping whenever we find a tokenizer/model for a configuration ``` tokenizer[_fast]: (configuration, model) ``` However, **multiple models could share the same tokenizer class**, and, for example, we get the `LiLT` (recently added) model for the tokenizer class `LayoutLMv3Tokenizer`. Some tests like the following fail ```bash tests/models/layoutlmv3/test_tokenization_layoutlmv3.py::LayoutLMv3TokenizationTest::test_torch_encode_plus_sent_to_model (line 1130) TypeError: forward() got an unexpected keyword argument 'pixel_values' ``` as the model used for `LayoutLMv3TokenizationTest` is the `LiLT` model instead of `LayoutLMv3Model`. 1. This is somewhat undesirable, and we would prefer to test the original/canonical model/tokenizer pair. 2. This PR adds some condition to ensure the desired property in 1. 3. We can probably extend the test to cover each possible pair of `(model_1, tokenizer)`, `(model_2, tokenizer)`, ...etc. But I would leave this for another PR.
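The "prefer the canonical pair" idea can be sketched like this (a simplified illustration with a made-up name-prefix heuristic, not the actual test-suite code):

```python
def build_tokenizer_mapping(pairs):
    # pairs: iterable of (tokenizer_name, config_name, model_name).
    # A later canonical pair (tokenizer prefix matches the model name)
    # overrides a non-canonical one recorded earlier.
    mapping = {}
    for tokenizer, config, model in pairs:
        prefix = tokenizer.replace("Tokenizer", "")
        is_canonical = model.startswith(prefix)
        if tokenizer not in mapping or is_canonical:
            mapping[tokenizer] = (config, model)
    return mapping


pairs = [
    ("LayoutLMv3Tokenizer", "LiltConfig", "LiltModel"),              # shares the tokenizer class
    ("LayoutLMv3Tokenizer", "LayoutLMv3Config", "LayoutLMv3Model"),  # canonical pair
]
print(build_tokenizer_mapping(pairs)["LayoutLMv3Tokenizer"])
# ('LayoutLMv3Config', 'LayoutLMv3Model')
```

Regardless of iteration order, the canonical `LayoutLMv3` pair wins, so the tokenization test exercises the matching model rather than `LiLT`.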
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19872/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19872/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19872", "html_url": "https://github.com/huggingface/transformers/pull/19872", "diff_url": "https://github.com/huggingface/transformers/pull/19872.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19872.patch", "merged_at": 1666706533000 }
https://api.github.com/repos/huggingface/transformers/issues/19871
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19871/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19871/comments
https://api.github.com/repos/huggingface/transformers/issues/19871/events
https://github.com/huggingface/transformers/pull/19871
1,422,376,653
PR_kwDOCUB6oc5Bftbv
19,871
Generate: contrastive search cosmetic tweaks
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
MEMBER
null
# What does this PR do? Makes some cosmetic tweaks to contrastive search in advance of the TF PR: - fixes some out-of-place documentation strings; - corrects type hints; - limits comments to 120 chars; - removes redundant variables. Changes validated against the slow tests for contrastive search.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19871/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19871/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19871", "html_url": "https://github.com/huggingface/transformers/pull/19871", "diff_url": "https://github.com/huggingface/transformers/pull/19871.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19871.patch", "merged_at": 1666705386000 }
https://api.github.com/repos/huggingface/transformers/issues/19870
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19870/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19870/comments
https://api.github.com/repos/huggingface/transformers/issues/19870/events
https://github.com/huggingface/transformers/pull/19870
1,422,372,161
PR_kwDOCUB6oc5Bfsdb
19,870
No conv bn folding in ipex to avoid warning
{ "login": "sanderland", "id": 48946947, "node_id": "MDQ6VXNlcjQ4OTQ2OTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanderland", "html_url": "https://github.com/sanderland", "followers_url": "https://api.github.com/users/sanderland/followers", "following_url": "https://api.github.com/users/sanderland/following{/other_user}", "gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanderland/subscriptions", "organizations_url": "https://api.github.com/users/sanderland/orgs", "repos_url": "https://api.github.com/users/sanderland/repos", "events_url": "https://api.github.com/users/sanderland/events{/privacy}", "received_events_url": "https://api.github.com/users/sanderland/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> \r\n\r\nThanks for pointing this out.\r\nActually, IPEX is doing optimizations as much as it could, and convolution-batchnorm folding does have a chance to bring benefit to vision-related models, e.g, beit/resnet/data2vec_vision, etc. \r\nHence if we set it as false, there could be losing some potential optimization chances. \r\nBack for the warning message, it could be improved on the IPEX side. \r\n", "> Thanks for pointing this out. Actually, IPEX is doing optimizations as much as it could, and convolution-batchnorm folding does have a chance to bring benefit to vision-related models, e.g, beit/resnet/data2vec_vision, etc. Hence if we set it as false, there could be losing some potential optimization chances. \r\n\r\nThe philosophy in the huggingface models seems to be to do a lot of input checks, which is incompatible with the tracing used in ipex. I could not find a single model which doesn't fail.\r\nDiscussion on whether or not this tracing could be improved in ipex does not seem to have much traction, see the linked issue. *edit* it actually goes all the way up to pytorch internals, will see if there is traction there.\r\n\r\n> Back for the warning message, it could be improved on the IPEX side.\r\n\r\nThis is certainly true as well. ", "> So let's leave it as `False` for now and revisit once IPEX has better support?\r\n\r\nYes, will record this enhancement as TODO for IPEX." ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? Removes convolution-batchnorm folding from the ipex optimization, as attempting it always fails and throws a warning. See https://github.com/intel/intel-extension-for-pytorch/issues/250 Most models won't even benefit from attempting it, but note that even a model like ResNet fails due to checks like: ``` if num_channels != self.num_channels: ``` ## Who can review? - trainer: @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19870/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19870", "html_url": "https://github.com/huggingface/transformers/pull/19870", "diff_url": "https://github.com/huggingface/transformers/pull/19870.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19870.patch", "merged_at": 1666789133000 }
https://api.github.com/repos/huggingface/transformers/issues/19869
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19869/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19869/comments
https://api.github.com/repos/huggingface/transformers/issues/19869/events
https://github.com/huggingface/transformers/pull/19869
1,422,266,525
PR_kwDOCUB6oc5BfVvy
19,869
Added translation of serialization.mdx to Portuguese Issue #16824
{ "login": "davialvb", "id": 34287081, "node_id": "MDQ6VXNlcjM0Mjg3MDgx", "avatar_url": "https://avatars.githubusercontent.com/u/34287081?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davialvb", "html_url": "https://github.com/davialvb", "followers_url": "https://api.github.com/users/davialvb/followers", "following_url": "https://api.github.com/users/davialvb/following{/other_user}", "gists_url": "https://api.github.com/users/davialvb/gists{/gist_id}", "starred_url": "https://api.github.com/users/davialvb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davialvb/subscriptions", "organizations_url": "https://api.github.com/users/davialvb/orgs", "repos_url": "https://api.github.com/users/davialvb/repos", "events_url": "https://api.github.com/users/davialvb/events{/privacy}", "received_events_url": "https://api.github.com/users/davialvb/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #16824 Currently, only the serialization.mdx file was translated as of this PR. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19869/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19869/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19869", "html_url": "https://github.com/huggingface/transformers/pull/19869", "diff_url": "https://github.com/huggingface/transformers/pull/19869.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19869.patch", "merged_at": 1666704868000 }
https://api.github.com/repos/huggingface/transformers/issues/19868
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19868/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19868/comments
https://api.github.com/repos/huggingface/transformers/issues/19868/events
https://github.com/huggingface/transformers/pull/19868
1,422,223,338
PR_kwDOCUB6oc5BfMbi
19,868
Add Onnx Config for ImageGPT
{ "login": "RaghavPrabhakar66", "id": 52318784, "node_id": "MDQ6VXNlcjUyMzE4Nzg0", "avatar_url": "https://avatars.githubusercontent.com/u/52318784?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RaghavPrabhakar66", "html_url": "https://github.com/RaghavPrabhakar66", "followers_url": "https://api.github.com/users/RaghavPrabhakar66/followers", "following_url": "https://api.github.com/users/RaghavPrabhakar66/following{/other_user}", "gists_url": "https://api.github.com/users/RaghavPrabhakar66/gists{/gist_id}", "starred_url": "https://api.github.com/users/RaghavPrabhakar66/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RaghavPrabhakar66/subscriptions", "organizations_url": "https://api.github.com/users/RaghavPrabhakar66/orgs", "repos_url": "https://api.github.com/users/RaghavPrabhakar66/repos", "events_url": "https://api.github.com/users/RaghavPrabhakar66/events{/privacy}", "received_events_url": "https://api.github.com/users/RaghavPrabhakar66/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@lewtun With latest changes, all the 4 tests are now passing.\r\n\r\n![image](https://user-images.githubusercontent.com/52318784/198269069-0fa809f5-ae25-4b7e-8276-c34d7a941017.png)\r\n", "> @lewtun With the latest changes, all the 4 tests are now passing.\r\n\r\nThanks for iterating so fast, @RaghavPrabhakar66. Good work!\r\n\r\nIf you have time, you can try to upload an ONNX ImageGPT model to the ONNX organization on the hub if you want.", "@ChainYo Sure.", "@sgugger fixed it.", "Thanks!" ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? Fixes #16308 Add changes to make ImageGPT models available for Onnx conversion. Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ChainYo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19868/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19868/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19868", "html_url": "https://github.com/huggingface/transformers/pull/19868", "diff_url": "https://github.com/huggingface/transformers/pull/19868.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19868.patch", "merged_at": 1666964393000 }
https://api.github.com/repos/huggingface/transformers/issues/19867
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19867/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19867/comments
https://api.github.com/repos/huggingface/transformers/issues/19867/events
https://github.com/huggingface/transformers/pull/19867
1,422,201,243
PR_kwDOCUB6oc5BfHnm
19,867
[wip test doc build]
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19867). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19867). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,666
1,669
1,669
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19867/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19867/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19867", "html_url": "https://github.com/huggingface/transformers/pull/19867", "diff_url": "https://github.com/huggingface/transformers/pull/19867.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19867.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19866
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19866/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19866/comments
https://api.github.com/repos/huggingface/transformers/issues/19866/events
https://github.com/huggingface/transformers/pull/19866
1,422,108,369
PR_kwDOCUB6oc5Bezkz
19,866
[wip test doc-build]
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,666
1,666
1,666
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19866/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19866", "html_url": "https://github.com/huggingface/transformers/pull/19866", "diff_url": "https://github.com/huggingface/transformers/pull/19866.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19866.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19865
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19865/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19865/comments
https://api.github.com/repos/huggingface/transformers/issues/19865/events
https://github.com/huggingface/transformers/issues/19865
1,422,083,233
I_kwDOCUB6oc5Uw0Sh
19,865
Add VATT model
{ "login": "johko", "id": 2843485, "node_id": "MDQ6VXNlcjI4NDM0ODU=", "avatar_url": "https://avatars.githubusercontent.com/u/2843485?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johko", "html_url": "https://github.com/johko", "followers_url": "https://api.github.com/users/johko/followers", "following_url": "https://api.github.com/users/johko/following{/other_user}", "gists_url": "https://api.github.com/users/johko/gists{/gist_id}", "starred_url": "https://api.github.com/users/johko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johko/subscriptions", "organizations_url": "https://api.github.com/users/johko/orgs", "repos_url": "https://api.github.com/users/johko/repos", "events_url": "https://api.github.com/users/johko/events{/privacy}", "received_events_url": "https://api.github.com/users/johko/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "@johko have you started implementing it?", "@fcakyon yes I have started, but progress is still rather slow, as that is my first model contribution and I have to figure out some stuff.\n", "@johko I totally understand it. Interested in your implementation since I will be using VATT in my research next year :)\r\n\r\nAre you working on a TF implementation?", "> @johko I totally understand it. Interested in your implementation since I will be using VATT in my research next year :)\n> \n> Are you working on a TF implementation?\n\nSorry for the late reply (again 🙈). Yes I'm working on a TF implementation. As the original repo is using it, I'm first doing that and then see about pytorch.", "@johko, thanks for the response! I may also help with the pytorch part once you finalize the TF implementation 👍 ", "@fcakyon that would be great, as my expertise is more in TF 🙂", "Hey @NielsRogge , I'm sorry but I think I have to stop working this for good. I'd love to finish it, but every time I think now I finally have some time to do it, something else comes around :disappointed: \r\n\r\nI think I just can't provide a big contribution like this atm and will rather focus on smaller things. But maybe @fcakyon wants to pick up on it.\r\n\r\nSorry for blocking this so long.", "any news about VATT PyTorch implementation ?" ]
1,666
1,707
null
CONTRIBUTOR
null
### Model description Hey, as discussed with @NielsRogge a few weeks back, I'd like to work on adding the "VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text" model from Google. It is basically three transformers(Video/Audio/Text) that are trained jointly in an unsupervised manner using contrastive loss functions. For downstreams tasks they fine-tune the Transformers separately, but also explore a version that shares the weights for all modalities. For Pre-Traning they use text-video-audio triplets from HowTo100M and video-audio pairs from AudioSet. The authors describe how to fine-tune VATT for vision and audio classification tasks and provide weights for the fine-tuned versions. The backbone for vision is ViT, for audio WaveFormTransformer and for text they are using BERT/T5 ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Paper: https://arxiv.org/pdf/2104.11178.pdf GitHub: https://github.com/google-research/google-research/tree/master/vatt
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19865/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19865/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/19864
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19864/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19864/comments
https://api.github.com/repos/huggingface/transformers/issues/19864/events
https://github.com/huggingface/transformers/issues/19864
1,422,076,310
I_kwDOCUB6oc5UwymW
19,864
Whisper's Tokenizer encoding function is not user-friendly
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }, { "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false } ]
[ "@ArthurZucker @sanchit-gandhi could you take a look here? ", "Agree with all of the above **other** than the attention mask: the Whisper tokeniser **should** return an attention mask. This attention mask is not used to mask hidden-states (as is done with the Whisper feature-extractor), but rather mask padded token ids in the computation of the C.E. loss. We require the attention mask to inform the system where the padded tokens reside, and thus where to ignore terms in the loss computation." ]
1,666
1,667
1,667
MEMBER
null
### System Info - `transformers` version: 4.23.0.dev0 - Platform: Linux-5.3.0-64-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - Huggingface_hub version: 0.10.0 - PyTorch version (GPU?): 1.12.1+cu102 (True) - Tensorflow version (GPU?): 2.9.1 (True) - Flax version (CPU?/GPU?/TPU?): 0.4.1 (cpu) - Jax version: 0.3.14 - JaxLib version: 0.3.14 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? Text encoded by Whisper should by default have a EOS token in the end (just like any sequence-to-sequence transformer model) and also include the correct other tokens if necessary. Also it should **not** return the `attention_mask` as Whisper never uses an `attention_mask`. **Note**: Encoding of text tokens is only needed for fine-tuning the model, not for inference, so this bug/feature request is not relevant for inference. Currently when doing: ```python from transformers import WhisperTokenizer tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small") tokenizer("hey") ``` One gets: ``` {'input_ids': [17230], 'attention_mask': [1]} ``` There are multiple problems with this: - The EOS token is not appended, but whisper **always** needs an EOS for training - There should be the following config parameters for the whisper tokenizer: - Set a language id so that the tokenizer automatically adds the lang id prefix - Set a "use_timestamps" true/false flag to the tokenizer that decides whether the "notimesteps" token should be added or not - Remove this line: https://github.com/huggingface/transformers/blob/371337a95b5d82cc9376c2595ed2022a5eb2ee6e/src/transformers/models/whisper/tokenization_whisper.py#L119 as this is not relevant for whisper and a copy-paste from GPT2 -> whisper should never set the EOS token as the BOS token IMO (@sanchit-gandhi we did this for our paper, but this was more a lucky bug than a solid case) - Whisper should **not** return the `attention_mask` => Let's try to make whisper as user-friendly as possible for fine-tuning. Say I'd like to fine-tune on a multi-lingual language If I remember correctly the format for the **labels** to fine-tune whisper on multi-lingual is: ``` <lang-id><|notimestamps|> ... text ... <eos> ``` @ArthurZucker or is it first `<|notimestamps|>` and then `<lang-id>` and then <eos> ? The `decoder_input_ids` should then just add `<|startoftranscript>` and remove `<eos>` which happens automatically in the shift function. Now I should be able to get this encoding behavior by default when doing: ``` from transformers import WhisperTokenizer tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", lang_id="dv", predict_timestamps=False) tokenizer("hey") ``` ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import WhisperTokenizer tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", lang_id="dv", predict_timestamps=False) tokenizer("hey") ### Expected behavior ``` <lang-id><|notimestamps|> ... text ... <eos> ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19864/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19864/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19863
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19863/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19863/comments
https://api.github.com/repos/huggingface/transformers/issues/19863/events
https://github.com/huggingface/transformers/pull/19863
1,422,041,211
PR_kwDOCUB6oc5Belbb
19,863
Fix doctest for `GenerationMixin.contrastive_search`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
COLLABORATOR
null
# What does this PR do? Just update the expected value to use `'` instead of `"`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19863/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19863/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19863", "html_url": "https://github.com/huggingface/transformers/pull/19863", "diff_url": "https://github.com/huggingface/transformers/pull/19863.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19863.patch", "merged_at": 1666702277000 }
https://api.github.com/repos/huggingface/transformers/issues/19862
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19862/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19862/comments
https://api.github.com/repos/huggingface/transformers/issues/19862/events
https://github.com/huggingface/transformers/issues/19862
1,422,010,746
I_kwDOCUB6oc5Uwil6
19,862
Trainer RuntimeError CUDA error
{ "login": "devozs", "id": 25392271, "node_id": "MDQ6VXNlcjI1MzkyMjcx", "avatar_url": "https://avatars.githubusercontent.com/u/25392271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/devozs", "html_url": "https://github.com/devozs", "followers_url": "https://api.github.com/users/devozs/followers", "following_url": "https://api.github.com/users/devozs/following{/other_user}", "gists_url": "https://api.github.com/users/devozs/gists{/gist_id}", "starred_url": "https://api.github.com/users/devozs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/devozs/subscriptions", "organizations_url": "https://api.github.com/users/devozs/orgs", "repos_url": "https://api.github.com/users/devozs/repos", "events_url": "https://api.github.com/users/devozs/events{/privacy}", "received_events_url": "https://api.github.com/users/devozs/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This looks linked to your particular setup. Can you add a print of `args.device` in the script you are running and copy-paste the result of `transformers-cli env` (as was requested in the template)?", "thanks for the prompt reply and sorry for missing the `transformers-cli env`\r\n\r\n`args.device` cuda:0\r\n\r\nEnv:\r\n- transformers version: 4.23.1\r\n- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35\r\n- Python version: 3.8.15\r\n- Huggingface_hub version: 0.10.1\r\n- PyTorch version (GPU?): 1.12.1+cu116 (True)\r\n- Tensorflow version (GPU?): 2.10.0 (True)\r\n- Flax version (CPU?/GPU?/TPU?): 0.6.1 (cpu)\r\n- Jax version: 0.3.23\r\n- JaxLib version: 0.3.22\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n\r\n(i`ve also tried with other PyTorch & transformers versions)", "i hope its ok that i putting a link to different ML library (in case its not - i`ll delete it)\r\n[this issue seems to be similar](https://github.com/Lightning-AI/lightning/issues/11818#issuecomment-1033541994)", "It doesn't look exactly similar in the sense that it is in an environment without a GPU, whereas yours shows one. Unless you are not executing the script within the exact same env as the results of the commands passed above of course." ]
1,666
1,667
1,667
NONE
null
### System Info ### Versions i`ve tried using - transformers==4.15.0, 4.8.0 and latest - python 3.8, 3.9 and 3.10 ### nvidia-smi NVIDIA-SMI 515.65.01 Driver Version: 515.65.01 CUDA Version: 11.7 Thanks! ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ### Actual Error: Class: `transformers/trainer.py` Failing at: `tr_loss = torch.tensor(0.0).to(args.device)` Returns: `RuntimeError: CUDA error: invalid argument` ### Code I am following several examples and getting the same above error and the same line. As a reference you can reproduce it according to this [code sample](https://github.com/dredwardhyde/gpt-neo-fine-tuning-example/blob/main/gpt_neo.py) The exact same error occur with other huggingface training examples. ### Also tried - if i am running the line `tr_loss = torch.tensor(0.0).to(args.device)` as a standalone its working fine - Also tried to run this line as part of gpt_neo.py in the above example and it worked fine but later failed as part of `transformers/trainer.py` - I made sure the CUDA is running fine: `torch.cuda.is_available()` - Running only `torch.tensor(0.0)` works fine, only when adding `.to(device)` its failing ### Expected behavior No errors at `torch.tensor(0.0).to(device)`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19862/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19862/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19861
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19861/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19861/comments
https://api.github.com/repos/huggingface/transformers/issues/19861/events
https://github.com/huggingface/transformers/issues/19861
1,422,006,454
I_kwDOCUB6oc5Uwhi2
19,861
Finetuning transformers for long document summarisation
{ "login": "ra-MANUJ-an", "id": 58105811, "node_id": "MDQ6VXNlcjU4MTA1ODEx", "avatar_url": "https://avatars.githubusercontent.com/u/58105811?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ra-MANUJ-an", "html_url": "https://github.com/ra-MANUJ-an", "followers_url": "https://api.github.com/users/ra-MANUJ-an/followers", "following_url": "https://api.github.com/users/ra-MANUJ-an/following{/other_user}", "gists_url": "https://api.github.com/users/ra-MANUJ-an/gists{/gist_id}", "starred_url": "https://api.github.com/users/ra-MANUJ-an/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ra-MANUJ-an/subscriptions", "organizations_url": "https://api.github.com/users/ra-MANUJ-an/orgs", "repos_url": "https://api.github.com/users/ra-MANUJ-an/repos", "events_url": "https://api.github.com/users/ra-MANUJ-an/events{/privacy}", "received_events_url": "https://api.github.com/users/ra-MANUJ-an/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@patil-suraj \r\n@patrickvonplaten ", "Please use the [forums](https://discuss.huggingface.co/) to ask such questions as we keep the issues for bugs and feature requests only.", "Sure, and apologies. I'll ask on the forum now. Thanks for the information." ]
1,666
1,666
1,666
NONE
null
I'm wondering if there are any sample codes or blogs which can help me understand finetuning of transformer models for long document summarisation?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19861/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19861/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19860
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19860/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19860/comments
https://api.github.com/repos/huggingface/transformers/issues/19860/events
https://github.com/huggingface/transformers/issues/19860
1,421,996,419
I_kwDOCUB6oc5UwfGD
19,860
Can run_translation.py support nllb model fine-tuning ?
{ "login": "cokuehuang", "id": 29472378, "node_id": "MDQ6VXNlcjI5NDcyMzc4", "avatar_url": "https://avatars.githubusercontent.com/u/29472378?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cokuehuang", "html_url": "https://github.com/cokuehuang", "followers_url": "https://api.github.com/users/cokuehuang/followers", "following_url": "https://api.github.com/users/cokuehuang/following{/other_user}", "gists_url": "https://api.github.com/users/cokuehuang/gists{/gist_id}", "starred_url": "https://api.github.com/users/cokuehuang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cokuehuang/subscriptions", "organizations_url": "https://api.github.com/users/cokuehuang/orgs", "repos_url": "https://api.github.com/users/cokuehuang/repos", "events_url": "https://api.github.com/users/cokuehuang/events{/privacy}", "received_events_url": "https://api.github.com/users/cokuehuang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The script support NLLB models, if you run any issue you can use the [forums](https://discuss.huggingface.co/) to ask the community for help." ]
1,666
1,666
1,666
NONE
null
### Feature request Can run_translation.py support nllb model fine-tuning ? As run_translation.py is much easier to fine-tuning a model. ### Motivation Want to an easy way to fine-tuning nllb model, as it is so difficult to fine-tuning a nllb model from its docs. ### Your contribution star the repository
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19860/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19860/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19859
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19859/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19859/comments
https://api.github.com/repos/huggingface/transformers/issues/19859/events
https://github.com/huggingface/transformers/pull/19859
1,421,994,431
PR_kwDOCUB6oc5BebhV
19,859
[WIP] Add type hints to layoutlmv2 model
{ "login": "gokul-the-dev", "id": 104618719, "node_id": "U_kgDOBjxa3w", "avatar_url": "https://avatars.githubusercontent.com/u/104618719?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gokul-the-dev", "html_url": "https://github.com/gokul-the-dev", "followers_url": "https://api.github.com/users/gokul-the-dev/followers", "following_url": "https://api.github.com/users/gokul-the-dev/following{/other_user}", "gists_url": "https://api.github.com/users/gokul-the-dev/gists{/gist_id}", "starred_url": "https://api.github.com/users/gokul-the-dev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gokul-the-dev/subscriptions", "organizations_url": "https://api.github.com/users/gokul-the-dev/orgs", "repos_url": "https://api.github.com/users/gokul-the-dev/repos", "events_url": "https://api.github.com/users/gokul-the-dev/events{/privacy}", "received_events_url": "https://api.github.com/users/gokul-the-dev/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19859). All of your documentation changes will be reflected on that endpoint.", "I think you're annotating a bunch of TF types into a Torch modeling file here! LayoutLMv2 does not have a TF port (v1 and v3 do, though)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,666
1,669
1,669
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19859/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19859/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19859", "html_url": "https://github.com/huggingface/transformers/pull/19859", "diff_url": "https://github.com/huggingface/transformers/pull/19859.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19859.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19858
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19858/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19858/comments
https://api.github.com/repos/huggingface/transformers/issues/19858/events
https://github.com/huggingface/transformers/pull/19858
1,421,951,312
PR_kwDOCUB6oc5BeSuh
19,858
Add type hints to TFPegasusModel
{ "login": "EdAbati", "id": 29585319, "node_id": "MDQ6VXNlcjI5NTg1MzE5", "avatar_url": "https://avatars.githubusercontent.com/u/29585319?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EdAbati", "html_url": "https://github.com/EdAbati", "followers_url": "https://api.github.com/users/EdAbati/followers", "following_url": "https://api.github.com/users/EdAbati/following{/other_user}", "gists_url": "https://api.github.com/users/EdAbati/gists{/gist_id}", "starred_url": "https://api.github.com/users/EdAbati/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EdAbati/subscriptions", "organizations_url": "https://api.github.com/users/EdAbati/orgs", "repos_url": "https://api.github.com/users/EdAbati/repos", "events_url": "https://api.github.com/users/EdAbati/events{/privacy}", "received_events_url": "https://api.github.com/users/EdAbati/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Looks perfect now. Thank you!" ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? This adds type annotations to `TFPegasusModel` and `TFPegasusForConditionalGeneration` as part of #16059 ## Who can review? @Rocketknight1 Thanks :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19858/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19858/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19858", "html_url": "https://github.com/huggingface/transformers/pull/19858", "diff_url": "https://github.com/huggingface/transformers/pull/19858.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19858.patch", "merged_at": 1666881838000 }
https://api.github.com/repos/huggingface/transformers/issues/19857
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19857/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19857/comments
https://api.github.com/repos/huggingface/transformers/issues/19857/events
https://github.com/huggingface/transformers/issues/19857
1,421,849,877
I_kwDOCUB6oc5Uv7UV
19,857
Import Error: cannot import name 'TFBertTokenizer' from 'transformers'
{ "login": "shabha7092", "id": 19155346, "node_id": "MDQ6VXNlcjE5MTU1MzQ2", "avatar_url": "https://avatars.githubusercontent.com/u/19155346?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shabha7092", "html_url": "https://github.com/shabha7092", "followers_url": "https://api.github.com/users/shabha7092/followers", "following_url": "https://api.github.com/users/shabha7092/following{/other_user}", "gists_url": "https://api.github.com/users/shabha7092/gists{/gist_id}", "starred_url": "https://api.github.com/users/shabha7092/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shabha7092/subscriptions", "organizations_url": "https://api.github.com/users/shabha7092/orgs", "repos_url": "https://api.github.com/users/shabha7092/repos", "events_url": "https://api.github.com/users/shabha7092/events{/privacy}", "received_events_url": "https://api.github.com/users/shabha7092/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "`TFBertTokenizer` is in the main init but it's a fairly recent addition. Are you certain you have the latest version of Transformers? You can print it with `from transformers import __version__; print(__version__)`", "@sgugger For some reason transformer version was showing 4.20.1 when i ran the above command. I updated it to 4.23.1 and now i dont see any error. Thank you fo your time. ", "how u are update the version?????", "@Amlalqhtani pip install --upgrade transformers" ]
1,666
1,683
1,666
NONE
null
### System Info Platform - Mac os Python Version - 3.9.0 Tensorflow version - 2.10.0 Transformers version - 4.23.1 (Tried different version) ### Who can help? @LysandreJik ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction when i try to import TFBertTokenizer using the statement “from transformers import TFBertTokenizer” i come across the below error. ImportError: cannot import name ‘TFBertTokenizer’ from ‘transformers’ I am able to import BertTokenizer though. Did something change ? If i recollect i was able to import TFBertTokenizer too in the past. i also happen to check the code base and the class TFBertTokenizer still exists as part of the transformer package Steps to Reproduce: 1) pip install transformers 2) In a new shell execute the below statement 'from transformers import TFBertTokenizer' ### Expected behavior TFBertTokenizer should be imported similar to BertTokenizer.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19857/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19857/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19856
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19856/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19856/comments
https://api.github.com/repos/huggingface/transformers/issues/19856/events
https://github.com/huggingface/transformers/pull/19856
1,421,845,883
PR_kwDOCUB6oc5Bd80e
19,856
Removing BertConfig inheritance from configuration_roberta.py
{ "login": "soma2000-lang", "id": 56045049, "node_id": "MDQ6VXNlcjU2MDQ1MDQ5", "avatar_url": "https://avatars.githubusercontent.com/u/56045049?v=4", "gravatar_id": "", "url": "https://api.github.com/users/soma2000-lang", "html_url": "https://github.com/soma2000-lang", "followers_url": "https://api.github.com/users/soma2000-lang/followers", "following_url": "https://api.github.com/users/soma2000-lang/following{/other_user}", "gists_url": "https://api.github.com/users/soma2000-lang/gists{/gist_id}", "starred_url": "https://api.github.com/users/soma2000-lang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/soma2000-lang/subscriptions", "organizations_url": "https://api.github.com/users/soma2000-lang/orgs", "repos_url": "https://api.github.com/users/soma2000-lang/repos", "events_url": "https://api.github.com/users/soma2000-lang/events{/privacy}", "received_events_url": "https://api.github.com/users/soma2000-lang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19856). All of your documentation changes will be reflected on that endpoint." ]
1,666
1,666
1,666
CONTRIBUTOR
null
Related to https://github.com/huggingface/transformers/issues/19303 Pinging @sgugger for review
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19856/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19856/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19856", "html_url": "https://github.com/huggingface/transformers/pull/19856", "diff_url": "https://github.com/huggingface/transformers/pull/19856.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19856.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19855
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19855/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19855/comments
https://api.github.com/repos/huggingface/transformers/issues/19855/events
https://github.com/huggingface/transformers/pull/19855
1,421,841,586
PR_kwDOCUB6oc5Bd75H
19,855
Removing BertConfig inheritance from RoBERTa configuration
{ "login": "soma2000-lang", "id": 56045049, "node_id": "MDQ6VXNlcjU2MDQ1MDQ5", "avatar_url": "https://avatars.githubusercontent.com/u/56045049?v=4", "gravatar_id": "", "url": "https://api.github.com/users/soma2000-lang", "html_url": "https://github.com/soma2000-lang", "followers_url": "https://api.github.com/users/soma2000-lang/followers", "following_url": "https://api.github.com/users/soma2000-lang/following{/other_user}", "gists_url": "https://api.github.com/users/soma2000-lang/gists{/gist_id}", "starred_url": "https://api.github.com/users/soma2000-lang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/soma2000-lang/subscriptions", "organizations_url": "https://api.github.com/users/soma2000-lang/orgs", "repos_url": "https://api.github.com/users/soma2000-lang/repos", "events_url": "https://api.github.com/users/soma2000-lang/events{/privacy}", "received_events_url": "https://api.github.com/users/soma2000-lang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19855). All of your documentation changes will be reflected on that endpoint." ]
1,666
1,666
1,666
CONTRIBUTOR
null
Related to https://github.com/huggingface/transformers/issues/19303
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19855/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19855/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19855", "html_url": "https://github.com/huggingface/transformers/pull/19855", "diff_url": "https://github.com/huggingface/transformers/pull/19855.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19855.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19854
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19854/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19854/comments
https://api.github.com/repos/huggingface/transformers/issues/19854/events
https://github.com/huggingface/transformers/pull/19854
1,421,796,771
PR_kwDOCUB6oc5Bdyfr
19,854
Simple PyPlot for beginners
{ "login": "Kanwal-Misbah", "id": 102029983, "node_id": "U_kgDOBhTanw", "avatar_url": "https://avatars.githubusercontent.com/u/102029983?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Kanwal-Misbah", "html_url": "https://github.com/Kanwal-Misbah", "followers_url": "https://api.github.com/users/Kanwal-Misbah/followers", "following_url": "https://api.github.com/users/Kanwal-Misbah/following{/other_user}", "gists_url": "https://api.github.com/users/Kanwal-Misbah/gists{/gist_id}", "starred_url": "https://api.github.com/users/Kanwal-Misbah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Kanwal-Misbah/subscriptions", "organizations_url": "https://api.github.com/users/Kanwal-Misbah/orgs", "repos_url": "https://api.github.com/users/Kanwal-Misbah/repos", "events_url": "https://api.github.com/users/Kanwal-Misbah/events{/privacy}", "received_events_url": "https://api.github.com/users/Kanwal-Misbah/received_events", "type": "User", "site_admin": false }
[ { "id": 4720676470, "node_id": "LA_kwDOCUB6oc8AAAABGV_Odg", "url": "https://api.github.com/repos/huggingface/transformers/labels/spam", "name": "spam", "color": "fbca04", "default": false, "description": "Hacktoberfest spam" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Spam" ]
1,666
1,666
1,666
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19854/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19854", "html_url": "https://github.com/huggingface/transformers/pull/19854", "diff_url": "https://github.com/huggingface/transformers/pull/19854.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19854.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19853
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19853/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19853/comments
https://api.github.com/repos/huggingface/transformers/issues/19853/events
https://github.com/huggingface/transformers/issues/19853
1,421,710,132
I_kwDOCUB6oc5UvZM0
19,853
setting pipeline `tokenizer.pad_token_id` - bug / mistake in error message
{ "login": "morrisalp", "id": 8263996, "node_id": "MDQ6VXNlcjgyNjM5OTY=", "avatar_url": "https://avatars.githubusercontent.com/u/8263996?v=4", "gravatar_id": "", "url": "https://api.github.com/users/morrisalp", "html_url": "https://github.com/morrisalp", "followers_url": "https://api.github.com/users/morrisalp/followers", "following_url": "https://api.github.com/users/morrisalp/following{/other_user}", "gists_url": "https://api.github.com/users/morrisalp/gists{/gist_id}", "starred_url": "https://api.github.com/users/morrisalp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/morrisalp/subscriptions", "organizations_url": "https://api.github.com/users/morrisalp/orgs", "repos_url": "https://api.github.com/users/morrisalp/repos", "events_url": "https://api.github.com/users/morrisalp/events{/privacy}", "received_events_url": "https://api.github.com/users/morrisalp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Narsil and @gante ", "@morrisalp can you share a more detailed example ?\r\n\r\nThe following code seems to work:\r\n\r\n```python\r\nfrom transformers import pipeline\r\ngenerator = pipeline(\"text-generation\", model=\"EleutherAI/gpt-neo-1.3B\", device=0)\r\ngenerator.tokenizer.pad_token_id = generator.model.config.eos_token_id\r\n```", "@Narsil Your code throws an error on my end. I receive this error:\r\n\r\n> ---------------------------------------------------------------------------\r\n> TypeError Traceback (most recent call last)\r\n> Cell In [4], line 1\r\n> ----> 1 generator.tokenizer.pad_token_id = generator.model.config.eos_token_id\r\n> \r\n> File /disk2/morrisalper/notebooks/env/lib/python3.8/site-packages/transformers/tokenization_utils_base.py:1169, in SpecialTokensMixin.pad_token_id(self, value)\r\n> 1167 @pad_token_id.setter\r\n> 1168 def pad_token_id(self, value):\r\n> -> 1169 self._pad_token = self.convert_tokens_to_ids(value)\r\n> \r\n> File /disk2/morrisalper/notebooks/env/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py:251, in PreTrainedTokenizerFast.convert_tokens_to_ids(self, tokens)\r\n> 248 return self._convert_token_to_id_with_added_voc(tokens)\r\n> 250 ids = []\r\n> --> 251 for token in tokens:\r\n> 252 ids.append(self._convert_token_to_id_with_added_voc(token))\r\n> 253 return ids\r\n> \r\n> TypeError: 'int' object is not iterable\r\n> ", "I can also confirm that the snippet shared above works on my end -- @morrisalp, if possible, can you update `transformers` to a more recent version? I'm afraid we can't fix bugs on older versions :)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,666
1,669
1,669
NONE
null
### System Info transformers version 4.18.0, Python 3.8.0 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Run text generation pipeline using model with tokenizer without pad_token. In my case: `generator = pipeline('text-generation', model='EleutherAI/gpt-neo-1.3B', device=0)`. Running batched text generation:`generator(texts, ..., batch_size=8)` gives error message: "ValueError: Pipeline with tokenizer without pad_token cannot do batching. You can try to set it with `pipe.tokenizer.pad_token_id = model.config.eos_token_id`". Running `generator.tokenizer.pad_token_id = generator.model.config.eos_token_id` gives error message: `TypeError: 'int' object is not iterable` Running `generator.tokenizer.pad_token_id = '\n'` works, transformers converts the string into indice(s) internally. ### Expected behavior Right-hand-side of line setting pad_token_id should be an ID (int), not a string. Code in error message should run as-is
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19853/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19853/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19852
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19852/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19852/comments
https://api.github.com/repos/huggingface/transformers/issues/19852/events
https://github.com/huggingface/transformers/pull/19852
1,421,369,804
PR_kwDOCUB6oc5BcWe9
19,852
Add BERT resources
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I really like this!", "Awesome, I'll start on some of the other models then! In the meantime, I'll check with @mishig25 if we can use the icons in the docs :)", "We don't have an icon for multiple choice, so would it be ok to add it under question answering (since its kind of a variant of question answering)?", "It's more a variant of sequence classification technically.", "Ah ok! Would it be too confusing to include it under sequence classification then? We can also just leave it as is 😄" ]
1,666
1,667
1,667
MEMBER
null
This PR kicks off #19848 and adds official resources for BERT to the BERT model doc page. The resources are grouped by the types of tasks you can use the model for as well as using it for some other applications like inference and deployment. If possible, I think it'd also be cool to use the task icons we use on the Hub and Tasks page (see below for example). What do you think? ![Screen Shot 2022-10-24 at 1 26 25 PM](https://user-images.githubusercontent.com/59462357/197622917-a9978857-e0b4-4d42-b066-a4b44574c792.png)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19852/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19852/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19852", "html_url": "https://github.com/huggingface/transformers/pull/19852", "diff_url": "https://github.com/huggingface/transformers/pull/19852.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19852.patch", "merged_at": 1667326193000 }
https://api.github.com/repos/huggingface/transformers/issues/19851
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19851/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19851/comments
https://api.github.com/repos/huggingface/transformers/issues/19851/events
https://github.com/huggingface/transformers/pull/19851
1,421,340,940
PR_kwDOCUB6oc5BcQJQ
19,851
Vilt support v1.9
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
MEMBER
null
Skips the tests if the installed torch version is lower than 1.10. Partially fixes https://github.com/huggingface/transformers/issues/18817
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19851/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19851/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19851", "html_url": "https://github.com/huggingface/transformers/pull/19851", "diff_url": "https://github.com/huggingface/transformers/pull/19851.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19851.patch", "merged_at": 1666706375000 }
https://api.github.com/repos/huggingface/transformers/issues/19850
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19850/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19850/comments
https://api.github.com/repos/huggingface/transformers/issues/19850/events
https://github.com/huggingface/transformers/pull/19850
1,421,225,134
PR_kwDOCUB6oc5Bb24z
19,850
Support Roberta on `accelerate`
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Closing in favor of #19906" ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? This PR adds `Roberta` model family support with `accelerate`! This aims to support `int8` quantization for these models. Before merging I have noticed a few nits that I need to fix and discuss! I am unsure whether these fixes should be here or in `accelerate`. 1- If the model has the attribute `_keys_to_ignore_on_save`, it seems that it does not get properly initialized by `accelerate` (but I might be missing something here). AFAIK all models that have `accelerate` support for now have at most the attribute `_keys_to_ignore_on_load_missing` but not `_keys_to_ignore_on_save`. Therefore [when the base model gets saved in the `accelerate` test](https://github.com/younesbelkada/transformers/blob/aefea27619b43cbb3e518ea972009e525676dc8d/tests/models/roberta/test_modeling_roberta.py#L587), these parameters do not get saved in the state_dict. I had to come up with a modification in the `_load_pretrained_model` function to randomly initialize these parameters since they're ignored by `_load_state_dict_into_meta_model`. This post-processing trick happens [here](https://github.com/younesbelkada/transformers/blob/aefea27619b43cbb3e518ea972009e525676dc8d/src/transformers/modeling_utils.py#L2598). 2- Therefore I had to change the `accelerate` tests. Since the parameters assigned in `_keys_to_ignore_on_save` are initialized randomly, I propose to check the logits compatibility between the base model and the accelerate model only for the attention outputs and not the `lm_head` output. These modifications happen [here](https://github.com/younesbelkada/transformers/blob/aefea27619b43cbb3e518ea972009e525676dc8d/tests/models/roberta/test_modeling_roberta.py#L569) - maybe this modification could happen in the super class? 3- Last nit: in the `accelerate` tests, it is better not to override the variable `inputs_dict`, since inside the main loop we can switch from an `xxxForMultipleChoice` to an `xxxForQuestionAnswering` model. 
Therefore, this variable does not get modified by the class function `_prepare_for_class`, since it [gets modified only if the model is a `MODEL_FOR_MULTIPLE_CHOICE` model](https://github.com/younesbelkada/transformers/blob/aefea27619b43cbb3e518ea972009e525676dc8d/tests/test_modeling_common.py#L165-L173) and is not reset to the correct `inputs_dict` afterwards. I also need to fix the slow test for the `lilt` model, where I am getting `ValueError: embeddings.position_ids doesn't have any device set.` cc @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19850/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19850/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19850", "html_url": "https://github.com/huggingface/transformers/pull/19850", "diff_url": "https://github.com/huggingface/transformers/pull/19850.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19850.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19849
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19849/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19849/comments
https://api.github.com/repos/huggingface/transformers/issues/19849/events
https://github.com/huggingface/transformers/pull/19849
1,421,195,491
PR_kwDOCUB6oc5BbwaL
19,849
Update `max_diff` in `test_save_load_fast_init_to_base`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thank you @patrickvonplaten! Although I still have some doubt:\r\n\r\nAssumption we have:\r\n> now this initialization should be the same for both the fast and slow init method\r\n\r\nWhat we do\r\n> _init_weights weights is overwritten by a deterministic self._mock_init_weights\r\n\r\nBut `_mock_init_weights` is only defined in the testing module, and basically it just does `data.fill_(3)`.\r\nSo the assumption is only True in our own testing (which uses `_mock_init_weights`). This won't be the case when we want to load the model outside the testing. So I am not very sure the purpose of this testing.\r\n\r\nBut good for me if we don't want to touch it. We probably need to add some common about the flakyness for some tests though.\r\n ", "@sgugger Could you check if the change in the way `max_diff` being calculated worth the merge 🙏 ? Thanks" ]
1,666
1,666
1,666
COLLABORATOR
null
# What does this PR do? This test seems flaky. After looking a bit deeper, I am not sure if we should expect to get the same (or very close) weights with/without `_fast_init` for the deleted key. https://github.com/huggingface/transformers/blob/9ecb13d63a9524478656b2233e6fb4e9f15d3fbf/tests/test_modeling_common.py#L343 https://github.com/huggingface/transformers/blob/9ecb13d63a9524478656b2233e6fb4e9f15d3fbf/tests/test_modeling_common.py#L400 My intuition is that **the values for that deleted key could be different with 2 different init methods.** I also changed the way `max_diff` is calculated.
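A `max_diff` comparison of the kind this PR refers to can be sketched as follows. This is a hypothetical, minimal illustration (not the test's actual code): the idea is to compare a parameter loaded through the fast and slow init paths by its maximum absolute elementwise difference against a tolerance.

```python
import numpy as np

# Hypothetical sketch: `w_fast` and `w_slow` stand in for the same model
# parameter loaded via the two init paths; they should agree up to tiny
# numerical noise, so the maximum absolute difference is checked against
# a tolerance.
rng = np.random.default_rng(0)
w_fast = rng.normal(size=(4, 4))
w_slow = w_fast + 1e-7 * rng.normal(size=(4, 4))  # simulate tiny drift

max_diff = float(np.max(np.abs(w_fast - w_slow)))
assert max_diff < 1e-5  # tolerance check of the kind the test performs
print(max_diff)
```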
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19849/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19849/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19849", "html_url": "https://github.com/huggingface/transformers/pull/19849", "diff_url": "https://github.com/huggingface/transformers/pull/19849.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19849.patch", "merged_at": 1666796988000 }
https://api.github.com/repos/huggingface/transformers/issues/19848
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19848/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19848/comments
https://api.github.com/repos/huggingface/transformers/issues/19848/events
https://github.com/huggingface/transformers/issues/19848
1,421,156,950
I_kwDOCUB6oc5UtSJW
19,848
Add more model resources
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" } ]
closed
false
null
[]
[ "\r\n\r\n@stevhliu @NielsRogge is there a plan to add the additional vision tasks we currently support to [this page](https://huggingface.co/docs/transformers/task_summary#image-classification)? For example object detection, segmentation, video classification. Then there seems to be an entire section missing on multimodal models. \r\n\r\nI understand for these tasks, we don't have `pipeline`s yet. But I believe we can still enlist these tasks with our existing resources (notebooks, blog posts, scripts, etc.).\r\n\r\nAnother suggestion is to provide consolidated model links for each of the tasks enlisted in that page. For example, for image classification, it could be https://huggingface.co/models?pipeline_tag=image-classification. \r\n\r\nWDYT? \r\n\r\nCc: @osanseviero @nateraw ", "> @stevhliu @NielsRogge is there a plan to add the additional vision tasks we currently support to [this page](https://huggingface.co/docs/transformers/task_summary#image-classification)?\r\n\r\nYes I've pinged @stevhliu yesterday about this ;) he's working on it", "Amazing! @stevhliu, if possible could tag me in the PR when you open it? Would love to take a look. ", "For sure @sayakpaul! I'm working on a proposal to reorganize/update the tasks summary docs a bit to include the additional vision/multimodal tasks you mentioned. \r\n\r\nAlso great suggestion to link to the models; we can use these recently added [icons](https://github.com/huggingface/doc-builder/pull/317) in the docs :)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi, I am a newbie to open source and would like to contribute. 
@NielsRogge can I contribute to this issue?", "Welcome @avisinghal6, we'd be more than happy for you to make a contribution! 🤗 \r\n\r\nThe remaining models available are LayoutLM, LayoutLMV2, TrOCR, and OPT. Let me know which model you'd like to take on, and any questions you might have!", "@stevhliu, I can start working on LayoutLM model. So I need to search for existing official resources on hugging face website and attach relevant ones to the documentation of LayoutLM model ?", "Yup, check out issue #20055 for a list of resources to search from and the [BERT resources section](https://huggingface.co/docs/transformers/model_doc/bert#resources) for an example of what it should look like!", "@stevhliu, I have added the resources for LayoutLM. Commit : d360021d9caef517854e371f8ac286f1c0a4f802", "Hi @avisinghal6, would you mind opening a pull request on the Transformers repository for your contribution? You can check out this [guide](https://www.notion.so/huggingface2/Contribution-Guide-19411c29298644df8e9656af45a7686d#ed2eea8d355c497d9f05474e349f9f15) for more details how. Thanks! 😄 ", "Hi @stevhliu, I have created the PR#21377", "Hi @stevhliu - I want to add resources for LayoutLMv2. Will submit a PR soon. \r\n", "Looking forward to your contribution @SarangShrivastava!", "hey, @stevhliu I want to work on any of the model if its available", "Hey @SarangShrivastava , just checking to see if you're still interested in making a model contribution. Totally cool if you aren't available anymore, I'll unassign you from the model you claimed and let someone else take a stab at it. Thanks!\r\n\r\n@rajveer43 thanks for your interest! If any of the models free up, I'll let you know 🤗", "Hello @stevhliu I would like to make my first contribution to transformers and this looks like a great place to start😄. Is there any way I can help? I can see all the models have been assigned is it possible to work on a model already assigned? 
Thanks in advance", "Thanks for the interest; ~TrOCR~, ~LayoutLMV2~, and ~ALBERT~ are now available!", "@stevhliu I would like to start working on LayoutLMV2. ", "@stevhliu I have a question regarding 🌎 emoji. Exactly which resources should be marked by it?\r\nThanks in advance", "🌎 is for unofficial Hugging Face resources, such as community-created ones. For example, if you look at the GPT2 [resources](https://huggingface.co/docs/transformers/v4.28.1/en/model_doc/gpt2#resources), there are links to notebooks for generating lyrics and tweets from contributors :)", "@stevhliu thanks for the response. I noticed in BERT resources under text-classification a notebook by NielsRogge is not marked 🌎. Are [these notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv2) considered official?", "Hello @stevhliu I would like to make my first contribution to open source transformers and this looks like a great place to start. Is there any way I can help? Thanks in advance", "Hi, thanks for your interest @unitinguncle! Currently, all the models are taken but I'll let you know if something is available. In the meantime, feel free to also take a look at some of the [Good First Issue's](https://github.com/huggingface/transformers/issues?q=is%3Aopen+is%3Aissue+label%3A%22Good+First+Issue%22) to see if there is anything else you may be interested in!", "Hello @stevhliu I would like to make my first contribution to transformers and this looks like a great place to start😄. Is there any way I can make a contribution here? Thanks in advance\r\n\r\n", "Thanks for your interest @raj-pandey55! It looks like ALBERT is the only model left to complete. @ENate, are you still interested in making a contribution? If not, I'll reassign it to @raj-pandey55 🙂 ", "@stevhliu I am still interested and hope to finish up the doc and open a pull request soon. Sorry for the delay (I am available again after some pause). 
I actually noticed that there is an issue with building the doc and commented already. Will do so soon.", "I commented about the issue under the link\r\n\r\n```https://github.com/PyAV-Org/PyAV/issues/1140 ```\r\nand will hope to at least the get the doc build. Thanks", "Thanks for letting me know @ENate . @stevhliu if any new opportunity comes up, please let me know🤗. Thanks in advance ", "Hi there! 👋\r\n\r\nHello @stevhliu, I'm new to open source contributions, and I'm excited to start my journey by helping out here. I'm eager to learn and collaborate with the community. If there are any specific areas in the documentation that need attention, please let me know, and I'd be happy to take them on." ]
1,666
1,700
1,700
MEMBER
null
A continuation of #19767 to add existing official resources (blog posts, notebooks, scripts, etc.) directly to the model docs for 20 of the most popular architectures based on last month's pageviews. I'm not sure whether there are existing resources for all of these models, but I'll check it out, and if not we can either: * Move to the next most popular model * Or it could be a good opportunity to create some resources for it Tracking model progress and updates: - [x] BERT (see #19852) - [x] T5 (see #19878) - [x] RoBERTa (see #19911) - [x] GPT2 (see #19879) - [x] BLOOM (see #19881) - [x] BART (see #19928) - [x] ViT (assigned to @stanleycai95) - [x] DistilBERT (see #19930) - [x] Wav2Vec2 (see #19931) - [x] LayoutLMV3 (see #19932) - [x] CLIP (assigned to @ambujpawar, see #20190) - [x] LayoutLM (assigned to @avisinghal6) - [x] GPT-J (assigned to @adit299) - [x] TrOCR (assigned to @huangperry) - [x] LayoutLMV2 (assigned to @y3sar) - [x] ALBERT (assigned to @ENate) - [x] OPT (assigned to @alissadb) - [x] DeBERTa (assigned to @Saad135, see #20155) - [x] OpenAI GPT (assigned to @shogohida, see #20084) - [x] XLM-RoBERTa (assigned to @hazrulakmal)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19848/reactions", "total_count": 4, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19848/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19847
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19847/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19847/comments
https://api.github.com/repos/huggingface/transformers/issues/19847/events
https://github.com/huggingface/transformers/pull/19847
1,421,131,500
PR_kwDOCUB6oc5BbiqK
19,847
Small update to model addition guide
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1834067346, "node_id": "MDU6TGFiZWwxODM0MDY3MzQ2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation", "name": "Documentation", "color": "77cc3b", "default": false, "description": "" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
MEMBER
null
This PR is a smaller version of #19778 (shelved for now) which includes only the more important fixes for maintaining accuracy: * remove *call-for-model-addition* program * `cookiecutter` adds an `mdx` instead of an `rst` file * fix the to-do list so it doesn't have both numbers and bullets
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19847/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19847/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19847", "html_url": "https://github.com/huggingface/transformers/pull/19847", "diff_url": "https://github.com/huggingface/transformers/pull/19847.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19847.patch", "merged_at": 1666631900000 }
https://api.github.com/repos/huggingface/transformers/issues/19846
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19846/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19846/comments
https://api.github.com/repos/huggingface/transformers/issues/19846/events
https://github.com/huggingface/transformers/pull/19846
1,421,072,783
PR_kwDOCUB6oc5BbWNU
19,846
Fix warning when collating list of numpy arrays
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
COLLABORATOR
null
# What does this PR do? As reported in #19822 there is a warning issued by PyTorch when we try to batch the result of a feature extractor when `return_tensors="pt"` is not activated. This PR fixes it by stacking the list of NumPy arrays into a single big array first. Fixes #19822
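The fix described above can be illustrated with a minimal, hypothetical sketch (not the actual Transformers collation code): converting a Python list of NumPy arrays to a framework tensor element by element is the slow path PyTorch warns about, while stacking the list into one contiguous array first gives a single buffer to convert.

```python
import numpy as np

# Hypothetical collation sketch: `features` stands in for per-example
# feature-extractor outputs. Stacking into one contiguous ndarray first
# yields a single buffer that can then be converted in one step
# (e.g. with torch.as_tensor(batched)), avoiding the per-element warning.
features = [np.full((3,), i, dtype=np.float32) for i in range(4)]

batched = np.stack(features)  # shape (4, 3), dtype float32
print(batched.shape)
```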
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19846/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19846/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19846", "html_url": "https://github.com/huggingface/transformers/pull/19846", "diff_url": "https://github.com/huggingface/transformers/pull/19846.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19846.patch", "merged_at": 1666875639000 }
https://api.github.com/repos/huggingface/transformers/issues/19845
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19845/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19845/comments
https://api.github.com/repos/huggingface/transformers/issues/19845/events
https://github.com/huggingface/transformers/pull/19845
1,421,029,991
PR_kwDOCUB6oc5BbNAv
19,845
Fix doctest for MarkupLM
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
COLLABORATOR
null
# What does this PR do? The CI complains ``` UNEXPECTED EXCEPTION: SyntaxError('EOF while scanning triple-quoted string literal', ('<doctest markuplm.mdx[2]>', 7, 96, 'html_string = """\n <!DOCTYPE html>\n <html>\n <head>\n <title>Hello world</title>\n </head>\n <body>\n')) ``` due to the lack of `...` in some empty lines. But `make style` will remove those `...` after I add them to those lines. So I just removed those empty lines to pass the doctest.
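The continuation-marker rule behind this error can be reproduced with a minimal, hypothetical doctest (names are illustrative): inside a doctest, every continuation line of a statement must start with the `... ` prompt, so a line missing it terminates the example early, and an open triple-quoted string then raises exactly this `SyntaxError`.

```python
import doctest

# Hypothetical minimal doctest: the multi-line triple-quoted string parses
# only because every continuation line carries the "... " prompt. A blank
# line without the prompt would end the example mid-string and reproduce
# the "EOF while scanning triple-quoted string literal" failure.
good = '''
>>> html_string = """<html>
... <body>Hello world</body>
... </html>"""
>>> "Hello" in html_string
True
'''

test = doctest.DocTestParser().get_doctest(good, {}, "good", None, 0)
runner = doctest.DocTestRunner(verbose=False)
runner.run(test)
print(runner.failures, runner.tries)  # 0 failures over 2 examples
```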
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19845/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19845/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19845", "html_url": "https://github.com/huggingface/transformers/pull/19845", "diff_url": "https://github.com/huggingface/transformers/pull/19845.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19845.patch", "merged_at": 1666626863000 }
https://api.github.com/repos/huggingface/transformers/issues/19844
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19844/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19844/comments
https://api.github.com/repos/huggingface/transformers/issues/19844/events
https://github.com/huggingface/transformers/issues/19844
1,421,007,669
I_kwDOCUB6oc5Usts1
19,844
ImportError: libssl.so.3: cannot open shared object file: No such file or directory
{ "login": "efima-ai", "id": 95485781, "node_id": "U_kgDOBbD_VQ", "avatar_url": "https://avatars.githubusercontent.com/u/95485781?v=4", "gravatar_id": "", "url": "https://api.github.com/users/efima-ai", "html_url": "https://github.com/efima-ai", "followers_url": "https://api.github.com/users/efima-ai/followers", "following_url": "https://api.github.com/users/efima-ai/following{/other_user}", "gists_url": "https://api.github.com/users/efima-ai/gists{/gist_id}", "starred_url": "https://api.github.com/users/efima-ai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/efima-ai/subscriptions", "organizations_url": "https://api.github.com/users/efima-ai/orgs", "repos_url": "https://api.github.com/users/efima-ai/repos", "events_url": "https://api.github.com/users/efima-ai/events{/privacy}", "received_events_url": "https://api.github.com/users/efima-ai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey, same problem, how to fix this?", "After install transformers with anaconda on Ubuntu 22 my scripts wont run anymore with tons of these ImportErrors: \r\n(ie.) libffi.so.7: cannot open shared object file: No such file or directory\r\nRemoved and created a new environment with fresh install of dependancies.", "Downgrading to `tokenizers=0.10.3` seems to work as an interim fix. The main issue appears to be from a conflict between some libraries (e.g. TF) requiring OpenSSL<=1.1.1, and tokenizers using OpenSSL3.\r\n\r\nI think that this can be circumvented by using transformers from pip or source, although in my particular project conda/mamba is a stricter requirement.\r\n\r\nSolution is basically discussed here: https://discuss.huggingface.co/t/importing-tokenizers-version-0-10-3-fails-due-to-openssl/17820/3\r\n\r\n(@theRealMachineWhisperer, I can imagine your frustration, but please be kind to open source developers and try to frame your challenges constructively <3)", "Downgrading the tokenizers leads to downgrading the transformers library as well; So if you needed to use a model/feature in the newer version, it would not work. I fixed this issue by using pip to install transformers (instead of conda).", "Fixed my issue by running apt install libffi7", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "> Downgrading the tokenizers leads to downgrading the transformers library as well; So if you needed to use a model/feature in the newer version, it would not work. 
I fixed this issue by using pip to install transformers (instead of conda).\r\n\r\nthanks, i tried\r\npip install transformers\r\nand it works.", "> \r\n\r\nYou can also try `pip install transformers --force-reinstall`\r\n\r\nI've found this can help refresh dependency versions. Do note, there may be an underlying compatibility concern in your environment." ]
1,666
1,677
1,673
NONE
null
### System Info Hi, I get the following error when calling `from transformers import BertModel, BertTokenizer`. The error: `ImportError: libssl.so.3: cannot open shared object file: No such file or directory` I tried the following (suggested in other similar threads): - instead of conda installation, use pip - downgrade the tokenizers package Neither of those fixed the issue. The following versions were used: - Python 3.8.13 - tokenizers 0.13.1 - transformers 4.23.1 Any ideas how to fix this? ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction `from transformers import BertModel, BertTokenizer` ### Expected behavior No ImportError is expected.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19844/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19844/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19843
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19843/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19843/comments
https://api.github.com/repos/huggingface/transformers/issues/19843/events
https://github.com/huggingface/transformers/issues/19843
1,420,921,133
I_kwDOCUB6oc5UsYkt
19,843
Pretraining score (bpc) does not decrease after pretraining saving and loading a Longformer model.
{ "login": "muxitox", "id": 26285625, "node_id": "MDQ6VXNlcjI2Mjg1NjI1", "avatar_url": "https://avatars.githubusercontent.com/u/26285625?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muxitox", "html_url": "https://github.com/muxitox", "followers_url": "https://api.github.com/users/muxitox/followers", "following_url": "https://api.github.com/users/muxitox/following{/other_user}", "gists_url": "https://api.github.com/users/muxitox/gists{/gist_id}", "starred_url": "https://api.github.com/users/muxitox/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muxitox/subscriptions", "organizations_url": "https://api.github.com/users/muxitox/orgs", "repos_url": "https://api.github.com/users/muxitox/repos", "events_url": "https://api.github.com/users/muxitox/events{/privacy}", "received_events_url": "https://api.github.com/users/muxitox/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "Hi @muxitox. If I understand correctly, the issue is about training->saving->loading->eval won't give the same result as the eval result before saving.\r\n\r\nCould you provide a self-contained complete code snippet that can reproduce the issue? You probably don't need a full training (we can't debug if the training needs 2 days). \r\n\r\n- By `self-contained complete`, it means we can copy a single block of code and run it directly.\r\n- Could you try with a much smaller model that runs on a single GPU? You can also feed the model with shorter input sequence.\r\n\r\nThanks a lot!\r\n\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Sorry for the delay. Did the experiment you suggested. 
In code_reqs.zip you can find the `convert_model_to_long_updated_github.py` script with a minimal setup to reproduce what you said in one gpu.\r\n\r\nYou can also find in requirements.txt all the packages installed in the environment and in requirements_minimal.txt the minimal requirements I think you should need to make this going.\r\n\r\n[code_reqs.zip](https://github.com/huggingface/transformers/files/10122396/code_reqs.zip)\r\n\r\nI have obtained the following results:\r\n\r\n`{\"ini_bpc\": 1.6242934465408325, \"end_bpc\": 1.5721702575683594, \"end_bpc_review\": 1.5611544847488403, \"proj_end_bpc\": 1.5611544847488403, \"bef_proj_end_bpc\": 1.5611544847488403, \"roberta_ini_bpc\": 1.4125990867614746, \"roberta_end_bpc\": 1.0448211431503296, \"roberta_end_bpc_review\": 1.0905007123947144}`\r\n\r\nWhere `ini_bpc` is the bpc before pre-training the longformer, `end_bpc` is the bpc after pre-training, `end_bpc_review` is the bpc after saving and reloading the pre-trained model, `bef_proj_end_bpc` is the bpc before manually modifying the global attention as suggested in the Notebook in the original Longformer repository and `proj_end_bpc` is the bpc after modifying the global attention. Same naming convention applies for the RoBERTa model. \r\n\r\nWe can see that after training for 1 step on the dev split the bpc diminishes after pre-training. After re-loading, `end_bpc_review` keeps being lower even though its not exactly the same as in `end_bpc` (I think due to the data collator masking different tokens each evaluation). 
\r\n\r\nYou can reproduce this executing the following line of code after laoding the installed environment:\r\n\r\n`python convert_model_to_long_updated_github.py --output_dir issue_results --per_device_eval_batch_size 8 --per_device_train_batch_size 2 --gradient_accumulation_steps 1 --seed 1 --learning_rate 0.00003`\r\n\r\nThe behavior we observe for this script is the expected one I think, though in my original multi-node multi-gpu set-up the problem was that end_bpc_review = ini_bpc.\r\n\r\nIn that case, this is the way I invoked the .py script (which is different than the one I shared to account for multi-node multi-gpu distribution):\r\n```\r\npython -m torch.distributed.run $DIST_ARGS ../../convert_model_to_long_updated.py \\\r\n --dataset_name_or_loading_script $dataset \\\r\n --separate_documents \\\r\n --model_name_or_path $model \\\r\n --seed $SEED \\\r\n --warmup_steps 500 \\\r\n --learning_rate $lr \\\r\n --weight_decay \"0.01\" \\\r\n --adam_epsilon \"1e-6\" \\\r\n --logging_steps 500 \\\r\n --save_strategy steps \\\r\n --save_steps 6500 \\\r\n --max_grad_norm \"5.0\" \\\r\n --per_device_eval_batch_size 8 \\\r\n --per_device_train_batch_size 2 \\\r\n --gradient_accumulation_steps $GRADIENT_ACCUM \\\r\n --evaluation_strategy steps \\\r\n --eval_steps 500 \\\r\n --do_train \\\r\n --do_eval \\\r\n --cache_dir $CACHE_DIR \\\r\n --output_dir $OUTPUT_DIR \\\r\n --logging_dir $LOGGING_DIR \\\r\n --max_steps $STEPS \\\r\n --log_on_each_node False \\\r\n --save_on_each_node False \\\r\n```\r\nI do not know if I set up something wrong or if indeed there is some issue in the saving process in multi-node settings.\r\n\r\nJust for context, I work in a cluster with 2 AMD GPU Instinct™ MI50 per node, so we use torch for ROCM5.1.1.", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,666
1,675
1,675
NONE
null
### System Info Python 3.7.4 torch==1.12.0+rocm5.1.1 torchaudio==0.12.0+rocm5.1.1 torchmetrics==0.8.0 torchvision==0.13.0+rocm5.1.1 transformers==4.20.1 datasets==2.3.2 ### Who can help? @ydshieh ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am using a modified version of the [notebook](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) provided in the official Longformer [repository](https://github.com/allenai/longformer). I had to change the recommended versions in that repository in order to be able to load with a specific model the recommended Transformers version would not load properly. This caused the need for slight modifications in the code in order to make it suitable for newer Transformers and Datasets versions. Dataset handling was also modified in order to use a custom dataset. I also changed the code in order to convert the roberta-4096 to a Longformer-4096 so it can be used more easily for downstream tasks. However, the main body of the my code very similar to theirs. The main problem comes after pretraining and saving the model in the following piece of code. ` """### Pretrain and Evaluate on masked language modeling (MLM) The following functions pretrain and evaluate a model on MLM. 
""" def pretrain_and_evaluate(args, model, tokenizer, eval_only, model_path, logger, max_length=512, separate_documents=False, cache_dir=None): data_files = {} data_files["validation"] = args.val_datapath if not eval_only: # data_files["train"] = args.val_datapath data_files["train"] = args.train_datapath extension = "text" datasets = load_dataset(extension, data_files=data_files, cache_dir=cache_dir) val_dataset = load_and_preprocess(datasets["validation"], tokenizer, max_length, separate_documents, logger) if eval_only: train_dataset = val_dataset else: logger.info(f'Loading and tokenizing training data is usually slow: {args.train_datapath}') train_dataset = val_dataset # TODO: remove this comment # train_dataset = load_and_preprocess(datasets["train"], tokenizer, max_length, separate_documents, logger) logger.info(f'Dataset loaded and pre-processed') logger.info(f'Set up Trainer...:') data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15) trainer = Trainer(model=model, args=args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=val_dataset) # train_dataset=train_dataset, eval_dataset=val_dataset, prediction_loss_only=True,) logger.info(f'Initial evaluation...:') eval_loss_pre = trainer.evaluate() eval_loss_pre_int = eval_loss_pre['eval_loss'] logger.info(f'Initial eval bpc: {eval_loss_pre_int/math.log(2)}') bpc_results_in = {"ini_bpc": eval_loss_pre_int} if not eval_only: logger.info(f"Start pretraining from checkpoint {model_path}") trainer_results = trainer.train(resume_from_checkpoint=model_path) logger.info(f"Trainer results metrics: {trainer_results.metrics}") logger.info(f"Saving after pretraining in {model_path}") trainer.save_model(output_dir=model_path) logger.info(f'Mid evaluation...:') eval_loss_post = trainer.evaluate() eval_loss_post_int = eval_loss_post['eval_loss'] logger.info(f'Eval bpc after pretraining: {eval_loss_post_int/math.log(2)}') bpc_results_in["end_bpc"] = 
eval_loss_post_int model_copy = LongformerForMaskedLM.from_pretrained(model_path) trainer_copy = Trainer(model=model_copy, args=args, data_collator=data_collator,train_dataset=train_dataset, eval_dataset=val_dataset) eval_loss_pre_copy = trainer_copy.evaluate() eval_loss_pre_copy_int = eval_loss_pre_copy['eval_loss'] logger.info(f'Eval bpc after pretraining and loading: {eval_loss_pre_copy_int/math.log(2)}') return bpc_results_in ` Where the variable model has been previously loaded as: ` tokenizer = LongformerTokenizerFast.from_pretrained(model_path_longformer, cache_dir=model_args.cache_dir, model_max_length=4096) model = LongformerForMaskedLM.from_pretrained(model_path_longformer, cache_dir=model_args.cache_dir) ` And the function is called as: ` bpc_results = pretrain_and_evaluate(training_args, model, tokenizer, eval_only=False, model_path=model_path_longformer, logger=logger, cache_dir=model_args.cache_dir, max_length=model_args.max_pos, separate_documents=data_args.separate_documents) ` As you may observe I am: - Making and initial evaluation of the model (BPC0). - Pretraining the model and saving it. I am pretraining one step on the validation set in order to reduce computation. - Making an evaluation of the model (BPCa). - Loading the pretraining model another evaluation (BPCb). Since I am in a multi-node multi-gpu setting, I am using torch.distributed.run to launch the script. I am also setting these variables to False in the script arguments: ` --log_on_each_node False \ --save_on_each_node False \ ` ### Expected behavior I am trying to pre-train a Longformer on a custom text dataset starting from a RoBERTa checkpoint as indicated in the Longformer repository. From a normal execution, one would expect BPC0 > BPCa and BPCa == BPCb. However, what I am experiencing is BPC0 (2.0194287300109863) > BPCa (2.016040325164795) and BP0 == BPCb (2.0194287300109863). 
Which means that the pretraining seems to be working succesfully but somehow after saving and loading the model, the weights are not being managed/saved properly.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19843/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19843/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19842
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19842/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19842/comments
https://api.github.com/repos/huggingface/transformers/issues/19842/events
https://github.com/huggingface/transformers/issues/19842
1,420,921,132
I_kwDOCUB6oc5UsYks
19,842
Un-pin JAX from <= 0.3.6
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[ { "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false } ]
[ "Until the Jax/Flax team guarantees they will maintain compatibility in their ecosystem during releases, they should be an upper pin to avoid sudden breaks of the CI.", "Very much agree @sgugger! I'll maintain an upper-bound on the versions of JAX and JAX derived libraries to ensure there aren't minor releases that break our CI 🤗 We can then deal with new versions of JAX on a version-by-version basis.", "[](url)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Closed by https://github.com/huggingface/transformers/pull/24791" ]
1,666
1,691
1,691
CONTRIBUTOR
null
### Feature request JAX was pinned to <= 0.3.6 in #16808 when a minor release of JAX was published that broke Optax: https://github.com/huggingface/transformers/blob/8b2501b4b9c5afad0dca2c964bc31b9fca09df4e/setup.py#L123 There have been 18 subsequent minor releases since (_c.f._ https://jax.readthedocs.io/en/latest/changelog.html), with v0.3.24 the latest. We should un-pin this hard requirement to allow users to install the latest version of JAX with Transformers. ### Motivation Should a user have JAX == 0.3.24 installed and then tries to install Transformers using pip: ``` pip install transformers[flax] ``` JAX is downgraded to 0.3.6 due to the pinning requirement. Should a user be using the latest JAX features, this then requires JAX to be _re-upgraded_ to 0.3.24: ``` pip install -U jax ``` The same holds true for JAX dependencies (e.g. Flax or Optax). ### Your contribution Investigate all versions of JAX/Flax/Optax triplets compatible with Transformers. Black-list those that break the CI, otherwise allow users to install that version of JAX in `setup.py`. cc @cgarciae
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19842/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19842/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19841
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19841/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19841/comments
https://api.github.com/repos/huggingface/transformers/issues/19841/events
https://github.com/huggingface/transformers/pull/19841
1,420,824,635
PR_kwDOCUB6oc5BahIv
19,841
Update expected values
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
COLLABORATOR
null
# What does this PR do? PR #19654 changed some string literals in `LEDModelIntegrationTests.test_seq_to_seq_generation` (Use of `r"""`), which gives different outputs. I think the inputs are different with/without `r"""`, so I just update the expect values in this PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19841/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19841/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19841", "html_url": "https://github.com/huggingface/transformers/pull/19841", "diff_url": "https://github.com/huggingface/transformers/pull/19841.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19841.patch", "merged_at": 1666620326000 }
https://api.github.com/repos/huggingface/transformers/issues/19840
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19840/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19840/comments
https://api.github.com/repos/huggingface/transformers/issues/19840/events
https://github.com/huggingface/transformers/pull/19840
1,420,747,260
PR_kwDOCUB6oc5BaQfg
19,840
Fix OOM in config doctest
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
COLLABORATOR
null
# What does this PR do? This model is too large to be tested (OOM). ```python >>> # Initializing a model (with random weights) from the gpt-neox-20b style configuration >>> model = GPTNeoXModel(configuration) # doctest: +SKIP ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19840/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19840/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19840", "html_url": "https://github.com/huggingface/transformers/pull/19840", "diff_url": "https://github.com/huggingface/transformers/pull/19840.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19840.patch", "merged_at": 1666618381000 }
https://api.github.com/repos/huggingface/transformers/issues/19839
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19839/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19839/comments
https://api.github.com/repos/huggingface/transformers/issues/19839/events
https://github.com/huggingface/transformers/pull/19839
1,420,676,245
PR_kwDOCUB6oc5BaBNh
19,839
[WIP] Fix edge cases in TopPLogitsWarper when top_p equals 0 or 1
{ "login": "NinedayWang", "id": 45553486, "node_id": "MDQ6VXNlcjQ1NTUzNDg2", "avatar_url": "https://avatars.githubusercontent.com/u/45553486?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NinedayWang", "html_url": "https://github.com/NinedayWang", "followers_url": "https://api.github.com/users/NinedayWang/followers", "following_url": "https://api.github.com/users/NinedayWang/following{/other_user}", "gists_url": "https://api.github.com/users/NinedayWang/gists{/gist_id}", "starred_url": "https://api.github.com/users/NinedayWang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NinedayWang/subscriptions", "organizations_url": "https://api.github.com/users/NinedayWang/orgs", "repos_url": "https://api.github.com/users/NinedayWang/repos", "events_url": "https://api.github.com/users/NinedayWang/events{/privacy}", "received_events_url": "https://api.github.com/users/NinedayWang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19839). All of your documentation changes will be reflected on that endpoint.", "Hi @NinedayWang 👋 \r\n\r\nThe edge cases are intentionally breaking -- `top_p=0.0` means in theory that no token can be sampled (and in practice, that they have the same odds), while `top_p=1.0` is equivalent to not having the `top_p` operation.\r\n\r\nI struggle to see a use case for these changes, but I'm open to suggestions :)", "\r\n\r\n> Hi @NinedayWang 👋\r\n> \r\n> The edge cases are intentionally breaking -- `top_p=0.0` means in theory that no token can be sampled (and in practice, that they have the same odds), while `top_p=1.0` is equivalent to not having the `top_p` operation.\r\n> \r\n> I struggle to see a use case for these changes, but I'm open to suggestions :)\r\n\r\nThanks for your quick reply. @gante \r\n\r\nI think if the changes are not made, there will be the following problems:\r\n1. When `do_sample=True` and `top_p=0`:\r\n(1) In PyTorch, `TopPLogitsWarper` will filter any logit to be `-inf`, causing us to get logits filled with `nan` after softmax, then the program will throw a `nan` exception on `multinomial` operation as follows:\r\n\t```\r\n\tTraceback (most recent call last):\r\n\t File \"eval_human_eval_wx.py\", line 118, in <module>\r\n\t evaluate_on_human_eval(\r\n\t File \"eval_human_eval_wx.py\", line 98, in evaluate_on_human_eval\r\n\t gen_results = run_code_generation(pipe, input_prompt, num_completions=generate_batch_size, **gen_kwargs)\r\n\t File \"eval_human_eval_wx.py\", line 44, in run_code_generation\r\n\t code_gens = pipe(prompt,\r\n\t File \"/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/transformers/pipelines/text_generation.py\", line 187, in __call__\r\n\t return super().__call__(text_inputs, **kwargs)\r\n\t File \"/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/transformers/pipelines/base.py\", line 1074, in __call__\r\n\t 
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)\r\n\t File \"/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/transformers/pipelines/base.py\", line 1081, in run_single\r\n\t model_outputs = self.forward(model_inputs, **forward_params)\r\n\t File \"/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/transformers/pipelines/base.py\", line 990, in forward\r\n\t model_outputs = self._forward(model_inputs, **forward_params)\r\n\t File \"/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/transformers/pipelines/text_generation.py\", line 229, in _forward\r\n\t generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)\r\n\t File \"/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n\t return func(*args, **kwargs)\r\n\t File \"/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/transformers/generation_utils.py\", line 1422, in generate\r\n\t return self.sample(\r\n\t File \"/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/transformers/generation_utils.py\", line 2071, in sample\r\n\t next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)\r\n\tRuntimeError: probability tensor contains either `inf`, `nan` or element < 0\r\n\t```\r\n In the 4.22 version of transformers, I used the setting of `do_sample=True` and `top_p=0` ​​to make my program do greedy decoding, because the implementation in version 4.22 uses a right shift operation to make sure that there is at least one token left. But when I updated the version to the latest, I got the above exception and took some time to find the reason.\r\n\r\n **The above is the main reason for submitting this PR. 
I agree that \"top_p=0.0 means in theory that no token can be sampled\", but in practice, we will encounter a `nan` error with no explicit error message, rather than sample each token with the same odds.**\r\n\r\n (2) In TF and FLAX, the implementation of `TopPLogitsWarper` in both frameworks uses a right-shift operation to achieve top-scoring token preservation, producing the same results as greedy decoding. So using the settings of `do_sample=True` and `top_p=0` will not report an error, which is different from the PyTorch framework. **The behavior of the three frameworks is different now.**\r\n\r\n2. When `do_sample=True` and `top_p=1`:\r\nYes, setting `top_p=1` is equivalent to not having the top_p operation, and it does not cause problems other than significantly reducing generation performance. **My change here is more to keep consistency with the error message of `ValueError`.** \r\n\r\n Additionally, the developers have ensured that `TopPLogitsWarper` will only be performed if `top_p<1` in the `_get_logits_warper` function of `generation_utils.py`:\r\n \t```\r\n\tif top_p is not None and top_p < 1.0:\r\n\t warpers.append(TopPLogitsWarper(top_p=top_p, min_tokens_to_keep=(2 if num_beams > 1 else 1)))\r\n\t```\r\n I just thought it would be better to be consistent.", "@NinedayWang thank you for elaborating -- that makes sense! :) \r\n\r\nI'd like to suggest two modifications, to then merge the PR:\r\n1. Regarding making the check on `top_p` more strict: because our library is used in production, we must avoid breaking changes whenever we can (which would be the case). Instead, can we raise a warning when `top_p` is either `0.0` (degenerated to argmax token selection) or `1.0` (redundant operation)?\r\n2. Regarding the case where `top_p=0.0`: we both actually forgot the other argument in the discussion above, `min_tokens_to_keep`, which is not being respected. 
To fix it in this edge case, changing `if self.min_tokens_to_keep > 1:` to `if self.min_tokens_to_keep > 0:` OR forcing `min_tokens_to_keep` to be strictly positive (and removing this `if`) is enough. This change will also make the edge case return back to a greedy decoding-like behavior, as in TF and FLAX, with no exceptions being thrown.", "@gante Thanks for your advice! I will work on this :)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@NinedayWang Are you planning to continue this work? :)", "> @NinedayWang Are you planning to continue this work? :)\r\n\r\n@gante Terribly sorry! Other urgent matters delayed this work. I will continue it and update the progress soon!", "@NinedayWang no worries, take your time :D " ]
1,666
1,672
1,669
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> This PR fixes edge cases in TopPLogitsWarper when top_p equals 0 or 1. The following are specific contributions: 1. Make the code consistent with the `ValueError` error message. 2. Fix the `nan` problem. In the original PyTorch implementation, if `do_sample=True` and `top_p=0`, then each position in logits will be set to `-float("Inf")`, and then the `nan` problem will be encountered after softmax. 3. Make the three frameworks (PyTorch/TF/FLAX) have the same behavior. In the original implementation, when `do_sample=True` and `top_p=0`, the PyTorch framework would encounter `nan` errors, while TF and FLAX would keep the highest scoring token due to their right shift operations. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. 
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gante @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19839/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19839/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19839", "html_url": "https://github.com/huggingface/transformers/pull/19839", "diff_url": "https://github.com/huggingface/transformers/pull/19839.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19839.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19838
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19838/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19838/comments
https://api.github.com/repos/huggingface/transformers/issues/19838/events
https://github.com/huggingface/transformers/pull/19838
1,420,606,053
PR_kwDOCUB6oc5BZx61
19,838
Add padding image transformation
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19838). All of your documentation changes will be reflected on that endpoint.", "@NielsRogge I've asked for re-review as the logic has changed a bit from when you first reviewed. ", "> Thanks for adding! Do you think it'd be useful to add a pytorch padding vs our implementation equivalence test?\r\n\r\n@NielsRogge What do you think should be covered wrt equivalence - is there logic you want to make sure is always aligned between the two? These transformations aren't meant to be a np copy of the torch library so there isn't a 1:1 mapping. ", "@NielsRogge I'm going to merge. If we decide to add the equivalence tests I'll add in a follow up PR. ", "Ok fine, it's just that I was working on a model (#19784) that leverages torch.nn.functional.pad as seen [here](https://github.com/mv-lab/swin2sr/blob/7eeebfba849bbc934ea254ec4cfa8e9d6fc0672c/models/network_swin2sr.py#L891). So it'd be nice to check equivalence between PyTorch and our NumPy implementation." ]
1,666
1,668
1,668
COLLABORATOR
null
# What does this PR do? Adds padding to the image transforms library. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19838/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19838/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19838", "html_url": "https://github.com/huggingface/transformers/pull/19838", "diff_url": "https://github.com/huggingface/transformers/pull/19838.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19838.patch", "merged_at": 1668770841000 }
https://api.github.com/repos/huggingface/transformers/issues/19837
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19837/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19837/comments
https://api.github.com/repos/huggingface/transformers/issues/19837/events
https://github.com/huggingface/transformers/pull/19837
1,420,587,456
PR_kwDOCUB6oc5BZt54
19,837
Fix nightly CircleCI
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger I accidentally push `GitPython` to `main`\r\n\r\nhttps://github.com/huggingface/transformers/commit/6f8064da6b0c8f003731f292acc64f281a2aea65\r\n\r\n😨 ", "Ah so this is done, perfect!" ]
1,666
1,666
1,666
COLLABORATOR
null
# What does this PR do? Fix a few nightly CircleCI issues The effect could be found [here](https://app.circleci.com/pipelines/github/huggingface/transformers/50144/workflows/aaa65460-bbb5-4048-af3e-9450af06e231).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19837/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19837/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19837", "html_url": "https://github.com/huggingface/transformers/pull/19837", "diff_url": "https://github.com/huggingface/transformers/pull/19837.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19837.patch", "merged_at": 1666620002000 }
https://api.github.com/repos/huggingface/transformers/issues/19836
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19836/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19836/comments
https://api.github.com/repos/huggingface/transformers/issues/19836/events
https://github.com/huggingface/transformers/issues/19836
1,420,583,265
I_kwDOCUB6oc5UrGFh
19,836
Save model using save_pretrained method.
{ "login": "Iron-man-0", "id": 116186408, "node_id": "U_kgDOBuzdKA", "avatar_url": "https://avatars.githubusercontent.com/u/116186408?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Iron-man-0", "html_url": "https://github.com/Iron-man-0", "followers_url": "https://api.github.com/users/Iron-man-0/followers", "following_url": "https://api.github.com/users/Iron-man-0/following{/other_user}", "gists_url": "https://api.github.com/users/Iron-man-0/gists{/gist_id}", "starred_url": "https://api.github.com/users/Iron-man-0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Iron-man-0/subscriptions", "organizations_url": "https://api.github.com/users/Iron-man-0/orgs", "repos_url": "https://api.github.com/users/Iron-man-0/repos", "events_url": "https://api.github.com/users/Iron-man-0/events{/privacy}", "received_events_url": "https://api.github.com/users/Iron-man-0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please use the [forums](https://discuss.huggingface.co/) to get help debugging your code :-)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,666
1,669
1,669
NONE
null
I trained the DETR model on a custom dataset using this tutorial: [Niels Rogge Fine_tuning_DetrForObjectDetection](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) I was trying to save that model using the save_pretrained method, but I'm getting an error. Can someone please help me on how to save a model and load the same for inference using the save_pretrained and from_pretrained methods. Thank you so much.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19836/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19835
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19835/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19835/comments
https://api.github.com/repos/huggingface/transformers/issues/19835/events
https://github.com/huggingface/transformers/pull/19835
1,420,409,194
PR_kwDOCUB6oc5BZID8
19,835
Display the number of trainable parameters in Trainer when launching a training
{ "login": "regisss", "id": 15324346, "node_id": "MDQ6VXNlcjE1MzI0MzQ2", "avatar_url": "https://avatars.githubusercontent.com/u/15324346?v=4", "gravatar_id": "", "url": "https://api.github.com/users/regisss", "html_url": "https://github.com/regisss", "followers_url": "https://api.github.com/users/regisss/followers", "following_url": "https://api.github.com/users/regisss/following{/other_user}", "gists_url": "https://api.github.com/users/regisss/gists{/gist_id}", "starred_url": "https://api.github.com/users/regisss/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/regisss/subscriptions", "organizations_url": "https://api.github.com/users/regisss/orgs", "repos_url": "https://api.github.com/users/regisss/repos", "events_url": "https://api.github.com/users/regisss/events{/privacy}", "received_events_url": "https://api.github.com/users/regisss/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> When launching a training with an instance of the `Trainer` class, a small recap is displayed with various pieces of information (total train batch size, total optimization steps, etc...). This PR adds to this recap the number of trainable parameters of the model because this is not always mentioned in the model card or in the documentation and I think it is a valuable figure to have in all runs. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? 
Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19835/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19835/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19835", "html_url": "https://github.com/huggingface/transformers/pull/19835", "diff_url": "https://github.com/huggingface/transformers/pull/19835.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19835.patch", "merged_at": 1666617352000 }
https://api.github.com/repos/huggingface/transformers/issues/19834
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19834/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19834/comments
https://api.github.com/repos/huggingface/transformers/issues/19834/events
https://github.com/huggingface/transformers/issues/19834
1,420,376,683
I_kwDOCUB6oc5UqTpr
19,834
support loading pretrained model from fsspec paths
{ "login": "leoleoasd", "id": 37735580, "node_id": "MDQ6VXNlcjM3NzM1NTgw", "avatar_url": "https://avatars.githubusercontent.com/u/37735580?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leoleoasd", "html_url": "https://github.com/leoleoasd", "followers_url": "https://api.github.com/users/leoleoasd/followers", "following_url": "https://api.github.com/users/leoleoasd/following{/other_user}", "gists_url": "https://api.github.com/users/leoleoasd/gists{/gist_id}", "starred_url": "https://api.github.com/users/leoleoasd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leoleoasd/subscriptions", "organizations_url": "https://api.github.com/users/leoleoasd/orgs", "repos_url": "https://api.github.com/users/leoleoasd/repos", "events_url": "https://api.github.com/users/leoleoasd/events{/privacy}", "received_events_url": "https://api.github.com/users/leoleoasd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you, but we're not interested in expanding support to other tools than the Hugging Face Hub :-)", "I'd like to use Hub too, but our cluster have no public internet access...", "> I'd like to use Hub too, but our cluster have no public internet access...\r\n\r\nhai, can you help how to load the pretrain model from hdfs path", "Still no way to do that. I have to use pytorch_lightning's ModelCheckpoint callback to load & save.\r\nYou can pre-process your pretrained model into a pytorch `.pt` file then load it with `torch.load(fsspec.open('xxx'))`.", "> \r\n\r\nthanks so much for your reply , can you teach more detail please ,thanks", "hha, I have got an simple solution without changing the source code , just use spark submit :\r\n--conf spark.yarn.dist.archives=hdfs://your_own_hdfs_path/your_pretraining_model.zip#your_pretraining_model\r\n \r\n\r\n", "@NYcleaner thanks for this idea! How would you be loading (referencing) the model then inside the spark job? , as a hdfs://your_own... string ?", "> @NYcleaner thanks for this idea! How would you be loading (referencing) the model then inside the spark job? , as a hdfs://your_own... string ?\r\n\r\nIn case you have not yet figured out. When you provide `--conf spark.yarn.dist.archives=hdfs://your_own_hdfs_path/your_pretraining_model.zip#your_pretraining_model` the archive is downloaded extracted and symlinked as `your_training_model` in the current directory. When passing the path you should be able to do just `./your_pretraining_model`.", "That’s amazing - thanks for pointing it out!\r\n\r\n> Am 19.10.2023 um 20:03 schrieb Pavan Lanka ***@***.***>:\r\n> \r\n> \r\n> @NYcleaner thanks for this idea! How would you be loading (referencing) the model then inside the spark job? , as a hdfs://your_own... string ?\r\n> \r\n> In case you have not yet figured out. 
When you provide --conf spark.yarn.dist.archives=hdfs://your_own_hdfs_path/your_pretraining_model.zip#your_pretraining_model the archive is downloaded extracted and symlinked as your_training_model in the current directory. When passing the path you should be able to do just ./your_pretraining_model.\r\n> \r\n> —\r\n> Reply to this email directly, view it on GitHub, or unsubscribe.\r\n> You are receiving this because you commented.\r\n" ]
1,666
1,697
1,666
NONE
null
### Feature request Make `from_pretrained` works with fsspec paths. ``` tokenizer = BartTokenizerFast.from_pretrained("hdfs://..../bert-base-uncased") ``` ### Motivation To make transformers suitable for usage in the cloud or in clusters where files should be stored in HDFS or S3. ### Your contribution I'll be willing to submit a PR but `from_pretrained` is a very complicated function so I may need assistance.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19834/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19834/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19833
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19833/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19833/comments
https://api.github.com/repos/huggingface/transformers/issues/19833/events
https://github.com/huggingface/transformers/issues/19833
1,420,229,654
I_kwDOCUB6oc5UpvwW
19,833
Position embedding in the DETR model
{ "login": "SamuelCahyawijaya", "id": 2826602, "node_id": "MDQ6VXNlcjI4MjY2MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2826602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SamuelCahyawijaya", "html_url": "https://github.com/SamuelCahyawijaya", "followers_url": "https://api.github.com/users/SamuelCahyawijaya/followers", "following_url": "https://api.github.com/users/SamuelCahyawijaya/following{/other_user}", "gists_url": "https://api.github.com/users/SamuelCahyawijaya/gists{/gist_id}", "starred_url": "https://api.github.com/users/SamuelCahyawijaya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SamuelCahyawijaya/subscriptions", "organizations_url": "https://api.github.com/users/SamuelCahyawijaya/orgs", "repos_url": "https://api.github.com/users/SamuelCahyawijaya/repos", "events_url": "https://api.github.com/users/SamuelCahyawijaya/events{/privacy}", "received_events_url": "https://api.github.com/users/SamuelCahyawijaya/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" } ]
closed
false
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false } ]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hey @NielsRogge , could you explain how to solve this issue? You just put the Goo first issue label on it but it's not clear what a contributor would have to do to fix it.", "Hi @NielsRogge, I would like to take on this.\r\nAs Sylvain suggested, could you offer some context on how to go with this?\r\nThanks :)", "Yeah I marked this as good first issue as someone could take a deeper dive into DETR's position embeddings.\r\n\r\nReading the [paper](https://arxiv.org/abs/2005.12872) for that could definitely be helpful. But the implementation is correct, it's probably internal variables/docstrings that need to be updated. From the paper:\r\n\r\n> Since the decoder is also permutation-invariant, the N input embeddings must be different to produce different results. These input embeddings are learnt positional encodings that we refer to as object queries, and similarly to the encoder, we add them to the input of each attention layer.\r\n\r\nSo the `position_embeddings` argument of the cross-attention layer are exactly these input embeddings, often also called \"content embeddings\" or \"object queries\".\r\n\r\nThen a bit later on in the paper they state:\r\n\r\n> There are two kinds of positional encodings in our model: spatial positional encodings and output positional encodings (object queries).\r\n\r\nSo the `key_value_position_embeddings` arguments of the cross-attention layer refer to these spatial position encodings. These are added to the keys and values in the cross-attention operation. 
\r\n\r\nSo we could for clarity update the \"position_embeddings\" argument to \"object_queries\", and the \"key_value_position_embeddings\" argument to \"spatial_position_embeddings\"", "Hello @daspartho @NielsRogge , wanted to inquire as to whether any progress was made on this? I'd like to take a look.", "Hello @NielsRogge , I am currently working on this issue. I've read the article and I do understand what has to be changed. My question is if we only have to change the `DetrDecoderLayer` class (in the respective `forward` function mentioned above or al position_embeddings args have to change too. \r\n\r\nI did some local tests too, and noted that changing only in the function forward i mentioned to `object_queries` and `spatial_position_embeddings`, many tests broke because of wrong arguments passed since names changed. In order to change these arguments, we need to change them in tests? \r\n\r\nI looked up some tests, but I do think the problem is in the code itself, since classes related to that one would be passing arguments wrongly. \r\n\r\nThis is my first contribution to an open source project this size, and I'm really happy to do it. Thanks in advance.", "Hey @NielsRogge is this issue still open? If yes can I take this?", "Hey @hackpk I'm finishing touches in my PR to fix this Issue, so Idk...", "That's great.I'll look for another issue then. Thanks. ", "No problem, good luck :D", "@NielsRogge @amyeroberts I think this can be closed due to #24652 " ]
1,666
1,693
1,693
NONE
null
### System Info According to the argument definition of the `DetrDecoderLayer.forward()` specified here: https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/detr/modeling_detr.py#L723-L728 The `positional_embeddings` argument for the cross-attention should be assigned by the `position_embeddings` variable instead of `query_position_embeddings `. https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/detr/modeling_detr.py#L757-L764 Is this an error in the argument definition or the code part? Thank you! ### Who can help? @NielsRogge ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction It is from the transformers code. Arguments definition: https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/detr/modeling_detr.py#L723-L728 Cross-attention code: https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/detr/modeling_detr.py#L757-L764 ### Expected behavior Either: 1. The `positional_embeddings` argument for the cross-attention should be assigned by the `position_embeddings` variable instead of `query_position_embeddings `, or 2. Update the documentation of the argument to the correct one.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19833/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19833/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19832
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19832/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19832/comments
https://api.github.com/repos/huggingface/transformers/issues/19832/events
https://github.com/huggingface/transformers/issues/19832
1,420,150,794
I_kwDOCUB6oc5UpcgK
19,832
Can we add optional kwargs to various models in addition to their required fixed inputs?
{ "login": "LTEnjoy", "id": 52776915, "node_id": "MDQ6VXNlcjUyNzc2OTE1", "avatar_url": "https://avatars.githubusercontent.com/u/52776915?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LTEnjoy", "html_url": "https://github.com/LTEnjoy", "followers_url": "https://api.github.com/users/LTEnjoy/followers", "following_url": "https://api.github.com/users/LTEnjoy/following{/other_user}", "gists_url": "https://api.github.com/users/LTEnjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/LTEnjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LTEnjoy/subscriptions", "organizations_url": "https://api.github.com/users/LTEnjoy/orgs", "repos_url": "https://api.github.com/users/LTEnjoy/repos", "events_url": "https://api.github.com/users/LTEnjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/LTEnjoy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "We can't add blank kwargs like this, as a user who makes a typo in their inputs will then not get any error and not realize they did something wrong.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Yeah, now I see the hidden trouble if we add blank kwargs to a model. But what about adding some arguments that are specified by users?\r\n\r\nFor example, we add an argument \"custom_arg=None\" to __init__(```, custom_arg=None), and then the model will add the argument to its forward function like \"self.forward = partial(self.forward, custom_arg=None)\", and finally this argument is iteratively passed to all the submodel.\r\n\r\nMaybe in this way we could keep the input safety for unaware mistakes but increase a model's flexibility?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,666
1,672
1,672
NONE
null
### Feature request For example in class BertForMaskedLM, we have to input needed arguments as follows: def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): I wonder if it's possible to add optional **\*\*kwargs** so that we could customize model layers more easily. Like this: def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None, **kwargs, ): ### Motivation When I want to add an additional operation to a submodule in a big whole model, I need to use extra data as input. But to implement this, I have to add the extra inputs to every module so they can be finally transferred to the correct module. I don't know whether there are more efficient ways to solve the problem. If so, I'd be appreciated for you to point it out. ### Your contribution Maybe None
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19832/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19832/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19831
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19831/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19831/comments
https://api.github.com/repos/huggingface/transformers/issues/19831/events
https://github.com/huggingface/transformers/pull/19831
1,419,857,466
PR_kwDOCUB6oc5BXS7e
19,831
[WIP] Donut flax implementation
{ "login": "amankhandelia", "id": 7098967, "node_id": "MDQ6VXNlcjcwOTg5Njc=", "avatar_url": "https://avatars.githubusercontent.com/u/7098967?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amankhandelia", "html_url": "https://github.com/amankhandelia", "followers_url": "https://api.github.com/users/amankhandelia/followers", "following_url": "https://api.github.com/users/amankhandelia/following{/other_user}", "gists_url": "https://api.github.com/users/amankhandelia/gists{/gist_id}", "starred_url": "https://api.github.com/users/amankhandelia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amankhandelia/subscriptions", "organizations_url": "https://api.github.com/users/amankhandelia/orgs", "repos_url": "https://api.github.com/users/amankhandelia/repos", "events_url": "https://api.github.com/users/amankhandelia/events{/privacy}", "received_events_url": "https://api.github.com/users/amankhandelia/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19831). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,666
1,669
1,669
NONE
null
This PR adds Jax support for [Donut Model](https://huggingface.co/docs/transformers/model_doc/donut). This work is very much in progress, I need to add documentation and I am still not sure if I have added adequate number of test for the changes I have made so far, so it would be great, if someone can look up and may be provide some feedback in terms of quality of code and general direction Things which are done - Code is functional and passes integration test with existing Donut models in HF hub Things which are not done - Update documentation - Clear list of TODOs Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19831/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19831/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19831", "html_url": "https://github.com/huggingface/transformers/pull/19831", "diff_url": "https://github.com/huggingface/transformers/pull/19831.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19831.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19830
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19830/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19830/comments
https://api.github.com/repos/huggingface/transformers/issues/19830/events
https://github.com/huggingface/transformers/pull/19830
1,419,801,894
PR_kwDOCUB6oc5BXIH7
19,830
tictac game
{ "login": "Dila-wa", "id": 71760760, "node_id": "MDQ6VXNlcjcxNzYwNzYw", "avatar_url": "https://avatars.githubusercontent.com/u/71760760?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dila-wa", "html_url": "https://github.com/Dila-wa", "followers_url": "https://api.github.com/users/Dila-wa/followers", "following_url": "https://api.github.com/users/Dila-wa/following{/other_user}", "gists_url": "https://api.github.com/users/Dila-wa/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dila-wa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dila-wa/subscriptions", "organizations_url": "https://api.github.com/users/Dila-wa/orgs", "repos_url": "https://api.github.com/users/Dila-wa/repos", "events_url": "https://api.github.com/users/Dila-wa/events{/privacy}", "received_events_url": "https://api.github.com/users/Dila-wa/received_events", "type": "User", "site_admin": false }
[ { "id": 4720676470, "node_id": "LA_kwDOCUB6oc8AAAABGV_Odg", "url": "https://api.github.com/repos/huggingface/transformers/labels/spam", "name": "spam", "color": "fbca04", "default": false, "description": "Hacktoberfest spam" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I don't see the link with Transformers." ]
1,666
1,666
1,666
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19830/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19830/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19830", "html_url": "https://github.com/huggingface/transformers/pull/19830", "diff_url": "https://github.com/huggingface/transformers/pull/19830.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19830.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19829
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19829/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19829/comments
https://api.github.com/repos/huggingface/transformers/issues/19829/events
https://github.com/huggingface/transformers/pull/19829
1,419,795,632
PR_kwDOCUB6oc5BXG7S
19,829
Improve check copies
{ "login": "kventinel", "id": 14203222, "node_id": "MDQ6VXNlcjE0MjAzMjIy", "avatar_url": "https://avatars.githubusercontent.com/u/14203222?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kventinel", "html_url": "https://github.com/kventinel", "followers_url": "https://api.github.com/users/kventinel/followers", "following_url": "https://api.github.com/users/kventinel/following{/other_user}", "gists_url": "https://api.github.com/users/kventinel/gists{/gist_id}", "starred_url": "https://api.github.com/users/kventinel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kventinel/subscriptions", "organizations_url": "https://api.github.com/users/kventinel/orgs", "repos_url": "https://api.github.com/users/kventinel/repos", "events_url": "https://api.github.com/users/kventinel/events{/privacy}", "received_events_url": "https://api.github.com/users/kventinel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19829). All of your documentation changes will be reflected on that endpoint." ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> print first diff line intead of first code part line ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19829/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19829/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19829", "html_url": "https://github.com/huggingface/transformers/pull/19829", "diff_url": "https://github.com/huggingface/transformers/pull/19829.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19829.patch", "merged_at": 1666625058000 }
https://api.github.com/repos/huggingface/transformers/issues/19828
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19828/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19828/comments
https://api.github.com/repos/huggingface/transformers/issues/19828/events
https://github.com/huggingface/transformers/pull/19828
1,419,790,710
PR_kwDOCUB6oc5BXF9p
19,828
simplify dpt copying
{ "login": "kventinel", "id": 14203222, "node_id": "MDQ6VXNlcjE0MjAzMjIy", "avatar_url": "https://avatars.githubusercontent.com/u/14203222?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kventinel", "html_url": "https://github.com/kventinel", "followers_url": "https://api.github.com/users/kventinel/followers", "following_url": "https://api.github.com/users/kventinel/following{/other_user}", "gists_url": "https://api.github.com/users/kventinel/gists{/gist_id}", "starred_url": "https://api.github.com/users/kventinel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kventinel/subscriptions", "organizations_url": "https://api.github.com/users/kventinel/orgs", "repos_url": "https://api.github.com/users/kventinel/repos", "events_url": "https://api.github.com/users/kventinel/events{/privacy}", "received_events_url": "https://api.github.com/users/kventinel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Thanks for your PR, but we don't accept renaming layers like this as it's a breaking change.\r\n\r\nWhy? It's even not change `DPTPreTrainedModel`, so probabiity that some one use inner layers not so big." ]
1,666
1,667
1,667
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19828/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19828/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19828", "html_url": "https://github.com/huggingface/transformers/pull/19828", "diff_url": "https://github.com/huggingface/transformers/pull/19828.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19828.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19827
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19827/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19827/comments
https://api.github.com/repos/huggingface/transformers/issues/19827/events
https://github.com/huggingface/transformers/pull/19827
1,419,790,050
PR_kwDOCUB6oc5BXF1e
19,827
run newer black
{ "login": "kventinel", "id": 14203222, "node_id": "MDQ6VXNlcjE0MjAzMjIy", "avatar_url": "https://avatars.githubusercontent.com/u/14203222?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kventinel", "html_url": "https://github.com/kventinel", "followers_url": "https://api.github.com/users/kventinel/followers", "following_url": "https://api.github.com/users/kventinel/following{/other_user}", "gists_url": "https://api.github.com/users/kventinel/gists{/gist_id}", "starred_url": "https://api.github.com/users/kventinel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kventinel/subscriptions", "organizations_url": "https://api.github.com/users/kventinel/orgs", "repos_url": "https://api.github.com/users/kventinel/repos", "events_url": "https://api.github.com/users/kventinel/events{/privacy}", "received_events_url": "https://api.github.com/users/kventinel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19827). All of your documentation changes will be reflected on that endpoint.", "Thanks for your PR. We won't switch black versions until the end of the year (as was mentioned in previous PRs like this one) when we switch to the 2023 version, as it makes every standing PR conflict with main.", "> Thanks for your PR. We won't switch black versions until the end of the year (as was mentioned in previous PRs like this one) when we switch to the 2023 version, as it makes every standing PR conflict with main.\r\n\r\nI think it's not big problem here, diff not so big so probability that someone changes something in that lines not so big. Also it's produce too simple merge conflicts.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,666
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Run newer black. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19827/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19827/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19827", "html_url": "https://github.com/huggingface/transformers/pull/19827", "diff_url": "https://github.com/huggingface/transformers/pull/19827.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19827.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19826
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19826/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19826/comments
https://api.github.com/repos/huggingface/transformers/issues/19826/events
https://github.com/huggingface/transformers/pull/19826
1,419,761,391
PR_kwDOCUB6oc5BXAZ_
19,826
fix bart compatibility with numpy tensors
{ "login": "kventinel", "id": 14203222, "node_id": "MDQ6VXNlcjE0MjAzMjIy", "avatar_url": "https://avatars.githubusercontent.com/u/14203222?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kventinel", "html_url": "https://github.com/kventinel", "followers_url": "https://api.github.com/users/kventinel/followers", "following_url": "https://api.github.com/users/kventinel/following{/other_user}", "gists_url": "https://api.github.com/users/kventinel/gists{/gist_id}", "starred_url": "https://api.github.com/users/kventinel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kventinel/subscriptions", "organizations_url": "https://api.github.com/users/kventinel/orgs", "repos_url": "https://api.github.com/users/kventinel/repos", "events_url": "https://api.github.com/users/kventinel/events{/privacy}", "received_events_url": "https://api.github.com/users/kventinel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19826). All of your documentation changes will be reflected on that endpoint.", "Thanks for your PR, but PyTorch models do not accept NumPy arrays as inputs, and your changes won't make any difference for that I believe.", "> but PyTorch models do not accept NumPy arrays as inputs, and your changes won't make any difference for that I believe.\r\n\r\nYep, it's true. But in most of this cases would be produced more clear error than `'int' object is not callable` caused by `size` property of numpy array", "@sgugger, ping", "I think I have been pretty clear on why we don't want this change. PyTorch models do not support NumPy arrays and we are not interested in replacing all the `size()` by `shape`.", "> I think I have been pretty clear on why we don't want this change. PyTorch models do not support NumPy arrays and we are not interested in replacing all the `size()` by `shape`.\n\nYep, I understand it. This PR only about more clear errors on numpy tensors. Nothing else. So I think it's make little easier to debug.", "@sgugger, ping", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,666
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> Changes only on `modeling_bart.py` file. Other changes just by copying mechanics. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19826/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19826/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19826", "html_url": "https://github.com/huggingface/transformers/pull/19826", "diff_url": "https://github.com/huggingface/transformers/pull/19826.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19826.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19825
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19825/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19825/comments
https://api.github.com/repos/huggingface/transformers/issues/19825/events
https://github.com/huggingface/transformers/issues/19825
1,419,720,323
I_kwDOCUB6oc5UnzaD
19,825
ImportError: cannot import name 'PegasusTokenizer' from 'transformers'
{ "login": "Sonali234", "id": 62796305, "node_id": "MDQ6VXNlcjYyNzk2MzA1", "avatar_url": "https://avatars.githubusercontent.com/u/62796305?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sonali234", "html_url": "https://github.com/Sonali234", "followers_url": "https://api.github.com/users/Sonali234/followers", "following_url": "https://api.github.com/users/Sonali234/following{/other_user}", "gists_url": "https://api.github.com/users/Sonali234/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sonali234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sonali234/subscriptions", "organizations_url": "https://api.github.com/users/Sonali234/orgs", "repos_url": "https://api.github.com/users/Sonali234/repos", "events_url": "https://api.github.com/users/Sonali234/events{/privacy}", "received_events_url": "https://api.github.com/users/Sonali234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Can you check transformers version and update it, if too old?\r\n\r\nIt's work fine for me:\r\n```\r\nIn [1]: from transformers import PegasusTokenizer\r\n\r\nIn [2]: import transformers\r\n\r\nIn [3]: transformers.__version__\r\nOut[3]: '4.23.1'\r\n```", "Gently pinging @ArthurZucker here", "Hey, as mentioned you are probably using an old version of transformers. I am not able to reproduce this bug. You should use `transformers>=3.1` as `Pegasus` was first introduced in the [release 3.1](https://github.com/huggingface/transformers/releases?q=PegasusForConditionalGeneration&expanded=true)", "Did you find any solution for this problem?\r\n", "Yes, as mentioned in my previous answer, updating `transformers` 🤗 ", "> Yes, as mentioned in my previous answer, updating `transformers` hugs\r\n\r\nThis worked for me. Thanks!", "Thank you. It seems to work for me too!" ]
1,666
1,668
1,667
NONE
null
@patrickvonplaten Tried: from transformers import PegasusTokenizer import torch Output: --------------------------------------------------------------------------- ImportError Traceback (most recent call last) /tmp/ipykernel_7943/3816365261.py in <cell line: 1>() ----> 1 from transformers import PegasusTokenizer 2 import torch ImportError: cannot import name 'PegasusTokenizer' from 'transformers'
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19825/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19825/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19824
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19824/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19824/comments
https://api.github.com/repos/huggingface/transformers/issues/19824/events
https://github.com/huggingface/transformers/pull/19824
1,419,650,062
PR_kwDOCUB6oc5BWqUl
19,824
Added translation of converting_tensorflow_models.mdx to Portuguese Issue #16824
{ "login": "davialvb", "id": 34287081, "node_id": "MDQ6VXNlcjM0Mjg3MDgx", "avatar_url": "https://avatars.githubusercontent.com/u/34287081?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davialvb", "html_url": "https://github.com/davialvb", "followers_url": "https://api.github.com/users/davialvb/followers", "following_url": "https://api.github.com/users/davialvb/following{/other_user}", "gists_url": "https://api.github.com/users/davialvb/gists{/gist_id}", "starred_url": "https://api.github.com/users/davialvb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davialvb/subscriptions", "organizations_url": "https://api.github.com/users/davialvb/orgs", "repos_url": "https://api.github.com/users/davialvb/repos", "events_url": "https://api.github.com/users/davialvb/events{/privacy}", "received_events_url": "https://api.github.com/users/davialvb/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? Fixes #16824 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19824/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19824/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19824", "html_url": "https://github.com/huggingface/transformers/pull/19824", "diff_url": "https://github.com/huggingface/transformers/pull/19824.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19824.patch", "merged_at": 1666619416000 }
https://api.github.com/repos/huggingface/transformers/issues/19822
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19822/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19822/comments
https://api.github.com/repos/huggingface/transformers/issues/19822/events
https://github.com/huggingface/transformers/issues/19822
1,419,614,808
I_kwDOCUB6oc5UnZpY
19,822
data_collator.py:131: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow
{ "login": "FlorinAndrei", "id": 901867, "node_id": "MDQ6VXNlcjkwMTg2Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/901867?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FlorinAndrei", "html_url": "https://github.com/FlorinAndrei", "followers_url": "https://api.github.com/users/FlorinAndrei/followers", "following_url": "https://api.github.com/users/FlorinAndrei/following{/other_user}", "gists_url": "https://api.github.com/users/FlorinAndrei/gists{/gist_id}", "starred_url": "https://api.github.com/users/FlorinAndrei/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FlorinAndrei/subscriptions", "organizations_url": "https://api.github.com/users/FlorinAndrei/orgs", "repos_url": "https://api.github.com/users/FlorinAndrei/repos", "events_url": "https://api.github.com/users/FlorinAndrei/events{/privacy}", "received_events_url": "https://api.github.com/users/FlorinAndrei/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the report, the PR linked above should fix it." ]
1,666
1,666
1,666
NONE
null
### System Info - `transformers` version: 4.23.1 - Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction https://colab.research.google.com/drive/1gK9iXBiIEmmt2OMq6wmxwcpslkchHJh-?usp=sharing See the output of `trainer.train()` ### Expected behavior The warning should not occur.
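For readers hitting the same `UserWarning` before the library-side fix lands, the usual workaround is to collapse the list of equal-shaped arrays into a single contiguous `numpy` array before constructing the tensor. A minimal sketch of that idea (the variable names are illustrative, not taken from `data_collator.py`):

```python
import numpy as np

# A list of equal-shaped ndarrays, as a collator might accumulate them.
features = [np.arange(3, dtype=np.float32) for _ in range(4)]

# Passing the Python list straight to torch.tensor(...) is what triggers
# "Creating a tensor from a list of numpy.ndarrays is extremely slow".
# Stacking into one contiguous ndarray first avoids the warning:
batch = np.stack(features)
print(batch.shape)  # (4, 3)
```

The resulting `batch` can then be handed to `torch.tensor(batch)` (or `torch.from_numpy(batch)`) without tripping the warning.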
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19822/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19822/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19821
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19821/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19821/comments
https://api.github.com/repos/huggingface/transformers/issues/19821/events
https://github.com/huggingface/transformers/pull/19821
1,419,597,699
PR_kwDOCUB6oc5BWgCU
19,821
Spanish translation of multiple_choice.mdx, question_answering.mdx.
{ "login": "alceballosa", "id": 23227057, "node_id": "MDQ6VXNlcjIzMjI3MDU3", "avatar_url": "https://avatars.githubusercontent.com/u/23227057?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alceballosa", "html_url": "https://github.com/alceballosa", "followers_url": "https://api.github.com/users/alceballosa/followers", "following_url": "https://api.github.com/users/alceballosa/following{/other_user}", "gists_url": "https://api.github.com/users/alceballosa/gists{/gist_id}", "starred_url": "https://api.github.com/users/alceballosa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alceballosa/subscriptions", "organizations_url": "https://api.github.com/users/alceballosa/orgs", "repos_url": "https://api.github.com/users/alceballosa/repos", "events_url": "https://api.github.com/users/alceballosa/events{/privacy}", "received_events_url": "https://api.github.com/users/alceballosa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks @osanseviero, I just updated all files to reflect your suggestions :) ", "Thanks a lot! :fire: ", "@sgugger thanks for the merge! However, shouldn't the original issue (#15947) addressing the translations remain open? Think it was closed automatically bc I referenced it here.", "Yes, that's because you used the word \"Fixes\". You should have said something like \"Related to ...\" :-)", "Oops, sorry! Lesson learned :')" ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? Translates `multiple_choice.mdx` and `question_answering.mdx` into Spanish. Also updates the `_toctree.yml` file accordingly. Fixes #15947 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? @osanseviero @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19821/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19821/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19821", "html_url": "https://github.com/huggingface/transformers/pull/19821", "diff_url": "https://github.com/huggingface/transformers/pull/19821.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19821.patch", "merged_at": 1666656695000 }
https://api.github.com/repos/huggingface/transformers/issues/19820
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19820/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19820/comments
https://api.github.com/repos/huggingface/transformers/issues/19820/events
https://github.com/huggingface/transformers/pull/19820
1,419,562,039
PR_kwDOCUB6oc5BWYr-
19,820
fix broken links in testing.mdx
{ "login": "xffxff", "id": 30254428, "node_id": "MDQ6VXNlcjMwMjU0NDI4", "avatar_url": "https://avatars.githubusercontent.com/u/30254428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xffxff", "html_url": "https://github.com/xffxff", "followers_url": "https://api.github.com/users/xffxff/followers", "following_url": "https://api.github.com/users/xffxff/following{/other_user}", "gists_url": "https://api.github.com/users/xffxff/gists{/gist_id}", "starred_url": "https://api.github.com/users/xffxff/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xffxff/subscriptions", "organizations_url": "https://api.github.com/users/xffxff/orgs", "repos_url": "https://api.github.com/users/xffxff/repos", "events_url": "https://api.github.com/users/xffxff/events{/privacy}", "received_events_url": "https://api.github.com/users/xffxff/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? fix a broken link in https://huggingface.co/docs/transformers/main/en/testing I also found the link I marked out in the picture below is also broken, but I don't know how to fix it. You can go to "How transformers are tested" from [here](https://huggingface.co/docs/transformers/main/en/testing#how-transformers-are-tested) ![image](https://user-images.githubusercontent.com/30254428/197367017-d0aa691d-54cf-48f9-a4f4-89ec693396d6.png) ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19820/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19820", "html_url": "https://github.com/huggingface/transformers/pull/19820", "diff_url": "https://github.com/huggingface/transformers/pull/19820.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19820.patch", "merged_at": 1666619283000 }
https://api.github.com/repos/huggingface/transformers/issues/19819
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19819/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19819/comments
https://api.github.com/repos/huggingface/transformers/issues/19819/events
https://github.com/huggingface/transformers/pull/19819
1,419,526,493
PR_kwDOCUB6oc5BWRZj
19,819
fix vision enc-dec models conversion to onnx
{ "login": "kventinel", "id": 14203222, "node_id": "MDQ6VXNlcjE0MjAzMjIy", "avatar_url": "https://avatars.githubusercontent.com/u/14203222?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kventinel", "html_url": "https://github.com/kventinel", "followers_url": "https://api.github.com/users/kventinel/followers", "following_url": "https://api.github.com/users/kventinel/following{/other_user}", "gists_url": "https://api.github.com/users/kventinel/gists{/gist_id}", "starred_url": "https://api.github.com/users/kventinel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kventinel/subscriptions", "organizations_url": "https://api.github.com/users/kventinel/orgs", "repos_url": "https://api.github.com/users/kventinel/repos", "events_url": "https://api.github.com/users/kventinel/events{/privacy}", "received_events_url": "https://api.github.com/users/kventinel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19819). All of your documentation changes will be reflected on that endpoint.", "cc @lewtun ", "Hello @kventinel , thanks for the PR. I am currently looking for the same thing for Whisper model and was going to update the VisionEncoderDecoderModel soon. The better way would be to run the full model by passing in the `encoder_outputs`. It gives you decoder + other parts but skips the encoder. Hence, it would be more generic than creating separate functions.\r\n\r\nYou can follow up on the PR's [19525](https://github.com/huggingface/transformers/pull/19525) and [420](https://github.com/huggingface/optimum/pull/420/files#diff-c27ea812737bc6ccfe34f92a4ff0d1ec473a41b8c8012bfdb08bb22a46104ddeR321) for the changes for adding the encoder_outputs. You could add the encoder_outputs export for the model in a similar manner after the PR 19525 is merged.", "Hi @kventinel , since the PR [19525](https://github.com/huggingface/transformers/pull/19525) is merged would you like to update the model config to use `encoder_outputs`? Let me know if you have any questions.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,666
1,671
1,671
CONTRIBUTOR
null
# What does this PR do? Fixes #19811 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19819/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19819/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19819", "html_url": "https://github.com/huggingface/transformers/pull/19819", "diff_url": "https://github.com/huggingface/transformers/pull/19819.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19819.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19818
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19818/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19818/comments
https://api.github.com/repos/huggingface/transformers/issues/19818/events
https://github.com/huggingface/transformers/pull/19818
1,419,514,867
PR_kwDOCUB6oc5BWPOK
19,818
Type hints
{ "login": "IMvision12", "id": 88665786, "node_id": "MDQ6VXNlcjg4NjY1Nzg2", "avatar_url": "https://avatars.githubusercontent.com/u/88665786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IMvision12", "html_url": "https://github.com/IMvision12", "followers_url": "https://api.github.com/users/IMvision12/followers", "following_url": "https://api.github.com/users/IMvision12/following{/other_user}", "gists_url": "https://api.github.com/users/IMvision12/gists{/gist_id}", "starred_url": "https://api.github.com/users/IMvision12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IMvision12/subscriptions", "organizations_url": "https://api.github.com/users/IMvision12/orgs", "repos_url": "https://api.github.com/users/IMvision12/repos", "events_url": "https://api.github.com/users/IMvision12/events{/privacy}", "received_events_url": "https://api.github.com/users/IMvision12/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Not sure why tests are failing", "Hi @IMvision12, I think tests are failing because in some cases you overwrote the default argument values! It's easy to see if you look in the [files changed interface](https://github.com/huggingface/transformers/pull/19818/files). If you set those argument values back to their original defaults then tests should pass.\r\n\r\n![image](https://user-images.githubusercontent.com/12866554/197528126-065ac6b4-e277-4c0b-8648-177447dca744.png)\r\n\r\n", "Okay I will change that. ", "@Rocketknight1 Tests are still failing", "_The documentation is not available anymore as the PR was closed or merged._", "@Rocketknight1 Done!!" ]
1,666
1,668
1,667
CONTRIBUTOR
null
# What does this PR do? Type-hints for `realm`, `Speech2Text2`, `SpeechToText` and `speech-encoder-decoder` @Rocketknight1
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19818/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19818/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19818", "html_url": "https://github.com/huggingface/transformers/pull/19818", "diff_url": "https://github.com/huggingface/transformers/pull/19818.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19818.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19817
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19817/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19817/comments
https://api.github.com/repos/huggingface/transformers/issues/19817/events
https://github.com/huggingface/transformers/pull/19817
1,419,501,116
PR_kwDOCUB6oc5BWMWg
19,817
[Doctest] Add configuration_maskformer.py
{ "login": "sha016", "id": 92833633, "node_id": "U_kgDOBYiHYQ", "avatar_url": "https://avatars.githubusercontent.com/u/92833633?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sha016", "html_url": "https://github.com/sha016", "followers_url": "https://api.github.com/users/sha016/followers", "following_url": "https://api.github.com/users/sha016/following{/other_user}", "gists_url": "https://api.github.com/users/sha016/gists{/gist_id}", "starred_url": "https://api.github.com/users/sha016/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sha016/subscriptions", "organizations_url": "https://api.github.com/users/sha016/orgs", "repos_url": "https://api.github.com/users/sha016/repos", "events_url": "https://api.github.com/users/sha016/events{/privacy}", "received_events_url": "https://api.github.com/users/sha016/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? Adds `configuration_maskformer.py` to `utils/documentation_tests.txt` Based on #19487 @ydshieh can you please review? thanks :) Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19817/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19817/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19817", "html_url": "https://github.com/huggingface/transformers/pull/19817", "diff_url": "https://github.com/huggingface/transformers/pull/19817.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19817.patch", "merged_at": 1666602512000 }
https://api.github.com/repos/huggingface/transformers/issues/19816
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19816/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19816/comments
https://api.github.com/repos/huggingface/transformers/issues/19816/events
https://github.com/huggingface/transformers/pull/19816
1,419,447,254
PR_kwDOCUB6oc5BWBJB
19,816
Corrected spelling errors in README_ko.md
{ "login": "yusha-g", "id": 110189579, "node_id": "U_kgDOBpFcCw", "avatar_url": "https://avatars.githubusercontent.com/u/110189579?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yusha-g", "html_url": "https://github.com/yusha-g", "followers_url": "https://api.github.com/users/yusha-g/followers", "following_url": "https://api.github.com/users/yusha-g/following{/other_user}", "gists_url": "https://api.github.com/users/yusha-g/gists{/gist_id}", "starred_url": "https://api.github.com/users/yusha-g/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yusha-g/subscriptions", "organizations_url": "https://api.github.com/users/yusha-g/orgs", "repos_url": "https://api.github.com/users/yusha-g/repos", "events_url": "https://api.github.com/users/yusha-g/events{/privacy}", "received_events_url": "https://api.github.com/users/yusha-g/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19816). All of your documentation changes will be reflected on that endpoint.", "@eunseojo could you give a quick look?" ]
1,666
1,667
1,667
NONE
null
1. 독립적이여서 to 독립적 이어서 2. 마스킹된 to 마스킹 된 3. 파이썬 to 파이선 # What does this PR do? Fixed typos and spelling mistakes <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19816/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19816/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19816", "html_url": "https://github.com/huggingface/transformers/pull/19816", "diff_url": "https://github.com/huggingface/transformers/pull/19816.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19816.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19815
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19815/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19815/comments
https://api.github.com/repos/huggingface/transformers/issues/19815/events
https://github.com/huggingface/transformers/issues/19815
1,419,406,818
I_kwDOCUB6oc5Umm3i
19,815
Faster TrOCR (in particular) and faster batch text generation (in general)
{ "login": "IlyasMoutawwakil", "id": 57442720, "node_id": "MDQ6VXNlcjU3NDQyNzIw", "avatar_url": "https://avatars.githubusercontent.com/u/57442720?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IlyasMoutawwakil", "html_url": "https://github.com/IlyasMoutawwakil", "followers_url": "https://api.github.com/users/IlyasMoutawwakil/followers", "following_url": "https://api.github.com/users/IlyasMoutawwakil/following{/other_user}", "gists_url": "https://api.github.com/users/IlyasMoutawwakil/gists{/gist_id}", "starred_url": "https://api.github.com/users/IlyasMoutawwakil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IlyasMoutawwakil/subscriptions", "organizations_url": "https://api.github.com/users/IlyasMoutawwakil/orgs", "repos_url": "https://api.github.com/users/IlyasMoutawwakil/repos", "events_url": "https://api.github.com/users/IlyasMoutawwakil/events{/privacy}", "received_events_url": "https://api.github.com/users/IlyasMoutawwakil/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "mht-sharma", "id": 21088122, "node_id": "MDQ6VXNlcjIxMDg4MTIy", "avatar_url": "https://avatars.githubusercontent.com/u/21088122?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mht-sharma", "html_url": "https://github.com/mht-sharma", "followers_url": "https://api.github.com/users/mht-sharma/followers", "following_url": "https://api.github.com/users/mht-sharma/following{/other_user}", "gists_url": "https://api.github.com/users/mht-sharma/gists{/gist_id}", "starred_url": "https://api.github.com/users/mht-sharma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mht-sharma/subscriptions", "organizations_url": "https://api.github.com/users/mht-sharma/orgs", "repos_url": "https://api.github.com/users/mht-sharma/repos", "events_url": "https://api.github.com/users/mht-sharma/events{/privacy}", "received_events_url": "https://api.github.com/users/mht-sharma/received_events", "type": "User", "site_admin": false }
[ { "login": "mht-sharma", "id": 21088122, "node_id": "MDQ6VXNlcjIxMDg4MTIy", "avatar_url": "https://avatars.githubusercontent.com/u/21088122?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mht-sharma", "html_url": "https://github.com/mht-sharma", "followers_url": "https://api.github.com/users/mht-sharma/followers", "following_url": "https://api.github.com/users/mht-sharma/following{/other_user}", "gists_url": "https://api.github.com/users/mht-sharma/gists{/gist_id}", "starred_url": "https://api.github.com/users/mht-sharma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mht-sharma/subscriptions", "organizations_url": "https://api.github.com/users/mht-sharma/orgs", "repos_url": "https://api.github.com/users/mht-sharma/repos", "events_url": "https://api.github.com/users/mht-sharma/events{/privacy}", "received_events_url": "https://api.github.com/users/mht-sharma/received_events", "type": "User", "site_admin": false }, { "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false } ]
[ "cc @NielsRogge ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "hey @IlyasMoutawwakil is there any way we can get the confidence scores of the inferences? " ]
1,666
1,695
1,669
MEMBER
null
### Feature request Text generation in auto-regressive decoders can be optimized by not computing next token logits for sentences that already reached `eos` or `pad`. This is a [notebook](https://github.com/IlyasMoutawwakil/Faster-TrOCR/blob/main/Faster_TrOCR_with_ONNX%2BAutoRegressive_Hack.ipynb) where I ran my performance experiments. The last part contains a modified `forward` method of the `VisionEncoderDecoder` class. One important thing to note is that it should only be used during inference (eval), not during training. ### Motivation I have been using TrOCR for a while and was trying to make it faster. I tried ONNXizing it but unfortunately that only made it slower (ONNX communication bottleneck?). I dug deeper into the source code and noticed that when generating text, it computes the next token logits for the whole batch (I had batches of text lines of different lengths because I was using it on text lines extracted from documents), which is not necessary and only makes sense for a batch of fixed text length. ### Your contribution I made a class where I overrode the `forward` function of the `VisionEncoderDecoder` class and it worked (half the compute time). This is the performance of the native implementation on a 17 text line document: ```python %timeit preprocessor.tokenizer.batch_decode( \ cuda_hf_model.generate( \ pixel_values.to('cuda'), \ max_length=96 \ ), \ skip_special_tokens=True \ ) # 1.68 s ± 2.98 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) ``` This is the performance of the modified implementation: ```python %timeit preprocessor.tokenizer.batch_decode( \ modified_cuda_hf_trocr.generate( \ pixel_values.to('cuda'), \ max_length=96, \ ), \ skip_special_tokens=True) # 894 ms ± 1.21 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) ``` I tried it on both CUDA and CPU and the gain is even greater on CPU.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19815/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19814
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19814/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19814/comments
https://api.github.com/repos/huggingface/transformers/issues/19814/events
https://github.com/huggingface/transformers/issues/19814
1,419,302,093
I_kwDOCUB6oc5UmNTN
19,814
t5 model's decoder do not use EncDecAttention.key & value in text generation task
{ "login": "CaffreyR", "id": 84232793, "node_id": "MDQ6VXNlcjg0MjMyNzkz", "avatar_url": "https://avatars.githubusercontent.com/u/84232793?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CaffreyR", "html_url": "https://github.com/CaffreyR", "followers_url": "https://api.github.com/users/CaffreyR/followers", "following_url": "https://api.github.com/users/CaffreyR/following{/other_user}", "gists_url": "https://api.github.com/users/CaffreyR/gists{/gist_id}", "starred_url": "https://api.github.com/users/CaffreyR/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CaffreyR/subscriptions", "organizations_url": "https://api.github.com/users/CaffreyR/orgs", "repos_url": "https://api.github.com/users/CaffreyR/repos", "events_url": "https://api.github.com/users/CaffreyR/events{/privacy}", "received_events_url": "https://api.github.com/users/CaffreyR/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @CaffreyR 👋 Our models, when used for generation purposes, have a cache that stores repeated computations (key and value of the attentions) turned on by default. If you'd wish to see the full (redundant) computations being executed, try using the model with `use_cache=False`.", "Great! Thanks for your help!" ]
1,666
1,666
1,666
NONE
null
### System Info - `transformers` version: 4.20.1 - Platform: macOS-12.4-arm64-arm-64bit - Python version: 3.9.10 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.13.0.dev20220709 (False) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @patrickvonplaten, @Narsil, @gante ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I use `transformers.T5ForConditionalGeneration` for inference in a text generation task (NaturalQuestions). We all know that the longer the answer, the more times the `decoder` goes `forward`. But when I tried to log the forward pass in each layer, I found that the model runs the `encoder` once, an `A type decoder` once, and a `B type decoder` many times depending on the answer length. ### Expected behavior `A type decoder` uses EncDecAttention q, k, v, o ![image](https://user-images.githubusercontent.com/84232793/197341985-bf12427d-2323-475c-865a-e3a0a61d7bde.png) But `B type decoder` only uses EncDecAttention q, o ![image](https://user-images.githubusercontent.com/84232793/197341988-d9466cfa-c217-4cc2-ae73-296b50d4c77c.png) Many thanks!! :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19814/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19814/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19813
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19813/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19813/comments
https://api.github.com/repos/huggingface/transformers/issues/19813/events
https://github.com/huggingface/transformers/pull/19813
1,419,216,747
PR_kwDOCUB6oc5BVRoX
19,813
Create 'n'numbers_sumfinder.py
{ "login": "jeevan-spec", "id": 73019593, "node_id": "MDQ6VXNlcjczMDE5NTkz", "avatar_url": "https://avatars.githubusercontent.com/u/73019593?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jeevan-spec", "html_url": "https://github.com/jeevan-spec", "followers_url": "https://api.github.com/users/jeevan-spec/followers", "following_url": "https://api.github.com/users/jeevan-spec/following{/other_user}", "gists_url": "https://api.github.com/users/jeevan-spec/gists{/gist_id}", "starred_url": "https://api.github.com/users/jeevan-spec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jeevan-spec/subscriptions", "organizations_url": "https://api.github.com/users/jeevan-spec/orgs", "repos_url": "https://api.github.com/users/jeevan-spec/repos", "events_url": "https://api.github.com/users/jeevan-spec/events{/privacy}", "received_events_url": "https://api.github.com/users/jeevan-spec/received_events", "type": "User", "site_admin": false }
[ { "id": 4720676470, "node_id": "LA_kwDOCUB6oc8AAAABGV_Odg", "url": "https://api.github.com/repos/huggingface/transformers/labels/spam", "name": "spam", "color": "fbca04", "default": false, "description": "Hacktoberfest spam" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks, we do not need this module." ]
1,666
1,666
1,666
NONE
null
For hacktoberfest # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19813/reactions", "total_count": 1, "+1": 0, "-1": 1, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19813/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19813", "html_url": "https://github.com/huggingface/transformers/pull/19813", "diff_url": "https://github.com/huggingface/transformers/pull/19813.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19813.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19812
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19812/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19812/comments
https://api.github.com/repos/huggingface/transformers/issues/19812/events
https://github.com/huggingface/transformers/pull/19812
1,419,153,644
PR_kwDOCUB6oc5BVEqV
19,812
transformers.data.metrics: replace mention of 🤗 Datasets in DEPRECATION_WARNING with 🤗 Evaluate
{ "login": "angus-lherrou", "id": 55718851, "node_id": "MDQ6VXNlcjU1NzE4ODUx", "avatar_url": "https://avatars.githubusercontent.com/u/55718851?v=4", "gravatar_id": "", "url": "https://api.github.com/users/angus-lherrou", "html_url": "https://github.com/angus-lherrou", "followers_url": "https://api.github.com/users/angus-lherrou/followers", "following_url": "https://api.github.com/users/angus-lherrou/following{/other_user}", "gists_url": "https://api.github.com/users/angus-lherrou/gists{/gist_id}", "starred_url": "https://api.github.com/users/angus-lherrou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/angus-lherrou/subscriptions", "organizations_url": "https://api.github.com/users/angus-lherrou/orgs", "repos_url": "https://api.github.com/users/angus-lherrou/repos", "events_url": "https://api.github.com/users/angus-lherrou/events{/privacy}", "received_events_url": "https://api.github.com/users/angus-lherrou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Not sure what's causing that test failure." ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? Changes DEPRECATION_WARNING in `src/transformers/data/metrics/__init__.py` to point to 🤗 Evaluate for metrics functionality instead of 🤗 Datasets, whose metrics functionality has been deprecated and moved to 🤗 Evaluate since the metrics in this file were deprecated in 🤗 Transformers. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger @albertvillanova
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19812/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19812/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19812", "html_url": "https://github.com/huggingface/transformers/pull/19812", "diff_url": "https://github.com/huggingface/transformers/pull/19812.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19812.patch", "merged_at": 1666617957000 }
https://api.github.com/repos/huggingface/transformers/issues/19811
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19811/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19811/comments
https://api.github.com/repos/huggingface/transformers/issues/19811/events
https://github.com/huggingface/transformers/issues/19811
1,419,146,136
I_kwDOCUB6oc5UlnOY
19,811
ONNX conversion from VisionEncoderDecoderModel with different dimensions
{ "login": "entropy2333", "id": 40735723, "node_id": "MDQ6VXNlcjQwNzM1NzIz", "avatar_url": "https://avatars.githubusercontent.com/u/40735723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/entropy2333", "html_url": "https://github.com/entropy2333", "followers_url": "https://api.github.com/users/entropy2333/followers", "following_url": "https://api.github.com/users/entropy2333/following{/other_user}", "gists_url": "https://api.github.com/users/entropy2333/gists{/gist_id}", "starred_url": "https://api.github.com/users/entropy2333/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/entropy2333/subscriptions", "organizations_url": "https://api.github.com/users/entropy2333/orgs", "repos_url": "https://api.github.com/users/entropy2333/repos", "events_url": "https://api.github.com/users/entropy2333/events{/privacy}", "received_events_url": "https://api.github.com/users/entropy2333/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "mht-sharma", "id": 21088122, "node_id": "MDQ6VXNlcjIxMDg4MTIy", "avatar_url": "https://avatars.githubusercontent.com/u/21088122?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mht-sharma", "html_url": "https://github.com/mht-sharma", "followers_url": "https://api.github.com/users/mht-sharma/followers", "following_url": "https://api.github.com/users/mht-sharma/following{/other_user}", "gists_url": "https://api.github.com/users/mht-sharma/gists{/gist_id}", "starred_url": "https://api.github.com/users/mht-sharma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mht-sharma/subscriptions", "organizations_url": "https://api.github.com/users/mht-sharma/orgs", "repos_url": "https://api.github.com/users/mht-sharma/repos", "events_url": "https://api.github.com/users/mht-sharma/events{/privacy}", "received_events_url": "https://api.github.com/users/mht-sharma/received_events", "type": "User", "site_admin": false }
[ { "login": "mht-sharma", "id": 21088122, "node_id": "MDQ6VXNlcjIxMDg4MTIy", "avatar_url": "https://avatars.githubusercontent.com/u/21088122?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mht-sharma", "html_url": "https://github.com/mht-sharma", "followers_url": "https://api.github.com/users/mht-sharma/followers", "following_url": "https://api.github.com/users/mht-sharma/following{/other_user}", "gists_url": "https://api.github.com/users/mht-sharma/gists{/gist_id}", "starred_url": "https://api.github.com/users/mht-sharma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mht-sharma/subscriptions", "organizations_url": "https://api.github.com/users/mht-sharma/orgs", "repos_url": "https://api.github.com/users/mht-sharma/repos", "events_url": "https://api.github.com/users/mht-sharma/events{/privacy}", "received_events_url": "https://api.github.com/users/mht-sharma/received_events", "type": "User", "site_admin": false } ]
[ "I'm try to fix your problem in #19819", "Cc @mht-sharma ", "I've come across an issue with the ONNX conversion of TrOCR-base. I'm not sure if they are entirely related, but I've managed to convert the models with huggingface.onnx into actual onnx files. After conversion I obtain an `encoder.onnx` and a `decoder.onnx` file, which seems to be as it should be.\r\n\r\nThen, for processing an input image I do the following:\r\n1. Send image through the processor: \r\n`processor = TrOCRProcessor.from_pretrained(\"microsoft/trocr-base-str\")`\r\n`processor_output = processor(img)`\r\n2. Send output of processor through encoder\r\n`encoder_output = encoder(processor_output )`\r\n3. Send output of encoder through the decoder.\r\n`decoder_output = decoder(trace_tensor)`\r\n\r\nThe conversion happens like this:\r\n```\r\npython -m transformers.onnx --model=microsoft/trocr-base-str\r\n --feature=vision2seq-lm models/onnx\r\n --atol 1e-3\r\n```\r\n\r\nThis yields 2 errors. 1 after running the encoder (the error that occurs breaks the process). A second one occurs with the decoder. For this I have created a tensor that mimics the input shapes of the batch that the decoder expects.\r\n\r\nERROR ENCODER:\r\n```\r\nonnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_937' Status Message: D:\\a\\_work\\1\\s\\onnxruntime\\core\\providers\\cpu\\tensor\\reshape_helper.h:41 onnxruntime::ReshapeHelper::ReshapeHelper gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{2,100,100,64}, requested shape:{1,9216,128}\r\n```\r\n\r\nERROR DECODER:\r\n```\r\nFile \"C:\\Users\\Miniconda3\\lib\\site-packages\\onnxruntime\\capi\\onnxruntime_inference_collection.py\", line 196, in run\r\n raise ValueError(\"Model requires {} inputs. 
Input Feed contains {}\".format(num_required_inputs, num_inputs))\r\nValueError: Model requires 3 inputs. Input Feed contains 1\r\n```\r\n\r\n@NielsRogge @mht-sharma any clues on this?", "> I've come across an issue with the ONNX conversion of TrOCR-base. I'm not sure if they are entirely related, but I've managed to convert the models with huggingface.onnx into actual onnx files. After conversion I obtain an `encoder.onnx` and a `decoder.onnx` file, which seems to be as it should be.\r\n> \r\n> Then, for processing an input image I do the following:\r\n> \r\n> 1. Send image through the processor:\r\n> `processor = TrOCRProcessor.from_pretrained(\"microsoft/trocr-base-str\")`\r\n> `processor_output = processor(img)`\r\n> 2. Send output of processor through encoder\r\n> `encoder_output = encoder(processor_output )`\r\n> 3. Send output of encoder through the decoder.\r\n> `decoder_output = decoder(trace_tensor)`\r\n> \r\n> The conversion happens like this:\r\n> \r\n> ```\r\n> python -m transformers.onnx --model=microsoft/trocr-base-str\r\n> --feature=vision2seq-lm models/onnx\r\n> --atol 1e-3\r\n> ```\r\n> \r\n> This yields 2 errors. 1 after running the encoder (the error that occurs breaks the process). A second one occurs with the decoder. For this I have created a tensor that mimics the input shapes of the batch that the decoder expects.\r\n> \r\n> ERROR ENCODER:\r\n> \r\n> ```\r\n> onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_937' Status Message: D:\\a\\_work\\1\\s\\onnxruntime\\core\\providers\\cpu\\tensor\\reshape_helper.h:41 onnxruntime::ReshapeHelper::ReshapeHelper gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. 
Input shape:{2,100,100,64}, requested shape:{1,9216,128}\r\n> ```\r\n> \r\n> ERROR DECODER:\r\n> \r\n> ```\r\n> File \"C:\\Users\\Miniconda3\\lib\\site-packages\\onnxruntime\\capi\\onnxruntime_inference_collection.py\", line 196, in run\r\n> raise ValueError(\"Model requires {} inputs. Input Feed contains {}\".format(num_required_inputs, num_inputs))\r\n> ValueError: Model requires 3 inputs. Input Feed contains 1\r\n> ```\r\n> \r\n> @NielsRogge @mht-sharma any clues on this?\r\n\r\nHi @Fritskee, the decoder ONNX model expects 3 inputs during inference, namely: `input_ids`, `attention_mask` & `encoder_hidden_states` (output from the encoder model). Hence, the above error. You need to provide the appropriate start token id and attention mask as input to the decoder to start the sequence.\r\n\r\nEncoder error: Please ensure if you have followed appropriate steps to generate the input. For the above model, the following steps worked for me: \r\n\r\n```python\r\n# load image from the IIIT-5k dataset\r\nurl = 'https://i.postimg.cc/ZKwLg2Gw/367-14.png'\r\nimage = Image.open(requests.get(url, stream=True).raw).convert(\"RGB\")\r\npixel_values = processor(images=image, return_tensors=\"pt\").pixel_values\r\n```\r\n\r\nFor more info check: [trocr-base-str](https://huggingface.co/microsoft/trocr-base-str)\r\n\r\nLet me know if there are additional issues with the model.\r\n", "@mht-sharma Thanks for you assistance.\r\nI just tried the suggested code of yours for the encoder part. The issue remains exactly the same.\r\n\r\nTo make sure we're talking about the same thing, I also used the image that you pulled from the web. 
I'm now running the following code:\r\n```py\r\nimport requests\r\nimport numpy as np\r\nimport onnxruntime as onnxrt\r\nfrom PIL import Image\r\nfrom transformers import TrOCRProcessor\r\nimport config as c\r\n\r\nclass OnnxModel():\r\n def __init__(self, model_path):\r\n self.model = onnxrt.InferenceSession(model_path)\r\n\r\n def __call__(self, img):\r\n onnx_inputs = {self.model.get_inputs()[0].name: np.asarray(img, dtype='float32')}\r\n onnx_output = self.model.run(None, onnx_inputs)[0]\r\n return onnx_output\r\n\r\nif __name__ == \"__main__\":\r\n processor = TrOCRProcessor.from_pretrained(\"microsoft/trocr-base-str\")\r\n encoder = OnnxModel(c.encoder_path)\r\n\r\n url = 'https://i.postimg.cc/ZKwLg2Gw/367-14.png'\r\n image = Image.open(requests.get(url, stream=True).raw).convert(\"RGB\")\r\n\r\n pixel_values = processor(images=image, return_tensors=\"pt\").pixel_values\r\n encoder_output = encoder(pixel_values)\r\n```\r\n\r\nRunning this code, gives me exactly the same error. Namely:\r\n`onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_937' Status Message: D:\\a\\_work\\1\\s\\onnxruntime\\core\\providers\\cpu\\tensor\\reshape_helper.h:41 onnxruntime::ReshapeHelper::ReshapeHelper gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{2,100,100,64}, requested shape:{1,9216,128}`\r\n\r\nI also looked at the link of trocr-base-str that you provided. In there I see that they do `model.generate(pixel_values)`, but this is not possible with ONNX, since the 'InferenceSession' object that is used with onnx has no attribute 'generate'. ", "@Fritskee Thanks for trying the suggestion.\r\n\r\nI have tried your code and it works for me. I would mention the steps I use for exporting and running the inference.\r\n\r\n1. 
Clone the transformers and install it from source.\r\n2. Export the model\r\n```python\r\npython -m transformers.onnx --model=microsoft/trocr-base-str --feature=vision2seq-lm models_trocr_base --atol 1e-3\r\n```\r\n3. Ran inference using your code. I updated the following line in your code because of version changes.\r\n```python\r\nself.model = onnxrt.InferenceSession(model_path, providers=[\"CPUExecutionProvider\"])\r\n```\r\n\r\nAlso please find the onnx, torch and onnxruntime version I am using.\r\n```bash\r\nonnx==1.12.0\r\nonnxruntime==1.12.1\r\ntorch==1.12.1\r\n```\r\n\r\nFor running inference using the 2 models, as of now, you'll have to roll your own generation loop with onnxruntime. An alternative would be to implement an ORTModelForVisionSeq2Seq in optimum, similar to how Whisper is being implemented: https://github.com/huggingface/optimum/pull/420/files#diff-77c4bfa5fbc9262eda15bbbc01d9796a0daa33e6725ca41e1cfe600a702d0bfc\r\n", "@mht-sharma I just created a new conda env, matched your package versions and installed hf/transformers from source. Did the job! I now get a `(1, 577, 768)` tensor at the output of the encoder.\r\n\r\nThanks for your assistance! Will try to figure out the decoder part now!", "@mht-sharma I am facing one final issue with the decoder, which is also a shape issue. \r\n\r\nI'm doing the following to infere with ONNX. I checked the code of [optimum](https://github.com/huggingface/optimum/pull/420/files#diff-77c4bfa5fbc9262eda15bbbc01d9796a0daa33e6725ca41e1cfe600a702d0bfc) as per your suggestion. From this I implemented my callable functionality of the `OnnxDecoder`. I also checked the keys of the `input_names` of the onnx names and all this is correct.\r\n\r\nHowever, at the end I keep on getting an issue with size mismatches. The way I currently understand it, this cannot be solved since the (1, 577, 1024) shape contains the prime number 577. 
This makes it impossible to find an integer dimension that can match the shape (x, -1, 16, 64). Additionally, the `input_ids` shape is limited to 514 in either dimension. Thus unless I use dimension 1, I cannot create a working dimension. But then using dimension 1 makes the runtime way too long.\r\n\r\nAny clue on this one? \r\n\r\n```py\r\nimport requests\r\nimport numpy as np\r\nimport onnxruntime as onnxrt\r\nfrom PIL import Image\r\nfrom transformers import TrOCRProcessor\r\nimport config as c\r\n\r\nclass OnnxDecoder():\r\n def __init__(self, model_path):\r\n self.model = onnxrt.InferenceSession(model_path, providers=[\"CPUExecutionProvider\"])\r\n self.input_names = {input_key.name: idx for idx, input_key in enumerate(self.model.get_inputs())}\r\n\r\n def __call__(self, input_ids: torch.LongTensor,\r\n encoder_hidden_states: torch.FloatTensor,\r\n attention_mask: torch.LongTensor):\r\n onnx_inputs = {\"input_ids\": input_ids.cpu().detach().numpy()}\r\n\r\n if \"attention_mask\" in self.input_names:\r\n onnx_inputs[\"attention_mask\"] = attention_mask.cpu().detach().numpy()\r\n\r\n if \"encoder_hidden_states\" in self.input_names:\r\n onnx_inputs[\"encoder_hidden_states\"] = encoder_hidden_states.cpu().detach().numpy()\r\n\r\n onnx_output = self.model.run(None, onnx_inputs)\r\n return onnx_output\r\n\r\nif __name__ == \"__main__\":\r\n processor = TrOCRProcessor.from_pretrained(\"microsoft/trocr-base-str\")\r\n encoder = OnnxModel(c.encoder_path)\r\n\r\n url = 'https://i.postimg.cc/ZKwLg2Gw/367-14.png'\r\n image = Image.open(requests.get(url, stream=True).raw).convert(\"RGB\")\r\n\r\n pixel_values = processor(images=image, return_tensors=\"pt\").pixel_values\r\n encoder_output = encoder(pixel_values)\r\n decoder_output = decoder(input_ids=torch.LongTensor(np.random.rand(512, 512)),\r\n encoder_hidden_states=torch.FloatTensor(encoder_output),\r\n attention_mask=torch.LongTensor(np.random.rand(512, 
512)))\r\n```\r\n\r\nERROR:\r\n`onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_623' Status Message: D:\\a\\_work\\1\\s\\onnxruntime\\core\\providers\\cpu\\tensor\\reshape_helper.h:36 onnxruntime::ReshapeHelper::ReshapeHelper size != 0 && (input_shape.Size() % size) == 0 was false. The input tensor cannot be reshaped to the requested shape. Input shape:{1,577,1024}, requested shape:{512,-1,16,64}`\r\n\r\nI am aware that using tensors with random values will result in a wrong output, but at this moment I'm just trying it to get to run and check the inference speed of the onnx model. ", "Hello @Fritskee the issue is because of the sample input. The `input_ids` and `attention_mask` expects input of size `<batch_size, sequence_length>`. \r\n\r\nIn your above snippet you have created an input of batch size 512, however, the `encoder_hidden_states` are of batch size 1, hence the error. Try creating input with batch size 1 and it should work.\r\n\r\nAdditionally, please use `torch.ones` to generate the `attention_mask` input, with same shape as input_ids.", "> Hello @Fritskee the issue is because of the sample input. The `input_ids` and `attention_mask` expects input of size `<batch_size, sequence_length>`.\r\n> \r\n> In your above snippet you have created an input of batch size 512, however, the `encoder_hidden_states` are of batch size 1, hence the error. Try creating input with batch size 1 and it should work.\r\n> \r\n> Additionally, please use `torch.ones` to generate the `attention_mask` input, with same shape as input_ids.\r\n\r\n@mht-sharma This is indeed the error that I made. Thanks for pointing that out! \r\nI do notice that I am getting better inference times with the Huggingface pytorch model, than with the ONNX model. Which is something I've never encountered. Generally ONNX always outperforms PyTorch for inference. 
ONNX runs in 2.5 sec, PyTorch runs in 1.8 sec. Both on the same CPU.", "Hello @Fritskee I am able to observe speedup on both cpu and gpu with the model. Could you please share your inference / benchmarking code if possible for testing? ", "> Hello @Fritskee I am able to observe speedup on both cpu and gpu with the model. Could you please share your inference / benchmarking code if possible for testing?\r\n\r\nApologies for the late reply @mht-sharma, I use following code to run inference of the **ONNX model:**\r\n```py\r\nif __name__ == \"__main__\":\r\n image = Image.open(r\"C:\\Users\\local_img.png\").convert(\"RGB\")\r\n\r\n processor = TrOCRProcessor.from_pretrained(\"microsoft/trocr-base-str\")\r\n encoder = OnnxEncoder(c.encoder_path)\r\n decoder = OnnxDecoder(c.decoder_path)\r\n\r\n start = time.time()\r\n pixel_values = processor(images=image, return_tensors=\"pt\").pixel_values\r\n encoder_output = encoder(pixel_values)\r\n decoder_output = decoder(input_ids=torch.LongTensor(np.random.rand(1,384)),\r\n encoder_hidden_states=torch.FloatTensor(encoder_output),\r\n attention_mask=torch.LongTensor(np.ones((1,384))))\r\n end = time.time()\r\n```\r\n\r\nThis code is used to run inference with the **PyTorch/HuggingFace model:**\r\n```py\r\nif __name__ == \"__main__\":\r\n image = Image.open(r\"C:\\Users\\local_img.png\").convert(\"RGB\")\r\n\r\n processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-str')\r\n model = VisionEncoderDecoderModel.from_pretrained(\r\n r\"C:\\Users\\Downloads\\text-model\\text-model\\pytorch_model.bin\",\r\n config=r\"C:\\Users\\Downloads\\text-model\\text-model\\config.json\")\r\n\r\n start = time.time()\r\n pixel_values = processor(images=image, return_tensors=\"pt\").pixel_values\r\n generated_ids = model.generate(pixel_values)\r\n model_output = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]\r\n end = time.time()\r\n```\r\n\r\nI got an avg. 
inference time of **3.339415764808655 seconds** over 100 runs for the **ONNX model**.\r\nSimilarly, I got an avg. inference time of **2.542718324661255** over 100 runs for the **torch/HF model**.\r\nBoth ran on the exact same machine, on CPU, while no other processes were running in the background. ", " Hi @Fritskee , for ORT inference you'll have to roll your own generation loop with ONNX Runtime to run the inference. The above code runs decoder with SL 384 with one forward pass which will give you incorrect results.\r\n\r\nYou can wrap your ORTEncode and ORTDecoder in a ORTModelForVision2Seq\r\n\r\n```python\r\n\r\nclass ORTModelForVision2Seq(VisionEncoderDecoderModel):\r\n def __init__(self, *args, **kwargs):\r\n config = AutoConfig.from_pretrained(model_name)\r\n super().__init__(config)\r\n self._device = \"cpu\"\r\n\r\n self.encoder = ORTEncoder()\r\n self.decoder = ORTDecoder()\r\n\r\n def forward(\r\n self,\r\n pixel_values: Optional[torch.FloatTensor] = None,\r\n decoder_input_ids: Optional[torch.LongTensor] = None,\r\n encoder_outputs: Optional[Tuple[Tuple[torch.Tensor]]] = None,\r\n **kwargs,\r\n ) -> Seq2SeqLMOutput:\r\n\r\n # Encode if needed : first prediction pass\r\n if encoder_outputs is None:\r\n encoder_outputs = self.encoder(pixel_values=pixel_values)\r\n\r\n # Decode\r\n decoder_attention_mask = decoder_input_ids.new_ones(decoder_input_ids.shape)\r\n decoder_outputs = self.decoder(\r\n input_ids=decoder_input_ids,\r\n attention_mask=decoder_attention_mask,\r\n encoder_hidden_states=encoder_outputs.last_hidden_state,\r\n )\r\n\r\n return Seq2SeqLMOutput(\r\n logits=decoder_outputs.logits,\r\n )\r\n\r\n def prepare_inputs_for_generation(self, input_ids, attention_mask=None, encoder_outputs=None, **kwargs):\r\n\r\n return {\r\n \"decoder_input_ids\": input_ids,\r\n \"decoder_atttention_mask\": input_ids,\r\n \"encoder_outputs\": encoder_outputs,\r\n }\r\n\r\nmodel = ORTModelForVision2Seq()\r\n\r\nstart = 
time.time()\r\nmodel.config.decoder_start_token_id = 2\r\nmodel.config.pad_token_id = processor.tokenizer.pad_token_id\r\nmodel.config.eos_token_id = processor.tokenizer.sep_token_id\r\nmodel.config.vocab_size = model.config.decoder.vocab_size\r\n\r\ngenerated_ids = model.generate(pixel_values.to(device))\r\nend = time.time()\r\n```\r\n\r\nThe class would be soon implemented in the `optimum` soon for easier inference. Stay tuned!\r\n\r\n", "@mht-sharma Thanks for the example! I tried implementing it. For the further implementation I looked at the [optimum/pipelines.py](https://github.com/huggingface/optimum/blob/816268d7c3aba0de98f2d74db06344e76f071535/optimum/pipelines.py) and at [optimum/onnxruntime/modeling_seq2seq.py](https://github.com/huggingface/optimum/blob/816268d7c3aba0de98f2d74db06344e76f071535/optimum/onnxruntime/modeling_seq2seq.py). \r\n\r\nBasically I took the examples from `modeling_seq2seq.py` for the `ORTEncoder` and `ORTDecoder`, and I took your example from above and initialize the `ORTModelForVision2Seq(VisionEncoderDecoderModel)` like this:\r\n```py\r\nclass ORTModelForVision2Seq(VisionEncoderDecoderModel):\r\n def __init__(self, *args, **kwargs):\r\n config = AutoConfig.from_pretrained('microsoft/trocr-base-str')\r\n super().__init__(config)\r\n self._device = \"cpu\"\r\n self.encoder = ORTEncoder(onnxruntime.InferenceSession(c.encoder_path, providers=[\"CPUExecutionProvider\"]), device='cpu')\r\n self.decoder = ORTDecoder(onnxruntime.InferenceSession(c.decoder_path, providers=[\"CPUExecutionProvider\"]), device='cpu')\r\n```\r\nThe encoder_path is the path to the file of `encoder.onnx` and the path to the decoder file is the path to `decoder.onnx`.\r\n\r\nFor your example, the ORTEncoder is initialized like this:\r\n```py\r\nclass ORTEncoder:\r\n \"\"\"\r\n Encoder model for ONNX Runtime inference.\r\n Arguments:\r\n session (`onnxruntime.InferenceSession`):\r\n The ONNX Runtime inference session associated to the encoder.\r\n \"\"\"\r\n\r\n 
def __init__(\r\n self, session: onnxruntime.InferenceSession, device: torch.device, main_input_name: str = \"input_ids\"\r\n ):\r\n self.session = session\r\n self._device = device\r\n self.main_input_name = main_input_name\r\n self.input_names = {input_key.name: idx for idx, input_key in enumerate(self.session.get_inputs())}\r\n self.output_names = {output_key.name: idx for idx, output_key in enumerate(self.session.get_outputs())}\r\n```\r\n\r\nWhen I initialize the Onnx InferenceSessions as shown in the first code block of this message, I get the following error: \r\n`self.encoder = ORTEncoder(onnxruntime.InferenceSession(c.encoder_path, providers=[\"CPUExecutionProvider\"]), device='cpu')\r\n File \"C:\\Users\\FrCa\\Miniconda3\\envs\\onnxfix\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1242, in __setattr__\r\n raise TypeError(\"cannot assign '{}' as child module '{}' \"\r\nTypeError: cannot assign '__main__.ORTEncoder' as child module 'encoder' (torch.nn.Module or None expected)\r\npython-BaseException`\r\n\r\nThe `ORTEncoder` seems to expect a path to a Pytorch model for its session, which seems odd. I am currently passing the onnx converted encoder to `ORTEncoder`, but due to the error, I have also tried passing the equivalent `.pth` model Additionally, I also tried passing None (which doesn't make much sense, but it says it is a possibility). Both of them also give errors.\r\n\r\n**EDIT**:\r\n\r\nI did find that by not adding the superclass of `VisionEncoderDecoderModel`, the model can initialize both the ORTEncoder and ORTDecoder. 
However, this causes the code to break, because the model does need the `config` attribute to work with the example that is provided here.\r\n```py\r\nclass ORTModelForVision2Seq():\r\n def __init__(self, *args, **kwargs):\r\n self._device = \"cpu\"\r\n self.encoder = ORTEncoder(onnxruntime.InferenceSession(c.encoder_path, providers=[\"CPUExecutionProvider\"]), device='cpu')\r\n self.decoder = ORTDecoder(onnxruntime.InferenceSession(c.decoder_path, providers=[\"CPUExecutionProvider\"]), device='cpu')\r\n```\r\n`model.config.decoder_start_token_id = 2\r\nAttributeError: 'ORTModelForVision2Seq' object has no attribute 'config'`\r\n\r\n", "Hi @Fritskee , apologies for the late reply. You need to inherit `ORTEncoder` and `ORTDecoder` from `torch.nn.Module` to avoid the issue.", "Hi @umanniyaz, please open a new issue instead", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,666
1,675
1,673
NONE
null
### System Info - `transformers` version: 4.23.0.dev0 - Platform: Linux-4.4.0-87-generic-x86_64-with-glibc2.23 - Python version: 3.9.13 - Huggingface_hub version: 0.10.0 - PyTorch version (GPU?): 1.12.1 (True) ### Who can help? @NielsRogge, @patrickvonplaten ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction I am trying to convert a VisionEncoderDecoder model to ONNX using the feature that has been recently merged https://github.com/huggingface/transformers/pull/19254. However, when two pretrained models whose model dimensions are different, It reproduces errors as below. ## Model Load & Save ```python from transformers import VisionEncoderDecoderModel, BertTokenizer, AutoFeatureExtractor encoder_name_or_path = "hf-internal-testing/tiny-random-vit" decoder_name_or_path = "fnlp/bart-base-chinese" model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained( encoder_name_or_path, decoder_name_or_path, ) tokenizer = BertTokenizer.from_pretrained(decoder_name_or_path) feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_name_or_path) output_dir = "outputs" model.save_pretrained(output_dir) feature_extractor.save_pretrained(output_dir) tokenizer.save_pretrained(output_dir) ``` ## Model Structure ``` VisionEncoderDecoderModel( (encoder): SwinModel(...) (decoder): BartForCausalLM(...) 
(enc_to_dec_proj): Linear(in_features=32, out_features=768, bias=True) ) ``` There exists a new linear layer to project encoder hidden states in [modeling_vision_encoder_decoder.py#L217](https://github.com/huggingface/transformers/blob/main/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py#L217) ```python # encoder outputs might need to be projected to different dimension for decoder if ( self.encoder.config.hidden_size != self.decoder.config.hidden_size and self.decoder.config.cross_attention_hidden_size is None ): self.enc_to_dec_proj = nn.Linear(self.encoder.config.hidden_size, self.decoder.config.hidden_size) ``` ## Conversion to ONNX ```bash python -m transformers.onnx --model=outputs/ --feature=vision2seq-lm onnx/ --atol 1e-3 ``` Output: ```bash Traceback (most recent call last): File "/home/user/anaconda3/envs/swinocr/lib/python3.9/runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "/home/user/anaconda3/envs/swinocr/lib/python3.9/runpy.py", line 87, in _run_code exec(code, run_globals) File "/backup2/mkf/transformers/src/transformers/onnx/__main__.py", line 180, in <module> main() File "/backup2/mkf/transformers/src/transformers/onnx/__main__.py", line 118, in main onnx_inputs, onnx_outputs = export( File "/backup2/mkf/transformers/src/transformers/onnx/convert.py", line 339, in export return export_pytorch(preprocessor, model, config, opset, output, tokenizer=tokenizer, device=device) File "/backup2/mkf/transformers/src/transformers/onnx/convert.py", line 192, in export_pytorch onnx_export( File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/onnx/__init__.py", line 350, in export return utils.export( File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/onnx/utils.py", line 163, in export _export( File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/onnx/utils.py", line 1074, in _export graph, params_dict, torch_out = 
_model_to_graph( File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/onnx/utils.py", line 727, in _model_to_graph graph, params, torch_out, module = _create_jit_graph(model, args) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/onnx/utils.py", line 602, in _create_jit_graph graph, torch_out = _trace_and_get_graph_from_model(model, args) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/onnx/utils.py", line 517, in _trace_and_get_graph_from_model trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph( File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/jit/_trace.py", line 1175, in _get_trace_graph outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/jit/_trace.py", line 127, in forward graph, out = torch._C._create_graph_by_tracing( File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/jit/_trace.py", line 118, in wrapper outs.append(self.inner(*trace_inputs)) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1118, in _slow_forward result = self.forward(*input, **kwargs) File "/backup2/mkf/transformers/src/transformers/models/bart/modeling_bart.py", line 1851, in forward outputs = self.model.decoder( File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File 
"/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1118, in _slow_forward result = self.forward(*input, **kwargs) File "/backup2/mkf/transformers/src/transformers/models/bart/modeling_bart.py", line 1104, in forward layer_outputs = decoder_layer( File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1118, in _slow_forward result = self.forward(*input, **kwargs) File "/backup2/mkf/transformers/src/transformers/models/bart/modeling_bart.py", line 439, in forward hidden_states, cross_attn_weights, cross_attn_present_key_value = self.encoder_attn( File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1118, in _slow_forward result = self.forward(*input, **kwargs) File "/backup2/mkf/transformers/src/transformers/models/bart/modeling_bart.py", line 201, in forward key_states = self._shape(self.k_proj(key_value_states), -1, bsz) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1118, in _slow_forward result = self.forward(*input, **kwargs) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/linear.py", line 114, in forward return F.linear(input, self.weight, self.bias) RuntimeError: mat1 and mat2 shapes cannot be multiplied (16x32 and 768x768) ``` ### Expected behavior It seems that the existing ONNX conversion for EncoderDecoderModel only converts the encoder and decoder, and 
ignores this linear layer. If I change the model to [microsoft/trocr-base-handwritten](https://huggingface.co/microsoft/trocr-base-handwritten), which has a similar structure and the same dimensions (i.e. no linear layer), the conversion works. ```bash python -m transformers.onnx --model=microsoft/trocr-base-handwritten --feature=vision2seq-lm trocr_onnx/ --atol 1e-3 ``` Thanks a lot for looking into it :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19811/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19811/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19810
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19810/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19810/comments
https://api.github.com/repos/huggingface/transformers/issues/19810/events
https://github.com/huggingface/transformers/pull/19810
1,419,068,918
PR_kwDOCUB6oc5BUyv0
19,810
[Doctest] Add `configuration_nezha.py`
{ "login": "ayaka14732", "id": 68557794, "node_id": "MDQ6VXNlcjY4NTU3Nzk0", "avatar_url": "https://avatars.githubusercontent.com/u/68557794?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ayaka14732", "html_url": "https://github.com/ayaka14732", "followers_url": "https://api.github.com/users/ayaka14732/followers", "following_url": "https://api.github.com/users/ayaka14732/following{/other_user}", "gists_url": "https://api.github.com/users/ayaka14732/gists{/gist_id}", "starred_url": "https://api.github.com/users/ayaka14732/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayaka14732/subscriptions", "organizations_url": "https://api.github.com/users/ayaka14732/orgs", "repos_url": "https://api.github.com/users/ayaka14732/repos", "events_url": "https://api.github.com/users/ayaka14732/events{/privacy}", "received_events_url": "https://api.github.com/users/ayaka14732/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? Add `configuration_nezha.py` to `utils/documentation_tests.txt` for doctest. Additionally, I updated its doctest format to make it consistent with BERT. Based on issue https://github.com/huggingface/transformers/issues/19487 @sgugger could you please take a look at it? Thanks =)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19810/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19810/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19810", "html_url": "https://github.com/huggingface/transformers/pull/19810", "diff_url": "https://github.com/huggingface/transformers/pull/19810.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19810.patch", "merged_at": 1666612243000 }
https://api.github.com/repos/huggingface/transformers/issues/19809
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19809/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19809/comments
https://api.github.com/repos/huggingface/transformers/issues/19809/events
https://github.com/huggingface/transformers/pull/19809
1,419,064,114
PR_kwDOCUB6oc5BUxtT
19,809
[Doctest] Add `configuration_plbart.py`
{ "login": "ayaka14732", "id": 68557794, "node_id": "MDQ6VXNlcjY4NTU3Nzk0", "avatar_url": "https://avatars.githubusercontent.com/u/68557794?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ayaka14732", "html_url": "https://github.com/ayaka14732", "followers_url": "https://api.github.com/users/ayaka14732/followers", "following_url": "https://api.github.com/users/ayaka14732/following{/other_user}", "gists_url": "https://api.github.com/users/ayaka14732/gists{/gist_id}", "starred_url": "https://api.github.com/users/ayaka14732/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayaka14732/subscriptions", "organizations_url": "https://api.github.com/users/ayaka14732/orgs", "repos_url": "https://api.github.com/users/ayaka14732/repos", "events_url": "https://api.github.com/users/ayaka14732/events{/privacy}", "received_events_url": "https://api.github.com/users/ayaka14732/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? Add `configuration_plbart.py` to `utils/documentation_tests.txt` for doctest. Additionally, I updated its doctest format to make it consistent with BERT. Based on issue https://github.com/huggingface/transformers/issues/19487 @sgugger could you please take a look at it? Thanks =)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19809/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19809/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19809", "html_url": "https://github.com/huggingface/transformers/pull/19809", "diff_url": "https://github.com/huggingface/transformers/pull/19809.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19809.patch", "merged_at": 1666607575000 }
https://api.github.com/repos/huggingface/transformers/issues/19808
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19808/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19808/comments
https://api.github.com/repos/huggingface/transformers/issues/19808/events
https://github.com/huggingface/transformers/pull/19808
1,419,061,378
PR_kwDOCUB6oc5BUxI-
19,808
[Doctest] Add `configuration_poolformer.py`
{ "login": "ayaka14732", "id": 68557794, "node_id": "MDQ6VXNlcjY4NTU3Nzk0", "avatar_url": "https://avatars.githubusercontent.com/u/68557794?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ayaka14732", "html_url": "https://github.com/ayaka14732", "followers_url": "https://api.github.com/users/ayaka14732/followers", "following_url": "https://api.github.com/users/ayaka14732/following{/other_user}", "gists_url": "https://api.github.com/users/ayaka14732/gists{/gist_id}", "starred_url": "https://api.github.com/users/ayaka14732/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayaka14732/subscriptions", "organizations_url": "https://api.github.com/users/ayaka14732/orgs", "repos_url": "https://api.github.com/users/ayaka14732/repos", "events_url": "https://api.github.com/users/ayaka14732/events{/privacy}", "received_events_url": "https://api.github.com/users/ayaka14732/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? Add `configuration_poolformer.py` to `utils/documentation_tests.txt` for doctest. Based on issue https://github.com/huggingface/transformers/issues/19487 @sgugger could you please take a look at it? Thanks =)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19808/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19808/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19808", "html_url": "https://github.com/huggingface/transformers/pull/19808", "diff_url": "https://github.com/huggingface/transformers/pull/19808.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19808.patch", "merged_at": 1666607626000 }
https://api.github.com/repos/huggingface/transformers/issues/19807
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19807/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19807/comments
https://api.github.com/repos/huggingface/transformers/issues/19807/events
https://github.com/huggingface/transformers/pull/19807
1,419,059,700
PR_kwDOCUB6oc5BUw0L
19,807
[Doctest] Add `configuration_electra.py`
{ "login": "ayaka14732", "id": 68557794, "node_id": "MDQ6VXNlcjY4NTU3Nzk0", "avatar_url": "https://avatars.githubusercontent.com/u/68557794?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ayaka14732", "html_url": "https://github.com/ayaka14732", "followers_url": "https://api.github.com/users/ayaka14732/followers", "following_url": "https://api.github.com/users/ayaka14732/following{/other_user}", "gists_url": "https://api.github.com/users/ayaka14732/gists{/gist_id}", "starred_url": "https://api.github.com/users/ayaka14732/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayaka14732/subscriptions", "organizations_url": "https://api.github.com/users/ayaka14732/orgs", "repos_url": "https://api.github.com/users/ayaka14732/repos", "events_url": "https://api.github.com/users/ayaka14732/events{/privacy}", "received_events_url": "https://api.github.com/users/ayaka14732/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? Add `configuration_electra.py` to `utils/documentation_tests.txt` for doctest. Based on issue https://github.com/huggingface/transformers/issues/19487 @sgugger could you please take a look at it? Thanks =)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19807/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19807/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19807", "html_url": "https://github.com/huggingface/transformers/pull/19807", "diff_url": "https://github.com/huggingface/transformers/pull/19807.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19807.patch", "merged_at": 1666607683000 }
https://api.github.com/repos/huggingface/transformers/issues/19806
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19806/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19806/comments
https://api.github.com/repos/huggingface/transformers/issues/19806/events
https://github.com/huggingface/transformers/pull/19806
1,419,049,320
PR_kwDOCUB6oc5BUu08
19,806
[DOCTEST] `configuration_layoutlm.py`, `configuration_layoutlmv2.py`, `configuration_layoutlmv3.py`
{ "login": "Revanth2002", "id": 68279005, "node_id": "MDQ6VXNlcjY4Mjc5MDA1", "avatar_url": "https://avatars.githubusercontent.com/u/68279005?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Revanth2002", "html_url": "https://github.com/Revanth2002", "followers_url": "https://api.github.com/users/Revanth2002/followers", "following_url": "https://api.github.com/users/Revanth2002/following{/other_user}", "gists_url": "https://api.github.com/users/Revanth2002/gists{/gist_id}", "starred_url": "https://api.github.com/users/Revanth2002/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Revanth2002/subscriptions", "organizations_url": "https://api.github.com/users/Revanth2002/orgs", "repos_url": "https://api.github.com/users/Revanth2002/repos", "events_url": "https://api.github.com/users/Revanth2002/events{/privacy}", "received_events_url": "https://api.github.com/users/Revanth2002/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19806). All of your documentation changes will be reflected on that endpoint.", "cc @ydshieh " ]
1,666
1,666
1,666
CONTRIBUTOR
null
Based on #19487
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19806/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19806/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19806", "html_url": "https://github.com/huggingface/transformers/pull/19806", "diff_url": "https://github.com/huggingface/transformers/pull/19806.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19806.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19805
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19805/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19805/comments
https://api.github.com/repos/huggingface/transformers/issues/19805/events
https://github.com/huggingface/transformers/pull/19805
1,419,043,801
PR_kwDOCUB6oc5BUtzr
19,805
[DOCTEST] Add `configuration_mbart.py`, `configuration_mctc.py`
{ "login": "Revanth2002", "id": 68279005, "node_id": "MDQ6VXNlcjY4Mjc5MDA1", "avatar_url": "https://avatars.githubusercontent.com/u/68279005?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Revanth2002", "html_url": "https://github.com/Revanth2002", "followers_url": "https://api.github.com/users/Revanth2002/followers", "following_url": "https://api.github.com/users/Revanth2002/following{/other_user}", "gists_url": "https://api.github.com/users/Revanth2002/gists{/gist_id}", "starred_url": "https://api.github.com/users/Revanth2002/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Revanth2002/subscriptions", "organizations_url": "https://api.github.com/users/Revanth2002/orgs", "repos_url": "https://api.github.com/users/Revanth2002/repos", "events_url": "https://api.github.com/users/Revanth2002/events{/privacy}", "received_events_url": "https://api.github.com/users/Revanth2002/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @ydshieh " ]
1,666
1,666
1,666
CONTRIBUTOR
null
Based on #19487
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19805/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19805/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19805", "html_url": "https://github.com/huggingface/transformers/pull/19805", "diff_url": "https://github.com/huggingface/transformers/pull/19805.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19805.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19804
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19804/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19804/comments
https://api.github.com/repos/huggingface/transformers/issues/19804/events
https://github.com/huggingface/transformers/pull/19804
1,418,940,322
PR_kwDOCUB6oc5BUXL4
19,804
Make Conv1D.bias optional
{ "login": "comaniac", "id": 8262694, "node_id": "MDQ6VXNlcjgyNjI2OTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8262694?v=4", "gravatar_id": "", "url": "https://api.github.com/users/comaniac", "html_url": "https://github.com/comaniac", "followers_url": "https://api.github.com/users/comaniac/followers", "following_url": "https://api.github.com/users/comaniac/following{/other_user}", "gists_url": "https://api.github.com/users/comaniac/gists{/gist_id}", "starred_url": "https://api.github.com/users/comaniac/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/comaniac/subscriptions", "organizations_url": "https://api.github.com/users/comaniac/orgs", "repos_url": "https://api.github.com/users/comaniac/repos", "events_url": "https://api.github.com/users/comaniac/events{/privacy}", "received_events_url": "https://api.github.com/users/comaniac/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger thanks for the comment. Then in the case that if I'd like to disable bias in GPT-2 models, should we directly change the GPT-2 model implementation to avoid custom Conv1D, and deprecate it?", "You can change the modeling code as you'd like to suit your need. We won't upstream it since it's not in the official GPT-2 code though.", "So the code change guides are summarized as follows:\r\n1. Official model code (e.g., GPT-2) cannot be changed to deprecate the use of Conv1D.\r\n2. Conv1D in transformers is legacy and only used in some existing models (e.g., GPT-2).\r\n\r\nIIUC, then I still feel this PR is required, as Conv1D cannot be deprecated and is still a \"building block\" of some models. Is there any strong restriction (e.g., code freeze) that these legacy code cannot be changed?\r\n\r\nThanks.", "You can open a PR to remove the Conv1D use in GPT-2 and, but we won't add a new argument to control the bias like this. You can copy the modeling code and adapt it to your needs if this is something you want yourself.", "I see. Yeah that makes sense and is actually my intention. I don't plan to add an argument to existing GPT-2. I just want to make sure I don't need to change other places in transformers to disable the bias in GPT-2. Thanks." ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? A simple change to make Conv1D.bias optional. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Conv1D is used in GPT models: @patrickvonplaten, @LysandreJik <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19804/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19804/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19804", "html_url": "https://github.com/huggingface/transformers/pull/19804", "diff_url": "https://github.com/huggingface/transformers/pull/19804.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19804.patch", "merged_at": null }
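The optional-bias change proposed in PR 19804 above can be illustrated with a minimal sketch. This is plain Python standing in for transformers' torch-based `Conv1D` (the real class subclasses `torch.nn.Module` and uses tensors); only the shape of the change is mirrored here: `bias=False` skips creating and adding the bias term.

```python
# Hypothetical sketch of the "optional bias" pattern discussed in the PR above.
# Not the actual transformers implementation -- names and shapes are illustrative.
class Conv1D:
    def __init__(self, nf, nx, bias=True):
        # Weight has shape (nx, nf), matching the layout of GPT-2's Conv1D.
        self.weight = [[0.1] * nf for _ in range(nx)]
        # The proposed change: only allocate a bias when requested.
        self.bias = [0.0] * nf if bias else None

    def forward(self, x):  # x: list of length nx
        out = [
            sum(x[i] * self.weight[i][j] for i in range(len(x)))
            for j in range(len(self.weight[0]))
        ]
        if self.bias is not None:
            out = [o + b for o, b in zip(out, self.bias)]
        return out

layer = Conv1D(nf=2, nx=3, bias=False)
print(layer.forward([1.0, 2.0, 3.0]))
```

As the maintainers note in the comments, this kind of flag was not upstreamed; disabling bias for a specific model means adapting a copy of the modeling code instead.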
https://api.github.com/repos/huggingface/transformers/issues/19803
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19803/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19803/comments
https://api.github.com/repos/huggingface/transformers/issues/19803/events
https://github.com/huggingface/transformers/pull/19803
1,418,893,123
PR_kwDOCUB6oc5BUM0B
19,803
Fix bug in Wav2Vec2's GPU tests
{ "login": "falcaopetri", "id": 8387736, "node_id": "MDQ6VXNlcjgzODc3MzY=", "avatar_url": "https://avatars.githubusercontent.com/u/8387736?v=4", "gravatar_id": "", "url": "https://api.github.com/users/falcaopetri", "html_url": "https://github.com/falcaopetri", "followers_url": "https://api.github.com/users/falcaopetri/followers", "following_url": "https://api.github.com/users/falcaopetri/following{/other_user}", "gists_url": "https://api.github.com/users/falcaopetri/gists{/gist_id}", "starred_url": "https://api.github.com/users/falcaopetri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/falcaopetri/subscriptions", "organizations_url": "https://api.github.com/users/falcaopetri/orgs", "repos_url": "https://api.github.com/users/falcaopetri/repos", "events_url": "https://api.github.com/users/falcaopetri/events{/privacy}", "received_events_url": "https://api.github.com/users/falcaopetri/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @falcaopetri Thank you for the fix. I ran the 3 tests against this PR, but for `test_wav2vec2_with_lm_invalid_pool` (both PT/TF), I get the following error:\r\n\r\n```bash\r\n> self.assertIn(\"Falling back to sequential decoding.\", cl.out)\r\nE AssertionError: 'Falling back to sequential decoding.' not found in ''\r\n```\r\n\r\ni.e. `cl.out` is an empty string here.\r\n\r\nCould you double check here, please?\r\n\r\n`test_wav2vec2_with_lm_pool` is fixed by this PR though!\r\n", "Hi @ydshieh. This one is trickier.\r\n\r\nAs I was trying to fix the previous issue, I ended up changing the execution path by changing \r\n\r\n```diff\r\n-processor.batch_decode(logits.numpy())\r\n+processor.batch_decode(logits.cpu().numpy(), pool)\r\n```\r\n(see https://github.com/huggingface/transformers/pull/19803/commits/ca9f3f66d3e9cf786e9db86a1a710bff7057ab2b#diff-1063ef75ba73fe97fec48faf71f5020152ca85811784caaef74d4ca18fc6049fL1674-L1677).\r\n\r\nI was aiming to test this line:\r\n\r\nhttps://github.com/huggingface/transformers/blob/d4eb52d13d7af8be06ccb7723e1991a6f8ed8f59/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L399-L402\r\n\r\nBut that's protected by a https://github.com/huggingface/transformers/blob/d4eb52d13d7af8be06ccb7723e1991a6f8ed8f59/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L391\r\n\r\nAs I see, we could:\r\n1. Don't test this execution path\r\n2. 
Add back the `multiprocessing.set_start_method(\"spawn\")` approach.\r\n - Pros: Previous bug happened because I was calling `set_start_method` twice, but we could re-implement it using a single call\r\n - Cons: This affects the whole runtime process, and since all GPU's tests are running in the same process, we could potentially break other tests (I didn't find any potential such tests in the code base, but we could have them in the future)\r\n\r\n---\r\nOne thing that caught my attention is that I'd expect the PR's CI/CD tests to fail. Looking the logs of `tests_torch*` I found:\r\n\r\n```\r\ntests_torch: ================ 130 passed, 218 skipped, 46 warnings in 32.78s ================\r\ntests_torch_and_tf: ============================== 4 passed in 21.11s ==============================\r\n```\r\n\r\nIs `tests_torch` expected to skip all these tests? I also noticed that `tests_tf` uses `pytest -rA` flag, which makes it easier to debug which tests were skipped. `tests_torch` does not.", "Hi @falcaopetri \r\n\r\n- what if we don't pass `pool` here\r\nhttps://github.com/huggingface/transformers/blob/ca9f3f66d3e9cf786e9db86a1a710bff7057ab2b/tests/models/wav2vec2/test_modeling_flax_wav2vec2.py#L612\r\nwould it work and test the target execution path we want?\r\n- Regarding `calling set_start_method twice`, do you mean one in PyTorch test and another one in TensorFlow test?", "Regarding tests being skipped in PR CI, that is expected, as the relevant tests here are implemented in `Wav2Vec2ModelIntegrationTest` class, which is decorated with `@slow`, so those tests are run only after a PR being merged into `main`.", "> * what if we don't pass `pool` here\r\n> would it work and test the target execution path we want?\r\n\r\nThe target execution path is meant to alert non-unix users about the current limitations on these platforms when they use `batch_decode+don't specify a pool`. 
So yes, we shouldn't pass `pool` here.\r\n\r\nhttps://github.com/huggingface/transformers/blob/d4eb52d13d7af8be06ccb7723e1991a6f8ed8f59/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L391-L403\r\n\r\n> * Regarding `calling set_start_method twice`, do you mean one in PyTorch test and another one in TensorFlow test?\r\n\r\nI'm not sure if I understood your question, but yes, `set_start_method` was being called on every test that required us to \"simulate\" we were on a non-unix platform (where `spawn` is default), i.e., it was used in `test_wav2vec2_with_lm_invalid_pool` from PT, TF and Flax tests. `set_start_method` should be called only once within a process though.\r\n\r\n----\r\n\r\nA little bit of context:\r\n- `pyctcdecode` does not currently work well with `spawn` contexts: if a `spawn` pool is passed, it will print a warning message and ignore the pool (see https://github.com/kensho-technologies/pyctcdecode/commit/a477d796e232b476ee8b877efba98aa2d822232e)\r\n- The change I've implemented is unrelated to this: `batch_decode` just needed to allow passing a `pool`, since default behavior is to create a new pool for every call (i.e., there was a huge overhead when processing multiple audios)\r\n- Since I was changing the code, I added a \"safeguard\" to the default behavior: \"if no pool is specified, we create a pool only if we are on unix\". The target execution path of the failing test is **\"no pool specified and not on unix\"**: instead of starting a spawn `Pool` that we know will be ignored by `pyctcdecode`, **we skip its creation and warn users.**\r\n\r\nSo, technically, we could remove this \"safeguard\", instantiate a `Pool` independently whether it will or won't be used by `pyctcdecode`, and let it handle the user warning if necessary. 
", "As you mentioned, the failing test mentioned in my comment is because the target path is not executed (so we don't get the desired warning), and the reason is the `pool` is passed, my suggestion is to remove `pool` in\r\nhttps://github.com/huggingface/transformers/blob/ca9f3f66d3e9cf786e9db86a1a710bff7057ab2b/tests/models/wav2vec2/test_modeling_wav2vec2.py#L1678\r\nand see if the test will pass (i.e. if the remaining part will do whatever they are expected to do).\r\n\r\nIf this still fails (and the fix would involve more things), we can probably try to use a method recently introduced\r\nhttps://github.com/huggingface/transformers/blob/371337a95b5d82cc9376c2595ed2022a5eb2ee6e/src/transformers/testing_utils.py#L1678\r\n(which could avoid the problem of `calling set_start_method twice`). But we can work on this on our side :-), and I am fine to merge this PR as it is.\r\n\r\nThank you for all the explanation :-) really appreciated, @falcaopetri !\r\n\r\n\r\n", "`transcription = processor.batch_decode(logits.cpu().numpy()).text` would fail in Unix envs (e.g., during CI/CD).\r\nThis happens because `fork` is the default multiprocessing context, hence `set_start_method('spawn')` is required.\r\n\r\n`run_test_in_subprocess` is really helpful in this case. I took the liberty to try it out myself (see new commit). I got TF, PT and Flax working correctly. 
Tested in both CPU and GPU, by running in Colab the following\r\n`RUN_SLOW=yes pytest -k test_wav2vec2_with_lm_invalid_pool tests/models/wav2vec2/test_modeling*_wav2vec2.py`.\r\n\r\n----\r\nAs a side note, I had to force `\"torchaudio<0.12\"` on my local env and Colab, otherwise the following failed:\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> ds = load_dataset(\"common_voice\", \"es\", split=\"test\", streaming=True)\r\n>>> sample = next(iter(ds))\r\n...\r\nFile \"/opt/miniconda/base/lib/python3.8/site-packages/datasets/features/audio.py\", line 273, in _decode_mp3\r\n array, sampling_rate = torchaudio.load(path_or_file, format=\"mp3\")\r\nFile \"/opt/miniconda/base/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py\", line 214, in load\r\n return _fallback_load_fileobj(filepath, frame_offset, num_frames, normalize, channels_first, format)\r\nFile \"/opt/miniconda/base/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py\", line 33, in _fail_load_fileobj\r\n raise RuntimeError(f\"Failed to load audio from {fileobj}\")\r\nRuntimeError: Failed to load audio from <_io.BytesIO object at 0x130509bd0>\r\n```\r\n\r\nI installed transformers with `[torch-speech,flax,torch,tf]` extras.", "Also cc @sanchit-gandhi here", "For @sgugger to take a quick look before I can merge\r\n", "Thanks for all the help @ydshieh! I'm really happy to help improving `transformers` (and to fix my failing tests 😅)!\r\n\r\n> Still think this should be in a decorator\r\n\r\n@sgugger you mean `run_test_in_subprocess` API right? Given its signature (`run_test_in_subprocess(test_case, target_func, inputs=None, timeout=600)`) I was ready to just use it as a decorator in my tests. Then I saw its usage in `Whisper`'s and realized the pickling concerns. IMHO it would have a much cleaner API if feasible though.", "Thanks for the fix @falcaopetri! 
The new testing logic LGTM!", "@falcaopetri Not sure how it's feasible exactly, but will definitely try something :-)", "Sorry, I thought I merged this PR, but actually not. Thanks @sgugger " ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? Fixes bugs introduced in #18351. Bugs appeared only when running tests in GPU, as reported and explained in https://github.com/huggingface/transformers/pull/18351#issuecomment-1285251004. In summary, some tests introduced in the previous PR: - Failed when running on the same process - Failed to call `.cpu()` before calling `.numpy()` ## Who can review? @ydshieh, @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19803/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19803/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19803", "html_url": "https://github.com/huggingface/transformers/pull/19803", "diff_url": "https://github.com/huggingface/transformers/pull/19803.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19803.patch", "merged_at": 1666875603000 }
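The discussion in PR 19803 above turns on the fact that `multiprocessing.set_start_method` mutates process-global state and raises if called twice, which is why invoking it in each framework's test broke once the GPU tests shared a process. A minimal alternative (not the fix the PR actually adopted, which was `run_test_in_subprocess`) is `multiprocessing.get_context`, which yields an isolated context:

```python
import multiprocessing

# set_start_method("spawn") changes the global default and raises RuntimeError
# on a second call, so it is unsafe when many tests run in one process.
# get_context("spawn") instead returns an isolated context object whose Pool
# uses "spawn" without touching the global default.
ctx = multiprocessing.get_context("spawn")
print(ctx.get_start_method())             # "spawn" for this context only
print(multiprocessing.get_start_method()) # global default is unchanged
```

Note that a context-local pool would still hit the limitation quoted in the thread: pyctcdecode warns on and ignores `spawn` pools, which is exactly the behavior the `test_wav2vec2_with_lm_invalid_pool` tests exercise.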
https://api.github.com/repos/huggingface/transformers/issues/19802
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19802/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19802/comments
https://api.github.com/repos/huggingface/transformers/issues/19802/events
https://github.com/huggingface/transformers/issues/19802
1,418,873,678
I_kwDOCUB6oc5UkktO
19,802
Run Vit-MAE script key error
{ "login": "RobertHua96", "id": 17196884, "node_id": "MDQ6VXNlcjE3MTk2ODg0", "avatar_url": "https://avatars.githubusercontent.com/u/17196884?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RobertHua96", "html_url": "https://github.com/RobertHua96", "followers_url": "https://api.github.com/users/RobertHua96/followers", "following_url": "https://api.github.com/users/RobertHua96/following{/other_user}", "gists_url": "https://api.github.com/users/RobertHua96/gists{/gist_id}", "starred_url": "https://api.github.com/users/RobertHua96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RobertHua96/subscriptions", "organizations_url": "https://api.github.com/users/RobertHua96/orgs", "repos_url": "https://api.github.com/users/RobertHua96/repos", "events_url": "https://api.github.com/users/RobertHua96/events{/privacy}", "received_events_url": "https://api.github.com/users/RobertHua96/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You'll need to add `remove_unused_columns=False` to the `TrainingArguments`.\r\n\r\nThe reason is because of the use of `set_transform` when preparing the datasets, which does things on-the-fly. Hence we still need to `image` column in the datasets to turn them into `pixel_values`.", "Amazing thank you!" ]
1,666
1,666
1,666
NONE
null
### System Info - `transformers` version: 4.23.1 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.15 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1+cu113 (True) - Tensorflow version (GPU?): 2.9.2 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @NielsRogge @sg ### Information - [x] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I've put down the main parts of the run MAE script [https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-pretraining/run_mae.py] into a colab notebook: [https://colab.research.google.com/drive/1WtOTp-ocbBTgVXFiEXuY8MWhz_ex9OBy?usp=sharing] When I reach the trainer.train() step I get a key error from the data collator function - is the script perhaps outdated, or am I doing something wrong? ### Expected behavior Trainer starts training
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19802/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19802/timeline
completed
null
null
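The `remove_unused_columns=False` fix explained in the comments of issue 19802 above can be mimicked without transformers installed. The sketch below is a plain-Python analogue (the `transform`/`raw_batch` names are illustrative, not from the run_mae.py script): `Trainer` normally drops dataset columns absent from the model's `forward` signature, but a `set_transform`-style lazy transform still needs the raw `image` column at access time to build `pixel_values`.

```python
# Illustrative stand-in for a datasets.set_transform callback: it runs lazily
# at access time and builds "pixel_values" from the raw "image" column.
def transform(batch):
    batch["pixel_values"] = [[px * 0.5 for px in img] for img in batch["image"]]
    return batch

raw_batch = {"image": [[1.0, 2.0], [3.0, 4.0]]}
print(transform(dict(raw_batch))["pixel_values"])  # works: raw column present

# Trainer's default remove_unused_columns=True drops columns the model's
# forward() does not accept -- here, "image" -- *before* the transform runs:
stripped = {k: v for k, v in raw_batch.items() if k != "image"}
try:
    transform(dict(stripped))
except KeyError as err:
    # This is the data-collator KeyError the issue reports;
    # remove_unused_columns=False keeps "image" and avoids it.
    print("KeyError on", err)
```

Passing `remove_unused_columns=False` in `TrainingArguments`, as suggested in the reply, keeps the raw column alive so the on-the-fly transform can run.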
https://api.github.com/repos/huggingface/transformers/issues/19801
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19801/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19801/comments
https://api.github.com/repos/huggingface/transformers/issues/19801/events
https://github.com/huggingface/transformers/pull/19801
1,418,865,685
PR_kwDOCUB6oc5BUG8J
19,801
Generate: minor docstring fix
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Fixed\r\n<img width=\"834\" alt=\"Screenshot 2022-10-21 at 22 28 30\" src=\"https://user-images.githubusercontent.com/12240844/197291473-5c5c8984-d553-4e7c-85a3-94a254e452f8.png\">\r\n" ]
1,666
1,666
1,666
MEMBER
null
# What does this PR do? Fixes this: <img width="816" alt="Screenshot 2022-10-21 at 22 12 40" src="https://user-images.githubusercontent.com/12240844/197289701-0092c00a-2aec-4c72-96ba-fb2df4ce14b1.png">
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19801/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19801/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19801", "html_url": "https://github.com/huggingface/transformers/pull/19801", "diff_url": "https://github.com/huggingface/transformers/pull/19801.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19801.patch", "merged_at": 1666518407000 }
https://api.github.com/repos/huggingface/transformers/issues/19800
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19800/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19800/comments
https://api.github.com/repos/huggingface/transformers/issues/19800/events
https://github.com/huggingface/transformers/pull/19800
1,418,807,206
PR_kwDOCUB6oc5BT6Uo
19,800
Added translation of run_scripts.mdx to Portuguese Issue #16824
{ "login": "davialvb", "id": 34287081, "node_id": "MDQ6VXNlcjM0Mjg3MDgx", "avatar_url": "https://avatars.githubusercontent.com/u/34287081?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davialvb", "html_url": "https://github.com/davialvb", "followers_url": "https://api.github.com/users/davialvb/followers", "following_url": "https://api.github.com/users/davialvb/following{/other_user}", "gists_url": "https://api.github.com/users/davialvb/gists{/gist_id}", "starred_url": "https://api.github.com/users/davialvb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davialvb/subscriptions", "organizations_url": "https://api.github.com/users/davialvb/orgs", "repos_url": "https://api.github.com/users/davialvb/repos", "events_url": "https://api.github.com/users/davialvb/events{/privacy}", "received_events_url": "https://api.github.com/users/davialvb/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Related to #16824 Currently, only the run_scripts.mdx file was translated as of this PR. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19800/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19800/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19800", "html_url": "https://github.com/huggingface/transformers/pull/19800", "diff_url": "https://github.com/huggingface/transformers/pull/19800.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19800.patch", "merged_at": 1666388316000 }
https://api.github.com/repos/huggingface/transformers/issues/19799
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19799/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19799/comments
https://api.github.com/repos/huggingface/transformers/issues/19799/events
https://github.com/huggingface/transformers/pull/19799
1,418,754,998
PR_kwDOCUB6oc5BTvNR
19,799
Refactor conversion function
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
COLLABORATOR
null
# What does this PR do? This PR introduces a refactor of the conversion functions to be able to re-use them with safetensors. It comes with zero change of code inside, but the main takeaway is to build two functions that take a PyTorch-formatted (resp. TF-formatted) state dict and load it in a TF (resp. PyTorch) model. The state dict is just a dictionary mapping names to tensors, and the tensor can be a NumPy array (as it was) or a torch/tf tensor (which it will be with safetensors).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19799/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19799/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19799", "html_url": "https://github.com/huggingface/transformers/pull/19799", "diff_url": "https://github.com/huggingface/transformers/pull/19799.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19799.patch", "merged_at": 1666633720000 }
https://api.github.com/repos/huggingface/transformers/issues/19798
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19798/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19798/comments
https://api.github.com/repos/huggingface/transformers/issues/19798/events
https://github.com/huggingface/transformers/pull/19798
1,418,581,927
PR_kwDOCUB6oc5BTKQJ
19,798
Fix error/typo in docstring of TokenClassificationPipeline
{ "login": "pchr8", "id": 5071361, "node_id": "MDQ6VXNlcjUwNzEzNjE=", "avatar_url": "https://avatars.githubusercontent.com/u/5071361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pchr8", "html_url": "https://github.com/pchr8", "followers_url": "https://api.github.com/users/pchr8/followers", "following_url": "https://api.github.com/users/pchr8/following{/other_user}", "gists_url": "https://api.github.com/users/pchr8/gists{/gist_id}", "starred_url": "https://api.github.com/users/pchr8/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pchr8/subscriptions", "organizations_url": "https://api.github.com/users/pchr8/orgs", "repos_url": "https://api.github.com/users/pchr8/repos", "events_url": "https://api.github.com/users/pchr8/events{/privacy}", "received_events_url": "https://api.github.com/users/pchr8/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
CONTRIBUTOR
null
Fixes #19797 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19798/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19798/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19798", "html_url": "https://github.com/huggingface/transformers/pull/19798", "diff_url": "https://github.com/huggingface/transformers/pull/19798.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19798.patch", "merged_at": 1666371197000 }
https://api.github.com/repos/huggingface/transformers/issues/19797
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19797/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19797/comments
https://api.github.com/repos/huggingface/transformers/issues/19797/events
https://github.com/huggingface/transformers/issues/19797
1,418,580,944
I_kwDOCUB6oc5UjdPQ
19,797
Small typo in documentation of TokenClassificationPipeline
{ "login": "pchr8", "id": 5071361, "node_id": "MDQ6VXNlcjUwNzEzNjE=", "avatar_url": "https://avatars.githubusercontent.com/u/5071361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pchr8", "html_url": "https://github.com/pchr8", "followers_url": "https://api.github.com/users/pchr8/followers", "following_url": "https://api.github.com/users/pchr8/following{/other_user}", "gists_url": "https://api.github.com/users/pchr8/gists{/gist_id}", "starred_url": "https://api.github.com/users/pchr8/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pchr8/subscriptions", "organizations_url": "https://api.github.com/users/pchr8/orgs", "repos_url": "https://api.github.com/users/pchr8/repos", "events_url": "https://api.github.com/users/pchr8/events{/privacy}", "received_events_url": "https://api.github.com/users/pchr8/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,666
1,666
1,666
CONTRIBUTOR
null
### System Info https://github.com/huggingface/transformers/blob/v4.23.1/src/transformers/pipelines/token_classification.py#L176 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Open the documentation https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.TokenClassificationPipeline.__call__ > word (str) — The token/word classified. This is obtained by decoding the selected tokens. If you want to have the exact string in the original sentence, use start and stop. ### Expected behavior I'm quite sure it should be "`start` and `end`"
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19797/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19797/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19796
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19796/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19796/comments
https://api.github.com/repos/huggingface/transformers/issues/19796/events
https://github.com/huggingface/transformers/pull/19796
1,418,538,553
PR_kwDOCUB6oc5BTA9y
19,796
Add Image Processors
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger @LysandreJik @alaradirik @NielsRogge LMK if you'd rather I did this as many, separate feature extractor PRs for each model type, or as proposed in the description: CLIP as a demonstration, merging the other PRs into this one and then a final PR. ", "> @sgugger @LysandreJik @alaradirik @NielsRogge LMK if you'd rather I did this as many, separate feature extractor PRs for each model type, or as proposed in the description: CLIP as a demonstration, merging the other PRs into this one and then a final PR.\r\n\r\n@amyeroberts nice work :) I'm fine with reviewing all image processors in this PR. ", "@sgugger @alaradirik @NielsRogge @LysandreJik - All of the other models' image processors are now merged in and all tests are passing. I've added a few comments highlighting some design decisions. ", "@amyeroberts I get a 'cannot import name' error when I try to import XXXImageProcessor classes like this:\r\n`from transformers import XXXImageProcessor`\r\n\r\nJust wanted to double check if everything can be imported without any issues on your side?", "> @amyeroberts I get a 'cannot import name' error when I try to import XXXImageProcessor classes like this:\r\nfrom transformers import XXXImageProcessor\r\nJust wanted to double check if everything can be imported without any issues on your side?\r\n\r\n@alaradirik It's not possible (yet) to directly import the image processors. This PR makes an alias for the feature extractors, such that if you do `from transformers import XXXFeatureExtractor` it will import the equivalent image processor. Enabling these imports will come in a set of follow up PRs which will handle completely replacing the feature extractors and making the image processors the official objects to use. 
This will include: making image processors directly importable; adding the `AutoImageProcessor` class; updating & expanding image processor documentation; replacing feature_extractor with `image_processor` in examples. ", "@alaradirik @NielsRogge Can you give a final review and let me know if we're good to merge? " ]
1,666
1,667
1,667
COLLABORATOR
null
# What does this PR do? Adds image processors for most vision models in the transformers library. CLIP for first review. Once it's had approval, I'll merge the other processors into this branch, ask for a final review and then merge if all good. Other models with more complex processing logic e.g. DETR will have subsequent PRs. **🚨🚨🚨 `size` parameter 🚨🚨🚨** The most important change here is how `size` is recorded in the configurations and passed around in the processing logic. * Some feature extractors had `size` recorded as a tuple in `(width, height)` format, and others in `(height, width)` format. * To remove ambiguity, any new configurations will have `size` as a dictionary - `{"height": h, "width": w}` - `{"shortest_edge": s}`: for some feature extractors, `size` indicates the length the shortest edge should be resized to - `{"shortest_edge": s, "longest_edge": l}`: same as above, but also places an upper limit on the longest edge * `get_size_dict` is a helper function to keep backwards compatibility with old configs. It takes old size arguments and converts them to the equivalent dict. This is applied at `__init__`, `preprocess` and any relevant transforms e.g. `resize` where old size arguments can be passed. 
Other models: * [x] [BeiT](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-beit) * [x] [ConvNeXT](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-convnext) * [x] [DeiT](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-deit) * [x] [DPT](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-dpt) * [x] [Flava](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-flava) * [x] [ImageGPT](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-imagegpt) * [x] [LayoutLM](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-layoutlm) * [x] [LeViT](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-levit) * [x] [MobileViT](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-mobilevit) * [x] [Perceiver](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-perceiver) * [x] [PoolFormer](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-poolformer) * [x] [SegFormer](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-segformer) * [x] [VideoMAE](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-videomae) * [x] [Vilt](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-vilt) * [x] [ViT](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-vit) ## Before submitting - [ ] This PR fixes a typo or improves the 
docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19796/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19796/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19796", "html_url": "https://github.com/huggingface/transformers/pull/19796", "diff_url": "https://github.com/huggingface/transformers/pull/19796.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19796.patch", "merged_at": 1667390256000 }
https://api.github.com/repos/huggingface/transformers/issues/19795
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19795/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19795/comments
https://api.github.com/repos/huggingface/transformers/issues/19795/events
https://github.com/huggingface/transformers/issues/19795
1,418,532,126
I_kwDOCUB6oc5UjRUe
19,795
Strange shape of Scores vector
{ "login": "andreabac3", "id": 36055796, "node_id": "MDQ6VXNlcjM2MDU1Nzk2", "avatar_url": "https://avatars.githubusercontent.com/u/36055796?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andreabac3", "html_url": "https://github.com/andreabac3", "followers_url": "https://api.github.com/users/andreabac3/followers", "following_url": "https://api.github.com/users/andreabac3/following{/other_user}", "gists_url": "https://api.github.com/users/andreabac3/gists{/gist_id}", "starred_url": "https://api.github.com/users/andreabac3/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andreabac3/subscriptions", "organizations_url": "https://api.github.com/users/andreabac3/orgs", "repos_url": "https://api.github.com/users/andreabac3/repos", "events_url": "https://api.github.com/users/andreabac3/events{/privacy}", "received_events_url": "https://api.github.com/users/andreabac3/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @andreabac3 👋 \r\n\r\nI apologize in advance. The documentation for that part of the codebase is poor at the moment, so it's completely understandable that you feel confused.\r\n\r\nThe first documentation issue is the shape of `outputs.scores[0].shape`, which is actually `(batch_size * beam_size, vocab_size)`. It contains the scores (logits) of each token for each beam at each step. However, on their own, these scores are not very helpful. \r\n\r\nThe most common use case is to use this tensor to obtain the scores of the selected tokens for each output -- we have an undocumented function for that (second documentation issue, tracked in https://github.com/huggingface/transformers/issues/18616), [`compute_transition_beam_scores`](https://github.com/huggingface/transformers/blob/e0b825a8d03f50ed9dbf9fbbbb3b4fcf0b4e4b22/src/transformers/generation_utils.py#L876). You can call it as \r\n```python\r\nmodel.compute_transition_beam_scores(outputs.sequences, outputs.scores, outputs.beam_indices)\r\n```\r\nand it returns a tensor of shape `(num_return_sequences, seq_len)`. For instance, the value at `[2, 6]` corresponds to the score of the 7th token for the 3rd returned sequence. If you sum these scores across the `seq_len` axis and divide by the length of the sequence (and apply the appropriate `length_penalty` scaling, if needed), you will obtain back `output.sequences_scores`.\r\n\r\nI hope this helps! We will be revisiting the documentation soon to make all this clear. Let me know if you have further questions 🤗 ", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi @gante 👋, \r\nsorry for the late answer.\r\n\r\nDon't worry about the missing documentation; the function worked perfectly.\r\n\r\nThanks again for your support,\r\nAndrea" ]
1,666
1,670
1,669
CONTRIBUTOR
null
### System Info transformers==4.20.1 torch==1.11.0+cu113 Python 3.9.13 ### Who can help? @patrickvonplaten @Narsil @ola13 @gante ### Information - [X] The official example scripts ### Reproduction Hello everyone, I am using BART and I have enabled `return_scores = True` with `beam_size = 4`. The shape of the Scores vector is `(seq_len, batch_size * beam_size * num_returned_sequence, vocab_size)`. I would like to know how I can subdivide the vector to obtain the shape `(seq_len, batch_size, beam_size, num_returned_sequence, vocab_size)`, because at the moment it is not possible to map the input sentences in a batch to the output. Kind regards, Andrea ps: I also opened a [blog post](https://discuss.huggingface.co/t/strange-shape-of-scores-vector/24780), I did not know which was the most appropriate place, sorry for the duplicate. ```python3 outputs = model.generate( input_ids=source_ids, max_length = 500, return_dict_in_generate=True, output_scores=True, num_beams=4 ) print(len(outputs.scores)) # seq_len print(outputs.scores[0].shape) # (batch_size * beam_size * num_returned_sequence, vocab_size) ``` ### Expected behavior a score vector with a shape of (seq_len, batch_size, beam_size, num_returned_sequence, vocab_size)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19795/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19795/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19794
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19794/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19794/comments
https://api.github.com/repos/huggingface/transformers/issues/19794/events
https://github.com/huggingface/transformers/pull/19794
1,418,451,600
PR_kwDOCUB6oc5BSuK4
19,794
Use None to detect if truncation was unset
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,666
1,666
1,666
COLLABORATOR
null
# What does this PR do? This PR changes the default of the `truncation` argument to `None` so that we can detect the difference between: - truncation was not set - truncation was set to `False`. As pointed out in #19790, relying on `False` to test if the argument is unset yields unexpected behaviors. Fixes #19790
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19794/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19794/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19794", "html_url": "https://github.com/huggingface/transformers/pull/19794", "diff_url": "https://github.com/huggingface/transformers/pull/19794.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19794.patch", "merged_at": 1666371217000 }
https://api.github.com/repos/huggingface/transformers/issues/19793
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19793/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19793/comments
https://api.github.com/repos/huggingface/transformers/issues/19793/events
https://github.com/huggingface/transformers/pull/19793
1,418,424,006
PR_kwDOCUB6oc5BSoRq
19,793
Update doc for revision and token
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
COLLABORATOR
null
# What does this PR do? This PR updates the docstrings of all `from_pretrained` methods to: - adapt the documentation for `use_auth_token` - remove the Tip about `use_auth_token=True` for private models - add a tip about checking out PRs using the revision argument (For now just did the changes on the config but will copy paste everywhere once I have had opinions :-) )
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19793/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19793/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19793", "html_url": "https://github.com/huggingface/transformers/pull/19793", "diff_url": "https://github.com/huggingface/transformers/pull/19793.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19793.patch", "merged_at": 1666715535000 }
https://api.github.com/repos/huggingface/transformers/issues/19792
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19792/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19792/comments
https://api.github.com/repos/huggingface/transformers/issues/19792/events
https://github.com/huggingface/transformers/pull/19792
1,418,401,800
PR_kwDOCUB6oc5BSjjE
19,792
Fix nightly test setup
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
COLLABORATOR
null
# What does this PR do? The nightly tests were not properly triggered because I made a typo in the variable name to set 🤦‍♂️ . This has been fixed since yesterday, but now there is a failure in the setup (see [here](https://app.circleci.com/jobs/github/huggingface/transformers/597479?utm_campaign=workflow-failed&utm_medium=email&utm_source=notification)) because I forgot to check out the repo.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19792/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19792/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19792", "html_url": "https://github.com/huggingface/transformers/pull/19792", "diff_url": "https://github.com/huggingface/transformers/pull/19792.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19792.patch", "merged_at": 1666362391000 }
https://api.github.com/repos/huggingface/transformers/issues/19791
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19791/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19791/comments
https://api.github.com/repos/huggingface/transformers/issues/19791/events
https://github.com/huggingface/transformers/pull/19791
1,418,361,061
PR_kwDOCUB6oc5BSaya
19,791
Remove undefined pytorch_model
{ "login": "ftorres16", "id": 36959980, "node_id": "MDQ6VXNlcjM2OTU5OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/36959980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ftorres16", "html_url": "https://github.com/ftorres16", "followers_url": "https://api.github.com/users/ftorres16/followers", "following_url": "https://api.github.com/users/ftorres16/following{/other_user}", "gists_url": "https://api.github.com/users/ftorres16/gists{/gist_id}", "starred_url": "https://api.github.com/users/ftorres16/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ftorres16/subscriptions", "organizations_url": "https://api.github.com/users/ftorres16/orgs", "repos_url": "https://api.github.com/users/ftorres16/repos", "events_url": "https://api.github.com/users/ftorres16/events{/privacy}", "received_events_url": "https://api.github.com/users/ftorres16/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? The docs recommend running `del pytorch_model` to free memory, but `pytorch_model` has never been defined. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19791/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19791/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19791", "html_url": "https://github.com/huggingface/transformers/pull/19791", "diff_url": "https://github.com/huggingface/transformers/pull/19791.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19791.patch", "merged_at": 1666360004000 }
https://api.github.com/repos/huggingface/transformers/issues/19790
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19790/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19790/comments
https://api.github.com/repos/huggingface/transformers/issues/19790/events
https://github.com/huggingface/transformers/issues/19790
1,418,286,708
I_kwDOCUB6oc5UiVZ0
19,790
`truncation='do_not_truncate'` is not equivalent to `truncation=False`
{ "login": "urialon", "id": 15002544, "node_id": "MDQ6VXNlcjE1MDAyNTQ0", "avatar_url": "https://avatars.githubusercontent.com/u/15002544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/urialon", "html_url": "https://github.com/urialon", "followers_url": "https://api.github.com/users/urialon/followers", "following_url": "https://api.github.com/users/urialon/following{/other_user}", "gists_url": "https://api.github.com/users/urialon/gists{/gist_id}", "starred_url": "https://api.github.com/users/urialon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/urialon/subscriptions", "organizations_url": "https://api.github.com/users/urialon/orgs", "repos_url": "https://api.github.com/users/urialon/repos", "events_url": "https://api.github.com/users/urialon/events{/privacy}", "received_events_url": "https://api.github.com/users/urialon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It looks like the PR that set the current truncation/passing arguments has some behavior for backward compatibility that is triggered when `truncation` is unset. However, instead of having `truncation=None` as default (to make sure to detect when it's unset), it uses `truncation=False` as default. So in this instance, even if you passed along `truncation=False`, it activates those tests for backward compatibility.\r\n\r\nIt is way safer to use `truncation=\"do_not_truncate\"` to avoid this. I'll investigate if we can fix this without breaking anything (by changing to `truncation=None` as default for unset behavior). ", "Thanks!\r\n\r\nYeah I know that it's safer to use `truncation=\"do_not_truncate\"`,\r\nbut passing booleans is kind of safer than strings in general (and it's shorter), \r\nso (since the docs allow it) I passed `truncation=False` and I found it very unpredictable that truncation was applied, even when I explicitly passed it.", "Awesome, thanks!" ]
1,666
1,666
1,666
CONTRIBUTOR
null
### System Info - `transformers` version: 4.21.1 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.9.7 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.9.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @SaulLu @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base") sent = 'The quick brown fox jumps over the lazy dog' len(tokenizer.encode(sent, max_length=5, truncation='do_not_truncate')) ``` prints: `11` BUT: ```python len(tokenizer.encode(sent, max_length=5, truncation=False)) ``` prints: `5` ### Expected behavior Hi, I would expect that `truncation='do_not_truncate'` would always be equivalent to `truncation=False`. This manual: https://huggingface.co/docs/transformers/pad_truncation and this doc https://huggingface.co/docs/transformers/main_classes/tokenizer say that: >`False` or `'do_not_truncate'`: no truncation is applied. This is the default behavior. Which means that they are supposed to be equivalent (regardless of what they do, they should behave the same). However, when using `truncation=False` and providing any value for `max_length`, it defaults to the `'longest_first'` truncation strategy. Whether this default behavior is natural or not, isn't `False` supposed to be identical to `'do_not_truncate'`? This leads to a situation where the user explicitly specifies `truncation=False` but the text **is truncated**. 
I suggest that `truncation=False` should always mean "no truncation", no matter what, regardless of whether `max_length` was supplied or not. I think that this is the behavior any user would expect: explicitly specifying `truncation=False` should mean no truncation, regardless of other parameters. Thanks, Uri
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19790/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19790/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19789
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19789/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19789/comments
https://api.github.com/repos/huggingface/transformers/issues/19789/events
https://github.com/huggingface/transformers/pull/19789
1,418,216,666
PR_kwDOCUB6oc5BR7aJ
19,789
add greek translation to index
{ "login": "ArchontisKostis", "id": 77233507, "node_id": "MDQ6VXNlcjc3MjMzNTA3", "avatar_url": "https://avatars.githubusercontent.com/u/77233507?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArchontisKostis", "html_url": "https://github.com/ArchontisKostis", "followers_url": "https://api.github.com/users/ArchontisKostis/followers", "following_url": "https://api.github.com/users/ArchontisKostis/following{/other_user}", "gists_url": "https://api.github.com/users/ArchontisKostis/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArchontisKostis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArchontisKostis/subscriptions", "organizations_url": "https://api.github.com/users/ArchontisKostis/orgs", "repos_url": "https://api.github.com/users/ArchontisKostis/repos", "events_url": "https://api.github.com/users/ArchontisKostis/events{/privacy}", "received_events_url": "https://api.github.com/users/ArchontisKostis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19789). All of your documentation changes will be reflected on that endpoint.", "Yes, Greek under ISO 639-1 would be `el`, which is what we use. `gre` corresponds to 639-2. \r\n\r\n(source: https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes)", "Hello! I am really sorry for the mistake I will fix it shortly!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Sorry missed the change! It looks like there is a problem with CircleCI (tests are not run). Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?\r\n\r\nCould you also add the Greek language code [here](https://github.com/huggingface/transformers/blob/11f3ec7224c83c9e5c379a774b9d3984e68d26fa/.github/workflows/build_documentation.yml#L18) and [there](https://github.com/huggingface/transformers/blob/11f3ec7224c83c9e5c379a774b9d3984e68d26fa/.github/workflows/build_pr_documentation.yml#L17) so that the doc is built in Greek?\r\n\r\nlet us know if you need any help, thanks!" ]
1,666
1,669
1,669
NONE
null
# What does this PR do? Fixes #19788 ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19789/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19789/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19789", "html_url": "https://github.com/huggingface/transformers/pull/19789", "diff_url": "https://github.com/huggingface/transformers/pull/19789.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19789.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19788
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19788/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19788/comments
https://api.github.com/repos/huggingface/transformers/issues/19788/events
https://github.com/huggingface/transformers/issues/19788
1,418,211,162
I_kwDOCUB6oc5UiC9a
19,788
Add translation of docs to greek
{ "login": "ArchontisKostis", "id": 77233507, "node_id": "MDQ6VXNlcjc3MjMzNTA3", "avatar_url": "https://avatars.githubusercontent.com/u/77233507?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArchontisKostis", "html_url": "https://github.com/ArchontisKostis", "followers_url": "https://api.github.com/users/ArchontisKostis/followers", "following_url": "https://api.github.com/users/ArchontisKostis/following{/other_user}", "gists_url": "https://api.github.com/users/ArchontisKostis/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArchontisKostis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArchontisKostis/subscriptions", "organizations_url": "https://api.github.com/users/ArchontisKostis/orgs", "repos_url": "https://api.github.com/users/ArchontisKostis/repos", "events_url": "https://api.github.com/users/ArchontisKostis/events{/privacy}", "received_events_url": "https://api.github.com/users/ArchontisKostis/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,666
1,669
null
NONE
null
I thought it would be awesome to add a Greek translation to the documentation. I have currently translated only the index.mdx.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19788/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19788/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/19787
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19787/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19787/comments
https://api.github.com/repos/huggingface/transformers/issues/19787/events
https://github.com/huggingface/transformers/pull/19787
1,418,208,613
PR_kwDOCUB6oc5BR5qs
19,787
Generate: contrastive search test updates
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
MEMBER
null
# What does this PR do? The newly introduced tests had a bunch of minor issues, including models too big for CI, formatting problems, and slightly incorrect strings (the hardcoded strings were generated for inputs that were changed before the final commit). This PR addresses these issues. All new (slow) tests are passing locally now.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19787/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19787/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19787", "html_url": "https://github.com/huggingface/transformers/pull/19787", "diff_url": "https://github.com/huggingface/transformers/pull/19787.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19787.patch", "merged_at": 1666375808000 }
https://api.github.com/repos/huggingface/transformers/issues/19786
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19786/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19786/comments
https://api.github.com/repos/huggingface/transformers/issues/19786/events
https://github.com/huggingface/transformers/pull/19786
1,418,183,353
PR_kwDOCUB6oc5BR0SD
19,786
Fix CTRL `test_torchscrip_xxx` CI by updating `_create_and_check_torchscript`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Running all tests (per model separately, as in our CI) of `test_torchscript_xxx`, all pass" ]
1,666
1,666
1,666
COLLABORATOR
null
# What does this PR do? Fix CTRL `test_torchscrip_xxx` CI by updating `_create_and_check_torchscript`. Before calling `torch.jit.trace`, we run the prepared inputs first. ### More context The PR #19681 puts `pos_encoding` attribute to the correct device for CTRL model, but this could be only done safely in the `forward` method. However, our current `torchscript` tests don't run the model with the prepared inputs before calling `torch.jit.trace`. And so we get the following error for `CTRL` after PR #19678: ```bash (line 535) torch.jit._trace.TracingCheckError: Tracing failed sanity checks! ... ... Comparison exception: The values for attribute 'device' do not match: cpu != cuda:0. ``` See [CI report](https://github.com/huggingface/transformers/actions/runs/3286486761/jobs/5414682626)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19786/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19786/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19786", "html_url": "https://github.com/huggingface/transformers/pull/19786", "diff_url": "https://github.com/huggingface/transformers/pull/19786.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19786.patch", "merged_at": 1666362193000 }
https://api.github.com/repos/huggingface/transformers/issues/19785
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19785/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19785/comments
https://api.github.com/repos/huggingface/transformers/issues/19785/events
https://github.com/huggingface/transformers/pull/19785
1,418,168,386
PR_kwDOCUB6oc5BRxHW
19,785
Update `ImageToTextPipelineTests.test_small_model_tf`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you so much for this @ydshieh !!", "_The documentation is not available anymore as the PR was closed or merged._", "Thanks!" ]
1,666
1,666
1,666
COLLABORATOR
null
# What does this PR do? After PR #19732, I uploaded the correctly converted TF model to the Hub repo [hf-internal-testing/tiny-random-vit-gpt2](https://huggingface.co/hf-internal-testing/tiny-random-vit-gpt2/tree/main) This PR updates the expected values accordingly, which is the same values as for `test_small_model_pt`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19785/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19785/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19785", "html_url": "https://github.com/huggingface/transformers/pull/19785", "diff_url": "https://github.com/huggingface/transformers/pull/19785.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19785.patch", "merged_at": 1666355720000 }
https://api.github.com/repos/huggingface/transformers/issues/19784
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19784/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19784/comments
https://api.github.com/repos/huggingface/transformers/issues/19784/events
https://github.com/huggingface/transformers/pull/19784
1,417,987,600
PR_kwDOCUB6oc5BRKun
19,784
Add Swin2SR
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Gently pinging @sgugger here" ]
1,666
1,671
1,671
CONTRIBUTOR
null
# What does this PR do? Fixes #19568 and replaces #19667 This PR adds Swin2SR, a Swinv2-based model for image super resolution, compression and restoration. To do: - [x] finish `Swin2SRImageProcessor` - should incorporate padding - [x] fix integration test - [x] transfer checkpoints to the appropriate organization
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19784/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19784/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19784", "html_url": "https://github.com/huggingface/transformers/pull/19784", "diff_url": "https://github.com/huggingface/transformers/pull/19784.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19784.patch", "merged_at": 1671204242000 }
https://api.github.com/repos/huggingface/transformers/issues/19783
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19783/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19783/comments
https://api.github.com/repos/huggingface/transformers/issues/19783/events
https://github.com/huggingface/transformers/issues/19783
1,417,984,927
I_kwDOCUB6oc5UhLuf
19,783
Add XCiT Model
{ "login": "IMvision12", "id": 88665786, "node_id": "MDQ6VXNlcjg4NjY1Nzg2", "avatar_url": "https://avatars.githubusercontent.com/u/88665786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IMvision12", "html_url": "https://github.com/IMvision12", "followers_url": "https://api.github.com/users/IMvision12/followers", "following_url": "https://api.github.com/users/IMvision12/following{/other_user}", "gists_url": "https://api.github.com/users/IMvision12/gists{/gist_id}", "starred_url": "https://api.github.com/users/IMvision12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IMvision12/subscriptions", "organizations_url": "https://api.github.com/users/IMvision12/orgs", "repos_url": "https://api.github.com/users/IMvision12/repos", "events_url": "https://api.github.com/users/IMvision12/events{/privacy}", "received_events_url": "https://api.github.com/users/IMvision12/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[]
1,666
1,670
1,670
CONTRIBUTOR
null
### Model description Cross-covariance image transformer (XCiT) is built upon XCA. It combines the accuracy of conventional transformers with the scalability of convolutional architectures. [Paper](https://arxiv.org/abs/2106.09681) ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Official Implementation: https://github.com/facebookresearch/xcit Timm Implementation: https://github.com/rwightman/pytorch-image-models/blob/main/timm/models/xcit.py
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19783/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19783/timeline
completed
null
null