Dataset schema (column name, dtype, and observed value statistics):

| column | dtype | values |
| --- | --- | --- |
| url | string | lengths 62–66 |
| repository_url | string | 1 class |
| labels_url | string | lengths 76–80 |
| comments_url | string | lengths 71–75 |
| events_url | string | lengths 69–73 |
| html_url | string | lengths 50–56 |
| id | int64 | 377M–2.15B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–29.2k |
| title | string | lengths 1–487 |
| user | dict | |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | string | 4 classes |
| active_lock_reason | string | 2 classes |
| body | string | lengths 0–234k |
| reactions | dict | |
| timeline_url | string | lengths 71–75 |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/19682
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19682/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19682/comments
https://api.github.com/repos/huggingface/transformers/issues/19682/events
https://github.com/huggingface/transformers/pull/19682
1,411,863,484
PR_kwDOCUB6oc5A8fnX
19,682
add return_tensor parameter for feature extraction
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "No- op ???", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19682). All of your documentation changes will be reflected on that endpoint." ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? Fixes #10016 Addresses stale issue #10016. Please review @LysandreJik and @Narsil. Thanks. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19682/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19682/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19682", "html_url": "https://github.com/huggingface/transformers/pull/19682", "diff_url": "https://github.com/huggingface/transformers/pull/19682.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19682.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19681
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19681/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19681/comments
https://api.github.com/repos/huggingface/transformers/issues/19681/events
https://github.com/huggingface/transformers/pull/19681
1,411,833,135
PR_kwDOCUB6oc5A8ZEv
19,681
fix some device issues for pt 1.13
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
COLLABORATOR
null
# What does this PR do? `PyTorch 1.13` is coming. We have 2 models with a lot of test failures due to some tensor indexing with `indexed tensor` and `indices` on the different devices. (It works with torch <= 1.12.1 though). This PR fixes this device issue, so we are better prepared for torch 1.13.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19681/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19681/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19681", "html_url": "https://github.com/huggingface/transformers/pull/19681", "diff_url": "https://github.com/huggingface/transformers/pull/19681.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19681.patch", "merged_at": 1666033059000 }
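The failure mode described in the PR body above (advanced indexing with an index tensor on a different device than the indexed tensor, which torch <= 1.12.1 tolerated in some code paths but torch 1.13 rejects) can be sketched as follows. The helper name is hypothetical; the pattern of moving the indices onto the indexed tensor's device before indexing is the kind of fix the PR describes:

```python
import torch

def gather_rows(tensor: torch.Tensor, indices: torch.Tensor) -> torch.Tensor:
    # Move the index tensor onto the same device as the indexed tensor
    # before advanced indexing, so the operation is valid on torch 1.13
    # even when the model runs on GPU and the indices were built on CPU.
    return tensor[indices.to(tensor.device)]

x = torch.arange(12.0).reshape(4, 3)
idx = torch.tensor([0, 2])
print(gather_rows(x, idx).shape)  # torch.Size([2, 3])
```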
https://api.github.com/repos/huggingface/transformers/issues/19680
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19680/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19680/comments
https://api.github.com/repos/huggingface/transformers/issues/19680/events
https://github.com/huggingface/transformers/pull/19680
1,411,826,269
PR_kwDOCUB6oc5A8XmL
19,680
Revert "add return_tensor parameter for feature extraction"
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19680). All of your documentation changes will be reflected on that endpoint." ]
1,666
1,666
1,666
COLLABORATOR
null
Reverts huggingface/transformers#19257
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19680/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19680/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19680", "html_url": "https://github.com/huggingface/transformers/pull/19680", "diff_url": "https://github.com/huggingface/transformers/pull/19680.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19680.patch", "merged_at": 1666022189000 }
https://api.github.com/repos/huggingface/transformers/issues/19679
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19679/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19679/comments
https://api.github.com/repos/huggingface/transformers/issues/19679/events
https://github.com/huggingface/transformers/pull/19679
1,411,798,804
PR_kwDOCUB6oc5A8Rtm
19,679
Fix imports in pipeline tests
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,666
1,666
1,666
COLLABORATOR
null
# What does this PR do? #19257 broke the pipeline tests on main, this PR fixes them.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19679/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19679/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19679", "html_url": "https://github.com/huggingface/transformers/pull/19679", "diff_url": "https://github.com/huggingface/transformers/pull/19679.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19679.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19678
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19678/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19678/comments
https://api.github.com/repos/huggingface/transformers/issues/19678/events
https://github.com/huggingface/transformers/pull/19678
1,411,676,286
PR_kwDOCUB6oc5A73ZV
19,678
:rotating_light: :rotating_light: :rotating_light: [Breaking change] Deformable DETR intermediate representations
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "```\r\n==================================================================================================== 73 passed, 8 skipped, 49 warnings in 98.91s (0:01:38) =====================================================================================================\r\n\r\n```\r\nIs that OK (I think the skipped are just unsupported features like resize_tokens)", "Ok!", "@sgugger Since this is breaking, I'll let you merge when it's more convenient.", "Let's go :-)" ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? - Fixes naturally the `object-detection` pipeline. - Moves from `[n_decoders, batch_size, ...]` to `[batch_size, n_decoders, ...]` instead. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19678/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19678/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19678", "html_url": "https://github.com/huggingface/transformers/pull/19678", "diff_url": "https://github.com/huggingface/transformers/pull/19678.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19678.patch", "merged_at": 1666098040000 }
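The breaking layout change described in the PR body above, moving Deformable DETR's intermediate decoder states from `[n_decoders, batch_size, ...]` to `[batch_size, n_decoders, ...]`, amounts to swapping the first two axes. The dimension sizes below are illustrative stand-ins, not the model's actual values:

```python
import torch

# Illustrative shapes: 6 decoder layers, batch of 2, 100 queries, hidden 256.
n_decoders, batch_size, num_queries, hidden = 6, 2, 100, 256
old_layout = torch.randn(n_decoders, batch_size, num_queries, hidden)

# The old layout maps to the new one by transposing the first two axes,
# putting the batch dimension first as is conventional for model outputs.
new_layout = old_layout.transpose(0, 1)
print(new_layout.shape)  # torch.Size([2, 6, 100, 256])
```

Downstream code written against the old layout would need the inverse transpose to keep working after this change.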
https://api.github.com/repos/huggingface/transformers/issues/19677
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19677/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19677/comments
https://api.github.com/repos/huggingface/transformers/issues/19677/events
https://github.com/huggingface/transformers/pull/19677
1,411,651,018
PR_kwDOCUB6oc5A7x_G
19,677
object-detection instead of object_detection
{ "login": "Spacefish", "id": 375633, "node_id": "MDQ6VXNlcjM3NTYzMw==", "avatar_url": "https://avatars.githubusercontent.com/u/375633?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Spacefish", "html_url": "https://github.com/Spacefish", "followers_url": "https://api.github.com/users/Spacefish/followers", "following_url": "https://api.github.com/users/Spacefish/following{/other_user}", "gists_url": "https://api.github.com/users/Spacefish/gists{/gist_id}", "starred_url": "https://api.github.com/users/Spacefish/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Spacefish/subscriptions", "organizations_url": "https://api.github.com/users/Spacefish/orgs", "repos_url": "https://api.github.com/users/Spacefish/repos", "events_url": "https://api.github.com/users/Spacefish/events{/privacy}", "received_events_url": "https://api.github.com/users/Spacefish/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
CONTRIBUTOR
null
The Object detection Sample in the README.md does not work out of the box, as it tries to initialize a pipeline `object_detection` however it is called `object-detection` Not sure if this is intended such that a new user has to overcome a minimal sanity check?! 🤣 @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19677/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19677/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19677", "html_url": "https://github.com/huggingface/transformers/pull/19677", "diff_url": "https://github.com/huggingface/transformers/pull/19677.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19677.patch", "merged_at": 1666018649000 }
https://api.github.com/repos/huggingface/transformers/issues/19676
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19676/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19676/comments
https://api.github.com/repos/huggingface/transformers/issues/19676/events
https://github.com/huggingface/transformers/pull/19676
1,411,635,484
PR_kwDOCUB6oc5A7uqG
19,676
[TYPO] Update perf_train_gpu_one.mdx
{ "login": "cakiki", "id": 3664563, "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cakiki", "html_url": "https://github.com/cakiki", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "organizations_url": "https://api.github.com/users/cakiki/orgs", "repos_url": "https://api.github.com/users/cakiki/repos", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "received_events_url": "https://api.github.com/users/cakiki/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,692
1,666
CONTRIBUTOR
null
# What does this PR do? Fixes typo. negligable -> negligible ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @osanseviero
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19676/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19676/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19676", "html_url": "https://github.com/huggingface/transformers/pull/19676", "diff_url": "https://github.com/huggingface/transformers/pull/19676.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19676.patch", "merged_at": 1666018475000 }
https://api.github.com/repos/huggingface/transformers/issues/19675
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19675/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19675/comments
https://api.github.com/repos/huggingface/transformers/issues/19675/events
https://github.com/huggingface/transformers/pull/19675
1,411,629,841
PR_kwDOCUB6oc5A7tdK
19,675
Update ESM checkpoints to point to `facebook/`
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
MEMBER
null
ESM checkpoints were initially uploaded to my account, but have now been moved to the correct location. This PR updates all references to point to their new home under `facebook/`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19675/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19675/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19675", "html_url": "https://github.com/huggingface/transformers/pull/19675", "diff_url": "https://github.com/huggingface/transformers/pull/19675.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19675.patch", "merged_at": 1666026565000 }
https://api.github.com/repos/huggingface/transformers/issues/19674
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19674/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19674/comments
https://api.github.com/repos/huggingface/transformers/issues/19674/events
https://github.com/huggingface/transformers/pull/19674
1,411,619,899
PR_kwDOCUB6oc5A7rVs
19,674
Add decorator to flaky accelerate test
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Great! Feel free to merge and open an issue to track it :)", "I've opened an issue here: https://github.com/huggingface/transformers/issues/19733" ]
1,666
1,666
1,666
COLLABORATOR
null
# What does this PR do? Adds `is_flaky` decorator to the `test_run_squad_no_trainer` test which occasionally fails on CI runs which are independent to the changes in the PR e.g.: * https://app.circleci.com/pipelines/github/huggingface/transformers/49621/workflows/14c25312-58a5-4b0b-8b41-6c5bec668043/jobs/593213 * https://app.circleci.com/pipelines/gh/huggingface/transformers/49224/workflows/fbae76ab-9259-4695-bb06-475357172587/jobs/589262 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19674/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19674/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19674", "html_url": "https://github.com/huggingface/transformers/pull/19674", "diff_url": "https://github.com/huggingface/transformers/pull/19674.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19674.patch", "merged_at": 1666115497000 }
https://api.github.com/repos/huggingface/transformers/issues/19673
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19673/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19673/comments
https://api.github.com/repos/huggingface/transformers/issues/19673/events
https://github.com/huggingface/transformers/pull/19673
1,411,569,833
PR_kwDOCUB6oc5A7gjs
19,673
[WIP] Adding logprobs to `text-generation` pipeline.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This looks great. You may want to cap the logprob argument to a small-ish number (maybe 12?) to avoid giving the ability to people to generate massive amount of data from short queries. I think 12 is very fair and covers most use-cases.", "Thanks for the PR, but I am very much not in favor of this change. We have developed [tools](https://huggingface.co/docs/transformers/add_new_pipeline#share-your-pipeline-on-the-hub) so that users can put their preferred pipeline code in the repo of their model specifically for use cases like this one. I think those tools should be leveraged instead of changing the base pipeline.", "To provide some color, the lack of logprobs for the inference API text generation makes it hardly usable for any non trivial use case.\r\n\r\nI also tried deploying a model on inference endpoints and it was not obvious to me where to add code to extract the logprobs?", "From a TF standpoint, whatever works for PT should also work there -- their interface should be the same.\r\n\r\nRegarding the root issue, it seems that we have to decide whether we want to optimize the default pipelines for a simple interface or for the inference API. I'm in favor of the first, as we have a myriad of solutions for non-trivial use cases:\r\n1. as @sgugger mentioned, custom pipelines can be defined\r\n2. If the user is writing code, replacing the pipeline by a `.generate()` call is also simple (and unlocks several other outputs)\r\n3. if the issue is the GUI for non-trivial use cases, a custom Space can be built (we could create a template where only the model ID needs to be changed, if that would make things easier)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,666
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? Given the discussions to add this parameter here is a first (non-working) draft. - `text-generation` only, we would need to support `text2text-generation` too, which unfortunately is a different pipeline - This gets quite complex in the case of `num_return_sequences` + `num_beams` to keep track of all the dimensions. There seems to be something wrong in the `output_scores` + `num_beams` - This is IMO quite unaligned with the purpose of pipelines. Pipelines are supposed to be used by non-ML practitioners, so understanding what tokens are is beyond the scope of the pipelines. I'm creating this PR, just because multiple discussions have been started asking for this feature. - The current code becomes very bloated IMO (and it does not yet handle all the different params). `generate` scores already generated a lot of discussion: - https://github.com/huggingface/transformers/issues/17424 - https://github.com/huggingface/transformers/issues/18942 To be clear, it seems everyone involved in said discussions wants parity with OpenAI GPT-3 and Cohere. Since the API is powered by the pipelines, we should implement it here to have the functionality within the API. https://huggingface.co/bigscience/bloom/discussions/89#6322b153a418a789a23f3380 Discussion for Bloom (currently custom code so not concerned, but still to be considered) Other discussions are internal/email but follow roughly the same pattern as far as I could read. The purpose of this PR is to gauge interest for this feature and/or aligning with other APIs. Please star this PR if you are interested in this feature. And please do comment if other/more important features are interesting to you. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @spolu @sgugger @gante (Maybe we have some other comments on how to handle TF too.) <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19673/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19673/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19673", "html_url": "https://github.com/huggingface/transformers/pull/19673", "diff_url": "https://github.com/huggingface/transformers/pull/19673.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19673.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19672
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19672/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19672/comments
https://api.github.com/repos/huggingface/transformers/issues/19672/events
https://github.com/huggingface/transformers/issues/19672
1,411,557,150
I_kwDOCUB6oc5UIqce
19,672
'WhisperProcessor' object has no attribute 'as_target_processor'
{ "login": "GaetanBaert", "id": 47001815, "node_id": "MDQ6VXNlcjQ3MDAxODE1", "avatar_url": "https://avatars.githubusercontent.com/u/47001815?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GaetanBaert", "html_url": "https://github.com/GaetanBaert", "followers_url": "https://api.github.com/users/GaetanBaert/followers", "following_url": "https://api.github.com/users/GaetanBaert/following{/other_user}", "gists_url": "https://api.github.com/users/GaetanBaert/gists{/gist_id}", "starred_url": "https://api.github.com/users/GaetanBaert/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GaetanBaert/subscriptions", "organizations_url": "https://api.github.com/users/GaetanBaert/orgs", "repos_url": "https://api.github.com/users/GaetanBaert/repos", "events_url": "https://api.github.com/users/GaetanBaert/events{/privacy}", "received_events_url": "https://api.github.com/users/GaetanBaert/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nWe'll update that code snippet as we've recently deprecated the use of `as_target_processor`, and new processors like `WhisperProcessor` don't implement it anymore. See #18325 for details.\r\n\r\nYou can replace\r\n\r\n```\r\ntext = \"hello world\"\r\n\r\nwith processor.as_target_processor():\r\n encoded_labels = processor(text, padding=True)\r\n```\r\nby \r\n```\r\nencoded_labels = processor(text=text).input_ids\r\n```", "Also cc'ing @ArthurZucker for updating the docs", "Hello, \r\nThank you for clarification !", "Another thing, it seems that the `pad` method does not exist for WhisperProcessor ?", "I think we can add it, similar to [this method](https://github.com/huggingface/transformers/blob/3b3024da70a7ada6599390c5b3e1a721c9a4aa4c/src/transformers/models/wav2vec2/processing_wav2vec2.py#L104-L132).\r\n\r\nFor now you can do `processor.tokenizer.pad(...) `", "Nice catch! Will update the doc soon \r\n" ]
1,666
1,666
1,666
NONE
null
### System Info - `transformers` version: 4.23.1 - Platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.11.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm trying to train a Whisper model, using the WhisperProcessor. As written in the doc of the `__call__` method of the WhisperProcessor, I should have a context processor.as_target_processor() but it seems it doesn't exist. "If used in the context ~WhisperProcessor.as_target_processor this method forwards all its arguments to WhisperTokenizer’s [call()]" Steps to reproduce the bug : ``` target_text = 'this is a test text' processor = WhisperProcessor.from_pretrained('openai/whisper-large') with processor.as_target_processor(): targets = processor(target_text).input_ids ``` ### Expected behavior The `__call__` method of WhisperProcessor instance in the as_target_processor context should give the result of the WhisperTokenizer `__call__`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19672/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19672/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19671
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19671/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19671/comments
https://api.github.com/repos/huggingface/transformers/issues/19671/events
https://github.com/huggingface/transformers/pull/19671
1,411,517,657
PR_kwDOCUB6oc5A7VHZ
19,671
check decoder_inputs_embeds is None before shifting labels
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
COLLABORATOR
null
# What does this PR do? This is related to #19157, which pointed out that a few models do not check whether `decoder_inputs_embeds` are None, which is inconsistent with other models. Since this is not the first time this has been brought up, let's solve it all at once, unless there is a particular reason not to?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19671/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19671/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19671", "html_url": "https://github.com/huggingface/transformers/pull/19671", "diff_url": "https://github.com/huggingface/transformers/pull/19671.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19671.patch", "merged_at": 1666077253000 }
https://api.github.com/repos/huggingface/transformers/issues/19670
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19670/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19670/comments
https://api.github.com/repos/huggingface/transformers/issues/19670/events
https://github.com/huggingface/transformers/pull/19670
1,411,515,143
PR_kwDOCUB6oc5A7UkQ
19,670
[WHISPER] fix tests after updating the max_length
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks @amyeroberts for running the tests locally, both TF and PT are all green. ", "@ArthurZucker It would be nice to explain a bit the reason for the PR, not just what it did 🙏 . It will help others to understand , and it also helps to track (if we need to check this PR for some reason in the future)", "@sgugger \r\n\r\nThis is the Hub PR @ArthurZucker mentioned (change `max_length` to 448 in order to fix 2 whisper pipeline tests)\r\nhttps://huggingface.co/openai/whisper-large/commit/baca495426386f789702e7f10edccf761c5f5592\r\n\r\nAfter that change, this PR is required to pass some TF Whisper tests.\r\n\r\n" ]
1,666
1,666
1,666
COLLABORATOR
null
# What does this PR do? Fixes the `max_length` used in the generate function in Whisper. The default `max_length` argument was changed in the `config.json` as it is more convenient for people who either don't know that the argument exists or don't know the value to set.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19670/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19670/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19670", "html_url": "https://github.com/huggingface/transformers/pull/19670", "diff_url": "https://github.com/huggingface/transformers/pull/19670.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19670.patch", "merged_at": 1666068348000 }
https://api.github.com/repos/huggingface/transformers/issues/19669
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19669/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19669/comments
https://api.github.com/repos/huggingface/transformers/issues/19669/events
https://github.com/huggingface/transformers/pull/19669
1,411,482,676
PR_kwDOCUB6oc5A7Nc6
19,669
Fix code examples of DETR and YOLOS
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,666
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? This is a follow-up PR of #19205. YOLOS and DETR share the same postprocessing, hence I've added post_process_object_detection to YOLOS, leveraging Copied from statements. It also improves the code example of DETR, and adds a better one for YOLOS.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19669/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19669/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19669", "html_url": "https://github.com/huggingface/transformers/pull/19669", "diff_url": "https://github.com/huggingface/transformers/pull/19669.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19669.patch", "merged_at": 1666021702000 }
https://api.github.com/repos/huggingface/transformers/issues/19668
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19668/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19668/comments
https://api.github.com/repos/huggingface/transformers/issues/19668/events
https://github.com/huggingface/transformers/pull/19668
1,411,449,035
PR_kwDOCUB6oc5A7GEA
19,668
fix test whisper with new max length
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you link the Hub PR that is related?\r\n", "_The documentation is not available anymore as the PR was closed or merged._", "It seems the checkpoint involved is `openai/whisper-tiny.en` which is also used in other test methods. Could you confirm the other tests also pass after that Hub PR?", "The `pytorch` tests all pass, 3 `TF` tests fail for now. \r\nOnly the `config.max_length` was changed so should not have a lot of impact. " ]
1,666
1,666
1,666
COLLABORATOR
null
# What does this PR do? Fixes pipeline test after `max_length` update @ydshieh
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19668/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19668/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19668", "html_url": "https://github.com/huggingface/transformers/pull/19668", "diff_url": "https://github.com/huggingface/transformers/pull/19668.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19668.patch", "merged_at": 1666076197000 }
https://api.github.com/repos/huggingface/transformers/issues/19667
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19667/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19667/comments
https://api.github.com/repos/huggingface/transformers/issues/19667/events
https://github.com/huggingface/transformers/pull/19667
1,411,355,727
PR_kwDOCUB6oc5A6x4y
19,667
Swin2sr
{ "login": "venkat-natchi", "id": 115526526, "node_id": "U_kgDOBuLLfg", "avatar_url": "https://avatars.githubusercontent.com/u/115526526?v=4", "gravatar_id": "", "url": "https://api.github.com/users/venkat-natchi", "html_url": "https://github.com/venkat-natchi", "followers_url": "https://api.github.com/users/venkat-natchi/followers", "following_url": "https://api.github.com/users/venkat-natchi/following{/other_user}", "gists_url": "https://api.github.com/users/venkat-natchi/gists{/gist_id}", "starred_url": "https://api.github.com/users/venkat-natchi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/venkat-natchi/subscriptions", "organizations_url": "https://api.github.com/users/venkat-natchi/orgs", "repos_url": "https://api.github.com/users/venkat-natchi/repos", "events_url": "https://api.github.com/users/venkat-natchi/events{/privacy}", "received_events_url": "https://api.github.com/users/venkat-natchi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "WIP. \r\n\r\nFound the equivalents for PatchEmb, PatchMerging, SwinV2Stage", "Hi @venkat-natchi,\r\n\r\nI actually played around with the Swin2SR model this weekend and got a [working implementation](https://github.com/NielsRogge/transformers/tree/add_swin2sr/src/transformers/models/swin2sr) already.\r\n\r\nI see you're still at the start of the process, so would you be interested in working on another model [from this list](https://github.com/huggingface/transformers/issues?q=is%3Aissue+is%3Aopen+label%3A%22New+model%22)?\r\n\r\n", "Sure, make sense. I was actually going through the papers in the weekend to get the high level understanding of it. \r\n\r\nI thought of picking [this](https://github.com/huggingface/transformers/issues/19631) one up now, but seems to be closed. Do you know why?\r\n\r\nIf possible could you point me to any one model, which was not taken up so far. Thanks. \r\n\r\n\r\n", "Do you have an email address? I'll set up a Slack channel so we can discuss :)", "sure. Mine is venkatachalam.natchiappan@gmail.com", "Hey, @venkat-natchi, would you like to collaborate on Adding EDSR to HuggingFace? I closed that issue because I felt it was irrelevant.", "> Hey, @venkat-natchi, would you like to collaborate on Adding EDSR to HuggingFace? I closed that issue because I felt it was irrelevant.\r\n\r\nSure, I am interested. ", "That's great! then i will reopen that issue and tag u there", "I'll close this PR as you'll work on a new model." ]
1,666
1,666
1,666
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #19568 ## Who can review? @NielsRogge
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19667/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19667/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19667", "html_url": "https://github.com/huggingface/transformers/pull/19667", "diff_url": "https://github.com/huggingface/transformers/pull/19667.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19667.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19666
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19666/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19666/comments
https://api.github.com/repos/huggingface/transformers/issues/19666/events
https://github.com/huggingface/transformers/issues/19666
1,411,260,081
I_kwDOCUB6oc5UHh6x
19,666
How to implement beam search on logits output from BART model?
{ "login": "ZiyueWangUoB", "id": 33383959, "node_id": "MDQ6VXNlcjMzMzgzOTU5", "avatar_url": "https://avatars.githubusercontent.com/u/33383959?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZiyueWangUoB", "html_url": "https://github.com/ZiyueWangUoB", "followers_url": "https://api.github.com/users/ZiyueWangUoB/followers", "following_url": "https://api.github.com/users/ZiyueWangUoB/following{/other_user}", "gists_url": "https://api.github.com/users/ZiyueWangUoB/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZiyueWangUoB/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZiyueWangUoB/subscriptions", "organizations_url": "https://api.github.com/users/ZiyueWangUoB/orgs", "repos_url": "https://api.github.com/users/ZiyueWangUoB/repos", "events_url": "https://api.github.com/users/ZiyueWangUoB/events{/privacy}", "received_events_url": "https://api.github.com/users/ZiyueWangUoB/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Gently pinging @lewtun, and I'm unknowledgeable about the intersection between ONNX and text generation :)\r\n\r\n@ZiyueWangUoB yeah, you can implement the basic version of greedy search as you described. Beam search is more complex, a good reference is the following [blog post](https://huggingface.co/blog/how-to-generate). But I suspect we already have a solution for ONNX models!", "@gante Yes I've read that article and understand the theory behind beam search. However I feel like I'm missing something with the onnx output, as the logits alone shouldn't be able to cover the beam search algorithm. ", "@ZiyueWangUoB There are several ways to kickstart beam search, but in all of them you have to do shenanigans at the start to obtain `N` (number of beams) sets of logits from a single input row.\r\nOption 1 - The first iteration is a normal greedy search where you keep the top_k (k=`N`) tokens\r\nOption 2 - You replicate your input `N` times, but set a large score penalty in all but the first row. You can use beam search from the first iteration.\r\n\r\nFrom there, run the usual beam search: obtain `[N, vocab_size]` logits, pick the top `N` based on the score. Following our code, especially our TF and JAX implementations, is great to understand everything that goes into making it right!\r\n\r\nNote: As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters (like these questions), we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗", "@gante how would I go about getting N sets of logits? The model only outputs a [1,804,50265] set, which I'm assuming is 1 set of logits.", "Thanks for the ping @gante !\r\n\r\n@ZiyueWangUoB we actually have a beam search example with BART + TorchScript from the ONNX team that you can inspect here: https://github.com/huggingface/transformers/blob/main/examples/research_projects/onnx/summarization/bart_onnx/generation_onnx.py\r\n\r\ntl;dr implementing beam search from scratch is quite involved and essentially requires reimplementing large chunks of the `generate()` method we have in `transformers`\r\n\r\nIf ONNX Runtime is an option, an alternative would be to run the generation with our `optimum` lib:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer, pipeline\r\nfrom optimum.onnxruntime import ORTModelForSeq2SeqLM\r\n\r\nmodel_id = \"facebook/bart-large-cnn\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_id)\r\nmodel = ORTModelForSeq2SeqLM.from_pretrained(model_id, from_transformers=True)\r\nsummarizer = pipeline(\"summarization\", model=model, tokenizer=tokenizer)\r\n\r\ntext = \"The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.\"\r\nsummarizer(text)\r\n```\r\n", "@lewtun Thanks for the input! I've been looking at that example, but the problem with directly using that script is the result means the length of the encoded text is fixed. I'll look into the torchscript beam search further, but as of right now that can't be directly used.\r\n\r\nAs for optimum, I will try that. I'm currently trying to convert the model into Tensorrt at the end, but if that's not reasonable I will use optimum. ", "> As for optimum, I will try that. I'm currently trying to convert the model into Tensorrt at the end, but if that's not reasonable I will use optimum.\r\n\r\nCool! We're hoping to integrate TensorRT as a backend in `optimum` when we get some bandwidth - in the meantime, you're more than welcome to open an issue/PR if you feel so inclined :)" ]
1,665
1,666
1,666
NONE
null
Hi, I've converted my BART-LARGE-CNN model to ONNX and now am trying to perform beam search then decoding to generate a summary. The output from logits layer is in the shape of [1,804,50265] which I'm assuming is [batch size, time step, vocab length]. I then perform softmax on each time step and log the result. If my understanding is correct, I can perform greedy search easily with this (at each time step, select the highest probability, then decode using vocab to find the summary). But how can I implement beam search? For example, given the choice of a word (i.e. "The) at time step 1, the probability of each word following that at time step 2 should be different depending on the word selected at time step 1. But in this case the logits are fixed for every different word at time step 2? Sorry I'm new to this, pretty confused. Any help would be appreciated.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19666/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19666/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19665
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19665/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19665/comments
https://api.github.com/repos/huggingface/transformers/issues/19665/events
https://github.com/huggingface/transformers/issues/19665
1,411,096,491
I_kwDOCUB6oc5UG5-r
19,665
My customized function compute_metrics doesn't work when i train the CLIP model
{ "login": "lchustc", "id": 27990344, "node_id": "MDQ6VXNlcjI3OTkwMzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/27990344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lchustc", "html_url": "https://github.com/lchustc", "followers_url": "https://api.github.com/users/lchustc/followers", "following_url": "https://api.github.com/users/lchustc/following{/other_user}", "gists_url": "https://api.github.com/users/lchustc/gists{/gist_id}", "starred_url": "https://api.github.com/users/lchustc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lchustc/subscriptions", "organizations_url": "https://api.github.com/users/lchustc/orgs", "repos_url": "https://api.github.com/users/lchustc/repos", "events_url": "https://api.github.com/users/lchustc/events{/privacy}", "received_events_url": "https://api.github.com/users/lchustc/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "@ydshieh\r\nCan you take a moment to help out? Thx a lot!", "Hi @lchwhut . Thank you for reporting. I will take a look", "Hi @lchwhut \r\n\r\nCould you check what you get as `all_preds` here\r\nhttps://github.com/huggingface/transformers/blob/b17a5e00749790895314ea33a4f156c918718dfe/src/transformers/trainer.py#L3071\r\n\r\nDoes it contain the loss value?", "Well, a second look, I think you can comment out this block\r\n\r\n```python\r\n # Metrics!\r\n if self.compute_metrics is not None and all_preds is not None and all_labels is not None:\r\n if args.include_inputs_for_metrics:\r\n metrics = self.compute_metrics(\r\n EvalPrediction(predictions=all_preds, label_ids=all_labels, inputs=all_inputs)\r\n )\r\n else:\r\n metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))\r\n else:\r\n metrics = {}\r\n```\r\nand put `metrics = {}` before the line `metrics = denumpify_detensorize(metrics)`.\r\n\r\nPlease let me know if this helps, thank you!", "> Well, a second look, I think you can comment out this block\r\n> \r\n> ```python\r\n> # Metrics!\r\n> if self.compute_metrics is not None and all_preds is not None and all_labels is not None:\r\n> if args.include_inputs_for_metrics:\r\n> metrics = self.compute_metrics(\r\n> EvalPrediction(predictions=all_preds, label_ids=all_labels, inputs=all_inputs)\r\n> )\r\n> else:\r\n> metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))\r\n> else:\r\n> metrics = {}\r\n> ```\r\n> \r\n> and put `metrics = {}` before the line `metrics = denumpify_detensorize(metrics)`.\r\n> \r\n> Please let me know if this helps, thank you!\r\n\r\nCommenting out this block and putting `metrics = {}` before the line `metrics = denumpify_detensorize(metrics)` doesn't work. It will report error like this:\r\n![image](https://user-images.githubusercontent.com/27990344/196848990-107c81db-525a-4f6e-985e-f065907dbab1.png)\r\n\r\nActually, this way is equivalent to executing the `else` module because `all_labels ` is None. \r\nI checked the src code \r\n`loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)`\r\nThis step seems to return (None, logits, None) when the model is CLIP,because in this function, `has_labels=False`.\r\n", "Hi @lchwhut , could you check if the `inputs` at the line you mentioned have `\"return_loss\": True`, just like\r\nhttps://github.com/huggingface/transformers/blob/bbe2c8b126b04a3250e7089cf507ec62ce8d716c/examples/pytorch/contrastive-image-text/run_clip.py#L215\r\nIf it exists there and being `True`, the model itself should be able to return a loss value. We then have to check if `has_labels=False` will play a role not to return it back.", "> Hi @lchwhut , could you check if the `inputs` at the line you mentioned have `\"return_loss\": True`, just like\r\n> \r\n> https://github.com/huggingface/transformers/blob/bbe2c8b126b04a3250e7089cf507ec62ce8d716c/examples/pytorch/contrastive-image-text/run_clip.py#L215\r\n> \r\n> \r\n> If it exists there and being `True`, the model itself should be able to return a loss value. We then have to check if `has_labels=False` will play a role not to return it back.\r\n\r\nHi @ydshieh, i think the `inputs` does have `\"return_loss\": True` because the `inputs` is generated by dataloader which conducts function `collate_fn`.\r\n<img width=\"940\" alt=\"image\" src=\"https://user-images.githubusercontent.com/27990344/197094616-779776fd-2ade-4ce2-95b5-5d9ec0e5d273.png\">\r\n`loss = None` will be specified if `have_label == False`\r\n<img width=\"833\" alt=\"image\" src=\"https://user-images.githubusercontent.com/27990344/197096010-32dac0c7-57d9-4fd2-8232-146ecee848dd.png\">\r\nThe `logits` returned by `self.prediction_step` is the output of the CLIP which like:\r\n<img width=\"381\" alt=\"image\" src=\"https://user-images.githubusercontent.com/27990344/197096335-d91dfc31-8f61-41b1-8432-1b34de556cad.png\">\r\nThe type of `text_model_output` and `vision_model_output` is not Tensor which causes(i think) the following error:\r\n<img width=\"612\" alt=\"image\" src=\"https://user-images.githubusercontent.com/27990344/197096740-eb542f27-6894-4e30-962a-e0fd661fd768.png\">\r\n", "OK @lchwhut Thank you for all of these checks. I will figure it out a way.", "Hi @lchwhut \r\n\r\nSince the commit [3951b9f39](https://github.com/huggingface/transformers/commit/3951b9f3908bfa30be7fd814cd2ad1039d3162d8) (PR #16526), we can get the evaluation loss. Could you try with the latest version? Thanks.\r\n\r\n(You can try with a tiny dummy training)\r\n\r\n```bash\r\n***** train metrics *****\r\n epoch = 1.0\r\n train_loss = 1.3123\r\n train_runtime = 0:00:08.53\r\n train_samples_per_second = 1.876\r\n train_steps_per_second = 0.938\r\n[INFO|trainer.py:2412] 2022-10-22 11:28:29,243 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:2414] 2022-10-22 11:28:29,243 >> Num examples = 16\r\n[INFO|trainer.py:2417] 2022-10-22 11:28:29,243 >> Batch size = 2\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:01<00:00, 5.62it/s]\r\n***** eval metrics *****\r\n epoch = 1.0\r\n eval_loss = 0.8159\r\n eval_runtime = 0:00:01.63\r\n eval_samples_per_second = 9.806\r\n eval_steps_per_second = 4.903\r\n```\r\n\r\nDummy training like\r\n```bash\r\npython ./run_clip.py \\\r\n --output_dir ./outputs \\\r\n --model_name_or_path ./clip-roberta \\\r\n --data_dir $PWD/data \\\r\n --dataset_name ydshieh/coco_dataset_script \\\r\n --dataset_config_name=2017 \\\r\n --image_column image_path \\\r\n --caption_column caption \\\r\n --remove_unused_columns=False \\\r\n --do_train \\\r\n --do_eval \\\r\n --num_train_epochs 1 \\\r\n --max_steps 8 \\\r\n --max_train_samples 16 \\\r\n --max_eval_samples 16 \\\r\n --per_device_train_batch_size 2 \\\r\n --per_device_eval_batch_size 2 \\\r\n --learning_rate=\"5e-5\" \\\r\n --warmup_steps=\"0\" \\\r\n --weight_decay 0.1 \\\r\n --overwrite_output_dir \\\r\n```", "> Hi @lchwhut\r\n> \r\n> Since the commit [3951b9f39](https://github.com/huggingface/transformers/commit/3951b9f3908bfa30be7fd814cd2ad1039d3162d8) (PR #16526), we can get the evaluation loss. Could you try with the latest version? Thanks.\r\n> \r\n> (You can try with a tiny dummy training)\r\n> \r\n> ```shell\r\n> ***** train metrics *****\r\n> epoch = 1.0\r\n> train_loss = 1.3123\r\n> train_runtime = 0:00:08.53\r\n> train_samples_per_second = 1.876\r\n> train_steps_per_second = 0.938\r\n> [INFO|trainer.py:2412] 2022-10-22 11:28:29,243 >> ***** Running Evaluation *****\r\n> [INFO|trainer.py:2414] 2022-10-22 11:28:29,243 >> Num examples = 16\r\n> [INFO|trainer.py:2417] 2022-10-22 11:28:29,243 >> Batch size = 2\r\n> 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:01<00:00, 5.62it/s]\r\n> ***** eval metrics *****\r\n> epoch = 1.0\r\n> eval_loss = 0.8159\r\n> eval_runtime = 0:00:01.63\r\n> eval_samples_per_second = 9.806\r\n> eval_steps_per_second = 4.903\r\n> ```\r\n> \r\n> Dummy training like\r\n> \r\n> ```shell\r\n> python ./run_clip.py \\\r\n> --output_dir ./outputs \\\r\n> --model_name_or_path ./clip-roberta \\\r\n> --data_dir $PWD/data \\\r\n> --dataset_name ydshieh/coco_dataset_script \\\r\n> --dataset_config_name=2017 \\\r\n> --image_column image_path \\\r\n> --caption_column caption \\\r\n> --remove_unused_columns=False \\\r\n> --do_train \\\r\n> --do_eval \\\r\n> --num_train_epochs 1 \\\r\n> --max_steps 8 \\\r\n> --max_train_samples 16 \\\r\n> --max_eval_samples 16 \\\r\n> --per_device_train_batch_size 2 \\\r\n> --per_device_eval_batch_size 2 \\\r\n> --learning_rate=\"5e-5\" \\\r\n> --warmup_steps=\"0\" \\\r\n> --weight_decay 0.1 \\\r\n> --overwrite_output_dir \\\r\n> ```\r\n\r\nHi @ydshieh\r\nThanks for your help!\r\nI updated `transformers==4.17.0` to `transformers==4.21.3` and could get the evaluation loss now!\r\n<img width=\"1235\" alt=\"image\" src=\"https://user-images.githubusercontent.com/27990344/197668516-35e48897-0728-40bf-9179-80862ddd4941.png\">\r\nBut the error `'BaseModelOutputWithPoolingAndCrossAttentions' object has no attribute 'detach'` will still reproduce if i customize `compute_metrics`.\r\nI check the code and find if don't customize `compute_metrics`, `prediction_loss_only=None` will be set and thus the `prediction_step` will return `(loss, None, None)` (just skip `nested_detach`)\r\n<img width=\"850\" alt=\"image\" src=\"https://user-images.githubusercontent.com/27990344/197669277-debf27d2-306e-43cb-a5f6-84ca95a4befe.png\">\r\n<img width=\"442\" alt=\"image\" src=\"https://user-images.githubusercontent.com/27990344/197668702-0506b9bd-6c73-453e-84b8-e9b1c5632773.png\">\r\nI think it doesn't matter because the `eval_loss` is enough for me. I can remove `text_model_output` and `vision_model_output` from `CLIPOutput` if i need customized compute_metrics to evaluate.\r\nThank you very much!\r\n", "Hi @lchwhut Yeah, `CLIP` is indeed special in terms of the output format :-) The `Trainer` class is designed to work with the most common use cases, but not a one-size-fits-all solution. Sometimes we need more customization\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,665
1,668
1,668
NONE
null
### System Info **environment** transformers==4.17.0 **hopes** I would like to know how to output evaluation metrics when i train the CLIP? I want to get `eval_loss` the way I get `train_loss`. Could anyone help me? **details** I have trained the CLIP demo(with the original training script `run_clip.py`: https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text) successfully and obtained the following logs: <img width="1243" alt="image" src="https://user-images.githubusercontent.com/27990344/196113496-3582dd41-32bf-47c5-939c-289b5d8fcef6.png"> There seems to be no output evaluation metrics(such as `loss` or `acc`) after i specify `--do_eval ` before training. So i specified `compute_metrics=compute_metrics` in `Trainer` and got errors when the CLIP do evaluation <img width="737" alt="image" src="https://user-images.githubusercontent.com/27990344/196118789-3587ea57-1654-4075-a7ef-33f43505f15b.png"> <img width="1238" alt="image" src="https://user-images.githubusercontent.com/27990344/196117202-106ee438-254e-492f-a6ea-c332d9a4b7b4.png"> Except the above errors, i also checked the src code(`line 3052`, https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py) and found that my customized function `compute_metrics` doesn't work when `all_labels is None` <img width="855" alt="image" src="https://user-images.githubusercontent.com/27990344/196119730-ab1ae85d-324b-4580-bf64-ff322ee2a11e.png"> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run original training script `run_clip.py` https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text ### Expected behavior I would like to know how to output evaluation metrics when i train the CLIP? I want to get `eval_loss` the way I get `train_loss`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19665/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19665/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19664
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19664/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19664/comments
https://api.github.com/repos/huggingface/transformers/issues/19664/events
https://github.com/huggingface/transformers/pull/19664
1,410,930,618
PR_kwDOCUB6oc5A5WP3
19,664
Update README.md
{ "login": "shreem-123", "id": 96364929, "node_id": "U_kgDOBb5pgQ", "avatar_url": "https://avatars.githubusercontent.com/u/96364929?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shreem-123", "html_url": "https://github.com/shreem-123", "followers_url": "https://api.github.com/users/shreem-123/followers", "following_url": "https://api.github.com/users/shreem-123/following{/other_user}", "gists_url": "https://api.github.com/users/shreem-123/gists{/gist_id}", "starred_url": "https://api.github.com/users/shreem-123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shreem-123/subscriptions", "organizations_url": "https://api.github.com/users/shreem-123/orgs", "repos_url": "https://api.github.com/users/shreem-123/repos", "events_url": "https://api.github.com/users/shreem-123/events{/privacy}", "received_events_url": "https://api.github.com/users/shreem-123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for your PR but We will keep the current wording." ]
1,665
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19664/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19664/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19664", "html_url": "https://github.com/huggingface/transformers/pull/19664", "diff_url": "https://github.com/huggingface/transformers/pull/19664.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19664.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19663
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19663/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19663/comments
https://api.github.com/repos/huggingface/transformers/issues/19663/events
https://github.com/huggingface/transformers/pull/19663
1,410,674,727
PR_kwDOCUB6oc5A4e5G
19,663
Add pillow to layoutlmv3 example requirements.txt
{ "login": "Spacefish", "id": 375633, "node_id": "MDQ6VXNlcjM3NTYzMw==", "avatar_url": "https://avatars.githubusercontent.com/u/375633?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Spacefish", "html_url": "https://github.com/Spacefish", "followers_url": "https://api.github.com/users/Spacefish/followers", "following_url": "https://api.github.com/users/Spacefish/following{/other_user}", "gists_url": "https://api.github.com/users/Spacefish/gists{/gist_id}", "starred_url": "https://api.github.com/users/Spacefish/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Spacefish/subscriptions", "organizations_url": "https://api.github.com/users/Spacefish/orgs", "repos_url": "https://api.github.com/users/Spacefish/repos", "events_url": "https://api.github.com/users/Spacefish/events{/privacy}", "received_events_url": "https://api.github.com/users/Spacefish/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,666
1,666
CONTRIBUTOR
null
Adds required pillow / PIL library to the layoutlmv3 training example, as it´s used to load the images during training. @sgugger, @patil-suraj tagging you as i don´t know who is reponsible for that example.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19663/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19663/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19663", "html_url": "https://github.com/huggingface/transformers/pull/19663", "diff_url": "https://github.com/huggingface/transformers/pull/19663.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19663.patch", "merged_at": 1666010517000 }
https://api.github.com/repos/huggingface/transformers/issues/19662
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19662/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19662/comments
https://api.github.com/repos/huggingface/transformers/issues/19662/events
https://github.com/huggingface/transformers/pull/19662
1,410,607,768
PR_kwDOCUB6oc5A4RU5
19,662
word replacement line #231
{ "login": "shreem-123", "id": 96364929, "node_id": "U_kgDOBb5pgQ", "avatar_url": "https://avatars.githubusercontent.com/u/96364929?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shreem-123", "html_url": "https://github.com/shreem-123", "followers_url": "https://api.github.com/users/shreem-123/followers", "following_url": "https://api.github.com/users/shreem-123/following{/other_user}", "gists_url": "https://api.github.com/users/shreem-123/gists{/gist_id}", "starred_url": "https://api.github.com/users/shreem-123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shreem-123/subscriptions", "organizations_url": "https://api.github.com/users/shreem-123/orgs", "repos_url": "https://api.github.com/users/shreem-123/repos", "events_url": "https://api.github.com/users/shreem-123/events{/privacy}", "received_events_url": "https://api.github.com/users/shreem-123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,666
1,666
CONTRIBUTOR
null
install->installation # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19662/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19662/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19662", "html_url": "https://github.com/huggingface/transformers/pull/19662", "diff_url": "https://github.com/huggingface/transformers/pull/19662.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19662.patch", "merged_at": 1666017635000 }
https://api.github.com/repos/huggingface/transformers/issues/19661
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19661/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19661/comments
https://api.github.com/repos/huggingface/transformers/issues/19661/events
https://github.com/huggingface/transformers/pull/19661
1,410,606,810
PR_kwDOCUB6oc5A4RIW
19,661
grammatical error line #218
{ "login": "shreem-123", "id": 96364929, "node_id": "U_kgDOBb5pgQ", "avatar_url": "https://avatars.githubusercontent.com/u/96364929?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shreem-123", "html_url": "https://github.com/shreem-123", "followers_url": "https://api.github.com/users/shreem-123/followers", "following_url": "https://api.github.com/users/shreem-123/following{/other_user}", "gists_url": "https://api.github.com/users/shreem-123/gists{/gist_id}", "starred_url": "https://api.github.com/users/shreem-123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shreem-123/subscriptions", "organizations_url": "https://api.github.com/users/shreem-123/orgs", "repos_url": "https://api.github.com/users/shreem-123/repos", "events_url": "https://api.github.com/users/shreem-123/events{/privacy}", "received_events_url": "https://api.github.com/users/shreem-123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for your PR, but the sentence is correct as it is. Your proposed modification changes the meaning." ]
1,665
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19661/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19661/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19661", "html_url": "https://github.com/huggingface/transformers/pull/19661", "diff_url": "https://github.com/huggingface/transformers/pull/19661.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19661.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19660
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19660/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19660/comments
https://api.github.com/repos/huggingface/transformers/issues/19660/events
https://github.com/huggingface/transformers/issues/19660
1,410,573,478
I_kwDOCUB6oc5UE6Sm
19,660
adding functionality to iterate pipelines over sentence pairs when using dataset
{ "login": "rohit1998", "id": 18055780, "node_id": "MDQ6VXNlcjE4MDU1Nzgw", "avatar_url": "https://avatars.githubusercontent.com/u/18055780?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rohit1998", "html_url": "https://github.com/rohit1998", "followers_url": "https://api.github.com/users/rohit1998/followers", "following_url": "https://api.github.com/users/rohit1998/following{/other_user}", "gists_url": "https://api.github.com/users/rohit1998/gists{/gist_id}", "starred_url": "https://api.github.com/users/rohit1998/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rohit1998/subscriptions", "organizations_url": "https://api.github.com/users/rohit1998/orgs", "repos_url": "https://api.github.com/users/rohit1998/repos", "events_url": "https://api.github.com/users/rohit1998/events{/privacy}", "received_events_url": "https://api.github.com/users/rohit1998/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Narsil ", "Hi @rohit1998 ,\r\n\r\nSounds like a good addition.\r\nIn general I think it's good if users understand how to create their own, but `KeyPairDataset` should fit quite nicely !", "Sure, I will try to add it and tag you to pr soon", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Wouldn't it be worth to add some generalisation like:\r\n```python \r\nclass KeyPairDataset(Dataset):\r\n def __init__(self, dataset: Dataset, key1: str, key2: str):\r\n self.dataset = dataset\r\n self.key1 = key1\r\n self.key2 = key2\r\n\r\n def __len__(self):\r\n return len(self.dataset)\r\n\r\n def __getitem__(self, i):\r\n return {self.key1:self.dataset[i][self.key1],self.key2:self.dataset[i][self.key2]}\r\n```", "I believe (correct me if i am wrong), but `text` and `text_pair` are conventions used in transformers library for two sentence case. If yes, making it more general would require changes in pipeline api." ]
1,665
1,678
1,669
CONTRIBUTOR
null
### Feature request to iterate on a dataset using pipelines, doc mentions this sample, [source](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#pipeline-batching) ``` import datasets from transformers import pipeline from transformers.pipelines.pt_utils import KeyDataset from tqdm.auto import tqdm pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0) dataset = datasets.load_dataset("superb", name="asr", split="test") # KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item # as we're not interested in the *target* part of the dataset. for out in tqdm(pipe(KeyDataset(dataset, "file"))): print(out) # {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"} # {"text": ....} # .... ``` Pipeline supports sending a sentence pair as a dict. [source](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/text_classification.py#L112) ``` pipe = pipeline('text-classification') out = pipe( {'text':'I like you', 'text_pair':'. I love you'}) ``` But there is no way to get pair of sentence using huggingface datasets. We can do this by simply adding a KeyPairDataset in [pt_utils](https://github.com/huggingface/transformers/blob/2ef774211733f0acf8d3415f9284c49ef219e991/src/transformers/pipelines/pt_utils.py). Something like. 
``` class KeyPairDataset(Dataset): def __init__(self, dataset: Dataset, key1: str, key2: str): self.dataset = dataset self.key1 = key1 self.key2 = key2 def __len__(self): return len(self.dataset) def __getitem__(self, i): return {'text':self.dataset[i][self.key1],'text_pair':self.dataset[i][self.key2]} ``` And then inference this using ``` dataset = Dataset.from_pandas(dataset_df[['sentence1', 'sentence2']]) pipe = pipeline('text-classification', model=args.input_path_model, device=0, num_workers=4) result = list(tqdm(pipe(KeyPairDataset(dataset, 'sentence1', 'sentence2'), batch_size=32), total=len(dataset))) ``` ### Motivation I am working on models that take sentence pair as input. Just something that could make my life easier instead of making my own datasets and data loaders or overriding the preprocesses function of pipeline datasets. ### Your contribution I can submit the pr.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19660/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19660/timeline
completed
null
null
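The `KeyPairDataset` proposed in the record above can be sketched without a torch dependency — the wrapper only needs `__len__`/`__getitem__` over any indexable sequence of dicts, emitting the `{'text', 'text_pair'}` shape the text-classification pipeline accepts. This is an illustrative stand-in, not the class that was merged; the real version would subclass `torch.utils.data.Dataset`:

```python
class KeyPairDataset:
    """Sketch of the proposed KeyPairDataset (hypothetical stand-in).

    Wraps an indexable dataset of dicts and yields {'text', 'text_pair'}
    items, the dict format transformers' text-classification pipeline
    accepts for sentence-pair inputs.
    """

    def __init__(self, dataset, key1, key2):
        self.dataset = dataset
        self.key1 = key1
        self.key2 = key2

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, i):
        # Map the dataset's own column names onto the pipeline's
        # conventional 'text' / 'text_pair' keys.
        row = self.dataset[i]
        return {"text": row[self.key1], "text_pair": row[self.key2]}


# Usage with a plain list of rows standing in for a datasets.Dataset:
rows = [
    {"sentence1": "I like you", "sentence2": "I love you"},
    {"sentence1": "good movie", "sentence2": "bad movie"},
]
pairs = KeyPairDataset(rows, "sentence1", "sentence2")
```

Iterating `pairs` inside `pipe(...)` would then feed one `{'text', 'text_pair'}` dict per example, exactly like the single-pair call shown in the issue body.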
https://api.github.com/repos/huggingface/transformers/issues/19659
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19659/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19659/comments
https://api.github.com/repos/huggingface/transformers/issues/19659/events
https://github.com/huggingface/transformers/pull/19659
1,410,561,794
PR_kwDOCUB6oc5A4IWU
19,659
adding functionality to iterate pipelines over sentence pairs when using dataset
{ "login": "rohit1998", "id": 18055780, "node_id": "MDQ6VXNlcjE4MDU1Nzgw", "avatar_url": "https://avatars.githubusercontent.com/u/18055780?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rohit1998", "html_url": "https://github.com/rohit1998", "followers_url": "https://api.github.com/users/rohit1998/followers", "following_url": "https://api.github.com/users/rohit1998/following{/other_user}", "gists_url": "https://api.github.com/users/rohit1998/gists{/gist_id}", "starred_url": "https://api.github.com/users/rohit1998/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rohit1998/subscriptions", "organizations_url": "https://api.github.com/users/rohit1998/orgs", "repos_url": "https://api.github.com/users/rohit1998/repos", "events_url": "https://api.github.com/users/rohit1998/events{/privacy}", "received_events_url": "https://api.github.com/users/rohit1998/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19659). All of your documentation changes will be reflected on that endpoint." ]
1,665
1,665
1,665
CONTRIBUTOR
null
# What does this PR do? This PR adds a new torch data set class to pt_utils.py that helps with using pipelines and datasets for sentence pair tasks. Usage can be simply like ``` dataset = Dataset.from_pandas(dataset_df[['sentence1', 'sentence2']]) pipe = pipeline('text-classification', model=args.input_path_model, device=0, num_workers=4) result = list(tqdm(pipe(KeyPairDataset(dataset, 'sentence1', 'sentence2'), batch_size=32), total=len(dataset))) ``` @LysandreJik please have a look. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19659/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19659/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19659", "html_url": "https://github.com/huggingface/transformers/pull/19659", "diff_url": "https://github.com/huggingface/transformers/pull/19659.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19659.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19658
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19658/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19658/comments
https://api.github.com/repos/huggingface/transformers/issues/19658/events
https://github.com/huggingface/transformers/pull/19658
1,410,521,578
PR_kwDOCUB6oc5A4AsO
19,658
[Doctest] Add `configuration_trocr.py`
{ "login": "thliang01", "id": 21286104, "node_id": "MDQ6VXNlcjIxMjg2MTA0", "avatar_url": "https://avatars.githubusercontent.com/u/21286104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thliang01", "html_url": "https://github.com/thliang01", "followers_url": "https://api.github.com/users/thliang01/followers", "following_url": "https://api.github.com/users/thliang01/following{/other_user}", "gists_url": "https://api.github.com/users/thliang01/gists{/gist_id}", "starred_url": "https://api.github.com/users/thliang01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thliang01/subscriptions", "organizations_url": "https://api.github.com/users/thliang01/orgs", "repos_url": "https://api.github.com/users/thliang01/repos", "events_url": "https://api.github.com/users/thliang01/events{/privacy}", "received_events_url": "https://api.github.com/users/thliang01/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,665
1,665
CONTRIBUTOR
null
* trocr Config for doctest * ran make style # What does this PR do? Add `configuration_trocr.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you please take a look at it? Thanks =)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19658/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19658/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19658", "html_url": "https://github.com/huggingface/transformers/pull/19658", "diff_url": "https://github.com/huggingface/transformers/pull/19658.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19658.patch", "merged_at": 1665996817000 }
https://api.github.com/repos/huggingface/transformers/issues/19657
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19657/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19657/comments
https://api.github.com/repos/huggingface/transformers/issues/19657/events
https://github.com/huggingface/transformers/pull/19657
1,410,519,137
PR_kwDOCUB6oc5A4AOT
19,657
Fix pipeline predict transform methods
{ "login": "s-udhaya", "id": 2215597, "node_id": "MDQ6VXNlcjIyMTU1OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/2215597?v=4", "gravatar_id": "", "url": "https://api.github.com/users/s-udhaya", "html_url": "https://github.com/s-udhaya", "followers_url": "https://api.github.com/users/s-udhaya/followers", "following_url": "https://api.github.com/users/s-udhaya/following{/other_user}", "gists_url": "https://api.github.com/users/s-udhaya/gists{/gist_id}", "starred_url": "https://api.github.com/users/s-udhaya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/s-udhaya/subscriptions", "organizations_url": "https://api.github.com/users/s-udhaya/orgs", "repos_url": "https://api.github.com/users/s-udhaya/repos", "events_url": "https://api.github.com/users/s-udhaya/events{/privacy}", "received_events_url": "https://api.github.com/users/s-udhaya/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? This PR fixes pipeline's predict and transform methods which are wrapper around ` __call__` of pipeline class. `__call__ `requires one positional argument, However these wrapper methods pass the incoming argument as a keyword argument to `__call__` method which leads to failure. Hence in this commit the keyword argument is modified to positional argument and basic tests are added to make sure this does not break again in future. Fixes #19289 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik @Narsil
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19657/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19657/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19657", "html_url": "https://github.com/huggingface/transformers/pull/19657", "diff_url": "https://github.com/huggingface/transformers/pull/19657.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19657.patch", "merged_at": 1666011980000 }
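The bug fixed in the record above — `predict`/`transform` wrappers forwarding their input as a keyword argument to a `__call__` that only accepts it positionally — can be reproduced with a toy class. `TinyPipeline` is a hypothetical stand-in, not the real `transformers.Pipeline`:

```python
class TinyPipeline:
    """Toy stand-in for a transformers Pipeline (hypothetical class)."""

    def __call__(self, inputs, **kwargs):
        # Like the real Pipeline.__call__, the inputs arrive as the first
        # positional argument; extra options go into **kwargs.
        if isinstance(inputs, str):
            inputs = [inputs]
        return [{"label": "POSITIVE", "score": 1.0} for _ in inputs]

    def predict(self, X):
        # Before the fix the wrapper did `return self(X=X)`, which raises
        # TypeError: __call__ has no parameter named 'X' and its required
        # positional argument `inputs` goes unfilled.
        return self(X)

    def transform(self, X):
        # Same fix: forward positionally rather than as a keyword.
        return self(X)
```

With the keyword form, `self(X=X)` puts `X` into `**kwargs` and leaves `inputs` missing, so the call fails before any model code runs; passing positionally restores the scikit-learn-style `predict`/`transform` API the PR describes.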
https://api.github.com/repos/huggingface/transformers/issues/19656
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19656/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19656/comments
https://api.github.com/repos/huggingface/transformers/issues/19656/events
https://github.com/huggingface/transformers/issues/19656
1,410,443,048
I_kwDOCUB6oc5UEaco
19,656
Pooled output from DeBERTa(v2) model
{ "login": "IgorPidik", "id": 10901373, "node_id": "MDQ6VXNlcjEwOTAxMzcz", "avatar_url": "https://avatars.githubusercontent.com/u/10901373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IgorPidik", "html_url": "https://github.com/IgorPidik", "followers_url": "https://api.github.com/users/IgorPidik/followers", "following_url": "https://api.github.com/users/IgorPidik/following{/other_user}", "gists_url": "https://api.github.com/users/IgorPidik/gists{/gist_id}", "starred_url": "https://api.github.com/users/IgorPidik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IgorPidik/subscriptions", "organizations_url": "https://api.github.com/users/IgorPidik/orgs", "repos_url": "https://api.github.com/users/IgorPidik/repos", "events_url": "https://api.github.com/users/IgorPidik/events{/privacy}", "received_events_url": "https://api.github.com/users/IgorPidik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for your suggestion! This will sadly be a breaking change as it will change the way the weights are named, and thus break compatibility with all DeBERTa models on the hub. So even if your suggestion makes a ton of sense, I don't think we will be able to implement it.", "@sgugger thank you for your reply! I see why the implementation of the heads should not be altered, nevertheless, wouldn't it still make sense to at least add the pooler to the base model (the deberta model without any head)? Motivation being the first and the last points from the list above.\r\n\r\n", "We have plenty of other models (like ELECTRA) where the pooler is not implemented. As a general rule of thumb, it depends on whether it was in the pretraining objective or not.", "I understand and thank you! I will close the issue then." ]
1,665
1,666
1,666
NONE
null
### Feature request Add a context pooler and include the `pooler_output` in the result of the `forward` method of the `(TF)Deberta(V2)Model`. Changes required: * introduce `add_pooling_layer` flag in the constructor * add pooler * change return type from `BaseModelOutput` to `BaseModelOutputWithPooling` * update heads which currently do their own context pooling ### Motivation * Simplification of use cases where the model is used to generate context embeddings (whether the embeddings being the end goal or only used as an input to additional layers in a custom model) * Currently multiple heads require the pooled output and hence it is re-implemented in multiple places. The complexity of the heads and the redundancy can be decreased by moving this operation to the base model * Consistency with BERT and RoBERTa implementations which already operate in this fashion. Additionally, this consistency can simplify use cases where the user works with context embeddings and wants to experiment with different models. E.g.: BERT and RoBERTa can be easily exchanged. On the other hand, swapping in DeBERTa requires the user to handle the context pooler ### Your contribution I implemented the necessary changes in my fork and can submit a PR
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19656/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19656/timeline
completed
null
null
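The "context pooler" the issue above asks to centralize can be sketched in plain Python: take the hidden state of the first token and pass it through a dense layer with a tanh activation. The function and its parameter names are illustrative, not DeBERTa's actual implementation, and real model code would use tensors rather than lists:

```python
import math


def context_pool(hidden_states, weight, bias):
    """Illustrative sketch of first-token ("[CLS]"-style) context pooling.

    hidden_states: list of per-token vectors (seq_len x hidden_dim)
    weight:        dense-layer weight matrix (out_dim x hidden_dim)
    bias:          dense-layer bias vector (out_dim)
    """
    # Pooling step: keep only the first token's hidden state.
    first = hidden_states[0]
    # Dense layer: weight @ first + bias, written out with plain loops.
    dense = [
        sum(w * x for w, x in zip(row, first)) + b
        for row, b in zip(weight, bias)
    ]
    # Tanh activation, as in BERT's pooler.
    return [math.tanh(v) for v in dense]
```

Moving this step into the base model, as the issue proposes, would let each classification head consume `pooler_output` directly instead of re-implementing the same pooling; the maintainers' reply explains why that change was rejected for weight-compatibility reasons.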
https://api.github.com/repos/huggingface/transformers/issues/19655
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19655/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19655/comments
https://api.github.com/repos/huggingface/transformers/issues/19655/events
https://github.com/huggingface/transformers/pull/19655
1,410,385,402
PR_kwDOCUB6oc5A3m-7
19,655
Removed Bert interdependency from Funnel transformer
{ "login": "mukesh663", "id": 75234968, "node_id": "MDQ6VXNlcjc1MjM0OTY4", "avatar_url": "https://avatars.githubusercontent.com/u/75234968?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mukesh663", "html_url": "https://github.com/mukesh663", "followers_url": "https://api.github.com/users/mukesh663/followers", "following_url": "https://api.github.com/users/mukesh663/following{/other_user}", "gists_url": "https://api.github.com/users/mukesh663/gists{/gist_id}", "starred_url": "https://api.github.com/users/mukesh663/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mukesh663/subscriptions", "organizations_url": "https://api.github.com/users/mukesh663/orgs", "repos_url": "https://api.github.com/users/mukesh663/repos", "events_url": "https://api.github.com/users/mukesh663/events{/privacy}", "received_events_url": "https://api.github.com/users/mukesh663/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> You just need to remove those `:obj:` :-)\r\n\r\nMade all the necessary changes. Thanks for looking into it." ]
1,665
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? Hi @sgugger, Fixes #19303 - The `BertTokenizer` dependency has been removed from `FunnelTokenizer` - The `BertTokenizerFast` dependency has been removed from `FunnelTokenizerFast` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19655/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19655", "html_url": "https://github.com/huggingface/transformers/pull/19655", "diff_url": "https://github.com/huggingface/transformers/pull/19655.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19655.patch", "merged_at": 1666015451000 }
https://api.github.com/repos/huggingface/transformers/issues/19654
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19654/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19654/comments
https://api.github.com/repos/huggingface/transformers/issues/19654/events
https://github.com/huggingface/transformers/pull/19654
1,410,227,955
PR_kwDOCUB6oc5A3JnJ
19,654
Clean up deprecation warnings
{ "login": "Davidy22", "id": 872968, "node_id": "MDQ6VXNlcjg3Mjk2OA==", "avatar_url": "https://avatars.githubusercontent.com/u/872968?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Davidy22", "html_url": "https://github.com/Davidy22", "followers_url": "https://api.github.com/users/Davidy22/followers", "following_url": "https://api.github.com/users/Davidy22/following{/other_user}", "gists_url": "https://api.github.com/users/Davidy22/gists{/gist_id}", "starred_url": "https://api.github.com/users/Davidy22/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Davidy22/subscriptions", "organizations_url": "https://api.github.com/users/Davidy22/orgs", "repos_url": "https://api.github.com/users/Davidy22/repos", "events_url": "https://api.github.com/users/Davidy22/events{/privacy}", "received_events_url": "https://api.github.com/users/Davidy22/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@Davidy22 Thanks for opening this PR and all of the fixes! 💪 \r\n\r\nJust taking a look now. It would really helpful as a reviewer, and for future reference if you could give a quick summary of the deprecation warnings you tackled.\r\n\r\nAs a quick note regarding `PIL.Image.XXX` -> `PIL.Image.Resampling.XXX`, these changes might be breaking (c.f. [open transformers issue](https://github.com/huggingface/transformers/issues/19569) and [related PR in diffusers](https://github.com/huggingface/diffusers/pull/788) as it requires `Pillow>=9.1.0` which is not currently enforced. @sgugger What is the typical strategy for handling dependancy version updates? ", "Thanks for pointing this out Amy! In this case, Pillow 9.1.0 is too recent to be pinned as a minimum version (our general rule of thumb is to provide a support for 2 years and this is only from April). So we will need to move those objects to our `image_utils.py` where we can do some checks like:\r\n```py\r\nif version.parse(version.parse(PIL.__version__).base_version) >= version.parse(\"9.1.0\"):\r\n # Define enum using `PIL.Image.Resampling`\r\nelse:\r\n # Define enum using `PIL.Image`\r\n```\r\n\r\nAs for the actual enum, we could name it `PILImageResampling`?", "Oh whoops I probably should have taken a couple more notes, I only wrote down a couple of things that stood out as things that'd functionally change something. 
Summary from re-skimming through the listed changes:\r\n\r\n- np types swapped with equivalent python default types\r\n- PIL Resampling options switched to the new recommended location in the PIL library\r\n- Some strings with non-python escapes changed to r strings.\r\n- dict_type -> dict parameter in dictionary.Dictionary\r\n- Usages of past changed to past_key_values\r\n- topk -> top_k", "Added PILImageResampling, dealt with some funky import issues in one file that weren't happening in any of the other files, don't know why specifically the flava test file would have issues importing from image_utils, hoping it's not some thing that actually also happens in other files but doesn't get surfaced because it's not covered or something" ]
1,665
1,666
1,666
CONTRIBUTOR
null
The deprecation spring cleaning mentioned in #19371 Notes: Changed some strings in tests to raw strings, which will change the literal content of the strings as they are fed into whatever machine handles them. Test cases for past in the past/past_key_values switch changed/removed due to warning of impending removal Most of the warnings defined and thrown by transformers functions were left alone # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? 
Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19654/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19654/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19654", "html_url": "https://github.com/huggingface/transformers/pull/19654", "diff_url": "https://github.com/huggingface/transformers/pull/19654.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19654.patch", "merged_at": 1666114488000 }
https://api.github.com/repos/huggingface/transformers/issues/19653
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19653/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19653/comments
https://api.github.com/repos/huggingface/transformers/issues/19653/events
https://github.com/huggingface/transformers/issues/19653
1,410,224,664
I_kwDOCUB6oc5UDlIY
19,653
TFBartForSequenceClassification
{ "login": "uglyboxer", "id": 12128540, "node_id": "MDQ6VXNlcjEyMTI4NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/12128540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/uglyboxer", "html_url": "https://github.com/uglyboxer", "followers_url": "https://api.github.com/users/uglyboxer/followers", "following_url": "https://api.github.com/users/uglyboxer/following{/other_user}", "gists_url": "https://api.github.com/users/uglyboxer/gists{/gist_id}", "starred_url": "https://api.github.com/users/uglyboxer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/uglyboxer/subscriptions", "organizations_url": "https://api.github.com/users/uglyboxer/orgs", "repos_url": "https://api.github.com/users/uglyboxer/repos", "events_url": "https://api.github.com/users/uglyboxer/events{/privacy}", "received_events_url": "https://api.github.com/users/uglyboxer/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" } ]
closed
false
null
[]
[ "can you explain it in detail.", "I have been working on a project for zero-shot classification using `facebook/bart-large-mnli`. After an initial prototype using the transformers pipeline I began to work with the underlying Bart model directly. As our project is in Tensorflow, I soon noticed there was no equivalent of `class BartForSequenceClassification` on the TF side. \r\n\r\nI have stitched it together: adding the classification head and loading the weights from the torch.bin file. If you think it would be useful, I could put up a PR add it to https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/modeling_tf_bart.py along side of the other TF ports for Bart.", "Hey @uglyboxer 👋 It is indeed missing. If you have a working implementation, we'd be interested in adding it to our library!", "Cool. I'll put something together this week." ]
1,665
1,675
1,675
CONTRIBUTOR
null
The Tensorflow version of `BartForSequenceClassification` seems to be missing. I've been putting a port together for another use case. Any interest in adding it to the repo?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19653/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19653/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19652
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19652/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19652/comments
https://api.github.com/repos/huggingface/transformers/issues/19652/events
https://github.com/huggingface/transformers/pull/19652
1,410,214,463
PR_kwDOCUB6oc5A3HI4
19,652
[Doctest] Add configuration_trocr.py
{ "login": "thliang01", "id": 21286104, "node_id": "MDQ6VXNlcjIxMjg2MTA0", "avatar_url": "https://avatars.githubusercontent.com/u/21286104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thliang01", "html_url": "https://github.com/thliang01", "followers_url": "https://api.github.com/users/thliang01/followers", "following_url": "https://api.github.com/users/thliang01/following{/other_user}", "gists_url": "https://api.github.com/users/thliang01/gists{/gist_id}", "starred_url": "https://api.github.com/users/thliang01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thliang01/subscriptions", "organizations_url": "https://api.github.com/users/thliang01/orgs", "repos_url": "https://api.github.com/users/thliang01/repos", "events_url": "https://api.github.com/users/thliang01/events{/privacy}", "received_events_url": "https://api.github.com/users/thliang01/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,665
1,665
1,665
CONTRIBUTOR
null
# What does this PR do? Add `configuration_trocr.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19652/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19652/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19652", "html_url": "https://github.com/huggingface/transformers/pull/19652", "diff_url": "https://github.com/huggingface/transformers/pull/19652.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19652.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19651
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19651/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19651/comments
https://api.github.com/repos/huggingface/transformers/issues/19651/events
https://github.com/huggingface/transformers/pull/19651
1,410,210,316
PR_kwDOCUB6oc5A3GY4
19,651
[Doctest] Add configuration_transfo_xl.py
{ "login": "thliang01", "id": 21286104, "node_id": "MDQ6VXNlcjIxMjg2MTA0", "avatar_url": "https://avatars.githubusercontent.com/u/21286104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thliang01", "html_url": "https://github.com/thliang01", "followers_url": "https://api.github.com/users/thliang01/followers", "following_url": "https://api.github.com/users/thliang01/following{/other_user}", "gists_url": "https://api.github.com/users/thliang01/gists{/gist_id}", "starred_url": "https://api.github.com/users/thliang01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thliang01/subscriptions", "organizations_url": "https://api.github.com/users/thliang01/orgs", "repos_url": "https://api.github.com/users/thliang01/repos", "events_url": "https://api.github.com/users/thliang01/events{/privacy}", "received_events_url": "https://api.github.com/users/thliang01/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@thliang01 Could you try to resolve the conflict :-) thank you 🙏 ", "Hi @thliang01 I tried to fix the conflict and make it clean. It should work now - I will merge once the CI are all good. Thank you again for your contribution 👍 💯 !" ]
1,665
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? Add `configuration_transfo_xl.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you please take a look at it? Thanks =)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19651/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19651/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19651", "html_url": "https://github.com/huggingface/transformers/pull/19651", "diff_url": "https://github.com/huggingface/transformers/pull/19651.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19651.patch", "merged_at": 1666018074000 }
https://api.github.com/repos/huggingface/transformers/issues/19650
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19650/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19650/comments
https://api.github.com/repos/huggingface/transformers/issues/19650/events
https://github.com/huggingface/transformers/pull/19650
1,410,206,183
PR_kwDOCUB6oc5A3FsZ
19,650
[Doctest] Add configuration_xlnet.py
{ "login": "thliang01", "id": 21286104, "node_id": "MDQ6VXNlcjIxMjg2MTA0", "avatar_url": "https://avatars.githubusercontent.com/u/21286104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thliang01", "html_url": "https://github.com/thliang01", "followers_url": "https://api.github.com/users/thliang01/followers", "following_url": "https://api.github.com/users/thliang01/following{/other_user}", "gists_url": "https://api.github.com/users/thliang01/gists{/gist_id}", "starred_url": "https://api.github.com/users/thliang01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thliang01/subscriptions", "organizations_url": "https://api.github.com/users/thliang01/orgs", "repos_url": "https://api.github.com/users/thliang01/repos", "events_url": "https://api.github.com/users/thliang01/events{/privacy}", "received_events_url": "https://api.github.com/users/thliang01/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,665
1,665
CONTRIBUTOR
null
# What does this PR do? Add configuration_xlnet.py to utils/documentation_tests.txt for doctest. Based on issue #19487 @sgugger could you please take a look at it? Thanks =)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19650/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19650/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19650", "html_url": "https://github.com/huggingface/transformers/pull/19650", "diff_url": "https://github.com/huggingface/transformers/pull/19650.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19650.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19649
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19649/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19649/comments
https://api.github.com/repos/huggingface/transformers/issues/19649/events
https://github.com/huggingface/transformers/pull/19649
1,410,202,037
PR_kwDOCUB6oc5A3E9W
19,649
[Doctest] Add configuration_xlnet.py
{ "login": "AymenBer99", "id": 67442508, "node_id": "MDQ6VXNlcjY3NDQyNTA4", "avatar_url": "https://avatars.githubusercontent.com/u/67442508?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AymenBer99", "html_url": "https://github.com/AymenBer99", "followers_url": "https://api.github.com/users/AymenBer99/followers", "following_url": "https://api.github.com/users/AymenBer99/following{/other_user}", "gists_url": "https://api.github.com/users/AymenBer99/gists{/gist_id}", "starred_url": "https://api.github.com/users/AymenBer99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AymenBer99/subscriptions", "organizations_url": "https://api.github.com/users/AymenBer99/orgs", "repos_url": "https://api.github.com/users/AymenBer99/repos", "events_url": "https://api.github.com/users/AymenBer99/events{/privacy}", "received_events_url": "https://api.github.com/users/AymenBer99/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,666
1,665
CONTRIBUTOR
null
Add configuration_xlnet.py to utils/documentation_tests.txt for doctest. Based #19487 @ydshieh could you please check it? Thanks :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19649/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19649/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19649", "html_url": "https://github.com/huggingface/transformers/pull/19649", "diff_url": "https://github.com/huggingface/transformers/pull/19649.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19649.patch", "merged_at": 1665996337000 }
https://api.github.com/repos/huggingface/transformers/issues/19648
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19648/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19648/comments
https://api.github.com/repos/huggingface/transformers/issues/19648/events
https://github.com/huggingface/transformers/pull/19648
1,410,181,970
PR_kwDOCUB6oc5A3BON
19,648
fix image2test args forwarding
{ "login": "kventinel", "id": 14203222, "node_id": "MDQ6VXNlcjE0MjAzMjIy", "avatar_url": "https://avatars.githubusercontent.com/u/14203222?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kventinel", "html_url": "https://github.com/kventinel", "followers_url": "https://api.github.com/users/kventinel/followers", "following_url": "https://api.github.com/users/kventinel/following{/other_user}", "gists_url": "https://api.github.com/users/kventinel/gists{/gist_id}", "starred_url": "https://api.github.com/users/kventinel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kventinel/subscriptions", "organizations_url": "https://api.github.com/users/kventinel/orgs", "repos_url": "https://api.github.com/users/kventinel/repos", "events_url": "https://api.github.com/users/kventinel/events{/privacy}", "received_events_url": "https://api.github.com/users/kventinel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I took the liberty of making the changes I was talking about. Feel free to revert if this doesn't suit you.\r\n\r\nBeing stateless in the pipeline is essential, we really cannot use `self` to pass around information (it messes with threading and batching)" ]
1,665
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #19628 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19648/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19648/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19648", "html_url": "https://github.com/huggingface/transformers/pull/19648", "diff_url": "https://github.com/huggingface/transformers/pull/19648.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19648.patch", "merged_at": 1666619364000 }
https://api.github.com/repos/huggingface/transformers/issues/19647
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19647/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19647/comments
https://api.github.com/repos/huggingface/transformers/issues/19647/events
https://github.com/huggingface/transformers/pull/19647
1,410,174,869
PR_kwDOCUB6oc5A2_4l
19,647
[Doctest] Add `configuration_clip.py`
{ "login": "daspartho", "id": 59410571, "node_id": "MDQ6VXNlcjU5NDEwNTcx", "avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daspartho", "html_url": "https://github.com/daspartho", "followers_url": "https://api.github.com/users/daspartho/followers", "following_url": "https://api.github.com/users/daspartho/following{/other_user}", "gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}", "starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daspartho/subscriptions", "organizations_url": "https://api.github.com/users/daspartho/orgs", "repos_url": "https://api.github.com/users/daspartho/repos", "events_url": "https://api.github.com/users/daspartho/events{/privacy}", "received_events_url": "https://api.github.com/users/daspartho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@daspartho Very cool!\r\n\r\nWould you like to take the challenge to add doc example to `CLIPConfig`, which will use `from_text_vision_configs`. This will help a lot of the library users 🔥 Let me know :-)", "@ydshieh Sure! I'd like to take on the task =)", "@ydshieh made some changes; could you please check if it looks good?", "@ydshieh added an example using the `from_text_vision_configs` method; could you please review the changes to see if they're okay? \r\nThanks :)", "Just a final comment and we are ready to merge!", "@ydshieh made the suggested changes; good to go =)", "Thank you for the PR and your patience :-) @daspartho " ]
1,665
1,666
1,666
CONTRIBUTOR
null
Add `configuration_clip.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 Noticed the model initialization and config initialization lines were switched so made some additional changes to correct it. @ydshieh could you please check if it's okay? Thank you =)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19647/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19647/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19647", "html_url": "https://github.com/huggingface/transformers/pull/19647", "diff_url": "https://github.com/huggingface/transformers/pull/19647.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19647.patch", "merged_at": 1666165886000 }
https://api.github.com/repos/huggingface/transformers/issues/19646
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19646/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19646/comments
https://api.github.com/repos/huggingface/transformers/issues/19646/events
https://github.com/huggingface/transformers/pull/19646
1,410,161,202
PR_kwDOCUB6oc5A29R_
19,646
[Doctest] Add configuration_realm.py
{ "login": "ak04p", "id": 97516055, "node_id": "U_kgDOBc_6Fw", "avatar_url": "https://avatars.githubusercontent.com/u/97516055?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ak04p", "html_url": "https://github.com/ak04p", "followers_url": "https://api.github.com/users/ak04p/followers", "following_url": "https://api.github.com/users/ak04p/following{/other_user}", "gists_url": "https://api.github.com/users/ak04p/gists{/gist_id}", "starred_url": "https://api.github.com/users/ak04p/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ak04p/subscriptions", "organizations_url": "https://api.github.com/users/ak04p/orgs", "repos_url": "https://api.github.com/users/ak04p/repos", "events_url": "https://api.github.com/users/ak04p/events{/privacy}", "received_events_url": "https://api.github.com/users/ak04p/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @ak04p Thank you for the PR. We should also change this line\r\n\r\n```\r\nfrom transformers import RealmEmbedder, RealmConfig\r\n```", "Thank you for the feedback, I'll correct it." ]
1,665
1,666
1,666
CONTRIBUTOR
null
<!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Add configuration_realm.py to utils/documentation_tests.txt for doctest. Based on issue #19487 @ydshieh could you please check it? Thank you
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19646/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19646", "html_url": "https://github.com/huggingface/transformers/pull/19646", "diff_url": "https://github.com/huggingface/transformers/pull/19646.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19646.patch", "merged_at": 1666032804000 }
https://api.github.com/repos/huggingface/transformers/issues/19645
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19645/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19645/comments
https://api.github.com/repos/huggingface/transformers/issues/19645/events
https://github.com/huggingface/transformers/pull/19645
1,410,154,948
PR_kwDOCUB6oc5A28Gc
19,645
Fix a typo in the preprocessing tutorial
{ "login": "Quasar-Kim", "id": 35187730, "node_id": "MDQ6VXNlcjM1MTg3NzMw", "avatar_url": "https://avatars.githubusercontent.com/u/35187730?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Quasar-Kim", "html_url": "https://github.com/Quasar-Kim", "followers_url": "https://api.github.com/users/Quasar-Kim/followers", "following_url": "https://api.github.com/users/Quasar-Kim/following{/other_user}", "gists_url": "https://api.github.com/users/Quasar-Kim/gists{/gist_id}", "starred_url": "https://api.github.com/users/Quasar-Kim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Quasar-Kim/subscriptions", "organizations_url": "https://api.github.com/users/Quasar-Kim/orgs", "repos_url": "https://api.github.com/users/Quasar-Kim/repos", "events_url": "https://api.github.com/users/Quasar-Kim/events{/privacy}", "received_events_url": "https://api.github.com/users/Quasar-Kim/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "My fault, there's nothing wrong with the tutorial. 😮‍💨", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19645). All of your documentation changes will be reflected on that endpoint." ]
1,665
1,665
1,665
NONE
null
# What does this PR do? Fixed a typo in transformers [tutorials > preprocess](https://huggingface.co/docs/transformers/preprocessing). Currently the code and its output do not match. Please see the last two cells of this [colab notebook](https://colab.research.google.com/drive/18WjgFPtQu4n8k6qAWtpsjVwDmICDrcfD#scrollTo=4LsMdS8plf-K); `dataset[0]["image"]` is a PIL image, not a dictionary as presented in the current version of the tutorial. It should be `dataset[0]`. (Please ignore the change at line 490, Github is showing the wrong diff. There are no actual changes.) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19645/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19645/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19645", "html_url": "https://github.com/huggingface/transformers/pull/19645", "diff_url": "https://github.com/huggingface/transformers/pull/19645.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19645.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19644
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19644/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19644/comments
https://api.github.com/repos/huggingface/transformers/issues/19644/events
https://github.com/huggingface/transformers/pull/19644
1,410,152,931
PR_kwDOCUB6oc5A27s8
19,644
Improve DETR models
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? This PR: - [x] fixes Deformable DETR's loss function - [x] adds more copied from statements for consistency - [x] fixes Conditional DETR's integration tests. As pointed out in #18948, Deformable DETR uses the same loss function and Hungarian matcher as Conditional DETR (use of sigmoid instead of softmax, and not including the no-object class). This PR also improves the (original; conditional; deformable) DETR models by improving the docs and adding more Copied from statements.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19644/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19644/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19644", "html_url": "https://github.com/huggingface/transformers/pull/19644", "diff_url": "https://github.com/huggingface/transformers/pull/19644.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19644.patch", "merged_at": 1666081754000 }
https://api.github.com/repos/huggingface/transformers/issues/19643
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19643/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19643/comments
https://api.github.com/repos/huggingface/transformers/issues/19643/events
https://github.com/huggingface/transformers/pull/19643
1,410,147,187
PR_kwDOCUB6oc5A26lr
19,643
[Doctest] Add configuration_convbert.py
{ "login": "AymenBer99", "id": 67442508, "node_id": "MDQ6VXNlcjY3NDQyNTA4", "avatar_url": "https://avatars.githubusercontent.com/u/67442508?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AymenBer99", "html_url": "https://github.com/AymenBer99", "followers_url": "https://api.github.com/users/AymenBer99/followers", "following_url": "https://api.github.com/users/AymenBer99/following{/other_user}", "gists_url": "https://api.github.com/users/AymenBer99/gists{/gist_id}", "starred_url": "https://api.github.com/users/AymenBer99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AymenBer99/subscriptions", "organizations_url": "https://api.github.com/users/AymenBer99/orgs", "repos_url": "https://api.github.com/users/AymenBer99/repos", "events_url": "https://api.github.com/users/AymenBer99/events{/privacy}", "received_events_url": "https://api.github.com/users/AymenBer99/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thank you again, @AymenBer99 🚀 " ]
1,665
1,666
1,666
CONTRIBUTOR
null
Add configuration_convbert.py to utils/documentation_tests.txt for doctest. Based on #19487 @ydshieh could you please check it? Thanks :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19643/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19643/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19643", "html_url": "https://github.com/huggingface/transformers/pull/19643", "diff_url": "https://github.com/huggingface/transformers/pull/19643.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19643.patch", "merged_at": 1666031358000 }
https://api.github.com/repos/huggingface/transformers/issues/19642
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19642/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19642/comments
https://api.github.com/repos/huggingface/transformers/issues/19642/events
https://github.com/huggingface/transformers/pull/19642
1,410,140,236
PR_kwDOCUB6oc5A25QF
19,642
get rid from bart attention copypaste
{ "login": "kventinel", "id": 14203222, "node_id": "MDQ6VXNlcjE0MjAzMjIy", "avatar_url": "https://avatars.githubusercontent.com/u/14203222?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kventinel", "html_url": "https://github.com/kventinel", "followers_url": "https://api.github.com/users/kventinel/followers", "following_url": "https://api.github.com/users/kventinel/following{/other_user}", "gists_url": "https://api.github.com/users/kventinel/gists{/gist_id}", "starred_url": "https://api.github.com/users/kventinel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kventinel/subscriptions", "organizations_url": "https://api.github.com/users/kventinel/orgs", "repos_url": "https://api.github.com/users/kventinel/repos", "events_url": "https://api.github.com/users/kventinel/events{/privacy}", "received_events_url": "https://api.github.com/users/kventinel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19642). All of your documentation changes will be reflected on that endpoint.", "Thanks for your PR, but having the models be independent of each other is a core principle of the philosophy of the library. You can learn more about it in [this blog post](https://huggingface.co/blog/transformers-design-philosophy).", "> Thanks for your PR, but having the models be independent of each other is a core principle of the philosophy of the library. You can learn more about it in [this blog post](https://huggingface.co/blog/transformers-design-philosophy).\r\n\r\nEven small functions like `expand_mask`?\r\n\r\nAlso, in some files this PR caught a problem where some functions/classes were copied without the `# Copied from` mechanism.\r\n\r\nP.S. In some ways this PR is not very different from your ~D~RY philosophy, because arguments like being easy to patch in one file still exist, since anyone can change Attention in their own file however they want using just copy and change." ]
1,665
1,667
1,667
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19642/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19642/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19642", "html_url": "https://github.com/huggingface/transformers/pull/19642", "diff_url": "https://github.com/huggingface/transformers/pull/19642.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19642.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19641
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19641/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19641/comments
https://api.github.com/repos/huggingface/transformers/issues/19641/events
https://github.com/huggingface/transformers/pull/19641
1,410,137,324
PR_kwDOCUB6oc5A24tO
19,641
[Doctest] Add configuration_conditional_detr.py
{ "login": "AymenBer99", "id": 67442508, "node_id": "MDQ6VXNlcjY3NDQyNTA4", "avatar_url": "https://avatars.githubusercontent.com/u/67442508?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AymenBer99", "html_url": "https://github.com/AymenBer99", "followers_url": "https://api.github.com/users/AymenBer99/followers", "following_url": "https://api.github.com/users/AymenBer99/following{/other_user}", "gists_url": "https://api.github.com/users/AymenBer99/gists{/gist_id}", "starred_url": "https://api.github.com/users/AymenBer99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AymenBer99/subscriptions", "organizations_url": "https://api.github.com/users/AymenBer99/orgs", "repos_url": "https://api.github.com/users/AymenBer99/repos", "events_url": "https://api.github.com/users/AymenBer99/events{/privacy}", "received_events_url": "https://api.github.com/users/AymenBer99/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,666
1,665
CONTRIBUTOR
null
Add configuration_conditional_detr.py to utils/documentation_tests.txt for doctest. Based on #19487 @ydshieh could you please check it? Thanks :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19641/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19641/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19641", "html_url": "https://github.com/huggingface/transformers/pull/19641", "diff_url": "https://github.com/huggingface/transformers/pull/19641.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19641.patch", "merged_at": 1665996175000 }
https://api.github.com/repos/huggingface/transformers/issues/19640
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19640/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19640/comments
https://api.github.com/repos/huggingface/transformers/issues/19640/events
https://github.com/huggingface/transformers/pull/19640
1,410,109,770
PR_kwDOCUB6oc5A2zXW
19,640
Fixed the docstring and type hint for forced_decoder_ids option in Ge…
{ "login": "koreyou", "id": 5196226, "node_id": "MDQ6VXNlcjUxOTYyMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/5196226?v=4", "gravatar_id": "", "url": "https://api.github.com/users/koreyou", "html_url": "https://github.com/koreyou", "followers_url": "https://api.github.com/users/koreyou/followers", "following_url": "https://api.github.com/users/koreyou/following{/other_user}", "gists_url": "https://api.github.com/users/koreyou/gists{/gist_id}", "starred_url": "https://api.github.com/users/koreyou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/koreyou/subscriptions", "organizations_url": "https://api.github.com/users/koreyou/orgs", "repos_url": "https://api.github.com/users/koreyou/repos", "events_url": "https://api.github.com/users/koreyou/events{/privacy}", "received_events_url": "https://api.github.com/users/koreyou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@gante Thanks for the comment! I've taken in your suggestion in the new commit. Please proceed with the merge if it is looking good.", "@koreyou Awesome, thank you for the changes! I will merge as soon as CI turns to green :)" ]
1,665
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? This PR fixes #19602, where the docstring and type hint for the forced_decoder_ids option in GenerationMixin.generate were inconsistent with the actual implementation. Fixes #19602 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gante suggested that I send a PR for issue #19602.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19640/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19640/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19640", "html_url": "https://github.com/huggingface/transformers/pull/19640", "diff_url": "https://github.com/huggingface/transformers/pull/19640.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19640.patch", "merged_at": 1666022403000 }
https://api.github.com/repos/huggingface/transformers/issues/19639
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19639/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19639/comments
https://api.github.com/repos/huggingface/transformers/issues/19639/events
https://github.com/huggingface/transformers/pull/19639
1,410,089,046
PR_kwDOCUB6oc5A2vYe
19,639
Circleci project setup
{ "login": "AShreyam", "id": 115744267, "node_id": "U_kgDOBuYeCw", "avatar_url": "https://avatars.githubusercontent.com/u/115744267?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AShreyam", "html_url": "https://github.com/AShreyam", "followers_url": "https://api.github.com/users/AShreyam/followers", "following_url": "https://api.github.com/users/AShreyam/following{/other_user}", "gists_url": "https://api.github.com/users/AShreyam/gists{/gist_id}", "starred_url": "https://api.github.com/users/AShreyam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AShreyam/subscriptions", "organizations_url": "https://api.github.com/users/AShreyam/orgs", "repos_url": "https://api.github.com/users/AShreyam/repos", "events_url": "https://api.github.com/users/AShreyam/events{/privacy}", "received_events_url": "https://api.github.com/users/AShreyam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Not too sure what the goal of this PR is :-)" ]
1,665
1,666
1,666
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19639/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19639/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19639", "html_url": "https://github.com/huggingface/transformers/pull/19639", "diff_url": "https://github.com/huggingface/transformers/pull/19639.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19639.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19638
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19638/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19638/comments
https://api.github.com/repos/huggingface/transformers/issues/19638/events
https://github.com/huggingface/transformers/pull/19638
1,410,049,231
PR_kwDOCUB6oc5A2npR
19,638
Add return types for tensorflow GPT-J, XLM, and XLNet
{ "login": "sirmammingtonham", "id": 3794630, "node_id": "MDQ6VXNlcjM3OTQ2MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/3794630?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sirmammingtonham", "html_url": "https://github.com/sirmammingtonham", "followers_url": "https://api.github.com/users/sirmammingtonham/followers", "following_url": "https://api.github.com/users/sirmammingtonham/following{/other_user}", "gists_url": "https://api.github.com/users/sirmammingtonham/gists{/gist_id}", "starred_url": "https://api.github.com/users/sirmammingtonham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sirmammingtonham/subscriptions", "organizations_url": "https://api.github.com/users/sirmammingtonham/orgs", "repos_url": "https://api.github.com/users/sirmammingtonham/repos", "events_url": "https://api.github.com/users/sirmammingtonham/events{/privacy}", "received_events_url": "https://api.github.com/users/sirmammingtonham/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? Adds return types for model classes in tensorflow GPT-J, XLM, and XLNet as tasked in #16059. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @Rocketknight1 <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19638/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19638/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19638", "html_url": "https://github.com/huggingface/transformers/pull/19638", "diff_url": "https://github.com/huggingface/transformers/pull/19638.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19638.patch", "merged_at": 1666010842000 }
https://api.github.com/repos/huggingface/transformers/issues/19637
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19637/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19637/comments
https://api.github.com/repos/huggingface/transformers/issues/19637/events
https://github.com/huggingface/transformers/pull/19637
1,410,027,040
PR_kwDOCUB6oc5A2jbh
19,637
[Doctest] Add `configuration_data2vec_vision.py`
{ "login": "daspartho", "id": 59410571, "node_id": "MDQ6VXNlcjU5NDEwNTcx", "avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daspartho", "html_url": "https://github.com/daspartho", "followers_url": "https://api.github.com/users/daspartho/followers", "following_url": "https://api.github.com/users/daspartho/following{/other_user}", "gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}", "starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daspartho/subscriptions", "organizations_url": "https://api.github.com/users/daspartho/orgs", "repos_url": "https://api.github.com/users/daspartho/repos", "events_url": "https://api.github.com/users/daspartho/events{/privacy}", "received_events_url": "https://api.github.com/users/daspartho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@ydshieh made the required changes :)" ]
1,665
1,666
1,666
CONTRIBUTOR
null
Add `configuration_data2vec_vision.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you take a look at it? Thank you =)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19637/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19637/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19637", "html_url": "https://github.com/huggingface/transformers/pull/19637", "diff_url": "https://github.com/huggingface/transformers/pull/19637.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19637.patch", "merged_at": 1666033002000 }
https://api.github.com/repos/huggingface/transformers/issues/19636
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19636/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19636/comments
https://api.github.com/repos/huggingface/transformers/issues/19636/events
https://github.com/huggingface/transformers/pull/19636
1,410,026,888
PR_kwDOCUB6oc5A2jZy
19,636
[Doctest] Add `configuration_data2vec_text.py`
{ "login": "daspartho", "id": 59410571, "node_id": "MDQ6VXNlcjU5NDEwNTcx", "avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daspartho", "html_url": "https://github.com/daspartho", "followers_url": "https://api.github.com/users/daspartho/followers", "following_url": "https://api.github.com/users/daspartho/following{/other_user}", "gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}", "starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daspartho/subscriptions", "organizations_url": "https://api.github.com/users/daspartho/orgs", "repos_url": "https://api.github.com/users/daspartho/repos", "events_url": "https://api.github.com/users/daspartho/events{/privacy}", "received_events_url": "https://api.github.com/users/daspartho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@ydshieh made the suggested changes =)", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,666
1,665
CONTRIBUTOR
null
Add `configuration_data2vec_text.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you please check it? Thanks :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19636/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19636/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19636", "html_url": "https://github.com/huggingface/transformers/pull/19636", "diff_url": "https://github.com/huggingface/transformers/pull/19636.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19636.patch", "merged_at": 1665995674000 }
https://api.github.com/repos/huggingface/transformers/issues/19635
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19635/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19635/comments
https://api.github.com/repos/huggingface/transformers/issues/19635/events
https://github.com/huggingface/transformers/pull/19635
1,410,026,702
PR_kwDOCUB6oc5A2jXe
19,635
[Doctest] Add `configuration_data2vec_audio.py`
{ "login": "daspartho", "id": 59410571, "node_id": "MDQ6VXNlcjU5NDEwNTcx", "avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daspartho", "html_url": "https://github.com/daspartho", "followers_url": "https://api.github.com/users/daspartho/followers", "following_url": "https://api.github.com/users/daspartho/following{/other_user}", "gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}", "starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daspartho/subscriptions", "organizations_url": "https://api.github.com/users/daspartho/orgs", "repos_url": "https://api.github.com/users/daspartho/repos", "events_url": "https://api.github.com/users/daspartho/events{/privacy}", "received_events_url": "https://api.github.com/users/daspartho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you pull the latest `main` from the remote (which include your PR with other data2vec config files), and rebase your PR branch on the new `main` 🙏 . We need to fix the conflict changes", "@ydshieh rebased the branch, it should resolve the conflict :)", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,666
1,666
CONTRIBUTOR
null
Add `configuration_data2vec_audio.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you please take a look at it? Thanks :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19635/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19635/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19635", "html_url": "https://github.com/huggingface/transformers/pull/19635", "diff_url": "https://github.com/huggingface/transformers/pull/19635.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19635.patch", "merged_at": 1666024755000 }
https://api.github.com/repos/huggingface/transformers/issues/19634
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19634/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19634/comments
https://api.github.com/repos/huggingface/transformers/issues/19634/events
https://github.com/huggingface/transformers/pull/19634
1,410,014,331
PR_kwDOCUB6oc5A2g9a
19,634
Marian docstring
{ "login": "traveler-pinkie", "id": 75712292, "node_id": "MDQ6VXNlcjc1NzEyMjky", "avatar_url": "https://avatars.githubusercontent.com/u/75712292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/traveler-pinkie", "html_url": "https://github.com/traveler-pinkie", "followers_url": "https://api.github.com/users/traveler-pinkie/followers", "following_url": "https://api.github.com/users/traveler-pinkie/following{/other_user}", "gists_url": "https://api.github.com/users/traveler-pinkie/gists{/gist_id}", "starred_url": "https://api.github.com/users/traveler-pinkie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/traveler-pinkie/subscriptions", "organizations_url": "https://api.github.com/users/traveler-pinkie/orgs", "repos_url": "https://api.github.com/users/traveler-pinkie/repos", "events_url": "https://api.github.com/users/traveler-pinkie/events{/privacy}", "received_events_url": "https://api.github.com/users/traveler-pinkie/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I need some help. This is my first time contributing to a project ever. I'm getting familiar with the process of git, and I've tried to follow the instructions laid out on the [Community Event] Doc Tests Sprint #16292. I ran the doc test locally and received two errors. But upon further inspection, I can't seem to find what the issue is. Please, if you can, offer some guidance on what I'm doing wrong and how to progress. I'm working on modeling_tf_marian.py ", "![Screenshot 2022-10-13 223655](https://user-images.githubusercontent.com/75712292/195965122-349001fc-b173-488c-a790-cede71b8bf4a.png)\r\n", "@traveler-pinkie sorry for being late here. Are you still interested in working on Marian config files?", "@ydshieh . Thanks for commenting back. But I think at the moment I should probably study a little bit more. It looks like I was having more difficulty than I should have with my skills currently. I think it's best if someone else works on it. Sorry and thank you" ]
1,665
1,666
1,665
NONE
null
# What does this PR do? related to #16292 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ydshieh, @patrickvonplaten Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19634/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19634/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19634", "html_url": "https://github.com/huggingface/transformers/pull/19634", "diff_url": "https://github.com/huggingface/transformers/pull/19634.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19634.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19633
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19633/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19633/comments
https://api.github.com/repos/huggingface/transformers/issues/19633/events
https://github.com/huggingface/transformers/pull/19633
1,409,995,147
PR_kwDOCUB6oc5A2dIi
19,633
[Doctest] Add configuration_codegen.py
{ "login": "AymenBer99", "id": 67442508, "node_id": "MDQ6VXNlcjY3NDQyNTA4", "avatar_url": "https://avatars.githubusercontent.com/u/67442508?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AymenBer99", "html_url": "https://github.com/AymenBer99", "followers_url": "https://api.github.com/users/AymenBer99/followers", "following_url": "https://api.github.com/users/AymenBer99/following{/other_user}", "gists_url": "https://api.github.com/users/AymenBer99/gists{/gist_id}", "starred_url": "https://api.github.com/users/AymenBer99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AymenBer99/subscriptions", "organizations_url": "https://api.github.com/users/AymenBer99/orgs", "repos_url": "https://api.github.com/users/AymenBer99/repos", "events_url": "https://api.github.com/users/AymenBer99/events{/privacy}", "received_events_url": "https://api.github.com/users/AymenBer99/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,665
1,665
CONTRIBUTOR
null
# What does this PR do? Add configuration_codegen.py to utils/documentation_tests.txt for doctest. Based on #19487 @sgugger @ydshieh could you please check it? Thanks :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19633/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19633/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19633", "html_url": "https://github.com/huggingface/transformers/pull/19633", "diff_url": "https://github.com/huggingface/transformers/pull/19633.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19633.patch", "merged_at": 1665830135000 }
https://api.github.com/repos/huggingface/transformers/issues/19632
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19632/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19632/comments
https://api.github.com/repos/huggingface/transformers/issues/19632/events
https://github.com/huggingface/transformers/pull/19632
1,409,887,002
PR_kwDOCUB6oc5A2GUZ
19,632
Proof of Concept - Better Transformers
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19632). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,665
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? A Proof of Concept of the Better Transformers integration into `transformers` - more details coming soon. Also comparing this implementation with an integration in optimum https://github.com/huggingface/optimum/pull/422 cc @HamidShojanazeri https://github.com/huggingface/transformers/pull/19553
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19632/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19632/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19632", "html_url": "https://github.com/huggingface/transformers/pull/19632", "diff_url": "https://github.com/huggingface/transformers/pull/19632.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19632.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19631
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19631/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19631/comments
https://api.github.com/repos/huggingface/transformers/issues/19631/events
https://github.com/huggingface/transformers/issues/19631
1,409,796,214
I_kwDOCUB6oc5UB8h2
19,631
Add EDSR and MDSR
{ "login": "IMvision12", "id": 88665786, "node_id": "MDQ6VXNlcjg4NjY1Nzg2", "avatar_url": "https://avatars.githubusercontent.com/u/88665786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IMvision12", "html_url": "https://github.com/IMvision12", "followers_url": "https://api.github.com/users/IMvision12/followers", "following_url": "https://api.github.com/users/IMvision12/following{/other_user}", "gists_url": "https://api.github.com/users/IMvision12/gists{/gist_id}", "starred_url": "https://api.github.com/users/IMvision12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IMvision12/subscriptions", "organizations_url": "https://api.github.com/users/IMvision12/orgs", "repos_url": "https://api.github.com/users/IMvision12/repos", "events_url": "https://api.github.com/users/IMvision12/events{/privacy}", "received_events_url": "https://api.github.com/users/IMvision12/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "@venkat-natchi\r\n", "I'm going through the paper and the existing implementation. I will open the PR in few days. ", "> I'm going through the paper and the existing implementation. I will open the PR in few days.\r\n\r\nThat's great, I have already gone through the paper and the architecture and started the implementation!", "Sorry, I was away due to festival season here. \r\nI am done with the paper. Shall I start transforming [this](https://github.com/sanghyun-son/EDSR-PyTorch/blob/9d3bb0ec620ea2ac1b5e5e7a32b0133fbba66fd2/src/model/edsr.py) one into the HuggingFace model standards?", "Go ahead and let me know if you need any help; I will be adding the MDSR model to the hub, and I believe the code should be similar to this [Swin2sr](https://github.com/huggingface/transformers/pull/19784) PR also we can use it for reference.", "Sure, thanks", "Can we add both EDSR and MDSR in this PR?", "No, I guess a separate PR is required.", "Hello, can I work on this issue? Although I'm new to open-source contributions, I've worked on super-resolution models in the past and I was wondering why HuggingFace did not have these. I am familiar with PyTorch.", "Thanks @asrimanth for the interest. \r\nI have an active PR going on for this issue.\r\n#19952 \r\n\r\nKindly leave your comments there if you could. " ]
1,665
1,676
null
CONTRIBUTOR
null
### Model description EDSR (Enhanced Deep Residual Networks for Single Image Super-Resolution) is for image super resolution, here's the [paper](https://arxiv.org/abs/1707.02921). ### Open source status - [x] The model implementation is available - [x] The model weights are available ### Provide useful links for the implementation Official Implementation https://github.com/sanghyun-son/EDSR-PyTorch and https://github.com/LimBee/NTIRE2017 ## Your contribution I'd like to work on incorporating this architecture into HuggingFace. Please let me know if you think it's worth adding to HuggingFace. @NielsRogge can you review this issue so that I can get started?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19631/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19631/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/19630
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19630/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19630/comments
https://api.github.com/repos/huggingface/transformers/issues/19630/events
https://github.com/huggingface/transformers/issues/19630
1,409,729,540
I_kwDOCUB6oc5UBsQE
19,630
num_proc in dataloader affect F1 score in squad
{ "login": "Yang-YiFan", "id": 22767536, "node_id": "MDQ6VXNlcjIyNzY3NTM2", "avatar_url": "https://avatars.githubusercontent.com/u/22767536?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Yang-YiFan", "html_url": "https://github.com/Yang-YiFan", "followers_url": "https://api.github.com/users/Yang-YiFan/followers", "following_url": "https://api.github.com/users/Yang-YiFan/following{/other_user}", "gists_url": "https://api.github.com/users/Yang-YiFan/gists{/gist_id}", "starred_url": "https://api.github.com/users/Yang-YiFan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Yang-YiFan/subscriptions", "organizations_url": "https://api.github.com/users/Yang-YiFan/orgs", "repos_url": "https://api.github.com/users/Yang-YiFan/repos", "events_url": "https://api.github.com/users/Yang-YiFan/events{/privacy}", "received_events_url": "https://api.github.com/users/Yang-YiFan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi there! Could you make your model public? I cannot reproduce this on [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) (I get the same scores for the two commands).", "Hi, I believe the one I'm using `csarron/bert-base-uncased-squad-v1` is public. I also tried `bert-large-uncased-whole-word-masking-finetuned-squad` and it's showing the same F1 score mismatch. Thanks!", "Ah, just caught the problem in the logs. `preprocessing_num_workers` is the number of workers sent to `Dataset.map`. It should be left as 1 when using a fast tokenizer. When you change it, you change the way the dataset is preprocessed.\r\n\r\nTo change the number of workers in the dataloader, you should use `dataloader_num_workers`.", "Thanks! The default value is still set to 4 [here](https://github.com/huggingface/transformers/blob/5fda1fbd4625e93d023fe02153ec4a05b26b16cc/examples/pytorch/question-answering/run_qa_no_trainer.py#L111), which should be 1 according to your finding.", "Indeed. Do you want to make a PR to fix this?", "Done." ]
1,665
1,666
1,666
CONTRIBUTOR
null
### System Info transformers 4.18.0 (on CPU) python 3.9 ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction In the example question answering script, `cd examples/pytorch/question-answering/` Running the below two cmds will yield different F1 score report (even though their difference is just num workers in dataloader): 1. `python run_qa_no_trainer.py --model_name_or_path csarron/bert-base-uncased-squad-v1 --dataset_name squad --max_seq_length 384 --doc_stride 128 --num_train_epochs 0 --preprocessing_num_workers 1 --output_dir ~/tmp/debug_squad` 2. `python run_qa_no_trainer.py --model_name_or_path csarron/bert-base-uncased-squad-v1 --dataset_name squad --max_seq_length 384 --doc_stride 128 --num_train_epochs 0 --preprocessing_num_workers 4 --output_dir ~/tmp/debug_squad` ### Expected behavior 1. `Evaluation metrics: {'exact_match': 80.90823084200568, 'f1': 88.22754061399627}` 2. `Evaluation metrics: {'exact_match': 76.51844843897824, 'f1': 83.40809222646291}` They have different results on squad.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19630/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19630/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19629
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19629/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19629/comments
https://api.github.com/repos/huggingface/transformers/issues/19629/events
https://github.com/huggingface/transformers/pull/19629
1,409,663,454
PR_kwDOCUB6oc5A1WE7
19,629
[WIP] Making modifications Open source that are live for `BLOOM` inference.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@thomasw21 I tagged you to maybe get advice on the `generation` modifications.\r\n\r\nShould they get merged back into `main` ? (They do seem necessary).", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19629). All of your documentation changes will be reflected on that endpoint.", "> @thomasw21 I tagged you to maybe get advice on the generation modifications.\r\nShould they get merged back into main ? (They do seem necessary).\r\n\r\nIf I remember correctly the only thing that should be needed is the logit post processor.", "@sgugger @LysandreJik \r\n\r\nWe'd like some input as we don't necessarily agree with @thomasw21.\r\nBascially this PR intends to be:\r\n- No breaking change\r\n- Enable users to use TP with bloom with much less effort, even if in a slightly more contrived way\r\n- Enable the same performance we get on production (be as open source as possible). This is where we disagree.\r\n\r\n@thomasw21 thinks, that adding the custom kernel to transformer `main` forces us to be backward compatible for eternity, creates potential headaches because the kernel has only been tested on A100 and might have some unforeseen caveats (1 we know is that it's limited to 4096 tokens)\r\n\r\nMy point, is that we should strive to be open, and so making everything we did accessible should be a goal. Now I agree that maintaining that kernel should NOT be in scope, but I argue that:\r\n- Enabling it actively by users (User has to write ` bloom.use_custom_kernel()` in order to use it)\r\n- Making a proper warning when using that function\r\n- Eventually marking this function as private.\r\nWould enable us to merge it and still allow us to break it whenever we want. We obviously write that as `unstable`/`beta`.\r\n\r\nOne added benefit in the back of my mind, is that allowing us to ship custom kernels could enable us (in all our ecosystem) to boost some performance where needed. 
Making it core `transformers` not necessarily a goal, but knowing how to enable it seems OK.\r\n\r\n`torch.fx` seems unhappy seems not playing well with `torch.jit.script`.", "First things first, I strongly disagree with most of the modifications done in the modeling code, which make the model code less readable. I think of:\r\n- paths with a test using `if fused_bloom_attention_cuda`\r\n- model gaining a `process_group` argument at init\r\n- use of `tp_rank` attribute\r\n\r\nKeep in mind that we do not let users add a flag to select if the layernorm should be applied at the beginning or the end of the block and request a new architecture instead for instance, or Thom's comments on why the Mistral code in GPT-2 should never have been merged.\r\n\r\nThis should either:\r\n- be a \"fast\" modeling file of its own in the same bloom folder\r\n- a research project of its own (which would have the advantage of not being constrained by any BC considerations)", "> be a \"fast\" modeling file of its own in the same bloom folder\r\n\r\nThat seems reasonable. It's not `fast` vs `non-fast` to be clear. It's TP vs single GPU code difference.\r\nAnd since TP is the best way to get latency optimizations it would feel quite nice if it was included in `transformers`.\r\nActually anywhere in the HF ecosystem would be nice, as long as we could point out to our code and say it's \"there\".\r\n\r\nBut since it's been 3 months and we still haven't figured it out, I decided to go in that fashion. Keeping it in a remote branch is ok, but I feel like a fork (https://github.com/huggingface/transformers_bloom_parallel/issues/8) \r\n \r\n**Separate file is all good for me !**\r\n\r\nBtw, we don't know of anyway to enable TP without modifying the code. \r\n@thomasw21 checked out torch.fx but it seemed to be a pain (and it was the most successful approach). \r\n\r\n> paths with a test using if fused_bloom_attention_cuda\r\n\r\nWould you have 3 files (regular, TP, TP + custom kernel) ? 
seems like a stretch but fine to me.\r\nWhat is so bad about this particular if ? It's very similar to this:\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L662-L675\r\n\r\nThe custom kernel can go. It's a shame imo, and it would be nice if it was easily pluggable, but I can understand the logic.\r\n\r\n> model gaining a process_group argument at init\r\n\r\nNew modeling file would solve that for sure.", "I think two files is good, one for the \"generic\" model code and one with custom improvements for TP/TP+custom CUDA kernel. I mainly want to avoid a researcher coming to take the BLOOM code and being annoyed at all the special paths for TP.\r\n\r\n> What is so bad about this particular if ? It's very similar to this:\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L662-L675\r\n\r\nWe want to avoid those kinds of path that hurt readability, at least in mainstream models. Deformable DETR didn't use to have those paths (at first it was custom CUDA kernel only) and is not a mainstream model (maybe in the future?).\r\n\r\nIn any case, most of the code can be copied in the new modeling file and the copied from statements will insure they stay up to date with the original.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,665
1,670
1,670
CONTRIBUTOR
null
# What does this PR do? Many things but the biggest are: - TP enabled model (need to pass around a `ProcessGroup` everywhere. There were some discussions back&forth but currently this is not breaking anything, and at least it makes everything quite explicit to load the model in a sharded fashion - Adding 1 custom kernel. Followed roughly DeformatDETR way of distributing this Exception is that the custom kernel is OPT-in, it's not loaded by default (The custom kernel does not support backward for instance). <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? 
Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19629/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19629/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19629", "html_url": "https://github.com/huggingface/transformers/pull/19629", "diff_url": "https://github.com/huggingface/transformers/pull/19629.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19629.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19628
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19628/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19628/comments
https://api.github.com/repos/huggingface/transformers/issues/19628/events
https://github.com/huggingface/transformers/issues/19628
1,409,642,399
I_kwDOCUB6oc5UBW-f
19,628
No way to pass max_length/max_new_tokens to generate for image-to-text pipeline
{ "login": "davidmezzetti", "id": 561939, "node_id": "MDQ6VXNlcjU2MTkzOQ==", "avatar_url": "https://avatars.githubusercontent.com/u/561939?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidmezzetti", "html_url": "https://github.com/davidmezzetti", "followers_url": "https://api.github.com/users/davidmezzetti/followers", "following_url": "https://api.github.com/users/davidmezzetti/following{/other_user}", "gists_url": "https://api.github.com/users/davidmezzetti/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidmezzetti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidmezzetti/subscriptions", "organizations_url": "https://api.github.com/users/davidmezzetti/orgs", "repos_url": "https://api.github.com/users/davidmezzetti/repos", "events_url": "https://api.github.com/users/davidmezzetti/events{/privacy}", "received_events_url": "https://api.github.com/users/davidmezzetti/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The warning is there, and can be safely ignored in both situations IMO.\r\n\r\nThe generation should stop when the model says it's OK, not when you decide it's too big.\r\nHaving `max_new_tokens` in place, would definitely help prevent computing indefinitely if the actual generation never hits EOS so the generation never stopped though.\r\n\r\nThis is what the warnings is trying to warn you about.\r\n\r\n@gante @patrickvonplaten \r\nWhat do you think we should do in that situation ?\r\n\r\nI think calling `generate` with neither `max_length` nor `max_new_tokens` is perfectly OK if you expect EOS to be hit.\r\nYou are running the risk of having an infinite loop (well it would crash when the model OOMS or runs into it's max_length capacity...)\r\n\r\nWhat do you think about defaulting the model max capacity for `max_length` and silencing the warning ?\r\nSince we modified this behavior not too long ago, I understand why the current warning is there, so we could silence the warning much later or by opt-in (so that users that know what they are doing can silence it, but others are still warned).\r\n\r\n\r\n\r\nBeing able to choose `max_new_tokens` in the pipeline should always be doable, since that's a very easy way to prevent an application from randomly crashing when you know that the generation for images should never be too long for instance.\r\n\r\n@OlivierDehaene also since you worked on `image-to-text`.\r\n\r\n", "@Narsil @patrickvonplaten default `max_length` strikes again :D \r\n\r\nNote: any change would have to happen in the context of a major version change. That being said, both defaults have their shortcomings: defaulting to `20` results in short outputs that might be misinterpreted as poor outputs; defaulting to the model's maximum length might not be feasible (e.g. T5), or cause crashes due to memory requirements. 
A third option would be to make it a required argument (in `generate()`), but that would add friction to text generation 🤔 I honestly don't know which would be the best option.", "I think the whisper models should define a max_length or `max_new_tokens` in the config actually (ideally in the \"future\" generation config). \r\n\r\nRegarding whisper, the model cannot process more than 30seconds of speech which is means that max_length/max_new_tokens almost never goes over 256, so a good/reasonable default for whisper would be 256. Until we don't have better generation configs I think we should set the model config to 256. \r\n\r\nAlso cc'ing @ArthurZucker and @sanchit-gandhi here FYI.\r\n\r\nTo understand better, we get the warning here only because the user passes `max_length=20` which happens to be exactly equal to our default-default max_length in `configuration_utils.py` here: https://github.com/huggingface/transformers/blob/c7edde1a692012eda23bc2b837588557b97ad729/src/transformers/configuration_utils.py#L278 no? " ]
1,665
1,666
1,666
CONTRIBUTOR
null
### System Info

Transformers 4.23.1

### Who can help?

@narsil

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

```python
from transformers import pipeline
nlp = pipeline("image-to-text")
nlp("image.jpg", max_length=20)
```

Results in

```
transformers/generation_utils.py:1301: UserWarning: Neither `max_length` nor `max_new_tokens` has been set
```

### Expected behavior

No warning or a way to disable the warning. This warning is also raised with the `automatic-speech-recognition` pipeline when using the OpenAI Whisper model.
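The `max_length` vs. `max_new_tokens` distinction the warning and the comments revolve around can be illustrated with a toy loop. This is a hedged sketch, not the real `generate()` implementation: `toy_generate`, `fake_next_token`, and the step-50 EOS are all made up for illustration.

```python
# Toy stand-in for the stopping criteria discussed in the issue above.
EOS = 0

def fake_next_token(step):
    # Hypothetical stub for the model's next-token prediction:
    # pretend the model emits EOS at step 50.
    return EOS if step >= 50 else step + 1

def toy_generate(prompt, max_length=None, max_new_tokens=None):
    """Stop on EOS, or when either length bound is hit.

    `max_length` counts prompt + generated tokens; `max_new_tokens`
    counts only the generated ones -- the distinction at the heart of
    the warning. With neither set, generation runs until EOS, which is
    the "possibly infinite loop" risk the maintainers mention.
    """
    out = list(prompt)
    new = 0
    while True:
        if max_length is not None and len(out) >= max_length:
            break
        if max_new_tokens is not None and new >= max_new_tokens:
            break
        tok = fake_next_token(new)
        out.append(tok)
        new += 1
        if tok == EOS:
            break
    return out

prompt = [101, 102, 103]
assert len(toy_generate(prompt, max_length=20)) == 20      # 3 prompt + 17 new
assert len(toy_generate(prompt, max_new_tokens=20)) == 23  # 3 prompt + 20 new
assert toy_generate(prompt)[-1] == EOS                     # unbounded: stops only at EOS
```

With a bound like Whisper's ~30-second window, a per-model default (the 256 suggested in the thread) would cap the loop the same way `max_new_tokens` does here.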
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19628/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19628/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19627
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19627/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19627/comments
https://api.github.com/repos/huggingface/transformers/issues/19627/events
https://github.com/huggingface/transformers/issues/19627
1,409,628,653
I_kwDOCUB6oc5UBTnt
19,627
Tokenizer not loaded for image-to-text pipelines when specified
{ "login": "davidmezzetti", "id": 561939, "node_id": "MDQ6VXNlcjU2MTkzOQ==", "avatar_url": "https://avatars.githubusercontent.com/u/561939?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidmezzetti", "html_url": "https://github.com/davidmezzetti", "followers_url": "https://api.github.com/users/davidmezzetti/followers", "following_url": "https://api.github.com/users/davidmezzetti/following{/other_user}", "gists_url": "https://api.github.com/users/davidmezzetti/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidmezzetti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidmezzetti/subscriptions", "organizations_url": "https://api.github.com/users/davidmezzetti/orgs", "repos_url": "https://api.github.com/users/davidmezzetti/repos", "events_url": "https://api.github.com/users/davidmezzetti/events{/privacy}", "received_events_url": "https://api.github.com/users/davidmezzetti/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Narsil ", "If that is an unexpected issue, I can debug it and try to fix. ", "Hi @davidmezzetti ,\r\n\r\nThe \"string\" is not resolved in `pipeline` when both `model` and `tokenizer` are sent.\r\nIt should work when you send only 1.\r\n\r\nYou could definitely create a PR to try and resolve `tokenizer` too in that situation.\r\nIt's tricky magic code so be careful.", "Thanks @narsil. In reviewing the way I'm calling pipelines, there is no reason to pass both a `model` and `tokenizer` as they are the same.\r\n\r\nIt appears that this line: https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/__init__.py#L737 could be updated to also check if the tokenizer is a (str, tuple) for multi models. But it seems like a highly niche/possibly unnecessary use case. ", "`pipeline` is MAGICAL by nature, I'm not against adding even more magic to it. As long as the actual pipeline classes, stay much more down to earth and less magical. \r\n\r\nMagic is super nice when it works, but much harder to work with/evolve when it doesn't :)", "Sounds good, I'll go ahead and close this issue. " ]
1,665
1,666
1,666
CONTRIBUTOR
null
### System Info

Transformers 4.23.1

### Who can help?

@Narsil

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

```python
from transformers import pipeline
nlp = pipeline("image-to-text", model="ydshieh/vit-gpt2-coco-en", tokenizer="ydshieh/vit-gpt2-coco-en")
nlp("image.jpg")
```

Results in:

```
  File "transformers/pipelines/image_to_text.py", line 89, in postprocess
    "generated_text": self.tokenizer.decode(
AttributeError: 'str' object has no attribute 'decode'
```

### Expected behavior

Caption to be returned. It appears something around this line is being tripped up - https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/__init__.py#L733
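The resolution check discussed in the thread (around `pipelines/__init__.py#L733-L737`) can be sketched without `transformers` at all. Everything here is a hypothetical stand-in: `FakeTokenizer` mimics only the `.decode` call that crashed, and `load_tokenizer` stands in for `AutoTokenizer.from_pretrained`.

```python
class FakeTokenizer:
    """Stub exposing the one method the pipeline's postprocess step calls."""
    def __init__(self, name):
        self.name = name

    def decode(self, token_ids):
        return " ".join(str(t) for t in token_ids)

def load_tokenizer(identifier):
    # Stand-in for AutoTokenizer.from_pretrained(identifier).
    return FakeTokenizer(identifier)

def resolve_tokenizer(tokenizer):
    # The bug: when both `model` and `tokenizer` were given as strings,
    # the tokenizer string was passed through unresolved, so postprocess
    # later called `.decode` on a plain str and raised AttributeError.
    # This guard is the shape of the fix davidmezzetti points at.
    if isinstance(tokenizer, (str, tuple)):
        tokenizer = load_tokenizer(tokenizer)
    return tokenizer

tok = resolve_tokenizer("ydshieh/vit-gpt2-coco-en")
assert hasattr(tok, "decode")        # resolved object, not a raw str
assert tok.decode([7, 8]) == "7 8"
```

As the thread concludes, the simpler workaround is to pass only `model=` and let the pipeline resolve the tokenizer from the same checkpoint.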
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19627/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19627/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19626
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19626/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19626/comments
https://api.github.com/repos/huggingface/transformers/issues/19626/events
https://github.com/huggingface/transformers/pull/19626
1,409,599,716
PR_kwDOCUB6oc5A1IZB
19,626
Tokenizer from_pretrained should not use local files named like token…
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,665
1,665
COLLABORATOR
null
…izer files

This fixes the issue reported in #19488. Basically, if a user has a local file in the working directory named like any of the files the tokenizer is looking for in `from_pretrained`, for instance `tokenizer.json`, that file is going to be used instead of the file in the repo/folder passed along.

The added test fails on current main and is fixed by the PR.

Fixes #19488
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19626/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19626/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19626", "html_url": "https://github.com/huggingface/transformers/pull/19626", "diff_url": "https://github.com/huggingface/transformers/pull/19626.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19626.patch", "merged_at": 1665770817000 }
https://api.github.com/repos/huggingface/transformers/issues/19625
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19625/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19625/comments
https://api.github.com/repos/huggingface/transformers/issues/19625/events
https://github.com/huggingface/transformers/issues/19625
1,409,567,034
I_kwDOCUB6oc5UBEk6
19,625
How to use model.prunes when you are using transformers.T5ForConditionalGeneration
{ "login": "CaffreyR", "id": 84232793, "node_id": "MDQ6VXNlcjg0MjMyNzkz", "avatar_url": "https://avatars.githubusercontent.com/u/84232793?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CaffreyR", "html_url": "https://github.com/CaffreyR", "followers_url": "https://api.github.com/users/CaffreyR/followers", "following_url": "https://api.github.com/users/CaffreyR/following{/other_user}", "gists_url": "https://api.github.com/users/CaffreyR/gists{/gist_id}", "starred_url": "https://api.github.com/users/CaffreyR/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CaffreyR/subscriptions", "organizations_url": "https://api.github.com/users/CaffreyR/orgs", "repos_url": "https://api.github.com/users/CaffreyR/repos", "events_url": "https://api.github.com/users/CaffreyR/events{/privacy}", "received_events_url": "https://api.github.com/users/CaffreyR/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Gently pinging @ArthurZucker ", "Thanks @ArthurZucker", "Gently pinging @LysandreJik", "Hey! I'll have a look sorry for the long wait 😃 ", "Anyone can help? Thanks! @gante @ArthurZucker ", "Hey! It seems that the `T5ForConditionalGeneration` is simply missing the `_prune_heads`.\nWe should add the following few lines : \n```\n\ndef _prune_heads(self, heads_to_prune):\n \"\"\"\n Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base\n class PreTrainedModel\n \"\"\"\n for layer, heads in heads_to_prune.items():\n self.modules()[layer].layer[0].SelfAttention.prune_heads(heads)\n\n```\nOr something along those lines! \nWould you like to open a PR for that? Otherwise I will take care of it 🤗", "Hi @ArthurZucker, I will give it a try :)\r\n\r\n", "Hi @ArthurZucker , there seems a little more problem. Since in this line \r\nhttps://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/t5/modeling_t5.py#L533\r\nThe model report error when I successfully pruned some heads, it says\r\n```\r\n File \"/home/user/anaconda3/envs/uw/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py\", line 533, in forward\r\n mask[list(self.pruned_heads)] = 0\r\nIndexError: index 8 is out of bounds for dimension 0 with size 8\r\n```\r\n\r\nAre we gonna modify the `forward`?\r\n\r\nSince I try to print the `mask.shape` and `self.pruned_heads)`\r\n\r\nIt says\r\n```\r\ntorch.Size([12])\r\n{8, 2, 10, 6}\r\ntorch.Size([8])\r\n{8, 2, 10, 6}\r\nTraceback (most recent call last):\r\n\r\n```\r\n", "Hi @ArthurZucker , I open a PR here. 
https://github.com/huggingface/transformers/pull/19975\r\n\r\nWe can see the test on a colab https://colab.research.google.com/drive/1b9mHjtn2UxuHU_Sb_RXts12rDzbebBX0#scrollTo=hUSe4a1oOp6D\r\n\r\nI use `opendelta` to visualize the pruning process.\r\n\r\nBut we seems to be a forward problem", "We can conclude that the problem is between L531~L548\r\n\r\nThe difference is whether use head_mask\r\n\r\n> Hi @ArthurZucker , there seems a little more problem. Since in this line\r\n> \r\n> https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/t5/modeling_t5.py#L533\r\n> \r\n> \r\n> The model report error when I successfully pruned some heads, it says\r\n> ```\r\n> File \"/home/user/anaconda3/envs/uw/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py\", line 533, in forward\r\n> mask[list(self.pruned_heads)] = 0\r\n> IndexError: index 8 is out of bounds for dimension 0 with size 8\r\n> ```\r\n> \r\n> Are we gonna modify the `forward`?\r\n> \r\n> Since I try to print the `mask.shape` and `self.pruned_heads)`\r\n> \r\n> It says\r\n> \r\n> ```\r\n> torch.Size([12])\r\n> {8, 2, 10, 6}\r\n> torch.Size([8])\r\n> {8, 2, 10, 6}\r\n> Traceback (most recent call last):\r\n> ```\r\n\r\n\r\nThis one occur in this code\r\n```\r\noutputs = model.forward(\r\n input_ids=context_ids,\r\n attention_mask=context_mask,\r\n labels=labels,\r\n return_dict=True,\r\n # head_mask=head_mask,\r\n # decoder_head_mask=decoder_head_mask\r\n )\r\n```\r\n\r\n\r\n\r\n\r\n> Hi @ArthurZucker , I open a PR here. 
#19975\r\n> \r\n> We can see the test on a colab https://colab.research.google.com/drive/1b9mHjtn2UxuHU_Sb_RXts12rDzbebBX0#scrollTo=hUSe4a1oOp6D\r\n> \r\n> I use `opendelta` to visualize the pruning process.\r\n> \r\n> But we seems to be a forward problem\r\n\r\nThis one use the code \r\n```\r\noutputs = model.forward(\r\n input_ids=context_ids,\r\n attention_mask=context_mask,\r\n labels=labels,\r\n return_dict=True,\r\n head_mask=head_mask,\r\n decoder_head_mask=decoder_head_mask\r\n )\r\n```", "Hi @ArthurZucker , what is the function of `position_bias` ? It seems in this line \r\n\r\nhttps://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/t5/modeling_t5.py#L513\r\n\r\nWe calculate it, and it seems only modify `position_bias` only when the first block. I want to delete `if` or put the `scores += position_bias_masked` into `if`, which means we calculate `position_bias` only in the first block or we calculate `position_bias` all the blocks and it should be the same to the score.\r\n\r\nYou can see the code here\r\n```\r\nscores = torch.matmul(\r\n query_states, key_states.transpose(3, 2)\r\n ) # equivalent of torch.einsum(\"bnqd,bnkd->bnqk\", query_states, key_states), compatible with onnx op>9\r\n print(\"Score\",scores.shape)\r\n if position_bias is None:\r\n print(\"A\", position_bias)\r\n if not self.has_relative_attention_bias:\r\n position_bias = torch.zeros(\r\n (1, self.n_heads, real_seq_length, key_length), device=scores.device, dtype=scores.dtype\r\n )\r\n print(\"B\",position_bias.shape)\r\n if self.gradient_checkpointing and self.training:\r\n position_bias.requires_grad = True\r\n else:\r\n position_bias = self.compute_bias(real_seq_length, key_length, device=scores.device)\r\n print(\"C\", position_bias.shape)\r\n # if key and values are already calculated\r\n # we want only the last query position bias\r\n if past_key_value is not None:\r\n position_bias = position_bias[:, :, 
-hidden_states.size(1) :, :]\r\n\r\n if mask is not None:\r\n position_bias = position_bias + mask # (batch_size, n_heads, seq_length, key_length)\r\n\r\n if self.pruned_heads:\r\n position_bias_masked = position_bias\r\n # print(self.pruned_heads)\r\n # mask = torch.ones(position_bias.shape[1])\r\n # mask[list(self.pruned_heads)] = 0\r\n # print(\"Position bias\",position_bias.shape)\r\n # position_bias_masked = position_bias[:, mask.bool()]\r\n # print(\"Position bias masked\",position_bias_masked.shape)\r\n else:\r\n position_bias_masked = position_bias\r\n\r\n\r\n scores += position_bias_masked\r\n```\r\n\r\nAnd output is here\r\n```\r\nScore torch.Size([1, 8, 2, 2])\r\nA None\r\nC torch.Size([1, 8, 2, 2])\r\nquery_states torch.Size([1, 8, 2, 64])\r\nkey_states.transpose(3, 2) torch.Size([1, 8, 64, 200])\r\nScore torch.Size([1, 8, 2, 200])\r\nA None\r\nB torch.Size([1, 8, 2, 200])\r\nquery_states torch.Size([1, 9, 2, 64])\r\nkey_states.transpose(3, 2) torch.Size([1, 9, 64, 2])\r\nScore torch.Size([1, 9, 2, 2])\r\nTraceback (most recent call last):\r\n File \"/Users/caffrey/Documents/research/FiD/prunetest2.py\", line 72, in <module>\r\n outputs = model.forward(\r\n File \"/Users/caffrey/miniforge3/envs/huggingface/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py\", line 1658, in forward\r\n decoder_outputs = self.decoder(\r\n File \"/Users/caffrey/miniforge3/envs/huggingface/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1190, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/Users/caffrey/miniforge3/envs/huggingface/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py\", line 1050, in forward\r\n layer_outputs = layer_module(\r\n File \"/Users/caffrey/miniforge3/envs/huggingface/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1190, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File 
\"/Users/caffrey/miniforge3/envs/huggingface/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py\", line 683, in forward\r\n self_attention_outputs = self.layer[0](\r\n File \"/Users/caffrey/miniforge3/envs/huggingface/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1190, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/Users/caffrey/miniforge3/envs/huggingface/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py\", line 589, in forward\r\n attention_output = self.SelfAttention(\r\n File \"/Users/caffrey/miniforge3/envs/huggingface/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1190, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/Users/caffrey/miniforge3/envs/huggingface/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py\", line 548, in forward\r\n scores += position_bias_masked\r\nRuntimeError: The size of tensor a (9) must match the size of tensor b (8) at non-singleton dimension 1\r\n\r\n```\r\nSo the `position_bias` can not have the same shape as `score`\r\n\r\nI also add a code in _prune_head to re-define the `self.relative_attention_bias`\r\n", "In the PR https://github.com/huggingface/transformers/pull/19975/ , I delete `if` so that it could match with the shape of score. (It could run the colab notebook), but I do not know whether its meaning is right! Basically, the only problem is to make `position_bias ` have the same shape as `score`", "gently pin @patrickvonplaten @ArthurZucker Many thanks", "I will have a look 🤗", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi @CaffreyR\r\n\r\nI am also facing the same issue. 
Did you find a fix yet?\r\n\r\nReally appreciate any help! Raised an issue recently, so am willing to close if there's some progress :) " ]
1,665
1,698
1,672
NONE
null
### System Info

- `transformers` version: 4.20.1
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.9.10
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.13.0.dev20220709 (False)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

### Who can help?

@patrickvonplaten

### Information

- [X] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

I use this code to prune the model from `T5ForConditionalGeneration`, but it went wrong. Many thanks for your time! :)

```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained('t5-base')
prune_heads = {}
prune_heads[0] = [0, 1]
model.prune_heads(prune_heads)
```

### Expected behavior

```
Traceback (most recent call last):
  File "/Users/caffrey/Documents/research/FiD/prunetest.py", line 8, in <module>
    model.prune_heads(prune_heads)
  File "/Users/caffrey/miniforge3/envs/tongji/lib/python3.9/site-packages/transformers/modeling_utils.py", line 1507, in prune_heads
    self.base_model._prune_heads(heads_to_prune)
  File "/Users/caffrey/miniforge3/envs/tongji/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1261, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'T5ForConditionalGeneration' object has no attribute '_prune_heads'
```
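The `IndexError` reported later in the thread (index 8 out of bounds for a size-8 mask) comes down to head-pruning bookkeeping, which can be sketched without torch or transformers. This is a hedged illustration of the mismatch, not the library's `find_pruneable_heads_and_indices`; `remaining_heads` is made up for this sketch.

```python
# After pruning, the layer keeps only len(remaining) heads, yet
# `self.pruned_heads` still stores *original* head indices -- so indexing
# a mask of the reduced size with e.g. original head 10 blows up, exactly
# as in the reported traceback (torch.Size([8]) vs pruned head ids {8, 2, 10, 6}).

def remaining_heads(n_heads, pruned):
    """Original head indices that survive pruning."""
    return [h for h in range(n_heads) if h not in pruned]

n_heads = 12
pruned = {8, 2, 10, 6}

kept = remaining_heads(n_heads, pruned)
assert len(kept) == 8              # matches torch.Size([8]) in the report
assert max(pruned) >= len(kept)    # original id 10 can't index a size-8 mask

# The safe direction: build the mask over the ORIGINAL head count, then
# select the kept positions -- never index the reduced mask by old ids.
mask = [0 if h in pruned else 1 for h in range(n_heads)]
assert sum(mask) == len(kept)
```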
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19625/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19625/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19624
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19624/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19624/comments
https://api.github.com/repos/huggingface/transformers/issues/19624/events
https://github.com/huggingface/transformers/pull/19624
1,409,558,262
PR_kwDOCUB6oc5A0_l_
19,624
Fixing DeformableDETR the easy but not the best way (IMO).
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19624). All of your documentation changes will be reflected on that endpoint.", "Thanks for your PR, so the issue would be resolved if these tensors have `batch_size` in the first dimension (rather than the second)?\r\n\r\nI think we can still fix that.", "> Thanks for your PR, so the issue would be resolved if these tensors have batch_size in the first dimension (rather than the second)?\r\n\r\nYes it should. When unknown tensors are seen, the batching/unbatching assumes the first dimension is the batch_size.", "We fixed it correctly" ]
1,665
1,666
1,666
CONTRIBUTOR
null
# What does this PR do?

This PR fixes the pipeline to accommodate DeformableDETR. The core of the issue is https://github.com/huggingface/transformers/blob/main/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L178-L181

Both these tensors don't use `batch_size` as the first dimension, so the magic batching/debatching of the pipeline is confused. It's not entirely clear why the tensor has this weird shape though. @NielsRogge is there any reason? Regardless, we can't really change that since it would be a breaking change.

The culprit lines are here: https://github.com/huggingface/transformers/blob/585f9c6d9efa9f6e93888b6adf84912ba3f98dfc/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L1397-L1398

@sgugger I thought I remembered some tensors were not returned by default with `return_loss=False`. Would that be something acceptable? The pipeline fix is fine to avoid all complications, but it's brittle since during batching/debatching the pipeline has only access to the tensor name, so if any other model reuses the same name we're going to have a bigger issue.

Fixes: https://github.com/huggingface/transformers/issues/19024

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
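The batch-first assumption the PR describes can be sketched with plain lists standing in for tensors. This is a hedged illustration, not the pipeline's actual implementation: `unbatch` is a made-up naive debatcher that, like the pipeline, can only slice along the first axis.

```python
def unbatch(tensor, batch_size):
    """Naive debatch: assume axis 0 is the batch axis."""
    assert len(tensor) == batch_size, "first dim is not batch-sized"
    return [tensor[i] for i in range(batch_size)]

batch_size = 2

# A normal (batch, ...) output debatches cleanly.
logits = [["sample0"], ["sample1"]]
assert unbatch(logits, batch_size) == [["sample0"], ["sample1"]]

# A (num_feature_levels, 2)-shaped tensor -- batch nowhere in dim 0,
# like the spatial shapes at modeling_deformable_detr.py#L178-L181 --
# trips the assumption, which is the confusion described above.
spatial_shapes = [[64, 64], [32, 32], [16, 16], [8, 8]]
try:
    unbatch(spatial_shapes, batch_size)
    broke = False
except AssertionError:
    broke = True
assert broke
```

This is also why matching on tensor *names* to special-case such outputs is brittle, as the PR notes: any other model reusing the name would hit the same path.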
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19624/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19624/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19624", "html_url": "https://github.com/huggingface/transformers/pull/19624", "diff_url": "https://github.com/huggingface/transformers/pull/19624.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19624.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19623
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19623/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19623/comments
https://api.github.com/repos/huggingface/transformers/issues/19623/events
https://github.com/huggingface/transformers/pull/19623
1,409,525,020
PR_kwDOCUB6oc5A04cM
19,623
Add doctest info in testingmdx
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "id": 1834067346, "node_id": "MDU6TGFiZWwxODM0MDY3MzQ2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation", "name": "Documentation", "color": "77cc3b", "default": false, "description": "" } ]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Yes, gonna clean the github history " ]
1,665
1,665
1,665
COLLABORATOR
null
# What does this PR do? Finding information on how to properly test the docstring is currently pretty hard! As far as I know, the only good explanation is in `transformers/utils/prepare_for_doc_test.py` but it does not appear in the doc. I hope this will help people debug their docstring examples.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19623/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19623/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19623", "html_url": "https://github.com/huggingface/transformers/pull/19623", "diff_url": "https://github.com/huggingface/transformers/pull/19623.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19623.patch", "merged_at": 1665998600000 }
https://api.github.com/repos/huggingface/transformers/issues/19622
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19622/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19622/comments
https://api.github.com/repos/huggingface/transformers/issues/19622/events
https://github.com/huggingface/transformers/pull/19622
1,409,489,218
PR_kwDOCUB6oc5A0wy2
19,622
[Doctest] Add `configuration_levit.py`
{ "login": "daspartho", "id": 59410571, "node_id": "MDQ6VXNlcjU5NDEwNTcx", "avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daspartho", "html_url": "https://github.com/daspartho", "followers_url": "https://api.github.com/users/daspartho/followers", "following_url": "https://api.github.com/users/daspartho/following{/other_user}", "gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}", "starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daspartho/subscriptions", "organizations_url": "https://api.github.com/users/daspartho/orgs", "repos_url": "https://api.github.com/users/daspartho/repos", "events_url": "https://api.github.com/users/daspartho/events{/privacy}", "received_events_url": "https://api.github.com/users/daspartho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,665
1,665
CONTRIBUTOR
null
Add `configuration_levit.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could please check it? Thanks =)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19622/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19622/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19622", "html_url": "https://github.com/huggingface/transformers/pull/19622", "diff_url": "https://github.com/huggingface/transformers/pull/19622.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19622.patch", "merged_at": 1665760884000 }
https://api.github.com/repos/huggingface/transformers/issues/19621
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19621/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19621/comments
https://api.github.com/repos/huggingface/transformers/issues/19621/events
https://github.com/huggingface/transformers/pull/19621
1,409,488,404
PR_kwDOCUB6oc5A0wnn
19,621
[Doctest] Add `configuration_distilbert.py`
{ "login": "daspartho", "id": 59410571, "node_id": "MDQ6VXNlcjU5NDEwNTcx", "avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daspartho", "html_url": "https://github.com/daspartho", "followers_url": "https://api.github.com/users/daspartho/followers", "following_url": "https://api.github.com/users/daspartho/following{/other_user}", "gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}", "starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daspartho/subscriptions", "organizations_url": "https://api.github.com/users/daspartho/orgs", "repos_url": "https://api.github.com/users/daspartho/repos", "events_url": "https://api.github.com/users/daspartho/events{/privacy}", "received_events_url": "https://api.github.com/users/daspartho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,665
1,665
CONTRIBUTOR
null
Add `configuration_distilbert.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you please take a look at it? Thanks :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19621/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19621/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19621", "html_url": "https://github.com/huggingface/transformers/pull/19621", "diff_url": "https://github.com/huggingface/transformers/pull/19621.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19621.patch", "merged_at": 1665760950000 }
https://api.github.com/repos/huggingface/transformers/issues/19620
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19620/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19620/comments
https://api.github.com/repos/huggingface/transformers/issues/19620/events
https://github.com/huggingface/transformers/pull/19620
1,409,487,739
PR_kwDOCUB6oc5A0weg
19,620
[Doctest] Add `configuration_resnet.py`
{ "login": "daspartho", "id": 59410571, "node_id": "MDQ6VXNlcjU5NDEwNTcx", "avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daspartho", "html_url": "https://github.com/daspartho", "followers_url": "https://api.github.com/users/daspartho/followers", "following_url": "https://api.github.com/users/daspartho/following{/other_user}", "gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}", "starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daspartho/subscriptions", "organizations_url": "https://api.github.com/users/daspartho/orgs", "repos_url": "https://api.github.com/users/daspartho/repos", "events_url": "https://api.github.com/users/daspartho/events{/privacy}", "received_events_url": "https://api.github.com/users/daspartho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Could you run `make style` to see what's wrong 🙏 ?" ]
1,665
1,665
1,665
CONTRIBUTOR
null
Add `configuration_resnet.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you take a look at it? Thanks =)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19620/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19620/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19620", "html_url": "https://github.com/huggingface/transformers/pull/19620", "diff_url": "https://github.com/huggingface/transformers/pull/19620.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19620.patch", "merged_at": 1665763817000 }
https://api.github.com/repos/huggingface/transformers/issues/19619
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19619/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19619/comments
https://api.github.com/repos/huggingface/transformers/issues/19619/events
https://github.com/huggingface/transformers/pull/19619
1,409,324,384
PR_kwDOCUB6oc5A0NTo
19,619
[Doctest] Add configuration_big_bird.py
{ "login": "lappemic", "id": 61876623, "node_id": "MDQ6VXNlcjYxODc2NjIz", "avatar_url": "https://avatars.githubusercontent.com/u/61876623?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lappemic", "html_url": "https://github.com/lappemic", "followers_url": "https://api.github.com/users/lappemic/followers", "following_url": "https://api.github.com/users/lappemic/following{/other_user}", "gists_url": "https://api.github.com/users/lappemic/gists{/gist_id}", "starred_url": "https://api.github.com/users/lappemic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lappemic/subscriptions", "organizations_url": "https://api.github.com/users/lappemic/orgs", "repos_url": "https://api.github.com/users/lappemic/repos", "events_url": "https://api.github.com/users/lappemic/events{/privacy}", "received_events_url": "https://api.github.com/users/lappemic/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is already being fixed in https://github.com/huggingface/transformers/pull/19606", "Hey @Xabilahu ok, i did not see this, sorry. I though do not understand fully, your pull request regards `configuration_bigbird_pegasus.py` and mine `configuration_big_bird.py`. How is it, that these are belong to the same?", "I address both models in my PR. ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19619). All of your documentation changes will be reflected on that endpoint.", "just saw it. Sorry, did not pay enough attention!\r\n\r\n", "Hi @lappemic \r\n\r\nStill thank you. If you want to contribute, you can check [documentation_tests.txt](https://github.com/huggingface/transformers/blob/main/utils/documentation_tests.txt) on the main branch and see which ones are still missing :-)" ]
1,665
1,666
1,666
NONE
null
1. Change the import order of the model and configuration classes 2. Add `configuration_big_bird.py` to `utils/documentation_tests.txt` for doctest. Documentation edit according to #19487 @sgugger could you have a look on this?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19619/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19619/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19619", "html_url": "https://github.com/huggingface/transformers/pull/19619", "diff_url": "https://github.com/huggingface/transformers/pull/19619.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19619.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19618
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19618/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19618/comments
https://api.github.com/repos/huggingface/transformers/issues/19618/events
https://github.com/huggingface/transformers/pull/19618
1,409,314,989
PR_kwDOCUB6oc5A0LQY
19,618
Type hints MCTCT
{ "login": "rchan26", "id": 44200705, "node_id": "MDQ6VXNlcjQ0MjAwNzA1", "avatar_url": "https://avatars.githubusercontent.com/u/44200705?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rchan26", "html_url": "https://github.com/rchan26", "followers_url": "https://api.github.com/users/rchan26/followers", "following_url": "https://api.github.com/users/rchan26/following{/other_user}", "gists_url": "https://api.github.com/users/rchan26/gists{/gist_id}", "starred_url": "https://api.github.com/users/rchan26/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rchan26/subscriptions", "organizations_url": "https://api.github.com/users/rchan26/orgs", "repos_url": "https://api.github.com/users/rchan26/repos", "events_url": "https://api.github.com/users/rchan26/events{/privacy}", "received_events_url": "https://api.github.com/users/rchan26/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Ah, careful! `attention_mask` and `head_mask` **were** optional in the main model methods, but not in the encoder. The encoder mostly doesn't have `Optional` arguments, but the other classes do. The changes to the main model methods is why you're getting that error now!\r\n\r\nThis is much easier to see if you look at the 'files changed' interface on GitHub - we want this PR to only add type annotations, but not change any default arguments.", "I see - my apologies! Will make some changes now to fix this :) " ]
1,665
1,666
1,666
CONTRIBUTOR
null
Hi @Rocketknight1: this PR looks to add type hints to MCTCT models and addresses #16059. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19618/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19618/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19618", "html_url": "https://github.com/huggingface/transformers/pull/19618", "diff_url": "https://github.com/huggingface/transformers/pull/19618.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19618.patch", "merged_at": 1666012521000 }
https://api.github.com/repos/huggingface/transformers/issues/19617
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19617/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19617/comments
https://api.github.com/repos/huggingface/transformers/issues/19617/events
https://github.com/huggingface/transformers/pull/19617
1,409,188,495
PR_kwDOCUB6oc5Azv4Z
19,617
[Doctest] Add configuration_longformer.py
{ "login": "AShreyam", "id": 115744267, "node_id": "U_kgDOBuYeCw", "avatar_url": "https://avatars.githubusercontent.com/u/115744267?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AShreyam", "html_url": "https://github.com/AShreyam", "followers_url": "https://api.github.com/users/AShreyam/followers", "following_url": "https://api.github.com/users/AShreyam/following{/other_user}", "gists_url": "https://api.github.com/users/AShreyam/gists{/gist_id}", "starred_url": "https://api.github.com/users/AShreyam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AShreyam/subscriptions", "organizations_url": "https://api.github.com/users/AShreyam/orgs", "repos_url": "https://api.github.com/users/AShreyam/repos", "events_url": "https://api.github.com/users/AShreyam/events{/privacy}", "received_events_url": "https://api.github.com/users/AShreyam/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "Hi @AShreyam\r\n\r\nThis PR is not ready to merge. It contains other changes that is not for this config doctest sprint.\r\n", "Hi @AShreyam\r\n\r\nThis PR is not ready to merge. It contains other changes that is not for this config doctest sprint.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,665
1,669
1,669
NONE
null
# What does this PR do? Fixes # 19487 Add configuration_longformer.py to utils/documentation_tests.txt for doctest. Based on issue https://github.com/huggingface/transformers/issues/19487 @sgugger could you take a look at it? Thanks :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19617/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19617/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19617", "html_url": "https://github.com/huggingface/transformers/pull/19617", "diff_url": "https://github.com/huggingface/transformers/pull/19617.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19617.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19616
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19616/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19616/comments
https://api.github.com/repos/huggingface/transformers/issues/19616/events
https://github.com/huggingface/transformers/pull/19616
1,409,184,227
PR_kwDOCUB6oc5Azu9C
19,616
Cast masks to np.uint8 before converting to PIL.Image.Image
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @Narsil, would it be possible to update the inference widgets to include this PR?\r\n\r\nWe were notified on Twitter that the inference widgets were broken: https://twitter.com/levelsio/status/1580573108431646720?t=d9BlnF9Q2nvRaQFNei6KLw&s=19 " ]
1,665
1,665
1,665
COLLABORATOR
null
# What does this PR do? In the recent update to the image segmentation pipeline, the numpy arrays converted to `PIL.Image.Image` with mode `"L"` weren't converted to type `np.uint8`, resulting in corrupted masks. Running the following: ``` import requests from PIL import Image from transformers import pipeline url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) pipe = pipeline("image-segmentation", model="facebook/detr-resnet-50-panoptic") results = pipe(image) ``` **Output masks on `main`:** ![main_0_cat](https://user-images.githubusercontent.com/22614925/195832894-95ef005f-3e0a-4b09-9c10-88539a732d95.png) ![main_1_couch](https://user-images.githubusercontent.com/22614925/195832898-bc40d2fc-a33e-4a26-a97e-380b0a6f37b7.png) ![main_2_remote](https://user-images.githubusercontent.com/22614925/195832899-96cdae09-864f-420b-a902-2da2482d46e4.png) ![main_3_blanket](https://user-images.githubusercontent.com/22614925/195832901-9fa9f65b-089a-4c00-b216-7eabbbcd76a8.png) **Output masks on this branch**: ![fix_0_cat](https://user-images.githubusercontent.com/22614925/195832923-3835a988-e0b0-4f3e-a282-011daa95f4f0.png) ![fix_1_couch](https://user-images.githubusercontent.com/22614925/195832926-b7963092-4648-456d-b863-1820ea619f09.png) ![fix_2_remote](https://user-images.githubusercontent.com/22614925/195832929-f5cc2ca9-cf51-4dca-9954-5bf1500e156e.png) ![fix_3_blanket](https://user-images.githubusercontent.com/22614925/195832932-07ecf096-1c1e-4725-97a6-cf239a83e925.png) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. 
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19616/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19616/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19616", "html_url": "https://github.com/huggingface/transformers/pull/19616", "diff_url": "https://github.com/huggingface/transformers/pull/19616.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19616.patch", "merged_at": 1665754246000 }
https://api.github.com/repos/huggingface/transformers/issues/19615
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19615/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19615/comments
https://api.github.com/repos/huggingface/transformers/issues/19615/events
https://github.com/huggingface/transformers/pull/19615
1,409,133,811
PR_kwDOCUB6oc5Azj_H
19,615
Adding -> Configuration_flava.py
{ "login": "AShreyam", "id": 115744267, "node_id": "U_kgDOBuYeCw", "avatar_url": "https://avatars.githubusercontent.com/u/115744267?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AShreyam", "html_url": "https://github.com/AShreyam", "followers_url": "https://api.github.com/users/AShreyam/followers", "following_url": "https://api.github.com/users/AShreyam/following{/other_user}", "gists_url": "https://api.github.com/users/AShreyam/gists{/gist_id}", "starred_url": "https://api.github.com/users/AShreyam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AShreyam/subscriptions", "organizations_url": "https://api.github.com/users/AShreyam/orgs", "repos_url": "https://api.github.com/users/AShreyam/repos", "events_url": "https://api.github.com/users/AShreyam/events{/privacy}", "received_events_url": "https://api.github.com/users/AShreyam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @AShreyam \r\n\r\nThis model config is done in #19724. Sorry if I forgot to follow this PR earlier. Going to close the PR though.\r\nThank you however." ]
1,665
1,666
1,666
NONE
null
# What does this PR do? Add configuration_flava.py to utils/documentation_tests.txt for doctest. Based on issue https://github.com/huggingface/transformers/issues/19487 @sgugger could you please take a look at it? Thanks =)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19615/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19615/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19615", "html_url": "https://github.com/huggingface/transformers/pull/19615", "diff_url": "https://github.com/huggingface/transformers/pull/19615.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19615.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19614
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19614/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19614/comments
https://api.github.com/repos/huggingface/transformers/issues/19614/events
https://github.com/huggingface/transformers/pull/19614
1,409,079,790
PR_kwDOCUB6oc5AzYY2
19,614
Add table transformer [v2]
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger I've removed the non-Table Transformer related stuff to #19644 ;)", "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger PR is ready, feel free to approve :)" ]
1,665
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? This PR adds [Table Transformer](https://github.com/microsoft/table-transformer) by Microsoft, as a separate model, rather than tweaking the existing DETR implementation.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19614/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19614/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19614", "html_url": "https://github.com/huggingface/transformers/pull/19614", "diff_url": "https://github.com/huggingface/transformers/pull/19614.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19614.patch", "merged_at": 1666099210000 }
https://api.github.com/repos/huggingface/transformers/issues/19613
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19613/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19613/comments
https://api.github.com/repos/huggingface/transformers/issues/19613/events
https://github.com/huggingface/transformers/pull/19613
1,409,069,999
PR_kwDOCUB6oc5AzWSr
19,613
Add configuration_flava.py
{ "login": "AShreyam", "id": 115744267, "node_id": "U_kgDOBuYeCw", "avatar_url": "https://avatars.githubusercontent.com/u/115744267?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AShreyam", "html_url": "https://github.com/AShreyam", "followers_url": "https://api.github.com/users/AShreyam/followers", "following_url": "https://api.github.com/users/AShreyam/following{/other_user}", "gists_url": "https://api.github.com/users/AShreyam/gists{/gist_id}", "starred_url": "https://api.github.com/users/AShreyam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AShreyam/subscriptions", "organizations_url": "https://api.github.com/users/AShreyam/orgs", "repos_url": "https://api.github.com/users/AShreyam/repos", "events_url": "https://api.github.com/users/AShreyam/events{/privacy}", "received_events_url": "https://api.github.com/users/AShreyam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,665
1,665
1,665
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19613/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19613/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19613", "html_url": "https://github.com/huggingface/transformers/pull/19613", "diff_url": "https://github.com/huggingface/transformers/pull/19613.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19613.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19612
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19612/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19612/comments
https://api.github.com/repos/huggingface/transformers/issues/19612/events
https://github.com/huggingface/transformers/pull/19612
1,408,969,358
PR_kwDOCUB6oc5AzA5X
19,612
fix: small error
{ "login": "0xflotus", "id": 26602940, "node_id": "MDQ6VXNlcjI2NjAyOTQw", "avatar_url": "https://avatars.githubusercontent.com/u/26602940?v=4", "gravatar_id": "", "url": "https://api.github.com/users/0xflotus", "html_url": "https://github.com/0xflotus", "followers_url": "https://api.github.com/users/0xflotus/followers", "following_url": "https://api.github.com/users/0xflotus/following{/other_user}", "gists_url": "https://api.github.com/users/0xflotus/gists{/gist_id}", "starred_url": "https://api.github.com/users/0xflotus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/0xflotus/subscriptions", "organizations_url": "https://api.github.com/users/0xflotus/orgs", "repos_url": "https://api.github.com/users/0xflotus/repos", "events_url": "https://api.github.com/users/0xflotus/events{/privacy}", "received_events_url": "https://api.github.com/users/0xflotus/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,665
1,665
CONTRIBUTOR
null
I only fixed a small typo.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19612/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19612/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19612", "html_url": "https://github.com/huggingface/transformers/pull/19612", "diff_url": "https://github.com/huggingface/transformers/pull/19612.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19612.patch", "merged_at": 1665760233000 }
https://api.github.com/repos/huggingface/transformers/issues/19611
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19611/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19611/comments
https://api.github.com/repos/huggingface/transformers/issues/19611/events
https://github.com/huggingface/transformers/pull/19611
1,408,951,542
PR_kwDOCUB6oc5Ay9G9
19,611
[Doctest] Add `configuration_ernie.py`
{ "login": "ztjhz", "id": 59118459, "node_id": "MDQ6VXNlcjU5MTE4NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/59118459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ztjhz", "html_url": "https://github.com/ztjhz", "followers_url": "https://api.github.com/users/ztjhz/followers", "following_url": "https://api.github.com/users/ztjhz/following{/other_user}", "gists_url": "https://api.github.com/users/ztjhz/gists{/gist_id}", "starred_url": "https://api.github.com/users/ztjhz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ztjhz/subscriptions", "organizations_url": "https://api.github.com/users/ztjhz/orgs", "repos_url": "https://api.github.com/users/ztjhz/repos", "events_url": "https://api.github.com/users/ztjhz/events{/privacy}", "received_events_url": "https://api.github.com/users/ztjhz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,665
1,665
CONTRIBUTOR
null
Add configuration_ernie.py to utils/documentation_tests.txt for doctest. Based on issue #19487 @sgugger @ydshieh Thanks :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19611/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19611/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19611", "html_url": "https://github.com/huggingface/transformers/pull/19611", "diff_url": "https://github.com/huggingface/transformers/pull/19611.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19611.patch", "merged_at": 1665759471000 }
https://api.github.com/repos/huggingface/transformers/issues/19610
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19610/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19610/comments
https://api.github.com/repos/huggingface/transformers/issues/19610/events
https://github.com/huggingface/transformers/pull/19610
1,408,941,316
PR_kwDOCUB6oc5Ay67D
19,610
[Doctest] Add `configuration_xlm_roberta_xl.py`
{ "login": "ztjhz", "id": 59118459, "node_id": "MDQ6VXNlcjU5MTE4NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/59118459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ztjhz", "html_url": "https://github.com/ztjhz", "followers_url": "https://api.github.com/users/ztjhz/followers", "following_url": "https://api.github.com/users/ztjhz/following{/other_user}", "gists_url": "https://api.github.com/users/ztjhz/gists{/gist_id}", "starred_url": "https://api.github.com/users/ztjhz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ztjhz/subscriptions", "organizations_url": "https://api.github.com/users/ztjhz/orgs", "repos_url": "https://api.github.com/users/ztjhz/repos", "events_url": "https://api.github.com/users/ztjhz/events{/privacy}", "received_events_url": "https://api.github.com/users/ztjhz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,665
1,665
CONTRIBUTOR
null
Add configuration_xlm_roberta_xl.py to utils/documentation_tests.txt for doctest. Based on issue #19487 @sgugger @ydshieh could you please check it? Thanks :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19610/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19610/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19610", "html_url": "https://github.com/huggingface/transformers/pull/19610", "diff_url": "https://github.com/huggingface/transformers/pull/19610.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19610.patch", "merged_at": 1665759850000 }
https://api.github.com/repos/huggingface/transformers/issues/19609
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19609/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19609/comments
https://api.github.com/repos/huggingface/transformers/issues/19609/events
https://github.com/huggingface/transformers/pull/19609
1,408,940,118
PR_kwDOCUB6oc5Ay6rM
19,609
[Doctest] Add `configuration_xlm_roberta.py`
{ "login": "ztjhz", "id": 59118459, "node_id": "MDQ6VXNlcjU5MTE4NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/59118459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ztjhz", "html_url": "https://github.com/ztjhz", "followers_url": "https://api.github.com/users/ztjhz/followers", "following_url": "https://api.github.com/users/ztjhz/following{/other_user}", "gists_url": "https://api.github.com/users/ztjhz/gists{/gist_id}", "starred_url": "https://api.github.com/users/ztjhz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ztjhz/subscriptions", "organizations_url": "https://api.github.com/users/ztjhz/orgs", "repos_url": "https://api.github.com/users/ztjhz/repos", "events_url": "https://api.github.com/users/ztjhz/events{/privacy}", "received_events_url": "https://api.github.com/users/ztjhz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,665
1,665
1,665
CONTRIBUTOR
null
Add configuration_xlm_roberta.py to utils/documentation_tests.txt for doctest. Based on issue #19487 @sgugger @ydshieh could you please check it? Thanks :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19609/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19609/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19609", "html_url": "https://github.com/huggingface/transformers/pull/19609", "diff_url": "https://github.com/huggingface/transformers/pull/19609.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19609.patch", "merged_at": 1665758919000 }
https://api.github.com/repos/huggingface/transformers/issues/19608
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19608/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19608/comments
https://api.github.com/repos/huggingface/transformers/issues/19608/events
https://github.com/huggingface/transformers/pull/19608
1,408,922,652
PR_kwDOCUB6oc5Ay2-Q
19,608
Fix whisper doc
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks, I missed that 😅 ", "> Thanks, I missed that 😅\r\n\r\nMe too, I am bad guy!" ]
1,665
1,665
1,665
COLLABORATOR
null
# What does this PR do? Fixes whisper doc-test. I used `add_code_sample_docstrings` but didn't properly check that it does not support whisper model (which has to be given `input_ids`)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19608/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19608/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19608", "html_url": "https://github.com/huggingface/transformers/pull/19608", "diff_url": "https://github.com/huggingface/transformers/pull/19608.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19608.patch", "merged_at": 1665763952000 }
https://api.github.com/repos/huggingface/transformers/issues/19607
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19607/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19607/comments
https://api.github.com/repos/huggingface/transformers/issues/19607/events
https://github.com/huggingface/transformers/pull/19607
1,408,912,272
PR_kwDOCUB6oc5Ay0yd
19,607
[Time Series Transformer] Add doc tests
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "LGTM! thanks!", "@ydshieh always better to learn more than earn more", "@NielsRogge for generation you can also use the other test batch if you like" ]
1,665
1,665
1,665
CONTRIBUTOR
null
# What does this PR do? This PR improves the code snippets in the docs of Time Series Transformer and makes sure they are tested.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19607/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/19607/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19607", "html_url": "https://github.com/huggingface/transformers/pull/19607", "diff_url": "https://github.com/huggingface/transformers/pull/19607.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19607.patch", "merged_at": 1665755823000 }
https://api.github.com/repos/huggingface/transformers/issues/19606
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19606/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19606/comments
https://api.github.com/repos/huggingface/transformers/issues/19606/events
https://github.com/huggingface/transformers/pull/19606
1,408,903,813
PR_kwDOCUB6oc5Ayy_9
19,606
[Doctest] Add `configuration_bigbird_pegasus.py` and `configuration_big_bird.py`
{ "login": "Xabilahu", "id": 13916396, "node_id": "MDQ6VXNlcjEzOTE2Mzk2", "avatar_url": "https://avatars.githubusercontent.com/u/13916396?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Xabilahu", "html_url": "https://github.com/Xabilahu", "followers_url": "https://api.github.com/users/Xabilahu/followers", "following_url": "https://api.github.com/users/Xabilahu/following{/other_user}", "gists_url": "https://api.github.com/users/Xabilahu/gists{/gist_id}", "starred_url": "https://api.github.com/users/Xabilahu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Xabilahu/subscriptions", "organizations_url": "https://api.github.com/users/Xabilahu/orgs", "repos_url": "https://api.github.com/users/Xabilahu/repos", "events_url": "https://api.github.com/users/Xabilahu/events{/privacy}", "received_events_url": "https://api.github.com/users/Xabilahu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the amazing work you guys do <3" ]
1,665
1,665
1,665
CONTRIBUTOR
null
Add `configuration_bigbird_pegasus.py` and `configuration_big_bird.py` to `utils/documentation_tests.txt` for doctest. Based on issue https://github.com/huggingface/transformers/issues/19487 @ydshieh could you please check it? Thank you :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19606/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19606/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19606", "html_url": "https://github.com/huggingface/transformers/pull/19606", "diff_url": "https://github.com/huggingface/transformers/pull/19606.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19606.patch", "merged_at": 1665753456000 }
https://api.github.com/repos/huggingface/transformers/issues/19605
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19605/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19605/comments
https://api.github.com/repos/huggingface/transformers/issues/19605/events
https://github.com/huggingface/transformers/pull/19605
1,408,843,827
PR_kwDOCUB6oc5AymWM
19,605
[Doctest] Add `configuration_visual_bert.py`
{ "login": "ztjhz", "id": 59118459, "node_id": "MDQ6VXNlcjU5MTE4NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/59118459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ztjhz", "html_url": "https://github.com/ztjhz", "followers_url": "https://api.github.com/users/ztjhz/followers", "following_url": "https://api.github.com/users/ztjhz/following{/other_user}", "gists_url": "https://api.github.com/users/ztjhz/gists{/gist_id}", "starred_url": "https://api.github.com/users/ztjhz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ztjhz/subscriptions", "organizations_url": "https://api.github.com/users/ztjhz/orgs", "repos_url": "https://api.github.com/users/ztjhz/repos", "events_url": "https://api.github.com/users/ztjhz/events{/privacy}", "received_events_url": "https://api.github.com/users/ztjhz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,665
1,665
CONTRIBUTOR
null
Add configuration_visual_bert.py to utils/documentation_tests.txt for doctest. Based on issue #19487 @sgugger @ydshieh Thanks :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19605/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19605/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19605", "html_url": "https://github.com/huggingface/transformers/pull/19605", "diff_url": "https://github.com/huggingface/transformers/pull/19605.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19605.patch", "merged_at": 1665758737000 }
https://api.github.com/repos/huggingface/transformers/issues/19604
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19604/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19604/comments
https://api.github.com/repos/huggingface/transformers/issues/19604/events
https://github.com/huggingface/transformers/issues/19604
1,408,723,677
I_kwDOCUB6oc5T92rd
19,604
ONNX conversion from VisionEncoderDecoderModel?
{ "login": "kangsan0420", "id": 79678563, "node_id": "MDQ6VXNlcjc5Njc4NTYz", "avatar_url": "https://avatars.githubusercontent.com/u/79678563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kangsan0420", "html_url": "https://github.com/kangsan0420", "followers_url": "https://api.github.com/users/kangsan0420/followers", "following_url": "https://api.github.com/users/kangsan0420/following{/other_user}", "gists_url": "https://api.github.com/users/kangsan0420/gists{/gist_id}", "starred_url": "https://api.github.com/users/kangsan0420/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kangsan0420/subscriptions", "organizations_url": "https://api.github.com/users/kangsan0420/orgs", "repos_url": "https://api.github.com/users/kangsan0420/repos", "events_url": "https://api.github.com/users/kangsan0420/events{/privacy}", "received_events_url": "https://api.github.com/users/kangsan0420/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\n1) did you use Transformers from the main branch?\r\n2) you should probably do the following\r\n\r\n```\r\nmodel_ckpt = \"path_to_your_checkpoint\"\r\n!python -m transformers.onnx --model={model_ckpt} --feature=vision2seq-lm onnx/ --atol 1e-3\r\n```", "@NielsRogge ` --feature=vision2seq-lm` worked for me. Thank you!", "@NielsRogge . I would like to get the inference script after onnx conversion of VisionEncoderDecoder model. Any suggestion please?\r\n", "@kangsan0420 looking at \r\n```\r\nmodel_ckpt = \"path_to_your_checkpoint\"\r\n!python -m transformers.onnx --model={model_ckpt} --feature=vision2seq-lm onnx/ --atol 1e-3\r\n```\r\nif downloading from the pertained trocr model, where is the path to the checkpoint?", "@NielsRogge \r\nI have fine tuned the trocr small printed model on a custom single line text dataset. After training I converted the model to onnx format using the following [PR](https://github.com/huggingface/transformers/pull/19254#issue-1392234601). Converting the model to onnx results in drastic decrease in accuracy. \r\n\r\nAlso, if I use the pretrained trocr small printed model and convert the same to onnx using exactly same procedure, there is a very small change in accuracy.\r\n\r\nCan someone explain why is there so much change in accuracy.\r\n\r\nPlease reply if you need additional information.\r\nThanks.", "hello guys, exist an onnx model for TrOcr \r\nsomeone answer me please\r\nand thank you", "> @NielsRogge . I would like to get the inference script after onnx conversion of VisionEncoderDecoder model. Any suggestion please?\r\n\r\n@NielsRogge thanks I have been able to convert my Donut model to ONNX format. Any idea how I can proceed to perform inference for the onnx model ?", "@Mir-Umar @Kamilya2020 Please open issues in the Optimum repository: https://github.com/huggingface/optimum" ]
1,665
1,688
1,665
NONE
null
# Description I'd like to convert a VisionEncoderDecoder model to ONNX using the feature that has been recently merged #19254 However, it reproduces the errors below. What am I missing? # Environment ```python import transformers import torch import sys !echo "OS: $(cat /etc/issue)" !echo "Arch.: $(arch)" print('python:', sys.version) print('transformers:', transformers.__version__) print('torch', torch.__version__) ``` OS: Ubuntu 20.04.4 LTS \n \l Arch.: x86_64 python: 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] transformers: 4.23.1 torch 1.12.1+cu102 # Reproduce ## Model Loading ```python model = torch.load('221002_203253.pt') # TrOCR model that I have trained. model.save_pretrained('trocr') # To show they're same type. model2 = transformers.VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten") type(model), type(model2) ``` Output: <pre>Some weights of VisionEncoderDecoderModel were not initialized from the model checkpoint at microsoft/trocr-base-handwritten and are newly initialized: ['encoder.pooler.dense.bias', 'encoder.pooler.dense.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
</pre> <pre>(transformers.models.vision_encoder_decoder.modeling_vision_encoder_decoder.VisionEncoderDecoderModel, transformers.models.vision_encoder_decoder.modeling_vision_encoder_decoder.VisionEncoderDecoderModel)</pre> ## Check Support ```python try: transformers.onnx.FeaturesManager.check_supported_model_or_raise(model) except Exception as e: print(type(e), e) print('='*100) try: transformers.onnx.FeaturesManager.check_supported_model_or_raise(model2) except Exception as e: print(type(e), e) ``` Output: <pre>&lt;class 'ValueError'&gt; vision-encoder-decoder doesn't support feature default. Supported values are: {'vision2seq-lm': functools.partial(&lt;bound method OnnxConfig.from_model_config of &lt;class 'transformers.models.vision_encoder_decoder.configuration_vision_encoder_decoder.VisionEncoderDecoderOnnxConfig'&gt;&gt;, task='vision2seq-lm')} ==================================================================================================== &lt;class 'ValueError'&gt; vision-encoder-decoder doesn't support feature default. Supported values are: {'vision2seq-lm': functools.partial(&lt;bound method OnnxConfig.from_model_config of &lt;class 'transformers.models.vision_encoder_decoder.configuration_vision_encoder_decoder.VisionEncoderDecoderOnnxConfig'&gt;&gt;, task='vision2seq-lm')} </pre> ## Conversion to ONNX ```shell python3 -m transformers.onnx -m trocr onnx/ ``` Output: <pre>Local PyTorch model found. Framework not requested. Using torch to export to ONNX. 
Traceback (most recent call last): File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.8/dist-packages/transformers/onnx/__main__.py", line 180, in &lt;module&gt; main() File "/usr/local/lib/python3.8/dist-packages/transformers/onnx/__main__.py", line 72, in main model = FeaturesManager.get_model_from_feature( File "/usr/local/lib/python3.8/dist-packages/transformers/onnx/features.py", line 666, in get_model_from_feature model = model_class.from_pretrained(model, cache_dir=cache_dir) File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/auto_factory.py", line 466, in from_pretrained raise ValueError( ValueError: Unrecognized configuration class &lt;class 'transformers.models.vision_encoder_decoder.configuration_vision_encoder_decoder.VisionEncoderDecoderConfig'&gt; for this kind of AutoModel: AutoModel. 
Model type should be one of AlbertConfig, BartConfig, BeitConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, CanineConfig, CLIPConfig, CodeGenConfig, ConditionalDetrConfig, ConvBertConfig, ConvNextConfig, CTRLConfig, CvtConfig, Data2VecAudioConfig, Data2VecTextConfig, Data2VecVisionConfig, DebertaConfig, DebertaV2Config, DecisionTransformerConfig, DeformableDetrConfig, DeiTConfig, DetrConfig, DistilBertConfig, DonutSwinConfig, DPRConfig, DPTConfig, ElectraConfig, ErnieConfig, EsmConfig, FlaubertConfig, FlavaConfig, FNetConfig, FSMTConfig, FunnelConfig, GLPNConfig, GPT2Config, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, GroupViTConfig, HubertConfig, IBertConfig, ImageGPTConfig, LayoutLMConfig, LayoutLMv2Config, LayoutLMv3Config, LEDConfig, LevitConfig, LongformerConfig, LongT5Config, LukeConfig, LxmertConfig, M2M100Config, MarianConfig, MarkupLMConfig, MaskFormerConfig, MBartConfig, MCTCTConfig, MegatronBertConfig, MobileBertConfig, MobileViTConfig, MPNetConfig, MT5Config, MvpConfig, NezhaConfig, NystromformerConfig, OpenAIGPTConfig, OPTConfig, OwlViTConfig, PegasusConfig, PegasusXConfig, PerceiverConfig, PLBartConfig, PoolFormerConfig, ProphetNetConfig, QDQBertConfig, ReformerConfig, RegNetConfig, RemBertConfig, ResNetConfig, RetriBertConfig, RobertaConfig, RoFormerConfig, SegformerConfig, SEWConfig, SEWDConfig, Speech2TextConfig, SplinterConfig, SqueezeBertConfig, SwinConfig, Swinv2Config, T5Config, TapasConfig, TimeSeriesTransformerConfig, TrajectoryTransformerConfig, TransfoXLConfig, UniSpeechConfig, UniSpeechSatConfig, VanConfig, VideoMAEConfig, ViltConfig, VisionTextDualEncoderConfig, VisualBertConfig, ViTConfig, ViTMAEConfig, ViTMSNConfig, Wav2Vec2Config, Wav2Vec2ConformerConfig, WavLMConfig, WhisperConfig, XCLIPConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, YolosConfig, YosoConfig. 
</pre>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19604/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19604/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19603
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19603/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19603/comments
https://api.github.com/repos/huggingface/transformers/issues/19603/events
https://github.com/huggingface/transformers/issues/19603
1,408,694,266
I_kwDOCUB6oc5T9vf6
19,603
Flax `.from_pretrained` fails to use `subfolder`
{ "login": "keturn", "id": 83819, "node_id": "MDQ6VXNlcjgzODE5", "avatar_url": "https://avatars.githubusercontent.com/u/83819?v=4", "gravatar_id": "", "url": "https://api.github.com/users/keturn", "html_url": "https://github.com/keturn", "followers_url": "https://api.github.com/users/keturn/followers", "following_url": "https://api.github.com/users/keturn/following{/other_user}", "gists_url": "https://api.github.com/users/keturn/gists{/gist_id}", "starred_url": "https://api.github.com/users/keturn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keturn/subscriptions", "organizations_url": "https://api.github.com/users/keturn/orgs", "repos_url": "https://api.github.com/users/keturn/repos", "events_url": "https://api.github.com/users/keturn/events{/privacy}", "received_events_url": "https://api.github.com/users/keturn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "note that this _does_ work with the non-Flax CLIPTextModel.", "cc @sanchit-gandhi ", "Hey @keturn! The command `transformers-cli env` is failing as you don't have `datasets` installed. You can install `datasets` through:\r\n```\r\npip install datasets\r\n```\r\nor from main: https://github.com/huggingface/datasets\r\n\r\nWith regards to the Flax `.from_pretrained()` method failing with `subfolder`, the PR you've mentioned implemented the `subfolder` feature for PyTorch but not Flax! Would you like to have a go at implementing this feature in Flax? The PR can largely follow the changes made in https://github.com/huggingface/transformers/pull/18184.", "Hey @keturn, just following up here! Let me know if you'd be keen to open a PR - I can help you with with pointers and any questions you might have! Otherwise I can take look next week 🤗", "oh, I don't expect to get to this anytime soon myself. ", "Hey @keturn - have added it to my TODOs!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,665
1,674
1,674
CONTRIBUTOR
null
### System Info - transformers 4.23.1 - Ubuntu 22.04 - jax 0.3.23 - huggingface_hub 10.0.1 `!transformers-cli env`: <details> <summary>ModuleNotFoundError: No module named 'datasets'</summary> ``` Traceback (most recent call last): File "🏠/venv.lab/bin/transformers-cli", line 5, in <module> from transformers.commands.transformers_cli import main File "🏠/venv.lab/lib/python3.10/site-packages/transformers/commands/transformers_cli.py", line 24, in <module> from .pt_to_tf import PTtoTFCommand File "🏠/venv.lab/lib/python3.10/site-packages/transformers/commands/pt_to_tf.py", line 21, in <module> from datasets import load_dataset ModuleNotFoundError: No module named 'datasets' ``` </details> ### Who can help? @patil-suraj for Flax and also CLIP ### Information - My own modified scripts ### Reproduction ```py from transformers import FlaxCLIPTextModel text_encoder = FlaxCLIPTextModel.from_pretrained( "CompVis/stable-diffusion-v1-4", revision="flax", subfolder="text_encoder", ) ``` fails with > HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/flax/config.json > OSError: CompVis/stable-diffusion-v1-4 does not appear to have a file named config.json. Checkout 'https://huggingface.co/CompVis/stable-diffusion-v1-4/flax' for available files. ### Expected behavior a FlaxCLIPTextModel is created from https://huggingface.co/CompVis/stable-diffusion-v1-4/blob/flax/text_encoder/config.json related: #18184
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19603/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19603/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19602
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19602/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19602/comments
https://api.github.com/repos/huggingface/transformers/issues/19602/events
https://github.com/huggingface/transformers/issues/19602
1,408,643,702
I_kwDOCUB6oc5T9jJ2
19,602
Documentation and implementation are inconsistent for forced_decoder_ids option in GenerationMixin.generate
{ "login": "koreyou", "id": 5196226, "node_id": "MDQ6VXNlcjUxOTYyMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/5196226?v=4", "gravatar_id": "", "url": "https://api.github.com/users/koreyou", "html_url": "https://github.com/koreyou", "followers_url": "https://api.github.com/users/koreyou/followers", "following_url": "https://api.github.com/users/koreyou/following{/other_user}", "gists_url": "https://api.github.com/users/koreyou/gists{/gist_id}", "starred_url": "https://api.github.com/users/koreyou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/koreyou/subscriptions", "organizations_url": "https://api.github.com/users/koreyou/orgs", "repos_url": "https://api.github.com/users/koreyou/repos", "events_url": "https://api.github.com/users/koreyou/events{/privacy}", "received_events_url": "https://api.github.com/users/koreyou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @koreyou 👋 The documentation is indeed incorrect -- It accepts a list of pairs integers (`List[List[int]]`) that can be convertible to a `Dict[int, int]`, containing the index and the token to be forced, correspondingly (e.g. [this list of lists](https://huggingface.co/openai/whisper-large/blob/main/config.json#L23)). \r\n\r\nWould you like to open a PR to fix the documentation? 🤗 \r\n\r\n(cc @ArthurZucker @patrickvonplaten)" ]
1,665
1,666
1,666
CONTRIBUTOR
null
### System Info - `transformers` version: 4.23.0 - Platform: macOS-12.6-arm64-arm-64bit - Python version: 3.9.13 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.11.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? Text generation: @patrickvonplaten, @Narsil, @gante Documentation: @sgugger, @stevhliu ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('t5-small') model = AutoModelForSeq2SeqLM.from_pretrained('t5-small') input = 'This is a dummy input.' decoder_start_text = 'But is should still work, because' input_ids = tokenizer.encode(input, return_tensors='pt') decoder_start_ids = tokenizer.encode(decoder_start_text, add_special_tokens=False) # This raises an error as attached below outputs = model.generate( input_ids, forced_decoder_ids=decoder_start_ids ) # This is against the documentation but works outputs = model.generate( input_ids, forced_decoder_ids={i: id for i, id in enumerate(decoder_start_ids)} ) ``` ### Expected behavior According to [the documentation](https://github.com/huggingface/transformers/blob/3d320c78c32334f66d72d57ff6322d9e3a7dc00b/src/transformers/generation_utils.py#L1124-L1125), `GeneratorMixin.generate` accepts a list of int for `forced_decoder_ids `. However, above reproduction raises the following error: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Input In [10], in <cell line: 1>() ----> 1 outputs = model.generate( 2 input_ids, 3 forced_decoder_ids=decoder_start_ids 4 ) File ~/.pyenv/versions/3.9.13/envs/dummy_proj/lib/python3.9/site-packages/torch/autograd/grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs) 24 @functools.wraps(func) 25 def decorate_context(*args, **kwargs): 26 with self.clone(): ---> 27 return func(*args, **kwargs) File ~/.pyenv/versions/3.9.13/envs/dummy_proj/lib/python3.9/site-packages/transformers/generation_utils.py:1353, in GenerationMixin.generate(self, inputs, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, typical_p, repetition_penalty, bad_words_ids, force_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, logits_processor, renormalize_logits, stopping_criteria, constraints, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, exponential_decay_length_penalty, suppress_tokens, begin_suppress_tokens, forced_decoder_ids, **model_kwargs) 1348 raise ValueError( 1349 "Diverse beam search cannot be used in sampling mode. Make sure that `do_sample` is set to `False`." 1350 ) 1352 # 7. prepare distribution pre_processing samplers -> 1353 logits_processor = self._get_logits_processor( 1354 repetition_penalty=repetition_penalty, 1355 no_repeat_ngram_size=no_repeat_ngram_size, 1356 encoder_no_repeat_ngram_size=encoder_no_repeat_ngram_size, 1357 input_ids_seq_length=input_ids_seq_length, 1358 encoder_input_ids=inputs_tensor, 1359 bad_words_ids=bad_words_ids, 1360 min_length=min_length, 1361 max_length=max_length, 1362 eos_token_id=eos_token_id, 1363 forced_bos_token_id=forced_bos_token_id, 1364 forced_eos_token_id=forced_eos_token_id, 1365 prefix_allowed_tokens_fn=prefix_allowed_tokens_fn, 1366 num_beams=num_beams, 1367 num_beam_groups=num_beam_groups, 1368 diversity_penalty=diversity_penalty, 1369 remove_invalid_values=remove_invalid_values, 1370 exponential_decay_length_penalty=exponential_decay_length_penalty, 1371 logits_processor=logits_processor, 1372 renormalize_logits=renormalize_logits, 1373 suppress_tokens=suppress_tokens, 1374 begin_suppress_tokens=begin_suppress_tokens, 1375 forced_decoder_ids=forced_decoder_ids, 1376 ) 1378 # 8. prepare stopping criteria 1379 stopping_criteria = self._get_stopping_criteria( 1380 max_length=max_length, max_time=max_time, stopping_criteria=stopping_criteria 1381 ) File ~/.pyenv/versions/3.9.13/envs/dummy_proj/lib/python3.9/site-packages/transformers/generation_utils.py:786, in GenerationMixin._get_logits_processor(self, repetition_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, input_ids_seq_length, encoder_input_ids, bad_words_ids, min_length, max_length, eos_token_id, forced_bos_token_id, forced_eos_token_id, prefix_allowed_tokens_fn, num_beams, num_beam_groups, diversity_penalty, remove_invalid_values, exponential_decay_length_penalty, logits_processor, renormalize_logits, suppress_tokens, begin_suppress_tokens, forced_decoder_ids) 784 processors.append(SuppressTokensAtBeginLogitsProcessor(begin_suppress_tokens, begin_index)) 785 if forced_decoder_ids is not None: --> 786 processors.append(ForceTokensLogitsProcessor(forced_decoder_ids)) 787 processors = self._merge_criteria_processor_list(processors, logits_processor) 788 # `LogitNormalization` should always be the last logit processor, when present File ~/.pyenv/versions/3.9.13/envs/dummy_proj/lib/python3.9/site-packages/transformers/generation_logits_process.py:742, in ForceTokensLogitsProcessor.__init__(self, force_token_map) 741 def __init__(self, force_token_map): --> 742 self.force_token_map = dict(force_token_map) ``` It is clear that implementation is expecting `Dict[int, str] `as shown in [here](https://github.com/huggingface/transformers/blob/3d320c78c32334f66d72d57ff6322d9e3a7dc00b/src/transformers/generation_logits_process.py#L741-L742). Hence I believe that implementation and documentation are inconsistent. FYI, [other functions in `GeneratorMixin`](https://github.com/huggingface/transformers/blob/3d320c78c32334f66d72d57ff6322d9e3a7dc00b/src/transformers/generation_utils.py#L782-L783) seems to expect `List[int]` as in the documentation.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19602/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19602/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19601
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19601/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19601/comments
https://api.github.com/repos/huggingface/transformers/issues/19601/events
https://github.com/huggingface/transformers/issues/19601
1,408,618,343
I_kwDOCUB6oc5T9c9n
19,601
Image Format (BGR/RGB) bug for lxmert example
{ "login": "lvyiwei1", "id": 32001651, "node_id": "MDQ6VXNlcjMyMDAxNjUx", "avatar_url": "https://avatars.githubusercontent.com/u/32001651?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lvyiwei1", "html_url": "https://github.com/lvyiwei1", "followers_url": "https://api.github.com/users/lvyiwei1/followers", "following_url": "https://api.github.com/users/lvyiwei1/following{/other_user}", "gists_url": "https://api.github.com/users/lvyiwei1/gists{/gist_id}", "starred_url": "https://api.github.com/users/lvyiwei1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvyiwei1/subscriptions", "organizations_url": "https://api.github.com/users/lvyiwei1/orgs", "repos_url": "https://api.github.com/users/lvyiwei1/repos", "events_url": "https://api.github.com/users/lvyiwei1/events{/privacy}", "received_events_url": "https://api.github.com/users/lvyiwei1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Note that we do not maintain the research project examples, so you will have better luck pinging the original author :-)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,665
1,669
1,669
NONE
null
### System Info Repository code (accessed 10/13/2022) in examples/research_projects/lxmert ### Who can help? @LysandreJik ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run demo.ipynb in examples/research_project/lxmert (or instead use [this colab notebook](https://colab.research.google.com/drive/1N0z-mplcu-20TZPr7-TNkZUCASmmP90i?usp=sharing)) but instead of using an URL, upload arbitrary jpg image and do local file (i.e. change line `frcnn_visualizer = SingleImageViz(URL,id2obj=objids, id2attr=attrids)` to `frcnn_visualizer = SingleImageViz('pic.jpg',id2obj=objids, id2attr=attrids)` where `pic.jpg` is arbitrary jpg file you have) Then the result will have flipped red and blue colors ### Expected behavior The result have flipped red and blue colors. Technically, the image inputed to frcnn is RGB instead of BGR (but frcnn uses BGR), so this is probably due to doing BGR2RGB one extra time in the image preprocessing step. ### Source This bug was discovered through [MultiViz paper](https://arxiv.org/pdf/2207.00056.pdf)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19601/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19601/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19600
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19600/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19600/comments
https://api.github.com/repos/huggingface/transformers/issues/19600/events
https://github.com/huggingface/transformers/pull/19600
1,408,513,046
PR_kwDOCUB6oc5AxhUv
19,600
fix doc test for megatron bert
{ "login": "RamitPahwa", "id": 16895131, "node_id": "MDQ6VXNlcjE2ODk1MTMx", "avatar_url": "https://avatars.githubusercontent.com/u/16895131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RamitPahwa", "html_url": "https://github.com/RamitPahwa", "followers_url": "https://api.github.com/users/RamitPahwa/followers", "following_url": "https://api.github.com/users/RamitPahwa/following{/other_user}", "gists_url": "https://api.github.com/users/RamitPahwa/gists{/gist_id}", "starred_url": "https://api.github.com/users/RamitPahwa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RamitPahwa/subscriptions", "organizations_url": "https://api.github.com/users/RamitPahwa/orgs", "repos_url": "https://api.github.com/users/RamitPahwa/repos", "events_url": "https://api.github.com/users/RamitPahwa/events{/privacy}", "received_events_url": "https://api.github.com/users/RamitPahwa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,665
1,665
CONTRIBUTOR
null
# What does this PR do? Add configuration_megatron_bert.py to utils/documentation_tests.txt for doctest. Based on issue #19487 @sgugger / @ydshieh <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19600/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19600/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19600", "html_url": "https://github.com/huggingface/transformers/pull/19600", "diff_url": "https://github.com/huggingface/transformers/pull/19600.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19600.patch", "merged_at": 1665742136000 }
https://api.github.com/repos/huggingface/transformers/issues/19599
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19599/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19599/comments
https://api.github.com/repos/huggingface/transformers/issues/19599/events
https://github.com/huggingface/transformers/issues/19599
1,408,467,412
I_kwDOCUB6oc5T84HU
19,599
Collator only gets keys from the dataset which are inputs to the model
{ "login": "julian-tonita", "id": 106684552, "node_id": "U_kgDOBlvgiA", "avatar_url": "https://avatars.githubusercontent.com/u/106684552?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julian-tonita", "html_url": "https://github.com/julian-tonita", "followers_url": "https://api.github.com/users/julian-tonita/followers", "following_url": "https://api.github.com/users/julian-tonita/following{/other_user}", "gists_url": "https://api.github.com/users/julian-tonita/gists{/gist_id}", "starred_url": "https://api.github.com/users/julian-tonita/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julian-tonita/subscriptions", "organizations_url": "https://api.github.com/users/julian-tonita/orgs", "repos_url": "https://api.github.com/users/julian-tonita/repos", "events_url": "https://api.github.com/users/julian-tonita/events{/privacy}", "received_events_url": "https://api.github.com/users/julian-tonita/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You can just the option `remove_unused_keys=False` from your training arguments in this case.", "Okay, thanks! I looked through before and didn't see anything, actually CTRL-F'd `key` but there was nothing. I believe it has actually been renamed `remove_unused_columns` which is why I didn't find it. Thanks for the help again! ", "Ah yes, sorry about the wrong name!", "No worries, thanks for the help." ]
1,665
1,665
1,665
NONE
null
### System Info - `transformers` version: 4.21.3 - Platform: macOS-12.1-arm64-arm-64bit - Python version: 3.9.10 - Huggingface_hub version: 0.4.0 - PyTorch version (GPU?): 1.10.2 (False) - Tensorflow version (GPU?): 2.7.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduce: - Use a dataset with certain keys expected by a custom collator but not the model inputs - Pass the dataset and custom collator to the huggingface trainer - Run training and the collator will not be passed the correct keys ### Expected behavior Huggingface Trainer automatically removes keys from the dataset which aren't needed by the model, but doesn't allow for the possibility that the collator might take different inputs than the model. This is unexpected as the collator is passed the data prior to the model so when stripping unused keys it should be done after the collator not prior.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19599/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19599/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19598
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19598/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19598/comments
https://api.github.com/repos/huggingface/transformers/issues/19598/events
https://github.com/huggingface/transformers/pull/19598
1,408,382,031
PR_kwDOCUB6oc5AxE5L
19,598
[Doctest] Add `configuration_sew_d.py`
{ "login": "daspartho", "id": 59410571, "node_id": "MDQ6VXNlcjU5NDEwNTcx", "avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daspartho", "html_url": "https://github.com/daspartho", "followers_url": "https://api.github.com/users/daspartho/followers", "following_url": "https://api.github.com/users/daspartho/following{/other_user}", "gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}", "starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daspartho/subscriptions", "organizations_url": "https://api.github.com/users/daspartho/orgs", "repos_url": "https://api.github.com/users/daspartho/repos", "events_url": "https://api.github.com/users/daspartho/events{/privacy}", "received_events_url": "https://api.github.com/users/daspartho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,665
1,665
CONTRIBUTOR
null
Add `configuration_sew_d.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you please check it? Thank you :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19598/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19598/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19598", "html_url": "https://github.com/huggingface/transformers/pull/19598", "diff_url": "https://github.com/huggingface/transformers/pull/19598.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19598.patch", "merged_at": 1665742052000 }
https://api.github.com/repos/huggingface/transformers/issues/19597
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19597/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19597/comments
https://api.github.com/repos/huggingface/transformers/issues/19597/events
https://github.com/huggingface/transformers/pull/19597
1,408,379,463
PR_kwDOCUB6oc5AxETp
19,597
[Doctest] Add `configuration_sew.py`
{ "login": "daspartho", "id": 59410571, "node_id": "MDQ6VXNlcjU5NDEwNTcx", "avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daspartho", "html_url": "https://github.com/daspartho", "followers_url": "https://api.github.com/users/daspartho/followers", "following_url": "https://api.github.com/users/daspartho/following{/other_user}", "gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}", "starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daspartho/subscriptions", "organizations_url": "https://api.github.com/users/daspartho/orgs", "repos_url": "https://api.github.com/users/daspartho/repos", "events_url": "https://api.github.com/users/daspartho/events{/privacy}", "received_events_url": "https://api.github.com/users/daspartho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,665
1,665
CONTRIBUTOR
null
Add `configuration_sew.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you please take a look at it? Thank you :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19597/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19597/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19597", "html_url": "https://github.com/huggingface/transformers/pull/19597", "diff_url": "https://github.com/huggingface/transformers/pull/19597.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19597.patch", "merged_at": 1665740849000 }
https://api.github.com/repos/huggingface/transformers/issues/19596
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19596/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19596/comments
https://api.github.com/repos/huggingface/transformers/issues/19596/events
https://github.com/huggingface/transformers/pull/19596
1,408,378,557
PR_kwDOCUB6oc5AxEHb
19,596
[Doctest] Add `configuration_unispeech.py`
{ "login": "daspartho", "id": 59410571, "node_id": "MDQ6VXNlcjU5NDEwNTcx", "avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daspartho", "html_url": "https://github.com/daspartho", "followers_url": "https://api.github.com/users/daspartho/followers", "following_url": "https://api.github.com/users/daspartho/following{/other_user}", "gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}", "starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daspartho/subscriptions", "organizations_url": "https://api.github.com/users/daspartho/orgs", "repos_url": "https://api.github.com/users/daspartho/repos", "events_url": "https://api.github.com/users/daspartho/events{/privacy}", "received_events_url": "https://api.github.com/users/daspartho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,665
1,665
CONTRIBUTOR
null
Add `configuration_unispeech.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you please check it? Thanks :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19596/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19596/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19596", "html_url": "https://github.com/huggingface/transformers/pull/19596", "diff_url": "https://github.com/huggingface/transformers/pull/19596.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19596.patch", "merged_at": 1665741815000 }
https://api.github.com/repos/huggingface/transformers/issues/19595
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19595/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19595/comments
https://api.github.com/repos/huggingface/transformers/issues/19595/events
https://github.com/huggingface/transformers/pull/19595
1,408,377,258
PR_kwDOCUB6oc5AxD1z
19,595
[Doctest] Add `configuration_swinv2.py`
{ "login": "daspartho", "id": 59410571, "node_id": "MDQ6VXNlcjU5NDEwNTcx", "avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daspartho", "html_url": "https://github.com/daspartho", "followers_url": "https://api.github.com/users/daspartho/followers", "following_url": "https://api.github.com/users/daspartho/following{/other_user}", "gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}", "starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daspartho/subscriptions", "organizations_url": "https://api.github.com/users/daspartho/orgs", "repos_url": "https://api.github.com/users/daspartho/repos", "events_url": "https://api.github.com/users/daspartho/events{/privacy}", "received_events_url": "https://api.github.com/users/daspartho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,665
1,665
1,665
CONTRIBUTOR
null
Add `configuration_swinv2.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you check it? Thank you =)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19595/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19595/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19595", "html_url": "https://github.com/huggingface/transformers/pull/19595", "diff_url": "https://github.com/huggingface/transformers/pull/19595.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19595.patch", "merged_at": 1665738998000 }
https://api.github.com/repos/huggingface/transformers/issues/19594
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19594/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19594/comments
https://api.github.com/repos/huggingface/transformers/issues/19594/events
https://github.com/huggingface/transformers/pull/19594
1,408,376,239
PR_kwDOCUB6oc5AxDnj
19,594
[Doctest] Add `configuration_swin.py`
{ "login": "daspartho", "id": 59410571, "node_id": "MDQ6VXNlcjU5NDEwNTcx", "avatar_url": "https://avatars.githubusercontent.com/u/59410571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daspartho", "html_url": "https://github.com/daspartho", "followers_url": "https://api.github.com/users/daspartho/followers", "following_url": "https://api.github.com/users/daspartho/following{/other_user}", "gists_url": "https://api.github.com/users/daspartho/gists{/gist_id}", "starred_url": "https://api.github.com/users/daspartho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daspartho/subscriptions", "organizations_url": "https://api.github.com/users/daspartho/orgs", "repos_url": "https://api.github.com/users/daspartho/repos", "events_url": "https://api.github.com/users/daspartho/events{/privacy}", "received_events_url": "https://api.github.com/users/daspartho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,665
1,665
CONTRIBUTOR
null
Add `configuration_swin.py` to `utils/documentation_tests.txt` for doctest. Based on issue #19487 @ydshieh could you take a look at it? Thanks :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19594/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19594/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19594", "html_url": "https://github.com/huggingface/transformers/pull/19594", "diff_url": "https://github.com/huggingface/transformers/pull/19594.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19594.patch", "merged_at": 1665740257000 }
https://api.github.com/repos/huggingface/transformers/issues/19593
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19593/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19593/comments
https://api.github.com/repos/huggingface/transformers/issues/19593/events
https://github.com/huggingface/transformers/issues/19593
1,408,349,492
I_kwDOCUB6oc5T8bU0
19,593
The 54b MoE!
{ "login": "alzyras", "id": 72341658, "node_id": "MDQ6VXNlcjcyMzQxNjU4", "avatar_url": "https://avatars.githubusercontent.com/u/72341658?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alzyras", "html_url": "https://github.com/alzyras", "followers_url": "https://api.github.com/users/alzyras/followers", "following_url": "https://api.github.com/users/alzyras/following{/other_user}", "gists_url": "https://api.github.com/users/alzyras/gists{/gist_id}", "starred_url": "https://api.github.com/users/alzyras/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alzyras/subscriptions", "organizations_url": "https://api.github.com/users/alzyras/orgs", "repos_url": "https://api.github.com/users/alzyras/repos", "events_url": "https://api.github.com/users/alzyras/events{/privacy}", "received_events_url": "https://api.github.com/users/alzyras/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Pinging @ArthurZucker and @younesbelkada who have been working on contributing the Switch Transformer, an MoE, to `transformers`. I agree adding the 54B NLLB model would be quite cool too!", "also personally very interested in support for large MoEs", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Any news about MoE?", "[Switch Transformers](https://huggingface.co/docs/transformers/model_doc/switch_transformers) has been added to the library." ]
1,665
1,672
1,669
NONE
null
### System Info Hello, I'm (And I believe many others) are intrigues by the Model of Experts model. I've looked at the only documentation I could find here: https://github.com/facebookresearch/fairseq/blob/nllb/examples/nllb/modeling/README.md However the arguments for generation / evaluation are unclear to me :) I will be starting a data analysis job shortly and I see some possible applications for the 54b model. Surely there are the other models, but I believe many enthusiasts are looking forward to trying translations with this model. I'm just looking for starting arguments for translating, i.e. eng to de. I saw the model is coming to huggingface at some point so definitely looking forward to that. I am also interested in running real person evaluations of 3.3b model, 54b MoE model, Deepl and others to see how far the models have come :) Thank you so much for your work. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction - ### Expected behavior -
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19593/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19593/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19592
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19592/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19592/comments
https://api.github.com/repos/huggingface/transformers/issues/19592/events
https://github.com/huggingface/transformers/issues/19592
1,408,314,522
I_kwDOCUB6oc5T8Sya
19,592
Sagemaker Estimator for fine tuning where all the transform code is in the train.py
{ "login": "j2cunningham", "id": 4729280, "node_id": "MDQ6VXNlcjQ3MjkyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/4729280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/j2cunningham", "html_url": "https://github.com/j2cunningham", "followers_url": "https://api.github.com/users/j2cunningham/followers", "following_url": "https://api.github.com/users/j2cunningham/following{/other_user}", "gists_url": "https://api.github.com/users/j2cunningham/gists{/gist_id}", "starred_url": "https://api.github.com/users/j2cunningham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/j2cunningham/subscriptions", "organizations_url": "https://api.github.com/users/j2cunningham/orgs", "repos_url": "https://api.github.com/users/j2cunningham/repos", "events_url": "https://api.github.com/users/j2cunningham/events{/privacy}", "received_events_url": "https://api.github.com/users/j2cunningham/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "WDYT @philschmid @sgugger ?", "Hello @j2cunningham, \r\n\r\nThank you for all of the information and it is super cool to hear that you are using SageMaker! We have over 20 examples for how to use transformers with SageMaker for inference and training: https://github.com/huggingface/notebooks/tree/main/sagemaker\r\n\r\nIn there should be examples of how to use a CSV file directly for [batch transform](https://github.com/huggingface/notebooks/blob/main/sagemaker/12_batch_transform_inference/sagemaker-notebook.ipynb), and we also ran a whole [workshop series last year](https://github.com/philschmid/huggingface-sagemaker-workshop-series), where you have example of the doing the processing in the [train.py](https://github.com/philschmid/huggingface-sagemaker-workshop-series/blob/main/workshop_4_distillation_and_acceleration/scripts/train.py)\r\n\r\nRegarding your struggle with loading CSV compared to regular datasets, this quite easy. Instead of providing the huggingface hub id you can use `csv` and then provide the path to your files. This will then created a dataset you can seamlessly use with the examples: [Documentation](https://huggingface.co/docs/datasets/v2.6.0/en/loading#csv)\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"csv\", data_files=\"my_file.csv\"\r\n```\r\n\r\nFor more SageMaker related question or ideas please use the forum next time: https://discuss.huggingface.co/c/sagemaker/17\r\n", "I totally get how to use load_dataset with csv data now. My observation is that there isn't an example that starts with just plain csv data, explains how to use load_dataset and why and then does some fine tuning. The examples I found would all start with nicely curated huggingface datasets, do a save to disk (not explain why) and then do read from disk in the train.py. I had to find snippets and documentation that used load_dataset for csv, snippets that explained what save_to_disk was doing and what arrow was, snippets that explained that you could do transformation in the notebook or in the train.py and then wrap it all up into working code. I just feel like starting from raw csv data or image data and doing most of the work in the train.py and not the notebook is pretty common pattern. There very well could be the perfect example from HF that I couldn't find or I could be off the mark when I think this is a common pattern outside of my company. I got this all working and am just offering to share my notebook and train.py with the community. I know I could do a medium article, but thought an example on the HF git would be most beneficial. Thanks", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,665
1,669
1,669
NONE
null
### Feature request I work for a company that is a heavy user of AWS sagemaker. I am on a professional services team where I build a lot of examples for our data scientists to follow. I recently wanted to use the Sagemaker Huggingface estimator to fine tune a transformer and create a model for our custom NLP task. I had csv data in S3. I found several examples of fine tuning that involved pulling nicely curated datasets from HF hub down to the SM notebook and then transforming it into arrow with `save_to_disk` and pushing it to S3 as a dataset that could be read in the train.py file. I struggled mightily to find an example and never found a good example of how to start with just CSV files, use the HF existing tools load the data and then pass it to the estimator. Furthermore, the examples I find have the user pulling the data over to the notebook and doing the conversion to arrow there. That seems inefficient when the point of an estimator is to utilize a small instance to host your notebook and a large instance to do the work. If I had a large amount of data to to convert to arrow and I followed the given examples, I would need a large notebook instance and a large estimator instance. I wrote an example that puts all the transform code in the train.py and only invokes it from the notebook. In my train.py, I use load_dataset with the csv script to transform the data to arrow and do the save and load there. I wanted to use the arrow format for efficiency. I propose that I update your documentation with this unique example. ### Motivation I feel that the proposed documentation is unifies several previously documented concepts into a single, useful example. ### Your contribution I would be happy to build the example and have you guys approve it. I have never contributed to HF before, so I would need a bit of guidance to get started.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19592/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19592/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19591
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19591/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19591/comments
https://api.github.com/repos/huggingface/transformers/issues/19591/events
https://github.com/huggingface/transformers/issues/19591
1,408,307,487
I_kwDOCUB6oc5T8REf
19,591
Beam search indices calculation issue
{ "login": "woshizouguo", "id": 5874226, "node_id": "MDQ6VXNlcjU4NzQyMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/5874226?v=4", "gravatar_id": "", "url": "https://api.github.com/users/woshizouguo", "html_url": "https://github.com/woshizouguo", "followers_url": "https://api.github.com/users/woshizouguo/followers", "following_url": "https://api.github.com/users/woshizouguo/following{/other_user}", "gists_url": "https://api.github.com/users/woshizouguo/gists{/gist_id}", "starred_url": "https://api.github.com/users/woshizouguo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/woshizouguo/subscriptions", "organizations_url": "https://api.github.com/users/woshizouguo/orgs", "repos_url": "https://api.github.com/users/woshizouguo/repos", "events_url": "https://api.github.com/users/woshizouguo/events{/privacy}", "received_events_url": "https://api.github.com/users/woshizouguo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante ", "Look like it was fixed in the latest version. The bug exists in Version 4.18", "Hi @woshizouguo 👋 Glad to hear it is fixed in the most recent versions! Feel free to reopen this issue if you believe you have further queries (related to the latest version, as we can't change the past :) )" ]
1,665
1,665
1,665
NONE
null
### System Info In the generator, the final [beam_indices](https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/generation_utils.py#L2376) calculated here may not be the `beam_indices` with the best score. For example, assume a generated text ended at time step T, the output length = T, and it has best score in the `beam_hypo` in the end. The beam indices length is T. When in `T+1` step, the `beam_indices` still keeps adding next beam index from TopK, for example, the TopK returns a text with `T+1` length, the beam_indices will add this Top 1 beam idx in `T+1` even though it is not the best score. `beam_indices` length becomes `T+1`. So in the end, the `beam_indices` represents a longer sequence T+1, but the `generated_outputs.sequence` is a short sequence T with best score, and its beam indices are not stored in `beam_hypo` ### Who can help? @patrickvonplaten ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The logic is shown in the code. ### Expected behavior `beam_indices` should be consistent with and representing the best sequence.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19591/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19591/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19590
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19590/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19590/comments
https://api.github.com/repos/huggingface/transformers/issues/19590/events
https://github.com/huggingface/transformers/pull/19590
1,408,213,413
PR_kwDOCUB6oc5AwgD0
19,590
Allow usage of TF Text BertTokenizer on TFBertTokenizer to make it servable on TF Serving
{ "login": "piEsposito", "id": 47679710, "node_id": "MDQ6VXNlcjQ3Njc5NzEw", "avatar_url": "https://avatars.githubusercontent.com/u/47679710?v=4", "gravatar_id": "", "url": "https://api.github.com/users/piEsposito", "html_url": "https://github.com/piEsposito", "followers_url": "https://api.github.com/users/piEsposito/followers", "following_url": "https://api.github.com/users/piEsposito/following{/other_user}", "gists_url": "https://api.github.com/users/piEsposito/gists{/gist_id}", "starred_url": "https://api.github.com/users/piEsposito/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/piEsposito/subscriptions", "organizations_url": "https://api.github.com/users/piEsposito/orgs", "repos_url": "https://api.github.com/users/piEsposito/repos", "events_url": "https://api.github.com/users/piEsposito/events{/privacy}", "received_events_url": "https://api.github.com/users/piEsposito/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @Rocketknight1 @gante ", "@gante, do I need to change anything els before it is possible to merge this PR?", "@piEsposito It seems there is an issue with your CircleCI permissions, the tests won't run.\r\nCould you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?\r\n\r\nOther than that, we need a review from @Rocketknight1 -- then we are good to merge :)", "@gante @Rocketknight1 thank you for your you feedback, the tests just passed, so I think we are good to go.\r\n\r\nLet's hope TF Serving solves this on their end on the future too, but even if they do, they will only do it for TF >= 2.9, so to keep it servable on previous TF Serving versions, we will need the non-fast TF BertTokenizer anyway. \r\n\r\nThanks!", "Merged!" ]
1,665
1,665
1,665
CONTRIBUTOR
null
# What does this PR do? Fixes #19528. This PR introduces a flag that lets you use `tensorflow_text` `BertTokenizer` rather than `FastBertTokenizer`. This is important because as per https://github.com/tensorflow/serving/issues/2064 TF Serving does no support the `FastBertTokenizer` operations, so despite having the tokenizer in-graph, the model would not be servable in TF Serving ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @LysandreJik Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19590/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19590/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19590", "html_url": "https://github.com/huggingface/transformers/pull/19590", "diff_url": "https://github.com/huggingface/transformers/pull/19590.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19590.patch", "merged_at": 1665757083000 }
https://api.github.com/repos/huggingface/transformers/issues/19589
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19589/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19589/comments
https://api.github.com/repos/huggingface/transformers/issues/19589/events
https://github.com/huggingface/transformers/pull/19589
1,408,183,327
PR_kwDOCUB6oc5AwZlu
19,589
[Doctests] add `configuration_blenderbot_small.py`
{ "login": "grgkaran03", "id": 95518516, "node_id": "U_kgDOBbF_NA", "avatar_url": "https://avatars.githubusercontent.com/u/95518516?v=4", "gravatar_id": "", "url": "https://api.github.com/users/grgkaran03", "html_url": "https://github.com/grgkaran03", "followers_url": "https://api.github.com/users/grgkaran03/followers", "following_url": "https://api.github.com/users/grgkaran03/following{/other_user}", "gists_url": "https://api.github.com/users/grgkaran03/gists{/gist_id}", "starred_url": "https://api.github.com/users/grgkaran03/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/grgkaran03/subscriptions", "organizations_url": "https://api.github.com/users/grgkaran03/orgs", "repos_url": "https://api.github.com/users/grgkaran03/repos", "events_url": "https://api.github.com/users/grgkaran03/events{/privacy}", "received_events_url": "https://api.github.com/users/grgkaran03/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@ydshieh how can I rectify the failing check? Did not occur before in any other PRs.", "_The documentation is not available anymore as the PR was closed or merged._", "> @ydshieh how can I rectify the failing check? Did not occur before in any other PRs.\r\n\r\nThat test is somehow flaky. I re-ran it and now it pass." ]
1,665
1,665
1,665
CONTRIBUTOR
null
# What does this PR do? add `configuration_blenderbot_small.py` for doctests, addressing issue #19487. Please review @ydshieh. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19589/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19589/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19589", "html_url": "https://github.com/huggingface/transformers/pull/19589", "diff_url": "https://github.com/huggingface/transformers/pull/19589.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19589.patch", "merged_at": 1665733349000 }
https://api.github.com/repos/huggingface/transformers/issues/19588
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19588/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19588/comments
https://api.github.com/repos/huggingface/transformers/issues/19588/events
https://github.com/huggingface/transformers/issues/19588
1,408,111,867
I_kwDOCUB6oc5T7hT7
19,588
Jax/Flax pretraining of wav2vec2
{ "login": "aapot", "id": 19529125, "node_id": "MDQ6VXNlcjE5NTI5MTI1", "avatar_url": "https://avatars.githubusercontent.com/u/19529125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aapot", "html_url": "https://github.com/aapot", "followers_url": "https://api.github.com/users/aapot/followers", "following_url": "https://api.github.com/users/aapot/following{/other_user}", "gists_url": "https://api.github.com/users/aapot/gists{/gist_id}", "starred_url": "https://api.github.com/users/aapot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aapot/subscriptions", "organizations_url": "https://api.github.com/users/aapot/orgs", "repos_url": "https://api.github.com/users/aapot/repos", "events_url": "https://api.github.com/users/aapot/events{/privacy}", "received_events_url": "https://api.github.com/users/aapot/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sanchit-gandhi ", "Hey @aapot! Cool to see you're trying out pre-training of Wav2Vec2 in Flax on Finnish 🇫🇮\r\n\r\nIndeed, the script is under 'research projects' as it remains unsolved. Pre-training of Wav2Vec2 models is notoriously difficult due to issues with stability, giving rise to the phenomena you've experienced such as code vector collapse and unstable contrastive loss. AFAIK there isn't a working implementation for training Transformers W2V2 models in Flax, which makes it an interesting topic to pursue!\r\n\r\nYou've done a great job at digging through issues and PRs to find the aforementioned points! Both the points you've raised look to be missing from the Flax Wav2Vec2 script. Did you try gradient scaling in your experiments?\r\n\r\nOne thing we can try is running the PyTorch and Flax scripts step-by-step in parallel and inspecting where they diverge. We can do this with a tiny dummy model (`hf-internal-testing/tiny-random-wav2vec2` for instance) to make it fast to debug and the same training inputs. When we've identified a divergence between the two we can fix the Flax script by porting the corresponding PyTorch code. LMK if you'd be interested in doing this and I can provide further pointers!", "Just for reference, I never got Wav2Vec2 to work in JAX, but it should def be possible (didn't spent too much time on it) ", "@sanchit-gandhi yep this would be interesting to get working! Yes, I also tried gradient scaling like it was implemented in the PyTorch pretrain script (basically multiply gradients with (num devices / total samples)) without luck.\r\n\r\nI'd be interested in putting some time into fixing this so feel free to provide further pointers. Training of these ASR and NLP models for Finnish is a free time hobby project with @R4ZZ3 so cannot promise anything yet but let's get this fixed 🤗 ", "Awesome @aapot, that's great to hear!\r\n\r\nEssentially what you want to do is run the PyTorch script and Flax script with identical args (for the model, data and training args). In doing this, the PyTorch and Flax models should receive identical inputs, and thus should compute identical losses if the training scripts are the same.\r\n\r\nWhat you want to then do is compare the outputs of the PT and Flax training scripts after each step of pre-training:\r\n1. First check that the data collators are identical by inspecting the returned elements of the `batch` (\"input_values\", \"attention_mask\", \"mask_time_indices\") -> we need to make sure the inputs to the models are the same before we can assess the model outputs\r\n2. Check Gumbel temp is the same\r\n3. Check outputs of the models are the same (projected_quantized_states, projected_states, codevector_perplexity)\r\n4. Check contrastive loss is the same\r\n5. Check diversity loss is the same -> once all the losses match then we can move onto making sure the gradients and updates are the same (easier to verify, and very much likely to be the case if the losses are the same)\r\n\r\nIt's likely the bug in the Flax script lies in 3, 4 or 5! Once you identify where the losses deviate, you can dig deeper into the code for PT and Flax and try to find the line(s) of code where the functionality is different.\r\n\r\nHow you debug this is up to you. To make this quick and easy, I'd recommend using a dummy model (`hf-internal-testing/tiny-random-wav2vec2`) and a dummy dataset (`hf-internal-testing/librispeech_asr_dummy`) -> in total this is about 10MB of downloaded data and the script should run very fast. I'd also first run training on CPU only for both PT and Flax, such that the number of devices are fixed equal to one (no gradient scaling effects).\r\n\r\nFor comparing the outputs, you can either run the scripts side-by-side and print intermediate values, or combine them into a single notebook and print cell outputs after each step (I can give you a template for this if you want to use a notebook). Print statements are easy to use, but don't provide much detail other than numeric values. What I'd do is first add print statements for each of the items listed in 1-5 to quickly see which values match ✅ and which values don't ❌. After that you can go deeper with either: more print statements, breakpoints (ipdb), or a debugger.\r\n\r\nIt might take a bit of time to establish a good set-up for debugging quickly, but once you've got this set-up it should be a case of finding where the losses are different and then fixing for Flax! You might also need to disable shuffling of the training dataset to make sure the training inputs are passed in the same way to PT as Flax.\r\n\r\nThese should make for good starting points (haven't tried them, but they're similar to the configs I use for debugging ASR fine-tuning):\r\n\r\nPT\r\n```\r\npython run_wav2vec2_pretraining.py \\\r\n\t--dataset_name=\"hf-internal-testing/librispeech_asr_dummy\" \\\r\n\t--dataset_config_names=\"clean\" \\\r\n\t--train_split_name=\"validation\" \\\r\n\t--model_name_or_path=\"hf-internal-testing/tiny-random-wav2vec2\" \\\r\n\t--output_dir=\"./\" \\\r\n\t--max_train_steps=\"10\" \\\r\n\t--num_warmup_steps=\"2\" \\\r\n\t--learning_rate=\"0.005\" \\\r\n\t--logging_steps=\"1\" \\\r\n\t--save_strategy=\"no\" \\\r\n\t--per_device_train_batch_size=\"8\" \\\r\n --do_train\r\n```\r\n\r\nFlax\r\n```\r\nJAX_PLATFORM_NAME=cpu python run_wav2vec2_pretrain_flax.py \\\r\n\t--dataset_name=\"hf-internal-testing/librispeech_asr_dummy\" \\\r\n\t--dataset_config_names=\"clean\" \\\r\n\t--train_split_name=\"validation\" \\\r\n\t--model_name_or_path=\"hf-internal-testing/tiny-random-wav2vec2\" \\\r\n\t--output_dir=\"./\" \\\r\n\t--max_train_steps=\"10\" \\\r\n\t--num_warmup_steps=\"2\" \\\r\n\t--learning_rate=\"0.005\" \\\r\n\t--logging_steps=\"1\" \\\r\n\t--save_strategy=\"no\" \\\r\n\t--per_device_train_batch_size=\"8\" \\\r\n --do_train\r\n```", "Thanks for those pointers @sanchit-gandhi, sounds reasonable! I'll start digging into this soon, will keep you updated here.", "Hi,\r\n\r\n> For me, it looks like the codevector_perplexity will always collapse to value of 2 and stay there which I believe is not a good thing. Also, the constrastive loss is usually very unstable.\r\n\r\nI tried pre-training the jax wav2vec2 model on my own data and I came across similar problems. Tried with multiple huge chunks of my own dataset and the perplexity always collapsed to 2 while the loss fluctuated a lot. I also noticed that across all my datasets the eval loss was always 0.09969.\r\n\r\nSo, if I finetune this pretrained model, will it give any good results?\r\n\r\nAlso do you guys have any code to fine-tune this pretrained model that I can use?", "For finetuning we have used these resources as base:\r\nhttps://huggingface.co/blog/fine-tune-wav2vec2-english\r\nhttps://huggingface.co/blog/wav2vec2-with-ngram\r\n\r\nAlso we are trying out going to try out these. We just need to fix some of our datasets before that as we have lover case material. Luckily we have trained T5 model for casing + punctuation correction. https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#sequence-to-sequence", "Hey @Aaryan369! Thanks for sharing your experience - it seems like there's an inherent bug in the JAX pre-training implementation with how the loss terms are computed leading to code vector perplexity collapse and unstable loss.\r\n\r\nYou can certainly try fine-tuning a pre-trained Wav2Vec2 model. 
If your fine-tuning data is in-domain with the pre-training you can expect good results with very little data - as little as 10 minutes as shown by the Wav2Vec2 paper! \r\n\r\nIf your fine-tuning data is more out-of-domain with the pre-training data, you can expect to require much more data to achieve good results. This is really on a case-by-case basis, so you'll have to make that decision based on what you know about your fine-tuning situation!\r\n\r\nIn terms of pre-trained models, there are English-only checkpoints:\r\n- [base](https://huggingface.co/facebook/wav2vec2-base)\r\n- [large](https://huggingface.co/facebook/wav2vec2-large)\r\n- [large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60)\r\n\r\nAnd multilingual ones (https://huggingface.co/facebook/wav2vec2-large-xlsr-53 for example). The English-only ones will fare better for English speech tasks, and the multilingual ones for most others.\r\n\r\nThe resources @R4ZZ3 has kindly linked are perfect for fine-tuning in PyTorch. If you want to fine-tune in JAX, I'd advise you to try: https://github.com/sanchit-gandhi/seq2seq-speech/blob/main/run_flax_speech_recognition_ctc.py This script closely resembles the PyTorch one in Transformers: https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition\r\n\r\nIt's on my list to add this JAX CTC fine-tuning script to Transformers over the coming weeks!", "We are compiling a large speech corpus in Norwegian (100k+ hours). We expect it to be ready in roughly a month. Our plan is to pretrain a Wav2Vec2. We have access to TPUs through TRC and ideally we would like to train this in Flax instead of XLA/PT.\r\n\r\nThis is a high priority project for us, and I am happy to assist in both testing and debugging here. ", "100k is mega! Very excited to see how pre-training JAX Wav2Vec2 in Finnish goes with this much data. Just out of interest, are you set on producing a pre-trained checkpoint in Finnish? Or is the end goal downstream ASR? 
The multilingual [Whisper](https://huggingface.co/models?search=whisper) models are pre-trained on 1066h of **labelled** Finnish audio-transcription data (out of 670,000h total). They get good results with zero-shot transfer learning (i.e. no fine-tuning) on Finnish Common Voice 9 (17.0% WER) and Finnish VoxPopuli (15.5% WER), _c.f._ Tables 11 and 12 from the [Whisper paper](https://cdn.openai.com/papers/whisper.pdf). You could definitely improve upon these results with fine-tuning! Might be a faster route to a performant, downstream Finnish ASR model than Wav2Vec2 pre-training + fine-tuning?", "Just a quick update so I finally had time to start the actual debugging. More info to follow soon.", "Great! Keep us posted!", "Alright, here are some findings so far:\r\nStep 1: `mask_time_indices` and `sampled_negative_indices` were not same with the PT implementation. Fixed that by pretty much just copying functions for those from PT to Flax.\r\nStep 2: PT and Flax gumbel decay was using different step number, fixed that by deducting Flax step number by one for gumbel decaying. After that, gumbel temp seemed to remain same for about the first 5 steps, after that it started deviating tiny bit between Flax and PT which was weird. Although this probably is not our biggest problem at the moment.\r\nStep 3: Comparing model outputs seemed bit hard, I guess because model weights are initialized differently at random? \r\n\r\nI have found couple differences between Flax and PT model code so far:\r\n1. Flax was missing layerdrop functionality, fixed that.\r\n2. Flax and PT gumbel softmax was implemented differently. PT version uses `hard=True` option with `torch.nn.functional.gumbel_softmax` which results in returned samples as discretized one-hot vectors. Flax gumbel softmax implementation returns soft samples. I tried implement the `hard` option to Flax by copying it from [PT code](https://pytorch.org/docs/stable/_modules/torch/nn/functional.html#gumbel_softmax). 
My current implementation looks like this:\r\n```python\r\ny_soft = nn.softmax((hidden_states + gumbels) / temperature)\r\nindex = y_soft.argmax(axis=-1)\r\ny_hard = jnp.zeros_like(hidden_states).at[jnp.arange(len(hidden_states)), index].set(1.0)\r\ncodevector_probs = y_hard - y_soft + y_soft\r\n```\r\nwhen the [PT code](https://pytorch.org/docs/stable/_modules/torch/nn/functional.html#gumbel_softmax) looks like this:\r\n````python\r\ny_soft = gumbels.softmax(dim)\r\nindex = y_soft.max(dim, keepdim=True)[1]\r\ny_hard = torch.zeros_like(logits, memory_format=torch.legacy_contiguous_format).scatter_(dim, index, 1.0)\r\nret = y_hard - y_soft.detach() + y_soft\r\n````\r\nAt first, I also had the PT's `y_soft.detach()` implemented as `codevector_probs = y_hard - jax.lax.stop_gradient(y_soft) + y_soft` but I noticed it seemed to make the codevector collapse again. Without it, based on some testing the Flax codevector doesn't seem to collapse anymore (the `codevector_perplexity` is raising and staying on high level, not collapsing close to zero as originally). Although the model doesn't still seem to learn properly so I bet there still are more to investigate and fix. It also could be that my Flax gumbel softmax `hard` option is not yet implemented correctly.\r\n\r\nIn addition, I have made initial updates (some more smaller updates could still be made) to the `run_wav2vec2_pretrain_flax.py` script to make it more up to date and comparable to the PT `run_wav2vec2_pretraining_no_trainer.py` script. My updates are available here on my fork and branch: https://github.com/aapot/transformers/tree/w2v2-jax-flax-pretrain", "Continuing with the updates:\r\nStep 4. contrastive loss calculation is same with Flax and PT\r\nStep 5. diversity loss calculation looks to be same but I'll verify that later", "Really great work, @aapot. 
I do however understand that there are still some issues here (since the contrastive loss starts to increase after a while), and that the issue most likely is related to the Flax gumbel implementation. Any chance that anyone at 🤗 can take a look at that? What do you think @sanchit-gandhi @patrickvonplaten ?\r\n\r\nWhen this is done Ill be glad to contribute with larger training, and finetuning/testing on downstream tasks.", "> Comparing model outputs seemed bit hard, I guess because model weights are initialized differently at random?\r\n\r\nYou could hack into the code and load pre-trained weights! I'd recommend the checkpoint at https://huggingface.co/hf-internal-testing/tiny-random-wav2vec2\r\nPyTorch:\r\n```python\r\nfrom transformers import Wav2Vec2ForPreTraining\r\n\r\nmodel = Wav2Vec2ForPreTraining.from_pretrained(\"hf-internal-testing/tiny-random-wav2vec2\")\r\n```\r\nJAX:\r\n```python\r\nfrom transformers import FlaxWav2Vec2ForPreTraining\r\n\r\nmodel = FlaxWav2Vec2ForPreTraining.from_pretrained(\"hf-internal-testing/tiny-random-wav2vec2\", from_pt=True)\r\n```\r\n\r\n=> this will initialise the models with the same weights!\r\n\r\nFrom the PyTorch code, it seems as though we should break `y_soft` from the computation graph in the `codevector_probs` calculation. Maybe worth quickly double checking what they do in fairseq here as well? `jax.lax.stop_gradient` can be a bit fiddly but I think it's the best option for stoping the backprop for a variable.\r\n\r\nSounds like you're making good progress @aapot! Keep us posted with updates and questions, happy to help!", "Oh one more question! You're running both on CPU right? 
JAX will definitely diverge from PT on GPU/TPU due to differences in the matmul precision (_c.f._ https://github.com/google/jax/issues/10413#issue-1212211265)", "Thanks for the tips @sanchit-gandhi!\r\n\r\nActually I also had in mind to use pre-trained weights to compare model outputs that way, will try it soon.\r\n\r\nWill also check fairseq implementation if that could reveal more stuff to fix.\r\n\r\nYup, I am running both Jax and PT on my local laptop with CPU when debugging.\r\n", "Okay great! Tiny pre-trained models on CPU is the way to go here!", "After using pre-trained weights to continue pretraining for one more step with same input data, I think following is happening with model outputs:\r\n- `projected_states` has max difference of 0.276 (abs of Flax and PT matrices deducted from each other and max value of the deducted matrix)\r\n- `projected_quantized_states` has max difference of 0.588\r\n- `codevector_perplexity` is same\r\n\r\n`projected_quantized_states` difference is due to the `GumbelVectorQuantizer` because its input `extract_features` from the `wav2vec2` module is actually matching for Flax and PT. Maybe the difference happening in `GumbelVectorQuantizer` is because of randomized gumbel sampling?\r\n\r\nIn addition, I checked the fairseq gumbel softmax implementation and they are also using the PyTorch's `torch.nn.functional.gumbel_softmax` with the `hard=True` option. I am starting to think the main problem could be in this gumbel softmax implementation in Flax.\r\n\r\nIf someone could verify if using `codevector_probs = y_hard - jax.lax.stop_gradient(y_soft) + y_soft` version will make the codevector to collapse (perplexity) that would be great. For me, I think using `codevector_probs = y_hard - y_soft + y_soft` won't make it collapse but not sure if that's the correct approach either for implementing the gumbel softmax in Flax. 
For example, with the local Flax VS PT testing with PT the codevector perplexity starts to rise from ~100 to ~400 over 5 epochs of pretraining from scratch. With Flax without using `jax.lax.stop_gradient` the perplexity rises very similarly. But if I use `jax.lax.stop_gradient` the perplexity rises only to ~250. Sometime ago I tried test the same with real base-sized w2v2 Flax model to pretrain with Finnish data and with `jax.lax.stop_gradient` the codevector perplexity seemed to collapse totally quite early at the training.", "Fantastic work @aapot!\r\n\r\nI noticed the following comment in the pull from @patrickvonplaten to @ThomAub:\r\n_\"PyTorch module to Flax? This might be a bit difficult and require some googling to see if others have already implement gumbel softmax in jax/Flax or not. If you could take a look at this, it would be very useful!\"_ ([https://github.com/huggingface/transformers/pull/12271#issuecomment-867793046](https://github.com/huggingface/transformers/pull/12271#issuecomment-867793046)).\r\n\r\nMay there be issues here?", "> abs of Flax and PT matrices deducted from each other and max value of the deducted matrix\r\n\r\nThis is exactly the way we want to compute differences between PT and Flax in the projected states space 👌 For reference, a matching implementation should have a max abs difference of 1e-5.\r\n\r\n> Maybe the difference happening in GumbelVectorQuantizer is because of randomized gumbel sampling?\r\n\r\nThis seems logical! What I would do is dive into the GumbelVectorQuantizer and check the intermediate variables up to where the randomised sampling is performed. If they match up until the sampling that's a good sign. Forcing sampling between PT and Flax to be the same is a bit tricky... IMO we have two options:\r\n1. Pre-define a sequence of 'pseudo-random' matrices. Hard code these in PT and Flax (e.g. 3 matrices of the correct dimension, pre-defined elements, with the same elements used in PT and Flax). 
Replace the sampled matrix with one of our pre-defined matrices in the GumbelVectorQuantizer at each training step: this ensures the matrices are the same in PT and Flax.\r\n2. Temporarily use the PT implementation of the randomised Gumbel sampling in the Flax script such that the same seed is used and thus the same pseudo-random numbers. Will requires sampling a PyTorch tensor and then converting back to a jnp array.\r\n\r\nUnfortunately, both of these methods are a bit hacky. The first might be easier IMO - you don't have to define it to be anything too crazy, and just 2 or 3 different matrices would do (we just need to verify the outputs are the same over 2 or 3 training steps).\r\n\r\n> I am starting to think the main problem could be in this gumbel softmax implementation in Flax.\r\n\r\nSounds like we're narrowing down!\r\n\r\nMaybe we can try forcing the same Gumbel quantiser outputs and then experiment with / without stop gradient. The fact that fairseq and HF PT use `y_hard` suggests we should use stop gradient! \r\n", "Good point @peregilk - worth having a look to see if there are any OSS implementations of the Gumbel ops in JAX/Flax online! (as far as I'm aware there's not, but might be wrong!)", "Please keep this issue open. It is still activity going on for solving this issue.", "Hope the analysis is going ok @aapot, think you're doing a great job here! Feel free to share any updates / ask questions, more than happy to help!", "Hi @sanchit-gandhi, unfortunately I have been very busy the past month so haven't had time to investigate more about this jax gumbel quantizer. 
Now that the recent Hugging Face Whisper finetuning event is over (where I participated too), I'll get back to debugging this wav2vec2 pretraining after a short Christmas break :) In any case, I am planning to create PR of my current work even if the Gumbel quantizer would not get fixed because my current branch has pretty much updated the Wav2vec2 flax model and pretraining code implemetation up to date with the Pytorch version. But I hope we get the Gumbel part fixed too.", "Hey @aapot! Hope you had a nice Xmas break and that you enjoyed the Whisper event 🙂 \r\n\r\nThanks for the update! Sounds good regarding opening a PR with the current changes - these are certainly welcome fixes! We can iterate on the PR to see if we can get the Gumbel part fixed too. Feel free to ping me here or on the new PR with questions / queries - more than happy to help and excited to see this one to completion!", "@sanchit-gandhi quick update on the GumbelVectorQuantizer with the option 1 you mentioned earlier (replace gumbel sampled matrix with predefined matrix).\r\nFirst, I checked that `hidden_states` inside GumbelVectorQuantizer just before the actual gumbel sampling had diff of `5.7e-07` between Flax and PT for the first training step so that looks good.\r\nNext, I saved matrices of PT `nn.functional.gumbel_softmax` for the first three steps and then used them inside Flax GumbelVectorQuantizer for the first three steps. By doing that, model output's `projected_quantized_states` were actually the same between PT and Flax for the first training step (diff 0).\r\nBut for the second step, the `projected_quantized_states` diff already jumped to 0.3 (although the diff before the linear projection layer was 0.01 so the linear projection adds some diff to the `projected_quantized_states` output. 
For the second step, `hidden_states` also had diff of 0.38 inside GumbelVectorQuantizer.\r\nFor the third step diverging continues by having diff of 0.47 for `projected_quantized_states` (0.02 before linear projection), and diff of 0.43 for the `hidden_states` inside GumbelVectorQuantizer.\r\n\r\nAny ideas how to proceed?", "Hey @aapot!\r\n\r\nThanks for the update - really cool to see the progress you're making here! Sounds like you've got a nice system going for debugging and comparing the PT-FX outputs!\r\n\r\nThat's great the `hidden_states` are equivalent before the Gumbel sampling ✅ And good to see that the `codevectors` had a diff of 0 - exactly what we wanted by forcing the sampled matrix! Was the `codevector_perplexity` also equivalent in this case?\r\n\r\nFrom the last experiment, it sounds pretty likely that the nn.Module's are equivalent now between PT and Flax (we're getting the same tensors out when we override the Gumbel sampling step).\r\n\r\nI would suggest we quickly verify that all the loss terms are equivalent with this non-deterministic set-up. If they match for the first training step, that's perfect, it means we should have a closely matching implementation.\r\n\r\nNote that with our 'forced sampling' method, we can verify that we get the same losses between PT and Flax, but since we change how the code vectors are computed in Flax (by forcing the sampled Gumbel matrix) we can't expect the gradients to be correct - forcing the Gumbel sampling is going to mess-up the backprop, so anything after the first parameter update is going to be divergent.\r\n\r\nSo once we've verified that all the loss terms are the same (contrastive, diversity, total loss), I would re-instate stochastic sampling of the Gumbel matrix in Flax and see whether we can train a stable system!\r\n\r\nHow does that sound?" ]
1,665
1,683
1,683
NONE
null
There is a Jax/Flax based script available for pretraining wav2vec2 [here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/jax-projects/wav2vec2). I have been trying to pretrain a new wav2vec2 model for Finnish on TPUs using that script but it seems impossible to get the model to train properly. I know the script is under _research_projects_ so I am wondering if anyone has been able to successfully pretrain wav2vec2 models with it? Or if anyone has made their own updates to the script to fix potential problems? For me, it looks like the _codevector_perplexity_ will always collapse to a value of 2 and stay there, which I believe is not a good thing. Also, the contrastive loss is usually very unstable. I attached the image below showcasing those issues. In addition, I have tried pretraining the wav2vec2 with the official fairseq implementation where the training looks to be working fine without those issues. So I believe the HF Jax/Flax implementation is broken somehow. ![image](https://user-images.githubusercontent.com/19529125/195653322-b650b89b-310b-4a89-84ee-425674810893.png) Also, I think the HF Jax/Flax wav2vec2 implementation is not fully on par with the HF PyTorch wav2vec2 implementation. For example, I noticed this comment by @patrickvonplaten https://github.com/huggingface/transformers/issues/14471#issuecomment-982077705 and I think the comment's point number 1 is not implemented in the Jax/Flax version. Also, in the PyTorch wav2vec2 pretraining PR comment https://github.com/huggingface/transformers/pull/13877#discussion_r723197919 gradient scaling is implemented to avoid issues with multiple-device training. I wonder if the same would be needed for the Jax/Flax script when training on 8 TPU cores? 
I tried implementing those myself but then I found this script where @patrickvonplaten seemed to have already implemented the first point number 1: https://huggingface.co/patrickvonplaten/wav2vec2-german-flax/blob/main/run_wav2vec2_pretrain_flax.py Anyhow, even with those potential fixes I haven't been able to get the training to work properly. That's a real pity since the Jax/Flax training would be really great when using TPUs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19588/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19588/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/19587
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19587/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19587/comments
https://api.github.com/repos/huggingface/transformers/issues/19587/events
https://github.com/huggingface/transformers/pull/19587
1,408,096,375
PR_kwDOCUB6oc5AwGyd
19,587
TF port of ESM
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Pipeline tests are failing because the model has no SEP token and doesn't work with multiple sequences. Working on it!", "There's one final test remaining that's failing because of some arcane issue in the code that generates data batches for the pipeline. I'm trying to figure it out!", "Tests are green, and #19124 has been merged! Going to use it to upload the remaining checkpoints and then merge this." ]
1,665
1,666
1,666
MEMBER
null
Working out the last few issues now! Models <3B parameters have been ported already, larger models will need to wait for #19124. This PR also includes fixes for a couple of issues in the original PyTorch ESM.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19587/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19587/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19587", "html_url": "https://github.com/huggingface/transformers/pull/19587", "diff_url": "https://github.com/huggingface/transformers/pull/19587.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19587.patch", "merged_at": 1666012577000 }
https://api.github.com/repos/huggingface/transformers/issues/19586
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19586/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19586/comments
https://api.github.com/repos/huggingface/transformers/issues/19586/events
https://github.com/huggingface/transformers/pull/19586
1,408,070,090
PR_kwDOCUB6oc5AwBJN
19,586
[Doctest] Add configuration_trajectory_transformer.py
{ "login": "SD-13", "id": 89520981, "node_id": "MDQ6VXNlcjg5NTIwOTgx", "avatar_url": "https://avatars.githubusercontent.com/u/89520981?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SD-13", "html_url": "https://github.com/SD-13", "followers_url": "https://api.github.com/users/SD-13/followers", "following_url": "https://api.github.com/users/SD-13/following{/other_user}", "gists_url": "https://api.github.com/users/SD-13/gists{/gist_id}", "starred_url": "https://api.github.com/users/SD-13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SD-13/subscriptions", "organizations_url": "https://api.github.com/users/SD-13/orgs", "repos_url": "https://api.github.com/users/SD-13/repos", "events_url": "https://api.github.com/users/SD-13/events{/privacy}", "received_events_url": "https://api.github.com/users/SD-13/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hey @ydshieh PTAL. Thanks," ]
1,665
1,665
1,665
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes part of issue https://github.com/huggingface/transformers/issues/19487. Adds configuration_trajectory_transformer.py to Doc tests. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19586/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19586/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19586", "html_url": "https://github.com/huggingface/transformers/pull/19586", "diff_url": "https://github.com/huggingface/transformers/pull/19586.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19586.patch", "merged_at": 1665680830000 }
https://api.github.com/repos/huggingface/transformers/issues/19585
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19585/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19585/comments
https://api.github.com/repos/huggingface/transformers/issues/19585/events
https://github.com/huggingface/transformers/pull/19585
1,408,054,043
PR_kwDOCUB6oc5Av9oE
19,585
Re enable Nightly CI for upcoming PyTorch 1.13
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Sorry, I forgot one docker image. Change PR to draft.", "@LysandreJik I think we don't need to merge this PR. I could just build images and run the tests." ]
1,665
1,679
1,665
COLLABORATOR
null
# What does this PR do? Re enable Nightly CI for upcoming PyTorch 1.13. This is the minimal changes. We might need to check if the docker image could be built with these versions.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19585/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19585/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19585", "html_url": "https://github.com/huggingface/transformers/pull/19585", "diff_url": "https://github.com/huggingface/transformers/pull/19585.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19585.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/19584
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19584/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19584/comments
https://api.github.com/repos/huggingface/transformers/issues/19584/events
https://github.com/huggingface/transformers/pull/19584
1,408,052,227
PR_kwDOCUB6oc5Av9Ph
19,584
A few CI fixes for DocumentQuestionAnsweringPipeline
{ "login": "ankrgyl", "id": 565363, "node_id": "MDQ6VXNlcjU2NTM2Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/565363?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ankrgyl", "html_url": "https://github.com/ankrgyl", "followers_url": "https://api.github.com/users/ankrgyl/followers", "following_url": "https://api.github.com/users/ankrgyl/following{/other_user}", "gists_url": "https://api.github.com/users/ankrgyl/gists{/gist_id}", "starred_url": "https://api.github.com/users/ankrgyl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ankrgyl/subscriptions", "organizations_url": "https://api.github.com/users/ankrgyl/orgs", "repos_url": "https://api.github.com/users/ankrgyl/repos", "events_url": "https://api.github.com/users/ankrgyl/events{/privacy}", "received_events_url": "https://api.github.com/users/ankrgyl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I have not yet updated these tests:\r\n\r\n```\r\nFAILED tests/pipelines/test_pipelines_document_question_answering.py::DocumentQuestionAnsweringPipelineTests::test_large_model_pt_chunk - AssertionError: Lists differ: [{'score': 0.9974, 'answer': '1110212019', 'start': 23, [69 chars] 16}] != [{'score': 0.9967, 'answer': '1102/2019', 'start': 22, '[67 chars] 15}]\r\nFAILED tests/pipelines/test_pipelines_document_question_answering.py::DocumentQuestionAnsweringPipelineTests::test_large_model_pt_layoutlm_chunk - AssertionError: Lists differ: [{'sc[39 chars]t': 16, 'end': 16}, {'score': 0.9998, 'answer'[31 chars] 16}] != [{'sc[39 chars]t': 15, 'end': 15}, {'score': 0.9924, 'answer'[3...\r\n```\r\n\r\nas I want to inspect the CI failures first. I think both of these tests are caused by tesseract OCR errors (specifically I think the CI is running a diff. version of tesseract than my local machine).", "This is what we have `tesseract-ocr` on our CI runners\r\n\r\n```bash\r\nroot@6aec4d26d7ac:/transformers# apt-show-versions tesseract-ocr\r\nbash: apt-show-versions: command not found\r\nroot@6aec4d26d7ac:/transformers# apt show tesseract-ocr\r\nPackage: tesseract-ocr\r\nVersion: 4.1.1-2build2\r\nPriority: optional\r\nSection: universe/graphics\r\nSource: tesseract\r\nOrigin: Ubuntu\r\nMaintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>\r\nOriginal-Maintainer: Alexander Pozdnyakov <almipo@mail.ru>\r\nBugs: https://bugs.launchpad.net/ubuntu/+filebug\r\nInstalled-Size: 1573 kB\r\nDepends: libarchive13 (>= 3.2.1), libc6 (>= 2.29), libcairo2 (>= 1.2.4), libfontconfig1 (>= 2.12.6), libgcc-s1 (>= 3.0), libglib2.0-0 (>= 2.12.0), libicu66 (>= 66.1~rc-1~), liblept5 (>= 1.75.3), libpango-1.0-0 (>= 1.37.2), libpangocairo-1.0-0 (>= 1.22.0), libpangoft2-1.0-0 (>= 1.14.0), libstdc++6 (>= 5.2), libtesseract4 (= 4.1.1-2build2), tesseract-ocr-eng (>= 4.00~), tesseract-ocr-osd (>= 4.00~)\r\nReplaces: tesseract-ocr-data\r\nHomepage: https://github.com/tesseract-ocr/\r\nDownload-Size: 262 kB\r\nAPT-Manual-Installed: yes\r\nAPT-Sources: http://archive.ubuntu.com/ubuntu focal/universe amd64 Packages\r\n```", "_The documentation is not available anymore as the PR was closed or merged._", "Thank you! Yes, I filed #347 at some point about this. Unfortunately it looks like a different (potentially flaky) test failure occurred in this run:\r\n\r\n```\r\n run_command(self._launch_args + testargs)\r\n result = get_results(tmp_dir)\r\n # Because we use --version_2_with_negative the testing script uses SQuAD v2 metrics.\r\n> self.assertGreaterEqual(result[\"eval_f1\"], 28)\r\nE AssertionError: 21.428571428571427 not greater than or equal to 28\r\n\r\nexamples/pytorch/test_accelerate_examples.py:201: AssertionError\r\n```\r\n\r\nand so it did not repro the error. I'm going to spin up a VM or docker container that has tesseract 4, and then update the remaining tests there.", "Thank you a lot @ankrgyl! If it's easier, we can get the new expected values from our runners, and see if it will pass with that in multiple runs. Let me know what you prefer :-)", "Oh that is definitely easier. Could you help with that, or show me how to get those values?", "I usually get values from report, say [here](https://github.com/huggingface/transformers/actions/runs/3231701081/jobs/5291537526) or its raw log version. Sometimes I need to run the tests inside the CI runner.\r\n\r\nI will do that tomorrow and check if I can get those updated tests pass in a consistent way.", "Okay sounds great!", "Hi @ankrgyl I am not able to push to your PR branch (you don't give us the permission I think 😢 )\r\n\r\nCould you check [this branch](https://github.com/ydshieh/transformers/commit/33fff18d421187045197f1bbfcc6a4ed72cebe3c) and see if the new values work well on your side too 🙏 ?\r\n\r\n(I don't pay attention to the style, so you will need to re-style it before we can merge)", "Hi @ydshieh the changes look good. I _think_ I just gave you write access to our fork (which should give you write permissions to the branch?). Would you mind checking if that worked?", "Yes, I pushed, with the correct style.\r\n\r\n", "With the latest commit, all `DocumentQuestionAnsweringPipelineTests` pass now. I can have a super happy weekend now.", "Excellent! Let me know if there is anything else I can do to help.", "@Narsil The tests with updated expected values in this PR are recently added in #19204. I would say this is just the environment difference (which gave the different values when @ankrgyl worked on #19204)", "> I would say this is just the environment difference \r\n\r\nThis is what we should be careful about :) If the environment provides such different results, which should either fix something so that the values are more consistent, or workaround the flaky dependency :) (If the test becomes bothering to maintain)", "@Narsil the precise reason for the difference is that locally, I have tesseract 5 (the latest stable release), and the test runners have version 4, which produces slightly different OCR results. I filed https://github.com/huggingface/transformers/pull/347 some time ago about installing tesseract 5 in the docker containers used for spaces, which could help resolve the issue.\r\n\r\nIn the meantime, I can think of a few ways to ensure the tests are more consistent/robust:\r\n\r\n- We can write up some instructions about how to update the tests within a docker container that has the same version of tesseract\r\n- We can freeze the tesseract/OCR results into the test so that tesseract is not actually run while evaluating them (small added benefit that tests will run a bit faster)\r\n- We can attempt to write a question and/or use a document where the results are the same b/w tesseract 4 and 5.", "@ankrgyl thanks for the explanation.\r\n\r\nSince this is `transformers` not `tesseract` fixing the version being tested to `4` is OK.\r\nBut if it becomes to hard to make stable, it's always possible to mock it in the tests so that we don't need to wipe up the whole program, and depend on its own instabilities/changes. " ]
1,665
1,666
1,666
CONTRIBUTOR
null
# What does this PR do? Fixes a few issues caught by CI (see [comment](https://github.com/huggingface/transformers/pull/19204#issuecomment-1277106187)). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x ] Did you write any new necessary tests? ## Who can review? @ydshieh @Narsil @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19584/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19584/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19584", "html_url": "https://github.com/huggingface/transformers/pull/19584", "diff_url": "https://github.com/huggingface/transformers/pull/19584.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19584.patch", "merged_at": 1666013728000 }
https://api.github.com/repos/huggingface/transformers/issues/19583
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19583/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19583/comments
https://api.github.com/repos/huggingface/transformers/issues/19583/events
https://github.com/huggingface/transformers/pull/19583
1,408,038,013
PR_kwDOCUB6oc5Av6KQ
19,583
[Doctest] Add configuration_vision_encoder_decoder.py
{ "login": "SD-13", "id": 89520981, "node_id": "MDQ6VXNlcjg5NTIwOTgx", "avatar_url": "https://avatars.githubusercontent.com/u/89520981?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SD-13", "html_url": "https://github.com/SD-13", "followers_url": "https://api.github.com/users/SD-13/followers", "following_url": "https://api.github.com/users/SD-13/following{/other_user}", "gists_url": "https://api.github.com/users/SD-13/gists{/gist_id}", "starred_url": "https://api.github.com/users/SD-13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SD-13/subscriptions", "organizations_url": "https://api.github.com/users/SD-13/orgs", "repos_url": "https://api.github.com/users/SD-13/repos", "events_url": "https://api.github.com/users/SD-13/events{/privacy}", "received_events_url": "https://api.github.com/users/SD-13/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,665
1,665
1,665
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes part of issue https://github.com/huggingface/transformers/issues/19487. Adds `configuration_vision_encoder_decoder.py` to `Doc tests`. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19583/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19583/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/19583", "html_url": "https://github.com/huggingface/transformers/pull/19583", "diff_url": "https://github.com/huggingface/transformers/pull/19583.diff", "patch_url": "https://github.com/huggingface/transformers/pull/19583.patch", "merged_at": 1665768615000 }