Dataset schema (one issue per row; column name: type, with value ranges as reported by the dataset viewer):

- url: string (62–66 chars)
- repository_url: string (1 class)
- labels_url: string (76–80 chars)
- comments_url: string (71–75 chars)
- events_url: string (69–73 chars)
- html_url: string (50–56 chars)
- id: int64 (377M–2.15B)
- node_id: string (18–32 chars)
- number: int64 (1–29.2k)
- title: string (1–487 chars)
- user: dict
- labels: list
- state: string (2 classes)
- locked: bool (2 classes)
- assignee: dict
- assignees: list
- comments: list
- created_at: int64 (1.54k–1.71k)
- updated_at: int64 (1.54k–1.71k)
- closed_at: int64 (1.54k–1.71k, nullable ⌀)
- author_association: string (4 classes)
- active_lock_reason: string (2 classes)
- body: string (0–234k chars, nullable ⌀)
- reactions: dict
- timeline_url: string (71–75 chars)
- state_reason: string (3 classes)
- draft: bool (2 classes)
- pull_request: dict

The rows below list one value per column, separated by `|`.
https://api.github.com/repos/huggingface/transformers/issues/17473
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17473/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17473/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17473/events
|
https://github.com/huggingface/transformers/issues/17473
| 1,252,587,659
|
I_kwDOCUB6oc5KqPiL
| 17,473
|
Number of channels in the ViTMAE model
|
{
"login": "kacwin",
"id": 19871333,
"node_id": "MDQ6VXNlcjE5ODcxMzMz",
"avatar_url": "https://avatars.githubusercontent.com/u/19871333?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kacwin",
"html_url": "https://github.com/kacwin",
"followers_url": "https://api.github.com/users/kacwin/followers",
"following_url": "https://api.github.com/users/kacwin/following{/other_user}",
"gists_url": "https://api.github.com/users/kacwin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kacwin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kacwin/subscriptions",
"organizations_url": "https://api.github.com/users/kacwin/orgs",
"repos_url": "https://api.github.com/users/kacwin/repos",
"events_url": "https://api.github.com/users/kacwin/events{/privacy}",
"received_events_url": "https://api.github.com/users/kacwin/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nYes, the \"patchify\" and \"unpatchify\" methods would need to be updated to support 1 channel.\r\n\r\nAre you interested in opening a PR to add support for this? ",
"Hey, \r\nthanks for the fast reply. I am a complete beginner to GitHub (I have only used it to manage my personal projects so far) and I will not be getting into the PR procedure anytime soon. I did not know how to reach out to someone managing the ViTMAE project, so I opened this issue just to point things out. If no one else has the same problem, or no one has time to look at this, then that is just how it is, I guess.",
"@NielsRogge updating the hard-coded part to the config value will work? \r\n\r\nI can open a PR to help on this"
] | 1,653
| 1,655
| 1,655
|
NONE
| null |
### Feature request
Dear huggingface community,
I am experimenting with the ViTMAE model from the transformers library. The ViTMAEConfig class has the option "num_channels" to specify the number of input (color) channels of an image. If I set this to, say, 1 (for processing grayscale images), the model throws an error, because the number "3" is hard-coded into the functions "patchify" and "unpatchify" in the file "modeling_vit_mae.py".
### Motivation
I would like to request to change this such that any number of input channels is possible.
### Your contribution
As noted above, one only has to change the functions "patchify" and "unpatchify" to either infer the number of input channels from the data, or to store the number of channels as a class attribute that both functions can use (instead of the hard-coded value "3"). I checked this, and on my system it worked out just fine.
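For illustration, here is a minimal standalone sketch (in NumPy, not the actual `modeling_vit_mae.py` code, which uses PyTorch) of how "patchify" and "unpatchify" can take the channel count as a parameter instead of hard-coding 3:

```python
import numpy as np

def patchify(imgs, patch_size, num_channels=3):
    """(N, C, H, W) -> (N, num_patches, patch_size**2 * C)."""
    n, c, h, w = imgs.shape
    assert c == num_channels and h == w and h % patch_size == 0
    p = patch_size
    g = h // p  # number of patches per side
    x = imgs.reshape(n, c, g, p, g, p)
    x = np.einsum("nchpwq->nhwpqc", x)  # group pixels per patch, channels last
    return x.reshape(n, g * g, p * p * c)

def unpatchify(x, patch_size, num_channels=3):
    """Inverse of patchify: (N, num_patches, patch_size**2 * C) -> (N, C, H, W)."""
    n, num_patches, _ = x.shape
    p = patch_size
    g = int(num_patches ** 0.5)
    x = x.reshape(n, g, g, p, p, num_channels)
    x = np.einsum("nhwpqc->nchpwq", x)
    return x.reshape(n, num_channels, g * p, g * p)
```

With `num_channels=1`, a grayscale batch round-trips through both functions unchanged.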
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17473/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17472
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17472/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17472/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17472/events
|
https://github.com/huggingface/transformers/pull/17472
| 1,252,570,249
|
PR_kwDOCUB6oc44rLeQ
| 17,472
|
Setup for Italian translation and add quicktour.mdx translation
|
{
"login": "mfumanelli",
"id": 53374883,
"node_id": "MDQ6VXNlcjUzMzc0ODgz",
"avatar_url": "https://avatars.githubusercontent.com/u/53374883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfumanelli",
"html_url": "https://github.com/mfumanelli",
"followers_url": "https://api.github.com/users/mfumanelli/followers",
"following_url": "https://api.github.com/users/mfumanelli/following{/other_user}",
"gists_url": "https://api.github.com/users/mfumanelli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfumanelli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfumanelli/subscriptions",
"organizations_url": "https://api.github.com/users/mfumanelli/orgs",
"repos_url": "https://api.github.com/users/mfumanelli/repos",
"events_url": "https://api.github.com/users/mfumanelli/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfumanelli/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This looks great! Thanks @mfumanelli for the PR and opening the venue for an Italian 🇮🇹 translation!\r\n\r\nSince this is the first IT translation could please add `it` to the languages section in the `.github/workflows` ([here](https://github.com/huggingface/transformers/blob/main/.github/workflows/build_documentation.yml) and [here](https://github.com/huggingface/transformers/blob/main/.github/workflows/build_pr_documentation.yml))?",
"Thank you @mfumanelli 🇮🇹! LGTM, @sgugger :)"
] | 1,653
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
I created the folder for files translated into Italian and translated the quicktour.mdx and index.mdx files.
The translated documents are located in the transformers/docs/source/it folder.
Fixes [#17459](https://github.com/huggingface/transformers/issues/17459)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
@omarespejel @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17472/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17472",
"html_url": "https://github.com/huggingface/transformers/pull/17472",
"diff_url": "https://github.com/huggingface/transformers/pull/17472.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17472.patch",
"merged_at": 1654005464000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17471
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17471/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17471/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17471/events
|
https://github.com/huggingface/transformers/pull/17471
| 1,252,467,233
|
PR_kwDOCUB6oc44q1Yc
| 17,471
|
Fix CI tests hang forever (sometimes)
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"From the experimentation, with the argument `--max-worker-restart=0`:\r\n\r\n- all tests passed before the worker crashed --> marked as passed\r\n- the test where the worker crashes --> marked as failed\r\n- all tests not yet run before the worker crashed -> won't be marked at all, neither as passed nor as failed (i.e. won't be sent to other workers)",
"> From the experimentation, with the argument `--max-worker-restart=0`:\r\n> \r\n> * all tests passed before the worker crashed --> marked as passed\r\n> * the test where the worker crashes --> marked as failed\r\n> * all tests not yet run before the worker crashed -> won't be marked at all, neither as passed nor as failed (i.e. won't be sent to other workers)\r\n\r\nI understand we might want the tests to be run on other workers, but currently, without `--max-worker-restart=0`, the CI hangs forever if a worker crashes, which is not good either.",
"~~(BTW, it looks like this issue happens in docker, say `circleci/python:3.7`, but not on the GCP VM without docker)~~"
] | 1,653
| 1,662
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
Set `--max-worker-restart=0` for the `pytest` command in the CircleCI workflow file.
Currently, the tests can hang forever if `--dist=loadfile` is specified and a worker crashes; the job finally fails with `Too long with no output (exceeded 10m0s): context deadline exceeded`. For example, [this job run](https://app.circleci.com/pipelines/github/huggingface/transformers/41092/workflows/9d84d20f-be88-4a20-b17e-07d5437b59f9/jobs/467619).
## Remark:
- I opened [an issue](https://github.com/pytest-dev/pytest-xdist/issues/784#issue-1252433606) in `pytest-xdist` repo.
- The reason the worker crashed is still unclear. It might be related to the [memory issue #17470](https://github.com/huggingface/transformers/pull/17470), but a general issue of memory leaks in the tests also exists.
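As a sketch, the resulting invocation looks roughly like this (the worker count and test path are placeholders, not the actual CircleCI config):

```shell
# --dist=loadfile keeps all tests from one file on the same worker;
# --max-worker-restart=0 makes a crashed worker fail the run immediately
# instead of being replaced, which is where the hang came from.
PYTEST_FLAGS="-n 8 --dist=loadfile --max-worker-restart=0"
echo "python -m pytest $PYTEST_FLAGS tests/"
```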
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17471/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17471",
"html_url": "https://github.com/huggingface/transformers/pull/17471",
"diff_url": "https://github.com/huggingface/transformers/pull/17471.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17471.patch",
"merged_at": 1654158654000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17470
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17470/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17470/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17470/events
|
https://github.com/huggingface/transformers/pull/17470
| 1,252,380,500
|
PR_kwDOCUB6oc44qi64
| 17,470
|
Fix ViTMAEModelTester
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
Currently, the `ViTMAE` tests have `(TF)ViTMAEModelTester` not setting the `decoder` config values, like `decoder_hidden_size`, `decoder_intermediate_size`, etc.
For the model class `(TF)ViTMAEForPreTraining`, the default values of these attributes in `ViTMAEConfig` are therefore used, which makes some tests slow and makes them consume quite a lot of memory.
This PR fixes that.
(This might be one of the reasons why CI tests sometimes get `node down: Not properly terminated` -> `replacing crashed worker gw7` -> hang forever -> timeout.)
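A hedged sketch of the kind of tiny decoder overrides such a tester can pass (the key names follow `ViTMAEConfig`; the values are illustrative, not the PR's actual numbers):

```python
# Illustrative tester overrides: without the decoder_* keys, the
# (TF)ViTMAEForPreTraining tests silently fall back to the much larger
# ViTMAEConfig defaults, which is what made them slow and memory-hungry.
tiny_vit_mae_kwargs = {
    "hidden_size": 32,
    "intermediate_size": 37,
    "decoder_hidden_size": 32,
    "decoder_intermediate_size": 37,
    "decoder_num_hidden_layers": 2,
    "decoder_num_attention_heads": 4,
}
```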
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17470/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17470",
"html_url": "https://github.com/huggingface/transformers/pull/17470",
"diff_url": "https://github.com/huggingface/transformers/pull/17470.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17470.patch",
"merged_at": 1654002114000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17469
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17469/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17469/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17469/events
|
https://github.com/huggingface/transformers/pull/17469
| 1,252,378,116
|
PR_kwDOCUB6oc44qiaC
| 17,469
|
Add swin transformer v2
|
{
"login": "nandwalritik",
"id": 48522685,
"node_id": "MDQ6VXNlcjQ4NTIyNjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/48522685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nandwalritik",
"html_url": "https://github.com/nandwalritik",
"followers_url": "https://api.github.com/users/nandwalritik/followers",
"following_url": "https://api.github.com/users/nandwalritik/following{/other_user}",
"gists_url": "https://api.github.com/users/nandwalritik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nandwalritik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nandwalritik/subscriptions",
"organizations_url": "https://api.github.com/users/nandwalritik/orgs",
"repos_url": "https://api.github.com/users/nandwalritik/repos",
"events_url": "https://api.github.com/users/nandwalritik/events{/privacy}",
"received_events_url": "https://api.github.com/users/nandwalritik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you for constantly guiding me through this model addition and answering all my queries 😊. Looking forward to contributing more in the future.\r\n\r\n> Great work! Main comments are:\r\n> \r\n> * you can also add `Copied from` statements to methods, so it can be leveraged to copy forward methods, in case the init method differs.\r\n> * would be great to add the model to the doc tests\r\n\r\n",
"> Thank you for your contribution! Everything looks great and all major issues have already been addressed. I made a few comments on code comments and minor issues, they shouldn't take long and we can merge this PR afterwards :)\r\n\r\nThanks, I have added the changes as you suggested.",
"Thanks again for your contribution!"
] | 1,653
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR will fix #17268
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17469/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17469",
"html_url": "https://github.com/huggingface/transformers/pull/17469",
"diff_url": "https://github.com/huggingface/transformers/pull/17469.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17469.patch",
"merged_at": 1658934888000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17468
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17468/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17468/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17468/events
|
https://github.com/huggingface/transformers/issues/17468
| 1,252,050,909
|
I_kwDOCUB6oc5KoMfd
| 17,468
|
Any attempt to export any model to onnx returns an ATOL value of nan.
|
{
"login": "Jcwscience",
"id": 14113132,
"node_id": "MDQ6VXNlcjE0MTEzMTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/14113132?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jcwscience",
"html_url": "https://github.com/Jcwscience",
"followers_url": "https://api.github.com/users/Jcwscience/followers",
"following_url": "https://api.github.com/users/Jcwscience/following{/other_user}",
"gists_url": "https://api.github.com/users/Jcwscience/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jcwscience/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jcwscience/subscriptions",
"organizations_url": "https://api.github.com/users/Jcwscience/orgs",
"repos_url": "https://api.github.com/users/Jcwscience/repos",
"events_url": "https://api.github.com/users/Jcwscience/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jcwscience/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,657
| 1,657
|
NONE
| null |
`ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: nan`
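The error comes from a tolerance check comparing the reference model's outputs with the ONNX Runtime outputs; a nan anywhere in either output makes the reported maximum difference nan. A minimal illustration of that comparison (not the actual `transformers.onnx` validation code):

```python
import math
import numpy as np

def max_abs_diff(reference, exported):
    # A single nan on either side propagates into the reported maximum,
    # which is how the export check ends up printing "nan".
    diff = np.abs(np.asarray(reference, dtype=float) - np.asarray(exported, dtype=float))
    return float(np.max(diff))
```

So a nan ATOL usually points at nan values in the model outputs themselves rather than at the tolerance check.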
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17468/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/17468/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17467
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17467/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17467/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17467/events
|
https://github.com/huggingface/transformers/issues/17467
| 1,252,022,505
|
I_kwDOCUB6oc5KoFjp
| 17,467
|
Error building docs locally: No matching distribution found for ray[tune]; extra == "docs"
|
{
"login": "venkatasg",
"id": 22871413,
"node_id": "MDQ6VXNlcjIyODcxNDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/22871413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/venkatasg",
"html_url": "https://github.com/venkatasg",
"followers_url": "https://api.github.com/users/venkatasg/followers",
"following_url": "https://api.github.com/users/venkatasg/following{/other_user}",
"gists_url": "https://api.github.com/users/venkatasg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/venkatasg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/venkatasg/subscriptions",
"organizations_url": "https://api.github.com/users/venkatasg/orgs",
"repos_url": "https://api.github.com/users/venkatasg/repos",
"events_url": "https://api.github.com/users/venkatasg/events{/privacy}",
"received_events_url": "https://api.github.com/users/venkatasg/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Not sure why you're opening an issue here as it seems you'd like `ray` to have a package installable on your end :-).\r\n\r\nI can't guarantee the doc will build without all integrations, but it might work to build the doc without Ray installed.",
"I'm going to close the issue now, since I've found that using `python=3.7` fixed the issue with installing `ray`. However, I seem to be running into more problems now. Firstly, it looks like the module `black` is not listed as a dependency: I got an error that `doc_builder` requires `black` to work. However, even after installing `black`, I'm running into a new error:\r\n\r\n```\r\nfish: 'doc-builder build transformers…' terminated by signal SIGILL (Illegal instruction)\r\n```\r\n\r\nI'm not sure what's causing this error -- I might open a new issue if I find that it's something related to this repo rather than my setup.\r\n\r\nEDIT: The issue seems to be more general. After installing the dependencies and `doc-builder`, even trying to import the `transformers` package exits with the SIGILL error."
] | 1,653
| 1,654
| 1,654
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.17.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.10.0
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@sgugger @stevhl
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
On my system, when I try to install the packages necessary to build the documentation using `pip install -e ".[docs]"`, I get the following error:
```
ERROR: Could not find a version that satisfies the requirement ray[tune]; extra == "docs" (from transformers[docs]) (from versions: none)
ERROR: No matching distribution found for ray[tune]; extra == "docs"
```
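For reference, pip spells an extras install with the package path first and the extra name in brackets; a sketch of the conventional form (the `docs` extra is what pulls in the `ray[tune]` pin from the error above):

```shell
# Conventional extras spelling: editable install of the current checkout
# plus the `docs` extra (path first, extra name in brackets).
DOCS_INSTALL='pip install -e ".[docs]"'
echo "$DOCS_INSTALL"
```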
I'm trying to build the documentation locally as I'm a contributor to the [Dash user contributed datasets](https://github.com/Kapeli/Dash-User-Contributions#contribute-a-new-docset).
I'm developing on an Apple silicon Mac, but the conda environment is setup as a `x86_64` environment.
### Expected behavior
Documentation can be built and viewed locally.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17467/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17466
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17466/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17466/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17466/events
|
https://github.com/huggingface/transformers/pull/17466
| 1,251,972,263
|
PR_kwDOCUB6oc44pOQU
| 17,466
|
Adding LeViT Model by Facebook
|
{
"login": "AnugunjNaman",
"id": 42839570,
"node_id": "MDQ6VXNlcjQyODM5NTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/42839570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AnugunjNaman",
"html_url": "https://github.com/AnugunjNaman",
"followers_url": "https://api.github.com/users/AnugunjNaman/followers",
"following_url": "https://api.github.com/users/AnugunjNaman/following{/other_user}",
"gists_url": "https://api.github.com/users/AnugunjNaman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AnugunjNaman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AnugunjNaman/subscriptions",
"organizations_url": "https://api.github.com/users/AnugunjNaman/orgs",
"repos_url": "https://api.github.com/users/AnugunjNaman/repos",
"events_url": "https://api.github.com/users/AnugunjNaman/events{/privacy}",
"received_events_url": "https://api.github.com/users/AnugunjNaman/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the suggestion. I have done the requested changes. I need help here:\r\n- I cannot figure out the warning-rendering problem: if I add a newline or a space, there is a `white space` style error.\r\n- The image in `convnext.mdx` is added in the HF Dataset repo on the hub. I can provide the image to be added there.\r\n\r\nThe weights are also uploaded to the hub correctly after the change `self.bn` -> `self.batch_norm`.\r\nPlease review and suggest the changes @NielsRogge ",
"CI error seems unrelated, thanks for your work, merging!"
] | 1,653
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
Adds LeViT Model (a Vision Transformer) to HF Library. All the checkpoints are on my HF account hub.
Please review and suggest the required changes.
@NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17466/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17466",
"html_url": "https://github.com/huggingface/transformers/pull/17466",
"diff_url": "https://github.com/huggingface/transformers/pull/17466.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17466.patch",
"merged_at": 1654095981000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17465
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17465/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17465/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17465/events
|
https://github.com/huggingface/transformers/issues/17465
| 1,251,832,657
|
I_kwDOCUB6oc5KnXNR
| 17,465
|
Add MVP model
|
{
"login": "StevenTang1998",
"id": 37647985,
"node_id": "MDQ6VXNlcjM3NjQ3OTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/37647985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StevenTang1998",
"html_url": "https://github.com/StevenTang1998",
"followers_url": "https://api.github.com/users/StevenTang1998/followers",
"following_url": "https://api.github.com/users/StevenTang1998/following{/other_user}",
"gists_url": "https://api.github.com/users/StevenTang1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StevenTang1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StevenTang1998/subscriptions",
"organizations_url": "https://api.github.com/users/StevenTang1998/orgs",
"repos_url": "https://api.github.com/users/StevenTang1998/repos",
"events_url": "https://api.github.com/users/StevenTang1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/StevenTang1998/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Hello! I think this is a situation where the code-on-the-hub approach is likely to be the best way to share it! See this guide: [Sharing custom models](https://huggingface.co/docs/transformers/custom_models).\r\n\r\nThis will give you the most amount of freedom with regards to your model.\r\n\r\ncc @sgugger ",
"Thank you! I will take a look.",
"Great! Please let us know if you run into any issues, or if anything is unclear from the guide. Thanks!",
"Hi, sorry I still don't know how to handle a situation like mine.\r\n\r\nMy model is \r\n```\r\n(\r\nBertModel()\r\nnn.Linear()\r\n)\r\n```\r\n\r\nAnd my checkpoint is composed of\r\n```\r\n(\r\nBertModel()\r\n'a': nn.Linear()\r\n'b': nn.Linear()\r\n'c': nn.Linear()\r\n)\r\n```\r\n\r\nAnd my expected effect is that user can determine which Linear to use through the config, such as `XXX.from_pretrained('my_model', linear='a' or 'b' or 'c')` to load according linear.\r\nHowever, the model initilization is before the model loading. So I don't know how to solve it.",
"A quick solution would be to have three architectures, one with `a`, one with `b`, and one with `c` linear layers and load the one you want accordingly.",
"Do you mean I upload three models, named my-model-a, my-model-b and my-model-c, then load the one I want?\r\n\r\n",
"You can upload one single checkpoint, but three different architectures that each use the `a`, `b`, and `c`. Similarly to models trained on sequence classification (or any other task): they can be loaded in base models, in question answering models, etc. It will only load the layers it needs, and ignore the rest.",
"Thanks for your response! I understand what you mean.\r\n\r\nActually, my situation is a little different from the task head. The prompt is the part of the encoder and decoder, and I have 8 different prompts rather than 3. So I wonder if there is any other way, considering they have the same architecture."
] | 1,653
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
### Model description
Hi, I want to add my model MVP to Hugging Face. My model is very similar to BART. The difference is that it has pre-trained soft prompts in the format of prefix-tuning.
We have pre-trained a series of prompts combined with the backbone transformer, and users can choose which prompt to load. So I wonder where I can change the behaviour that controls which prompt is loaded?
To put it simply, our model is composed of a big nn.Transformer and a small nn.Linear. We have pre-trained one Transformer and _**n**_ Linear layers. Users can choose which Linear to load.
One solution would be to upload _**n**_ checkpoints, each including the common Transformer and the respective Linear, so users just write `model.from_pretrained('model1/2/3')` to load the corresponding prompt. However, it is a bit wasteful to upload and download the common Transformer **_n_** times.
So I wonder how I can solve this in a simple way, e.g. our uploaded model consists of the Transformer and **_n_** prompts, and users choose which prompt to use through the config.
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17465/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17464
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17464/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17464/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17464/events
|
https://github.com/huggingface/transformers/issues/17464
| 1,251,824,143
|
I_kwDOCUB6oc5KnVIP
| 17,464
|
ValueError: AlbertForMaskedLM does not support gradient checkpointing.
|
{
"login": "warm-ice0x00",
"id": 67120113,
"node_id": "MDQ6VXNlcjY3MTIwMTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/67120113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/warm-ice0x00",
"html_url": "https://github.com/warm-ice0x00",
"followers_url": "https://api.github.com/users/warm-ice0x00/followers",
"following_url": "https://api.github.com/users/warm-ice0x00/following{/other_user}",
"gists_url": "https://api.github.com/users/warm-ice0x00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/warm-ice0x00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/warm-ice0x00/subscriptions",
"organizations_url": "https://api.github.com/users/warm-ice0x00/orgs",
"repos_url": "https://api.github.com/users/warm-ice0x00/repos",
"events_url": "https://api.github.com/users/warm-ice0x00/events{/privacy}",
"received_events_url": "https://api.github.com/users/warm-ice0x00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,657
| 1,657
|
NONE
| null |
ValueError: AlbertForMaskedLM does not support gradient checkpointing.
`model.gradient_checkpointing_enable()` doesn't help.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17464/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17463
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17463/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17463/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17463/events
|
https://github.com/huggingface/transformers/issues/17463
| 1,251,549,917
|
I_kwDOCUB6oc5KmSLd
| 17,463
|
transformers model doesn't output zeros for padded subtokens
|
{
"login": "dayyass",
"id": 26326659,
"node_id": "MDQ6VXNlcjI2MzI2NjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/26326659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dayyass",
"html_url": "https://github.com/dayyass",
"followers_url": "https://api.github.com/users/dayyass/followers",
"following_url": "https://api.github.com/users/dayyass/following{/other_user}",
"gists_url": "https://api.github.com/users/dayyass/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dayyass/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dayyass/subscriptions",
"organizations_url": "https://api.github.com/users/dayyass/orgs",
"repos_url": "https://api.github.com/users/dayyass/repos",
"events_url": "https://api.github.com/users/dayyass/events{/privacy}",
"received_events_url": "https://api.github.com/users/dayyass/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"> This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\r\n> \r\n> Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.\r\n\r\nping",
"Hi @dayyass ,\r\n\r\nIndeed, I don't think there is a condition on embeddings that would be at an masked position. But as you have shown in your two tests:\r\n```python\r\nprint(torch.allclose(tensor_1[1], tensor_2[1].squeeze(0), atol=1e-05)) # True\r\nprint(torch.allclose(tensor_1[0][:6], tensor_2[0].squeeze(0), atol=1e-05)) # True\r\n```\r\n\r\nThe most important thing is that the embeddings of the unmasked positions are identical, no matter how many pad tokens are added at the end of the sequence - which is verified on your tests.\r\n\r\nDid I miss something in your question? Why do you expect that embeddings corresponding to padded subtokens would be equal to zero?",
"Hi, @SaulLu!\r\nThanks for answering!\r\n\r\nNo, I put all information in issue.\r\n\r\nI assume that embeddings corresponding to padded subtokens should be equal to zero, because it is easier to find such embeddings and exclude those from averaging. But if we doesn't exclude those, it is better (most likely) to average embeddings when it is equal to zero.\r\n",
"Still actual issue.",
"> I assume that embeddings corresponding to padded subtokens should be equal to zero, because it is easier to find such embeddings and exclude those from averaging. But if we doesn't exclude those, it is better (most likely) to average embeddings when it is equal to zero.\r\n\r\nThanks for the details, in your case, I think the attention mask is exactly what you're looking for. If you ever want your output to have the value set to 0 for pad tokens, you can multiply the output by the attention mask. In your example:\r\n```python\r\ntensor_1 * tokens_1.attention_mask.unsqueeze(2)\r\n```\r\n",
"> > I assume that embeddings corresponding to padded subtokens should be equal to zero, because it is easier to find such embeddings and exclude those from averaging. But if we doesn't exclude those, it is better (most likely) to average embeddings when it is equal to zero.\r\n> \r\n> Thanks for the details, in your case, I think the attention mask is exactly what you're looking for. If you ever want your output to have the value set to 0 for pad tokens, you can multiply the output by the attention mask. In your example:\r\n> \r\n> ```python\r\n> tensor_1 * tokens_1.attention_mask.unsqueeze(2)\r\n> ```\r\n\r\nThat's an excellent and elegant solution, thank you for that!\r\nI guess the issue might be closed.",
"Glad it helped you! :hugs: "
] | 1,653
| 1,658
| 1,658
|
NONE
| null |
### System Info
I have noticed that mean pooling over `last_hidden_state` returns different results depending on the batch size. When I dug deeper to understand the cause of the problem, I found that subtokens padded by the tokenizer are not zeros after passing through the model, and this leads to the problematic behavior. When batch size = 1 there are no padded subtokens, so the results are 100% correct, but batch size = 1 leads to much longer computations (obviously).
Is this the correct behavior, and is it possible to avoid it?
The snippet below uses `bert-base-uncased` model, but this behavior also related to other models.
### Who can help?
@LysandreJik @SaulLu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python3
import torch
from transformers import AutoModel, AutoTokenizer
model_name = 'bert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
sentences = [
"Hello, World!",
"How are you doing today?",
]
tokenizer_kwargs = {
'return_tensors': 'pt',
'padding': True,
'truncation': True,
'max_length': 512,
}
tokens_1 = tokenizer(sentences, **tokenizer_kwargs)
tokens_2 = []
for sentence in sentences:
tokens = tokenizer(sentence, **tokenizer_kwargs)
tokens_2.append(tokens)
with torch.no_grad():
tensor_1 = model(**tokens_1)['last_hidden_state']
tensor_2 = []
for tokens in tokens_2:
with torch.no_grad():
tensor = model(**tokens)['last_hidden_state']
tensor_2.append(tensor)
print(torch.allclose(tensor_1[1], tensor_2[1].squeeze(0), atol=1e-05)) # True
print(torch.allclose(tensor_1[0][:6], tensor_2[0].squeeze(0), atol=1e-05)) # True
print(tensor_1[0][6:]) # padded tokens
```
### Expected behavior
I expect that embeddings corresponding to padded subtokens would be equal to zero.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17463/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17462
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17462/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17462/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17462/events
|
https://github.com/huggingface/transformers/pull/17462
| 1,251,476,982
|
PR_kwDOCUB6oc44ns3X
| 17,462
|
Add EfficientNet model for PyTorch
|
{
"login": "tanaymeh",
"id": 26519539,
"node_id": "MDQ6VXNlcjI2NTE5NTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/26519539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanaymeh",
"html_url": "https://github.com/tanaymeh",
"followers_url": "https://api.github.com/users/tanaymeh/followers",
"following_url": "https://api.github.com/users/tanaymeh/following{/other_user}",
"gists_url": "https://api.github.com/users/tanaymeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanaymeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanaymeh/subscriptions",
"organizations_url": "https://api.github.com/users/tanaymeh/orgs",
"repos_url": "https://api.github.com/users/tanaymeh/repos",
"events_url": "https://api.github.com/users/tanaymeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanaymeh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Working on this.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds the EfficientNet model family (proposed in #15759) to HuggingFace transformers (PyTorch only for this PR).
The implementation is based on Ross Wightman's pytorch-image-models [implementation](https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/efficientnet.py).
## Who can review?
@NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17462/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17462",
"html_url": "https://github.com/huggingface/transformers/pull/17462",
"diff_url": "https://github.com/huggingface/transformers/pull/17462.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17462.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17461
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17461/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17461/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17461/events
|
https://github.com/huggingface/transformers/issues/17461
| 1,251,456,758
|
I_kwDOCUB6oc5Kl7b2
| 17,461
|
Spanish docs - Links don't work
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@omarespejel thanks a lot for notifying. I'm working on a fix"
] | 1,653
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
## TLDR
Links to docs that don't yet exist (e.g. `./main_classes/pipelines`) don't work. Issue first mentioned in #17349.
## Description
Currently, the links lead to an error if the doc is not yet translated. For example, this fragment in [`autoclass_tutorial`](https://huggingface.co/docs/transformers/main/es/autoclass_tutorial) leads to an error because [model_doc/auto.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/auto.mdx)
has not been translated yet.
> Finalmente, las clases AutoModelFor te permiten cargar un modelo preentrenado para una tarea dada (revisa [aquí](https://huggingface.co/docs/transformers/main/es/model_doc/auto) para conocer la lista completa de tareas disponibles).
## Possible solution
Linking to the English docs until the Spanish versions become available.
## Possible reviewers
@sgugger @mishig25
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17461/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/17461/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17460
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17460/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17460/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17460/events
|
https://github.com/huggingface/transformers/issues/17460
| 1,251,301,822
|
I_kwDOCUB6oc5KlVm-
| 17,460
|
fx.symbolic_trace not working for Roberta
|
{
"login": "WeiHao97",
"id": 37089196,
"node_id": "MDQ6VXNlcjM3MDg5MTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/37089196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WeiHao97",
"html_url": "https://github.com/WeiHao97",
"followers_url": "https://api.github.com/users/WeiHao97/followers",
"following_url": "https://api.github.com/users/WeiHao97/following{/other_user}",
"gists_url": "https://api.github.com/users/WeiHao97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WeiHao97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WeiHao97/subscriptions",
"organizations_url": "https://api.github.com/users/WeiHao97/orgs",
"repos_url": "https://api.github.com/users/WeiHao97/repos",
"events_url": "https://api.github.com/users/WeiHao97/events{/privacy}",
"received_events_url": "https://api.github.com/users/WeiHao97/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"After specifying: \r\n```python\r\ninput_names = ['input_ids',\r\n 'attention_mask',\r\n 'token_type_ids',\r\n 'position_ids',\r\n 'encoder_hidden_states',\r\n 'encoder_attention_mask',\r\n 'labels'] \r\n```\r\nI saw another bug in:\r\n```python\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\nInput In [2], in <cell line: 22>()\r\n 14 model = RobertaForMaskedLM(config)\r\n 15 input_names = ['input_ids',\r\n 16 'attention_mask',\r\n 17 'token_type_ids',\r\n (...)\r\n 20 'encoder_attention_mask',\r\n 21 'labels']\r\n---> 22 gm = symbolic_trace(model,input_names)\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/transformers/utils/fx.py:595, in symbolic_trace(model, input_names)\r\n 593 tracer = HFTracer()\r\n 594 traced_graph = tracer.trace(model, concrete_args=concrete_args)\r\n--> 595 traced = torch.fx.GraphModule(model, traced_graph)\r\n 597 return traced\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph_module.py:312, in GraphModule.__init__(self, root, graph, class_name)\r\n 309 else:\r\n 310 raise RuntimeError('Unsupported type ' + str(root) + ' passed for root!')\r\n--> 312 self.graph = graph\r\n 314 # Store the Tracer class responsible for creating a Graph separately as part of the\r\n 315 # GraphModule state, except when the Tracer is defined in a local namespace.\r\n 316 # Locally defined Tracers are not pickleable. 
This is needed because torch.package will\r\n 317 # serialize a GraphModule without retaining the Graph, and needs to use the correct Tracer\r\n 318 # to re-create the Graph during deserialization.\r\n 319 self._tracer_cls = None\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:1225, in Module.__setattr__(self, name, value)\r\n 1223 buffers[name] = value\r\n 1224 else:\r\n-> 1225 object.__setattr__(self, name, value)\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph_module.py:346, in GraphModule.graph(self, g)\r\n 344 self._graph = g\r\n 345 g.owning_module = self\r\n--> 346 self.recompile()\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph_module.py:562, in GraphModule.recompile(self)\r\n 560 self._in_spec = self._graph._pytree_info.in_spec\r\n 561 self._out_spec = self._graph._pytree_info.out_spec\r\n--> 562 python_code = self._graph.python_code(root_module='self')\r\n 563 self._code = python_code.src\r\n 565 cls = type(self)\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:834, in Graph.python_code(self, root_module)\r\n 831 node._repr_fn = orig_repr_fns[node]\r\n 833 with override_node_repr(self):\r\n--> 834 return self._python_code(root_module, namespace)\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:988, in Graph._python_code(self, root_module, namespace)\r\n 983 raise NotImplementedError(f'node: {node.op} {node.target}')\r\n 985 for node in self.nodes:\r\n 986 # NOTE: emit_node does not emit a string with newline. It depends\r\n 987 # on delete_unused_values to append one\r\n--> 988 emit_node(node)\r\n 989 delete_unused_values(node)\r\n 991 if len(body) == 0:\r\n 992 # If the Graph has no non-placeholder nodes, no lines for the body\r\n 993 # have been emitted. 
To continue to have valid Python code, emit a\r\n 994 # single pass statement\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:928, in Graph._python_code.<locals>.emit_node(node)\r\n 927 def emit_node(node : Node):\r\n--> 928 maybe_type_annotation = '' if node.type is None else f' : {type_repr(node.type)}'\r\n 929 if node.op == 'placeholder':\r\n 930 assert isinstance(node.target, str)\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:885, in Graph._python_code.<locals>.type_repr(o)\r\n 882 origin_typename = add_global(_type_repr(origin_type), origin_type)\r\n 884 # Assign global names for each of the inner type variables.\r\n--> 885 args = [type_repr(arg) for arg in o.__args__]\r\n 887 return f'{origin_typename}[{\",\".join(args)}]'\r\n 889 # Common case: this is a regular module name like 'foo.bar.baz'\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:885, in <listcomp>(.0)\r\n 882 origin_typename = add_global(_type_repr(origin_type), origin_type)\r\n 884 # Assign global names for each of the inner type variables.\r\n--> 885 args = [type_repr(arg) for arg in o.__args__]\r\n 887 return f'{origin_typename}[{\",\".join(args)}]'\r\n 889 # Common case: this is a regular module name like 'foo.bar.baz'\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:885, in Graph._python_code.<locals>.type_repr(o)\r\n 882 origin_typename = add_global(_type_repr(origin_type), origin_type)\r\n 884 # Assign global names for each of the inner type variables.\r\n--> 885 args = [type_repr(arg) for arg in o.__args__]\r\n 887 return f'{origin_typename}[{\",\".join(args)}]'\r\n 889 # Common case: this is a regular module name like 'foo.bar.baz'\r\n\r\nFile ~/anaconda3/lib/python3.9/typing.py:711, in _BaseGenericAlias.__getattr__(self, attr)\r\n 709 if '__origin__' in self.__dict__ and not _is_dunder(attr):\r\n 710 return getattr(self.__origin__, attr)\r\n--> 711 raise AttributeError(attr)\r\n\r\nAttributeError: 
__args__\r\n```\r\n\r\n@michaelbenayoun Do you mind take a look? Thank you very much!",
"Hi @WeiHao97 ,\r\n\r\nTo trace the model, you need to specify the input names and not the concrete args, the input_names represent the inputs your traced model will take.\r\nIf you want to trace Roberta, you most likely need to only provide \"input_ids\" and \"attention_mask\":\r\n\r\n```python\r\nmodel = RobertaForMaskedLM(config)\r\ninput_names = [\"input_ids\", \"attention_mask\"]\r\ngm = fx.symbolic_trace(model, input_names)\r\n```",
"> Hi @WeiHao97 ,\r\n> \r\n> To trace the model, you need to specify the input names and not the concrete args, the input_names represent the inputs your traced model will take. If you want to trace Roberta, you most likely need to only provide \"input_ids\" and \"attention_mask\":\r\n> \r\n> ```python\r\n> model = RobertaForMaskedLM(config)\r\n> input_names = [\"input_ids\", \"attention_mask\"]\r\n> gm = fx.symbolic_trace(model, input_names)\r\n> ```\r\n\r\nAfter specifying input_names = ['input_ids',\r\n 'attention_mask',\r\n 'token_type_ids',\r\n 'position_ids',\r\n 'encoder_hidden_states',\r\n 'encoder_attention_mask',\r\n 'labels'] or input_names = [\"input_ids\", \"attention_mask\"], I got the second error.",
"Could you share the error message please?\r\nAlso what are you trying to do? It seems to be an odd set of inputs you want to use.\r\n\r\nThat's weird that you get an error with this:\r\n```python\r\nmodel = RobertaForMaskedLM(config)\r\ninput_names = [\"input_ids\", \"attention_mask\"]\r\ngm = fx.symbolic_trace(model, input_names)\r\n```\r\nI am able to get this to work on my end.\r\n",
"> Could you share the error message please? Also what are you trying to do? It seems to be an odd set of inputs you want to use.\r\n> \r\n> That's weird that you get an error with this:\r\n> \r\n> ```python\r\n> model = RobertaForMaskedLM(config)\r\n> input_names = [\"input_ids\", \"attention_mask\"]\r\n> gm = fx.symbolic_trace(model, input_names)\r\n> ```\r\n> \r\n> I am able to get this to work on my end.\r\n\r\nI ran with transformers version '4.18.0' :\r\n```python\r\nfrom transformers import RobertaForMaskedLM\r\nfrom transformers import RobertaConfig\r\nfrom transformers.utils import fx\r\n\r\nconfig = RobertaConfig(\r\n vocab_size=52_000,\r\n max_position_embeddings=514,\r\n num_attention_heads=12,\r\n num_hidden_layers=12,\r\n type_vocab_size=1,\r\n)\r\n\r\nmodel = RobertaForMaskedLM(config)\r\ninput_names = [\"input_ids\", \"attention_mask\"]\r\ngm = fx.symbolic_trace(model, input_names)\r\n```\r\nand got:\r\n\r\n```python\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\nInput In [1], in <cell line: 15>()\r\n 13 model = RobertaForMaskedLM(config)\r\n 14 input_names = [\"input_ids\", \"attention_mask\"]\r\n---> 15 gm = fx.symbolic_trace(model, input_names)\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/transformers/utils/fx.py:595, in symbolic_trace(model, input_names)\r\n 593 tracer = HFTracer()\r\n 594 traced_graph = tracer.trace(model, concrete_args=concrete_args)\r\n--> 595 traced = torch.fx.GraphModule(model, traced_graph)\r\n 597 return traced\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph_module.py:312, in GraphModule.__init__(self, root, graph, class_name)\r\n 309 else:\r\n 310 raise RuntimeError('Unsupported type ' + str(root) + ' passed for root!')\r\n--> 312 self.graph = graph\r\n 314 # Store the Tracer class responsible for creating a Graph separately as part of the\r\n 315 # GraphModule state, except when the Tracer is defined in a 
local namespace.\r\n 316 # Locally defined Tracers are not pickleable. This is needed because torch.package will\r\n 317 # serialize a GraphModule without retaining the Graph, and needs to use the correct Tracer\r\n 318 # to re-create the Graph during deserialization.\r\n 319 self._tracer_cls = None\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:1225, in Module.__setattr__(self, name, value)\r\n 1223 buffers[name] = value\r\n 1224 else:\r\n-> 1225 object.__setattr__(self, name, value)\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph_module.py:346, in GraphModule.graph(self, g)\r\n 344 self._graph = g\r\n 345 g.owning_module = self\r\n--> 346 self.recompile()\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph_module.py:562, in GraphModule.recompile(self)\r\n 560 self._in_spec = self._graph._pytree_info.in_spec\r\n 561 self._out_spec = self._graph._pytree_info.out_spec\r\n--> 562 python_code = self._graph.python_code(root_module='self')\r\n 563 self._code = python_code.src\r\n 565 cls = type(self)\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:834, in Graph.python_code(self, root_module)\r\n 831 node._repr_fn = orig_repr_fns[node]\r\n 833 with override_node_repr(self):\r\n--> 834 return self._python_code(root_module, namespace)\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:988, in Graph._python_code(self, root_module, namespace)\r\n 983 raise NotImplementedError(f'node: {node.op} {node.target}')\r\n 985 for node in self.nodes:\r\n 986 # NOTE: emit_node does not emit a string with newline. It depends\r\n 987 # on delete_unused_values to append one\r\n--> 988 emit_node(node)\r\n 989 delete_unused_values(node)\r\n 991 if len(body) == 0:\r\n 992 # If the Graph has no non-placeholder nodes, no lines for the body\r\n 993 # have been emitted. 
To continue to have valid Python code, emit a\r\n 994 # single pass statement\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:928, in Graph._python_code.<locals>.emit_node(node)\r\n 927 def emit_node(node : Node):\r\n--> 928 maybe_type_annotation = '' if node.type is None else f' : {type_repr(node.type)}'\r\n 929 if node.op == 'placeholder':\r\n 930 assert isinstance(node.target, str)\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:885, in Graph._python_code.<locals>.type_repr(o)\r\n 882 origin_typename = add_global(_type_repr(origin_type), origin_type)\r\n 884 # Assign global names for each of the inner type variables.\r\n--> 885 args = [type_repr(arg) for arg in o.__args__]\r\n 887 return f'{origin_typename}[{\",\".join(args)}]'\r\n 889 # Common case: this is a regular module name like 'foo.bar.baz'\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:885, in <listcomp>(.0)\r\n 882 origin_typename = add_global(_type_repr(origin_type), origin_type)\r\n 884 # Assign global names for each of the inner type variables.\r\n--> 885 args = [type_repr(arg) for arg in o.__args__]\r\n 887 return f'{origin_typename}[{\",\".join(args)}]'\r\n 889 # Common case: this is a regular module name like 'foo.bar.baz'\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/torch/fx/graph.py:885, in Graph._python_code.<locals>.type_repr(o)\r\n 882 origin_typename = add_global(_type_repr(origin_type), origin_type)\r\n 884 # Assign global names for each of the inner type variables.\r\n--> 885 args = [type_repr(arg) for arg in o.__args__]\r\n 887 return f'{origin_typename}[{\",\".join(args)}]'\r\n 889 # Common case: this is a regular module name like 'foo.bar.baz'\r\n\r\nFile ~/anaconda3/lib/python3.9/typing.py:711, in _BaseGenericAlias.__getattr__(self, attr)\r\n 709 if '__origin__' in self.__dict__ and not _is_dunder(attr):\r\n 710 return getattr(self.__origin__, attr)\r\n--> 711 raise AttributeError(attr)\r\n\r\nAttributeError: 
__args__\r\n```",
"Could you try with transformers sources?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,657
| 1,657
|
NONE
| null |
### System Info
```shell
transformers version: 4.18.0
PyTorch version: 1.10.2
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA T500
Nvidia driver version: 472.91
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.5
[pip3] numpydoc==1.2
[pip3] torch==1.10.2
[pip3] torchvision==0.11.3
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.5 py39he7a7128_1
[conda] numpy-base 1.21.5 py39hf524024_1
[conda] numpydoc 1.2 pyhd3eb1b0_0
[conda] pytorch 1.10.2 cpu_py39hfa7516b_0
[conda] torchvision 0.11.3 py39_cu113 pytorch
```
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import RobertaForMaskedLM
from transformers import RobertaConfig
from transformers.utils import fx
import inspect
config = RobertaConfig(
vocab_size=52_000,
max_position_embeddings=514,
num_attention_heads=12,
num_hidden_layers=12,
type_vocab_size=1,
)
model = RobertaForMaskedLM(config)
input_names = ["input_ids", "attention_mask", "decoder_input_ids", "decoder_attention_mask",'token_type_ids']
sig = inspect.signature(model.forward)
concrete_args = {p.name: None for p in sig.parameters.values() if p.name not in input_names}
gm = fx.symbolic_trace(model,concrete_args)
```
### Expected behavior
```shell
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Input In [11], in <cell line: 1>()
----> 1 gm = fx.symbolic_trace(model,concrete_args)
File ~/anaconda3/lib/python3.9/site-packages/transformers/utils/fx.py:594, in symbolic_trace(model, input_names)
592 # Tracing.
593 tracer = HFTracer()
--> 594 traced_graph = tracer.trace(model, concrete_args=concrete_args)
595 traced = torch.fx.GraphModule(model, traced_graph)
597 return traced
File ~/anaconda3/lib/python3.9/site-packages/transformers/utils/fx.py:470, in HFTracer.trace(self, root, concrete_args, method_names)
467 sig = inspect.signature(root.forward)
468 input_names = sig.parameters.keys() - concrete_args.keys()
--> 470 self.record(root, input_names, method_names=method_names)
472 # TODO: adapt the way leaf function are wrapped with the "autowrap function" feature from Tracer.
473 autowrap_functions = [patched for (_, _, patched) in self._leaf_functions_register.values()]
File ~/anaconda3/lib/python3.9/site-packages/transformers/utils/fx.py:426, in HFTracer.record(self, model, input_names, method_names)
423 cache_names, original_methods = self._monkey_patch_tensor_methods_for_model_recording(model, method_names)
424 self.original_methods = original_methods
--> 426 model(**inputs)
428 _reset_tensor_methods(original_methods)
430 self.recorded_methods = {
431 method_name: cache_name for method_name, cache_name in cache_names.items() if hasattr(model, cache_name)
432 }
File ~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:1102, in Module._call_impl(self, *input, **kwargs)
1098 # If we don't have any hooks, we want to skip the rest of the logic in
1099 # this function, and just call forward.
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
File ~/anaconda3/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py:1098, in RobertaForMaskedLM.forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, labels, output_attentions, output_hidden_states, return_dict)
1088 r"""
1089 labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1090 Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
(...)
1094 Used to hide legacy arguments that have been deprecated.
1095 """
1096 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-> 1098 outputs = self.roberta(
1099 input_ids,
1100 attention_mask=attention_mask,
1101 token_type_ids=token_type_ids,
1102 position_ids=position_ids,
1103 head_mask=head_mask,
1104 inputs_embeds=inputs_embeds,
1105 encoder_hidden_states=encoder_hidden_states,
1106 encoder_attention_mask=encoder_attention_mask,
1107 output_attentions=output_attentions,
1108 output_hidden_states=output_hidden_states,
1109 return_dict=return_dict,
1110 )
1111 sequence_output = outputs[0]
1112 prediction_scores = self.lm_head(sequence_output)
File ~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:1102, in Module._call_impl(self, *input, **kwargs)
1098 # If we don't have any hooks, we want to skip the rest of the logic in
1099 # this function, and just call forward.
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
File ~/anaconda3/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py:851, in RobertaModel.forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
842 head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
844 embedding_output = self.embeddings(
845 input_ids=input_ids,
846 position_ids=position_ids,
(...)
849 past_key_values_length=past_key_values_length,
850 )
--> 851 encoder_outputs = self.encoder(
852 embedding_output,
853 attention_mask=extended_attention_mask,
854 head_mask=head_mask,
855 encoder_hidden_states=encoder_hidden_states,
856 encoder_attention_mask=encoder_extended_attention_mask,
857 past_key_values=past_key_values,
858 use_cache=use_cache,
859 output_attentions=output_attentions,
860 output_hidden_states=output_hidden_states,
861 return_dict=return_dict,
862 )
863 sequence_output = encoder_outputs[0]
864 pooled_output = self.pooler(sequence_output) if self.pooler is not None else None
File ~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:1102, in Module._call_impl(self, *input, **kwargs)
1098 # If we don't have any hooks, we want to skip the rest of the logic in
1099 # this function, and just call forward.
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
File ~/anaconda3/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py:492, in RobertaEncoder.forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
479 def forward(
480 self,
481 hidden_states: torch.Tensor,
(...)
490 return_dict: Optional[bool] = True,
491 ) -> Union[Tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]:
--> 492 all_hidden_states = () if output_hidden_states else None
493 all_self_attentions = () if output_attentions else None
494 all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None
File ~/anaconda3/lib/python3.9/site-packages/transformers/utils/fx.py:326, in HFTracer._wrap_method_for_model_recording.<locals>.wrapped(*args, **kwargs)
324 setattr(model, cache_name, [])
325 cache = getattr(model, cache_name)
--> 326 res = method(*args, **kwargs)
327 cache.append(res)
328 return res
File ~/anaconda3/lib/python3.9/site-packages/transformers/utils/fx.py:326, in HFTracer._wrap_method_for_model_recording.<locals>.wrapped(*args, **kwargs)
324 setattr(model, cache_name, [])
325 cache = getattr(model, cache_name)
--> 326 res = method(*args, **kwargs)
327 cache.append(res)
328 return res
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
```
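A note on the likely cause of the traceback above: in transformers 4.18, the second positional parameter of `fx.symbolic_trace` is `input_names`, so calling `fx.symbolic_trace(model, concrete_args)` binds the `concrete_args` dict to `input_names`. The following library-free sketch illustrates the mis-binding; the `symbolic_trace` stub and `forward` function here are hypothetical stand-ins, not the actual Transformers API:

```python
import inspect

# Hypothetical stand-in for transformers.utils.fx.symbolic_trace in v4.18,
# whose signature is symbolic_trace(model, input_names=None).
# It returns whatever got bound to input_names, so we can inspect the binding.
def symbolic_trace(model, input_names=None):
    return input_names

# Toy forward signature mimicking a model's forward method.
def forward(input_ids=None, attention_mask=None, labels=None):
    pass

input_names = ["input_ids", "attention_mask"]
sig = inspect.signature(forward)
concrete_args = {p.name: None for p in sig.parameters.values() if p.name not in input_names}

# Passing concrete_args positionally binds the dict to input_names --
# a dict of parameter names to None, not the list the tracer expects.
bound_wrong = symbolic_trace(None, concrete_args)

# The intended call passes the list of input names instead.
bound_right = symbolic_trace(None, input_names)

print(sorted(bound_wrong), bound_right)
```

With the toy signature, `bound_wrong` is `{"labels": None}` rather than a list of input names, which is consistent with the tracer tripping over unexpected arguments at record time.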
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17460/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17459
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17459/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17459/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17459/events
|
https://github.com/huggingface/transformers/issues/17459
| 1,251,151,485
|
I_kwDOCUB6oc5Kkw59
| 17,459
|
Transformers documentation translation to Italian
|
{
"login": "mfumanelli",
"id": 53374883,
"node_id": "MDQ6VXNlcjUzMzc0ODgz",
"avatar_url": "https://avatars.githubusercontent.com/u/53374883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfumanelli",
"html_url": "https://github.com/mfumanelli",
"followers_url": "https://api.github.com/users/mfumanelli/followers",
"following_url": "https://api.github.com/users/mfumanelli/following{/other_user}",
"gists_url": "https://api.github.com/users/mfumanelli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfumanelli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfumanelli/subscriptions",
"organizations_url": "https://api.github.com/users/mfumanelli/orgs",
"repos_url": "https://api.github.com/users/mfumanelli/repos",
"events_url": "https://api.github.com/users/mfumanelli/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfumanelli/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
},
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @mfumanelli if it has not yet been taken I can translate pipeline_tutorial.mdx (only the textual part).\r\n",
"Hey, @nickprock thank you! That would be great. I added you to the list of contributors in @mfumanelli 's tasks list. The textual part is ok :)",
"Hey @mfumanelli in the next week I can translate preprocessing and training.",
"> Hey @mfumanelli in the next week I can translate preprocessing and training.\r\n\r\nPerfect @nickprock 🌈 🤗 thanks!",
"Hey @mfumanelli if you have other documents to assign to me I'm ready",
"> \r\n\r\nHi @nickprock! <3 I saw the pull request for preprocessing but not for training, is that still in WIP or is there a PR I missed? ",
"@mfumanelli I hope to submit the PR for training tomorrow.",
"> \r\n\r\nSuper! Thanks @nickprock, if it's ok for you I will assign you the multilingual doc. I also asked @omarespejel if there are any priority docs to be translated, so that we can add them to the issue or if we can proceed without priority with all the other docs 🌈",
"Hi @mfumanelli, if you have any file to translate I'd be happy to help :)",
"Hi @mfumanelli I would be happy to help in translating the documentation if it was possible!",
"@mfumanelli thanks! 🤗 I just added the next priority docs to the main comment on this issue.",
"> Hi @mfumanelli, if you have any file to translate I'd be happy to help :)\r\n\r\nHi @andreafailla! If it is ok with you, you can start translating the file: [fast_tokenizers.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/fast_tokenizers.mdx). If you have any doubts write to me :) Thanks for your help! 🌈🌈",
"> Hi @mfumanelli I would be happy to help in translating the documentation if it was possible!\r\n\r\nHi @F02934 thanks!! 🚀 is it OK for you to start by translating the file [create_a_model.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/create_a_model.mdx)?",
"Hi @mfumanelli, if there's something for me to work on I would be really glad to help! :)",
"> Hi @mfumanelli, if there's something for me to work on I would be really glad to help! :)\r\n\r\nHi @Xpiri! perfect, it would be perfect if you could start translating the file: [custom_models.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/custom_models.mdx) 🚀 thank you very much for your contribution 🌈",
"@mfumanelli Perfect I can start translating right now !",
"Hi @mfumanelli, I should be able to start working on some translations in the next few days.",
"> Hi @mfumanelli, I should be able to start working on some translations in the next few days.\r\nHi @lorenzobalzani thanks! If it's ok for you, you can start translating the run_scripts.mdx file 🌈 let me know! \r\n",
"Hi @mfumanelli I hope you are well :) You should see my PR by now. \r\n\r\nIf that's ok, i can take on the [run_scripts.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/run_scripts.mdx) and [sagemaker.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/sagemaker.mdx) and PR them tomorrow :)\r\nThanks for the great work!",
"> Hi @mfumanelli I hope you are well :) You should see my PR by now.\r\n> \r\n> If that's ok, i can take on the [run_scripts.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/run_scripts.mdx) and [sagemaker.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/sagemaker.mdx) and PR them tomorrow :) Thanks for the great work!\r\n\r\nHi @andreafailla ! Thank you very much 🌈 ! can you start with the sagemaker one? In the previous comment I asked Lorenzo if the one from run_scripts would be ok for him, I will now add his name to the list on the issue! Thanks 🚀🚀",
"@mfumanelli yes, that's ok for me! ",
"Hi @mfumanelli ! You should see my pull request for [custom_models.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/custom_models.mdx), please let me know if everything is fine and if I can help some more! 🚀 ",
"Hi @Xpiri! Perfect, if you want you can translate the doc: converting_tensorflow_models.mdx 🚀 🚀 I will now have a look at your PR, thanks! 🔥 ",
"Hello @mfumanelli I made a my pull up request. It's my first time so you can give me some pointers if something not good. If everything is fine I'd like to help more so I can improve myself!",
"@F02934 🌈 thank you for your contribution, if it is ok with you, you can translate the file: serialization.mdx",
"@mfumanelli sure I will!",
"@mfumanelli I submitted my PR, let me know if there are some translations unclear. ",
"Hi @mfumanelli! ",
"🌈 Hi @machicomio 🌈 If you want you can start by translating the performance.mdx file, if you have any doubts do not hesitate to write to me 🚀",
"Hi @mfumanelli, is migration.mdx already taken?"
] | 1,653
| 1,679
| null |
CONTRIBUTOR
| null |
Hi!
Let's bring the documentation to all the Italian-speaking community :)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know here if you'd like to translate any and we'll add your name to the list.
Some notes:
- Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). For example, use Tu instead of Lei.
- Please translate in a gender-neutral way.
- Add your translations to the folder called `it` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
- Register your translation in [it/_toctree.yml](https://github.com/huggingface/transformers/blob/main/docs/source/it/_toctree.yml); please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
- Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue.
- 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [x] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) @mfumanelli
- [x] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx). @mfumanelli
- [x] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). @mfumanelli
## Tutorial section
- [x] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) @nickprock
- [x] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) @mfumanelli
- [x] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) @nickprock
- [x] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) @nickprock
- [x] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) @mfumanelli
- [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) WIP @mfumanelli
- [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) WIP @nickprock
## How-to guides
- [x] [fast_tokenizers.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/fast_tokenizers.mdx "fast_tokenizers.mdx") @andreafailla
- [ ] [create_a_model.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/create_a_model.mdx "create_a_model.mdx") WIP @F02934
- [x] [custom_models.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/custom_models.mdx "custom_models.mdx") @Xpiri
- [x] [run_scripts.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/run_scripts.mdx "run_scripts.mdx") @lorenzobalzani
- [x] [sagemaker.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/sagemaker.mdx "sagemaker.mdx") @andreafailla
- [ ] [converting_tensorflow_models.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/converting_tensorflow_models.mdx "converting_tensorflow_models.mdx") WIP @Xpiri
- [ ] [serialization.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/serialization.mdx "serialization.mdx") WIP @F02934
- [ ] [performance.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/performance.mdx "performance.mdx") WIP @machicomio
- [ ] [perf_train_gpu_one](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_gpu_one.mdx)
- [ ] [perf_train_gpu_many](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_gpu_many.mdx)
- [ ] [perf_train_cpu](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_cpu.mdx)
- [ ] [perf_train_cpu_many](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_cpu_many.mdx)
- [ ] [perf_train_tpu](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_tpu.mdx)
- [ ] [perf_train_special](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_special.mdx)
- [ ] [perf_infer_cpu](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_infer_cpu.mdx)
- [ ] [perf_infer_gpu_one](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_infer_gpu_one.mdx)
- [ ] [perf_infer_gpu_many](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_infer_gpu_many.mdx)
- [ ] [perf_infer_special](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_infer_special.mdx)
- [ ] [perf_hardware](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_hardware.mdx)
- [ ] [big_models](https://github.com/huggingface/transformers/blob/main/docs/source/en/big_models.mdx)
- [x] [parallelism.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/parallelism.mdx "parallelism.mdx") WIP @Xpiri
- [ ] [benchmarks.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/benchmarks.mdx "benchmarks.mdx") WIP @mfumanelli
- [ ] [migration.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/migration.mdx "migration.mdx") WIP @Baelish03
- [ ] [troubleshooting.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/troubleshooting.mdx "troubleshooting.mdx") WIP @F02934
- [ ] [debugging.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/debugging.mdx "debugging.mdx") WIP @nickprock
- [ ] notebooks
- [ ] [community.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/community.mdx "community.mdx") WIP @lorenzobalzani
- [ ] [add_new_model.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/add_new_model.mdx "docs/source/en/add_new_model.mdx") WIP @Steboss89
- [ ] [add_new_pipeline.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/add_new_pipeline.mdx "add_new_pipeline.mdx")
- [ ] [testing.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/testing.mdx "testing.mdx")
- [ ] [pr_checks.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/pr_checks.mdx "pr_checks.mdx")
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17459/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17459/timeline
|
reopened
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17458
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17458/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17458/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17458/events
|
https://github.com/huggingface/transformers/pull/17458
| 1,250,984,470
|
PR_kwDOCUB6oc44mD5b
| 17,458
|
TF: XLA Beam Search
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17458). All of your documentation changes will be reflected on that endpoint.",
"Closed in favor of #17857"
] | 1,653
| 1,657
| 1,656
|
MEMBER
| null |
# What does this PR do?
WIP
Dependencies:
1. #17426
2. #17479
Status log:
1. 2022/05/27: XLA compiles, but fails at runtime. The same code runs in eager execution. The XLA compiler seems unable to pick up the right information during beam search, as it complains inside the forward pass about things that work fine with greedy search/sample. Maybe it's GPT-2 specific. Going to enable XLA on BART and retry there.
2. 2022/05/30: Now with BART. Same thing -- it compiles, but gets confused at run time. The exact same code path, with eager execution, runs fine. I suspect the cache creation inside the while loop doesn't help.
Ideas yet to explore:
1. Split past cache creation from past cache update, and create the cache before the loop. XLA expects variable creation at trace time, and we can do it with the proper separation;
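Idea 1 above can be illustrated without TensorFlow: XLA requires loop-carried values to keep a fixed shape known at trace time, so instead of growing the cache inside the decoding loop, a max-length buffer is allocated once before the loop and written into per step. A framework-free Python sketch of the pattern (the function names are illustrative only, not the actual Transformers cache API):

```python
# Growing-cache pattern (what XLA rejects): the cache's shape changes
# on every iteration, so it cannot be traced as a fixed loop variable.
def decode_growing(steps):
    cache = []
    for t in range(steps):
        cache.append(t)  # cache length grows each step
    return cache

# Pre-allocated cache pattern (XLA-friendly): the buffer is created once
# before the loop with its final size, then updated in place per step.
def decode_preallocated(steps, max_len):
    cache = [0] * max_len  # fixed shape, created at "trace time"
    for t in range(steps):
        cache[t] = t       # write into a slot; shape never changes
    return cache

print(decode_growing(3), decode_preallocated(3, 5))
```

In the real TF implementation this would correspond to building the past-key-values tensors at full `max_length` before entering `tf.while_loop` and using scatter-style updates inside it, rather than concatenating new key/value states each step.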
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17458/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/17458/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17458",
"html_url": "https://github.com/huggingface/transformers/pull/17458",
"diff_url": "https://github.com/huggingface/transformers/pull/17458.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17458.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17457
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17457/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17457/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17457/events
|
https://github.com/huggingface/transformers/pull/17457
| 1,250,941,667
|
PR_kwDOCUB6oc44l6YU
| 17,457
|
[Json configs] Make json prettier for all saved tokenizer files & ensure same json format for all processors (tok + feat_extract)
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@julien-c @sgugger do you think it could make sense to make a huge automated PR creation to correct all tokenizer configs? Or maybe too much given that we have 80,000 checkpoints?\r\n\r\nDon't think it's possible to break anything, but still not sure if it makes sense",
"would be a good stress test i guess =)"
] | 1,653
| 1,654
| 1,654
|
MEMBER
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
As an example, see: https://huggingface.co/facebook/wav2vec2-base-100h/commit/9c1fef36b62a428a658e5b022ef9f21b38f47e0b
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17457/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17457",
"html_url": "https://github.com/huggingface/transformers/pull/17457",
"diff_url": "https://github.com/huggingface/transformers/pull/17457.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17457.patch",
"merged_at": 1654009651000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17455
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17455/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17455/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17455/events
|
https://github.com/huggingface/transformers/issues/17455
| 1,250,548,040
|
I_kwDOCUB6oc5KidlI
| 17,455
|
ProphetNet inconsistent with changing batch ordering
|
{
"login": "mikkelfo",
"id": 48285156,
"node_id": "MDQ6VXNlcjQ4Mjg1MTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/48285156?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mikkelfo",
"html_url": "https://github.com/mikkelfo",
"followers_url": "https://api.github.com/users/mikkelfo/followers",
"following_url": "https://api.github.com/users/mikkelfo/following{/other_user}",
"gists_url": "https://api.github.com/users/mikkelfo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mikkelfo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mikkelfo/subscriptions",
"organizations_url": "https://api.github.com/users/mikkelfo/orgs",
"repos_url": "https://api.github.com/users/mikkelfo/repos",
"events_url": "https://api.github.com/users/mikkelfo/events{/privacy}",
"received_events_url": "https://api.github.com/users/mikkelfo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
},
{
"id": 3081136536,
"node_id": "MDU6TGFiZWwzMDgxMTM2NTM2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Difficult%20Issue",
"name": "Good Difficult Issue",
"color": "684CC7",
"default": false,
"description": ""
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @mikkelfo,\r\n\r\nThis indeed seems like a bug! I sadly won't be able to look into this any time soon - I'll mark the issue as a \"good second issue\" in case someone wants to give it a try :-). Also cc @patil-suraj ",
"Thanks for the answer @patrickvonplaten. I suspected so, although I did question my sanity for a moment. I might have some time to look into this myself as I'm using it for my thesis.\r\n\r\nI've done a little digging and the issue appears to start during the self-attention of the encoder, more specifically after [line 700](https://github.com/huggingface/transformers/blob/v4.19.2/src/transformers/models/prophetnet/modeling_prophetnet.py#L700). It appears to me that the reshaping into the attention heads (with `.view(*proj_shape)`)causes the issue, as I am unable to invert things back to the original form after that point. ",
"Sorry for confusing you with this. To be honest, we've ported a relatively messy Prophetnet implementation and never fully tested it. If you have some time to dive into it, I'd suggest to add a test that ensures that changing the order of sequences in a batch still gives the same results (that's a great test btw) and then trying to correct this test while making sure that all other tests run correctly.\r\n\r\nHappy to help you if you're stuck. Note the original implementation is here:\r\n- https://github.com/microsoft/ProphetNet\r\n\r\nThe original author is @qiweizhen I think, we could maybe also ask him if you run into problems :-) ",
"After some more digging, the issue appears to be centered around ProphetNetAttention and ProphetNetNgramSelfAttention (so both the encoder and the decoder suffers from this). The fix essentially boils down to changing the `proj_shape = (batch_size * self.num_attn_heads, -1, self.head_dim)` to `proj_shape = (batch_size, self.num_attn_heads, -1, self.head_dim)` and adapting the rest of the code to fit this. Keeping the dimensions seperate is also utilised in other transformer models (Took inspiration from BERT) and avoids potential mishaps with reshapes. \r\n\r\nFor the encoder, this is relatively simple as it only constitues changing the ProphetNetAttention [forward pass](https://github.com/huggingface/transformers/blob/v4.19.2/src/transformers/models/prophetnet/modeling_prophetnet.py#L655) and adapting the [attention mask](https://github.com/huggingface/transformers/blob/v4.19.2/src/transformers/models/prophetnet/modeling_prophetnet.py#L1317) to these new dimensions. I have done this, but it is only a fix for the encoder, so I don't know if that's worthy of a PR @patrickvonplaten? It greatly reduces the inconsistency to a few decimals (for this particular example), from a difference of 1.5 to 0.06. \r\n\r\nI am trying to do the same for the decoder part (ProphetNetNgramSelfAttention), but it is quite a bit more cumbersome. I'll keep trying for a bit, otherwise I'll simply offer my encoder fix for now. ",
"Hello again. I believe I have managed to fix both the encoder and decoder part, such that the hidden states are consistent. However, the loss computations of `ProphetNetForConditionalGeneration` and `ProphetNetForCasualLM` still differs (with a very slight amount). \r\n\r\nWhen the log_softmax is taken ([line 2014](https://github.com/huggingface/transformers/blob/6e535425feae20ca61a8b10ae5e8a7fab4d394ba/src/transformers/models/prophetnet/modeling_prophetnet.py#L2014)) the logits are equal when inversed, but the probabilities are not.\r\n\r\n```\r\n# logits = (n_gram, batch_size, seq_length, vocab_size)\r\nprint((logits == logits_inv.flip(1)).all()) # True (Flip on batch_size)\r\n\r\nlprobs = torch.nn.functional.log_softmax(logits.view(-1, logits.size(-1)),dim=-1,dtype=torch.float32,)\r\nlprobs_inv = torch.nn.functional.log_softmax(logits_inv.view(-1, logits.size(-1)),dim=-1,dtype=torch.float32,)\r\n\r\nprint(lprobs - lprobs_inv) # -4.7684e-07\r\n```\r\nI believe the .view part is cause the issue due to the collapsing of dimensions. As for the case of the other fixes, merging dimensions are most likely the cause, so seperating from (n_gram\\*batch_size\\*tokens) to (batch_size, n_gram\\*tokens) should fix the issue. It requires a bit more than changing `logits.view(-1, logits.size(-1))` to `logits.view(2, -1, logits.size(-1))`. I'm currently stuck on this, but I'm wondering if any of you can be of help? @patrickvonplaten @patil-suraj @qiweizhen\r\n\r\nIt is only these 3 lines that are left to fix. How can this be computed while keeping the batch dimension intact?\r\n```\r\n# [batch_size, n_gram, tokens, vocab_size] -> # [n_gram, batch_size, tokens, vocab_size]\r\nlogits = logits.transpose(0, 1).contiguous() \r\nlprobs = nn.functional.log_softmax(\r\n logits.view(-1, logits.size(-1)),\r\n dim=-1,\r\n dtype=torch.float32,\r\n)\r\n\r\nloss = nn.functional.nll_loss(lprobs, expend_targets.view(-1), reduction=\"mean\")\r\n```",
"the difference of `-4.7684e-07` is very small so IMO it can be ignored. `1e-4` is the threshold that we use in `Transformers`. So if the difference is less than that it should be fine I think. wdyt @patrickvonplaten ",
"I checked a bit further (with my implemented fixes). \r\n\r\nGiven two model outputs, where one is a batch and the other is single element (i.e. the first element in input_string), we expect that the first element of the batched output is equal to the single element output. Below is a code snippet of how it would look. \r\n\r\n```\r\noutputs = model(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, labels=labels, output_hidden_states=True)\r\n\r\nbatch_size = inputs.input_ids.shape[0]\r\nfor i in range(batch_size):\r\n single_outputs = model(input_ids=inputs.input_ids[i:i+1], attention_mask=inputs.attention_mask[i:i+1], labels=labels[i:i+1], output_hidden_states=True)\r\n print((single_outputs.encoder_last_hidden_state - outputs.encoder_last_hidden_state[i:i+1]).abs().mean())\r\n print((single_outputs.decoder_hidden_states[-1] - outputs.decoder_hidden_states[-1][i:i+1]).abs().mean())\r\n```\r\nWe expect the differences between the two outputs to be 0, but there is a difference between `1e-6` to `1e-7` for each element in the hidden_states (was `1e` to `1e-1` previously) when using some examples from the CNN/DailyMail dataset. My previous example had equal hidden states, but a slight variation in loss, whereas this example does not have equal hidden states. As @patil-suraj mentioned, it is within the threshold used. \r\n\r\nI don't have time to look into this anymore, but I am happy to create a pull request for now. While I dont believe the problem is fully fixed, my fixes makes the differences significantly less. If this is enough for a pull request, let me know and I'll create one @patrickvonplaten.\r\n",
"Hi @patil-suraj , @patrickvonplaten.\r\nIf it's ok I would like to help and try to solve this issue.",
"If you want @kiansierra, here's a [link](https://github.com/mikkelfo/multi-document-abstractive-summarization/blob/main/src/models/prophetnet_fixes.py#L9) for my ad-hoc fixes I did. \r\n\r\nI basically copy-pasted the source code and implemented the fixes, so you must figure out yourself where they are if you want. The function [prophetnet_fixes](https://github.com/mikkelfo/multi-document-abstractive-summarization/blob/main/src/models/prophetnet_fixes.py#L9) updates the affected functions for both the encoder and the decoder. "
] | 1,653
| 1,677
| 1,677
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.15.0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.9.7
- PyTorch version (GPU?): 1.10.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
``` python
from transformers import ProphetNetForConditionalGeneration, ProphetNetTokenizer
model = ProphetNetForConditionalGeneration.from_pretrained('microsoft/prophetnet-large-uncased')
tokenizer = ProphetNetTokenizer.from_pretrained('microsoft/prophetnet-large-uncased')
input_string = ['Hello, my dog is cute', 'I have a cat that is not cute']
labels = ['My dog is cute', 'My cat is not cute']
inputs = tokenizer(input_string, return_tensors="pt", padding=True, truncation=True)
targets = tokenizer(labels, return_tensors="pt", padding=True, truncation=True)
# Inverse the ordering of the input and labels using [::-1]
inputs_inv = tokenizer(input_string[::-1], return_tensors="pt", padding=True, truncation=True)
targets_inv = tokenizer(labels[::-1], return_tensors="pt", padding=True, truncation=True)
output = model(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, labels=targets.input_ids)
output_inv = model(input_ids=inputs_inv.input_ids, attention_mask=inputs_inv.attention_mask, labels=targets_inv.input_ids)
print(output.loss.item())
print(output_inv.loss.item())
```
Given two different forward passes, where one of them is the inverse of the other, the two losses are different.
output.loss.item() = 5.023777484893799
output_inv.loss.item() = 6.5036540031433105
When comparing their encoder output (last hidden states), they are again not equal
```Python
enc_output = model.prophetnet.encoder(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask)
enc_output_inv = model.prophetnet.encoder(input_ids=inputs_inv.input_ids, attention_mask=inputs_inv.attention_mask)
# We flip the inverse encoder output to have the same order
print((enc_output.last_hidden_state == enc_output_inv.last_hidden_state.flip(0)).all())
```
Also equals `False`
### Expected behavior
```shell
I expect the order of the batch to not have an influence on the model output (aside from the ordering of the output)
```
**Edit** The issue persists regardless of padding and also causes model.generate to produce two different sets of text.
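For scale on the residual differences discussed in the comments (around 1e-6 to 1e-7 after the fixes): reordering a batch reorders the floating-point reductions inside the matmuls, and floating-point addition is not associative, so bit-identical results across orderings are not guaranteed. A stdlib-only illustration of that effect (not ProphetNet code):

```python
# Floating-point addition is not associative, so reordering a reduction
# (which is what merging/splitting batch and head dimensions differently
# amounts to) can change the result slightly. Minimal illustration:

a, b, c = 1e16, 1.0, -1e16

left_to_right = (a + b) + c  # 1e16 + 1.0 cannot be represented exactly
reordered = (a + c) + b      # the large terms cancel first

print(left_to_right, reordered)  # same three terms, different sums
```

Differences of this kind fall within the 1e-4 tolerance mentioned by the maintainers; the original 1.5-magnitude discrepancy, by contrast, pointed at a genuine reshape bug.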
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17455/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17454
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17454/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17454/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17454/events
|
https://github.com/huggingface/transformers/issues/17454
| 1,250,544,970
|
I_kwDOCUB6oc5Kic1K
| 17,454
|
XLM-Roberta offset mapping is off by one in case of whitespace-subwords
|
{
"login": "robvanderg",
"id": 6604037,
"node_id": "MDQ6VXNlcjY2MDQwMzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6604037?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robvanderg",
"html_url": "https://github.com/robvanderg",
"followers_url": "https://api.github.com/users/robvanderg/followers",
"following_url": "https://api.github.com/users/robvanderg/following{/other_user}",
"gists_url": "https://api.github.com/users/robvanderg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/robvanderg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/robvanderg/subscriptions",
"organizations_url": "https://api.github.com/users/robvanderg/orgs",
"repos_url": "https://api.github.com/users/robvanderg/repos",
"events_url": "https://api.github.com/users/robvanderg/events{/privacy}",
"received_events_url": "https://api.github.com/users/robvanderg/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I think @robvanderg is right (with my intuitive understanding). Since `return_offsets_mapping` is only for fast tokenizer, let me kindly tag @Narsil here\r\n\r\nThe results should be like\r\n`... (18, 19), (19, 29) ...`\r\nI think.",
"I agree with you @ydshieh, to solve it it might be best to open an issue on the [tokenizers](https://github.com/huggingface/tokenizers) library as this is where the offsets are computed :blush: ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,659
| 1,659
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Linux-5.4.0-94-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): 2.9.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@LysandreJik @SaulLu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large', use_fast=True)
>>> tokenizer.tokenize('Quality of work is sufficient')
['▁Quality', '▁of', '▁work', '▁is', '▁', 'sufficient']
>>> tokenizer.encode_plus('Quality of work is sufficient', return_offsets_mapping=True)
{'input_ids': [0, 124604, 111, 4488, 83, 6, 129980, 2], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1], 'offset_mapping': [(0, 0), (0, 7), (8, 10), (11, 15), (16, 18), (19, 20), (19, 29), (0, 0)]}
### Expected behavior
```shell
The third-last offset-tuple (19,20) overlaps with the second-last offset-tuple(19,29). I believe that this should be (18,19), and thus refer to the whitespace.
```
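One way to see why `(19, 20)` is wrong: for consecutive non-special subword tokens, character offset spans should never overlap, so each span must start at or after the previous span's end. A hedged stdlib sketch (the spans below are transcribed from this report, not recomputed with the tokenizer):

```python
def spans_are_consistent(offsets):
    """Check that non-special-token spans are ordered and non-overlapping."""
    prev_end = 0
    for start, end in offsets:
        if (start, end) == (0, 0):  # special tokens like <s> and </s>
            continue
        if start < prev_end:        # this span overlaps the previous one
            return False
        prev_end = end
    return True

# Offsets as returned by the tokenizer vs. with the whitespace token fixed:
reported = [(0, 0), (0, 7), (8, 10), (11, 15), (16, 18), (19, 20), (19, 29), (0, 0)]
expected = [(0, 0), (0, 7), (8, 10), (11, 15), (16, 18), (18, 19), (19, 29), (0, 0)]

print(spans_are_consistent(reported))  # the (19, 20)/(19, 29) pair overlaps
print(spans_are_consistent(expected))
```

With the `'▁'` subword mapped to the whitespace at `(18, 19)`, the spans tile the input string cleanly.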
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17454/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17453
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17453/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17453/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17453/events
|
https://github.com/huggingface/transformers/issues/17453
| 1,250,523,097
|
I_kwDOCUB6oc5KiXfZ
| 17,453
|
gpt-neo-1-3B large memory increase during training even with a small training dataset
|
{
"login": "sandeeppagey",
"id": 73466856,
"node_id": "MDQ6VXNlcjczNDY2ODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/73466856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sandeeppagey",
"html_url": "https://github.com/sandeeppagey",
"followers_url": "https://api.github.com/users/sandeeppagey/followers",
"following_url": "https://api.github.com/users/sandeeppagey/following{/other_user}",
"gists_url": "https://api.github.com/users/sandeeppagey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sandeeppagey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sandeeppagey/subscriptions",
"organizations_url": "https://api.github.com/users/sandeeppagey/orgs",
"repos_url": "https://api.github.com/users/sandeeppagey/repos",
"events_url": "https://api.github.com/users/sandeeppagey/events{/privacy}",
"received_events_url": "https://api.github.com/users/sandeeppagey/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @sandeeppagey !\r\n\r\nCould you post a simple code-snippet here, with more details such as the GPU memory, the `dtype` for training. \r\n\r\nAlso note that, the model weights in fp32 are about 5GB and Adam takes 3x more memory, so this alone is about ~20GB and then there's more memory to store the activations and gradients. ",
"Hi @patil-suraj \r\n\r\nThanks for the explanation. I am not running on GPU. I am running\r\non CPU just to check the memory increase. I have not changed the dtype so I guess it is fp32.\r\nI agree with your calculations that the model training will require about 20G of memory. Please close\r\nthe issue.\r\nThe relevant code snippet is as follows. \r\n\r\nmodel = GPTNeoForCausalLM.from_pretrained('/home1/pretrained_models/gpt_neo_1_3B')\r\n\r\n optimizer = AdamW(model.parameters(), lr=2e-5)\r\n scheduler = get_linear_schedule_with_warmup(\r\n optimizer, num_warmup_steps=200, num_training_steps=-1\r\n )\r\n\r\n train_dataloader = DataLoader(dataset, batch_size=1, shuffle=True)\r\n loss=0\r\n accumulating_batch_count = 0\r\n input_tensor = None\r\n\r\n for epoch in range(1):\r\n\r\n print(f\"Training epoch {epoch}\")\r\n print(f\"Loss before: {loss}\")\r\n for idx, entry in tqdm(enumerate(train_dataloader)):\r\n input_tensor = entry #tokenization has already been done\r\n\r\n outputs = model(input_tensor, labels=input_tensor)\r\n #print(isinstance(outputs, dict))\r\n #print(type(outputs))\r\n loss = outputs[0]\r\n loss.backward()\r\n \r\n optimizer.step()\r\n scheduler.step()\r\n optimizer.zero_grad()\r\n model.zero_grad()"
] | 1,653
| 1,654
| 1,654
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Linux-5.4.0-1075-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.10
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
```
### Who can help?
@patil-suraj
Training the gpt-neo-1-3B model with a training dataset of 100 sentences (about 75 words max per sentence, say about 100 tokens each), there is a jump of close to 29GB in memory consumption during training. Tried batch sizes of 8/4/2, all showing a similar increase. Is that expected, or does this memory increase need further investigation? The model is loaded without setting any of return_dict, output_attentions, or output_hidden_states to True.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Simple training loop as shown in code example in
https://towardsdatascience.com/how-to-fine-tune-gpt-2-for-text-generation-ae2ea53bc272
Use gpt-neo-1-3b and 100 entries for training.
Even training for just 1 epoch, the memory jumps by about 29G.
%memit model = train(dataset, model, tokenizer, epochs=1, gpt2_type="gpt_neo_1_3B")
peak memory: 36634.16 MiB, increment: 29428.80 MiB
### Expected behavior
```shell
Not sure if there is some issue with code or this kind of memory jump is expected.
```
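To put the ~29 GB increment in context, here is the back-of-the-envelope estimate from the maintainer's reply: full fp32 training with Adam keeps weights, gradients, and two per-parameter moment buffers in memory (activations come on top). The 1.3e9 parameter count is approximate:

```python
params = 1.3e9  # approximate parameter count of gpt-neo-1.3B

bytes_per_param = (
    4    # fp32 weights
    + 4  # fp32 gradients
    + 8  # Adam first and second moments (4 bytes each)
)

total_gb = params * bytes_per_param / 1024**3
print(f"~{total_gb:.1f} GB before activations")
```

That already accounts for roughly 20 GB, so a ~29 GB peak during training is plausible once activations and temporary buffers are included.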
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17453/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17452
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17452/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17452/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17452/events
|
https://github.com/huggingface/transformers/issues/17452
| 1,250,516,645
|
I_kwDOCUB6oc5KiV6l
| 17,452
|
NaN in GPT NeoX model (generation)
|
{
"login": "thies1006",
"id": 32954413,
"node_id": "MDQ6VXNlcjMyOTU0NDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/32954413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thies1006",
"html_url": "https://github.com/thies1006",
"followers_url": "https://api.github.com/users/thies1006/followers",
"following_url": "https://api.github.com/users/thies1006/following{/other_user}",
"gists_url": "https://api.github.com/users/thies1006/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thies1006/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thies1006/subscriptions",
"organizations_url": "https://api.github.com/users/thies1006/orgs",
"repos_url": "https://api.github.com/users/thies1006/repos",
"events_url": "https://api.github.com/users/thies1006/events{/privacy}",
"received_events_url": "https://api.github.com/users/thies1006/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"cc @zphang do you have an idea of what might be happening there?\r\n\r\nAlso happening here: https://github.com/huggingface/accelerate/issues/404",
"Also cc @sgugger as it's leveraging the auto map.",
"I don't think it comes from the auto map, some weights are Nan and so an error is raised [here](https://github.com/huggingface/transformers/blob/5af38953bb05fe722c2ec5c345f54c2712ce4573/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L208). (I did say that `RuntimeError` with no messages were not ideal @zphang ;-) )",
"Seems to be a float16 overflow problem after applying `torch.einsum`, fixable with \r\n\r\n```\r\n attn_scores = torch.einsum(\"bik,bjk->bij\", query, key) / self.norm_factor\r\n+ finfo = torch.finfo(attn_scores.dtype)\r\n+ attn_scores = attn_scores.clamp(finfo.min, finfo.max)\r\n``` \r\nAlthough after fixing this NaN problem, the generation is still not working correctly.\r\n",
"Thanks @zomux, this works. I tried something similar (casting to FP32 and back) which gave the same error:\r\n```\r\nTraceback (most recent call last):\r\n File \"tt.py\", line 24, in <module>\r\n output = model.generate(input_tokenized[\"input_ids\"].to(0), do_sample=True)\r\n File \"/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/generation_utils.py\", line 1317, in generate\r\n return self.sample(\r\n File \"/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/generation_utils.py\", line 1937, in sample\r\n outputs = self(\r\n File \"/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1112, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/accelerate/hooks.py\", line 150, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File \"/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py\", line 602, in forward\r\n outputs = self.gpt_neox(\r\n File \"/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1112, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py\", line 493, in forward\r\n outputs = layer(\r\n File \"/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1112, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/accelerate/hooks.py\", line 150, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n File 
\"/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py\", line 299, in forward\r\n attention_layer_outputs = self.attention(\r\n File \"/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1112, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py\", line 150, in forward\r\n attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)\r\n File \"/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py\", line 218, in _attn\r\n attn_output = torch.matmul(attn_weights, value)\r\nRuntimeError: Expected size for first two dimensions of batch2 tensor to be: [64, 5] but got: [64, 1]\r\n```\r\n\r\nI figured out that when setting use_cache=False generation runs without errors.\r\n\r\nChange:\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L642\r\n```\r\n- return {\"input_ids\": input_ids, \"attention_mask\": attention_mask, \"past_key_values\": past}\r\n+ return {\"input_ids\": input_ids, \"attention_mask\": attention_mask, \"past_key_values\": past, \"use_cache\": False}\r\n```\r\n\r\n",
"> I don't think it comes from the auto map, some weights are Nan and so an error is raised [here](https://github.com/huggingface/transformers/blob/5af38953bb05fe722c2ec5c345f54c2712ce4573/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L208). (I did say that `RuntimeError` with no messages were not ideal @zphang ;-) )\r\n\r\nAh, I thought I'd removed all the RuntimeErrors, I've submitted a PR here: https://github.com/huggingface/transformers/pull/17563\r\n\r\nAs for the NaN-ing, I've not figured that out either. I found (in very ad-hoc testing) that the einsum appears to be more stable than [the original approach](https://github.com/zphang/minimal-gpt-neox-20b/blob/1d485409c0c108d1c03831cb2498040a769e8460/minimal20b/model.py#L226-L232), but it looks like it hasn't fully solved the issue.",
"Hello @zphang,\r\nthe problem is that when the scaling factor is applied, the overflow has already happened. \r\nTherefore I think the `self.norm_factor` should go into the matrix multiply (scale first and do the matrix multiply second):\r\n```\r\n- attn_scores = torch.einsum(\"bik,bjk->bij\", query, key) / self.norm_factor\r\n+ attn_scores = torch.einsum(\"bik,bjk->bij\", query / self.norm_factor, key)\r\n```\r\nIt seems to work fine: \r\n\r\n```\r\nHuggingface is a fast-growing Chinese company that is the leader in deep learning and natural language processing.\r\n\r\nThe company raised $75million from Tencent and Baidu.\r\n\r\nDeep Voice is a Chinese startup that enables users to control various home appliances using simple commands issued with their voice alone.\r\n```",
"@zphang Is there a clear release version where the issue is resolved?\r\nI got `RuntimeError: probability tensor contains either `inf`, `nan` or element < 0` while deploying the gpt-neox model in half precision.\r\nI can deploy it completely in full precision.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,657
| 1,657
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-4.15.0-140-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@LysandreJik (NeoX)
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Script to run:
```
import torch
from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast
from accelerate import init_empty_weights, infer_auto_device_map, load_checkpoint_and_dispatch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, AutoModelForSeq2SeqLM
weights_path = "EleutherAI/gpt-neox-20b"
config = AutoConfig.from_pretrained(weights_path)
with init_empty_weights():
model = AutoModelForCausalLM.from_config(config)
tokenizer = AutoTokenizer.from_pretrained(weights_path)
device_map = infer_auto_device_map(model, no_split_module_classes=["GPTNeoXLayer"])
load_checkpoint_and_dispatch(
model,
weights_path,
device_map=device_map,
offload_folder=None,
offload_state_dict=True
)
prompt = 'Huggingface is'
input_tokenized = tokenizer(prompt, return_tensors="pt")
output = model.generate(input_tokenized["input_ids"].to(0), do_sample=True)
output_text = tokenizer.decode(output[0].tolist())
```
Script is crashing with the traceback:
```
Traceback (most recent call last):
File "run.py", line 24, in <module>
output = model.generate(input_tokenized["input_ids"].to(0), do_sample=True)
File "/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/generation_utils.py", line 1316, in generate
return self.sample(
File "/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/generation_utils.py", line 1934, in sample
outputs = self(
File "/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/accelerate/hooks.py", line 148, in new_forward
output = old_forward(*args, **kwargs)
File "/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py", line 596, in forward
outputs = self.gpt_neox(
File "/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py", line 488, in forward
outputs = layer(
File "/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/accelerate/hooks.py", line 148, in new_forward
output = old_forward(*args, **kwargs)
File "/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py", line 296, in forward
attention_layer_outputs = self.attention(
File "/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py", line 148, in forward
attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)
File "/secondary/thies/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py", line 208, in _attn
raise RuntimeError()
RuntimeError
```
The problem was also mentioned here:
https://github.com/huggingface/transformers/issues/15642#issuecomment-1133067212
https://github.com/huggingface/transformers/issues/15642#issuecomment-1133828254
The problem seems to be that `torch.einsum` returns `inf` (fp16 overflow), which leads to `nan` when calculating the softmax.
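The overflow can be reproduced and avoided in a few lines. This is a minimal sketch with illustrative shapes and values, not the actual modeling code; the fix is to apply the `1/norm_factor` scaling before the reduction rather than after it:

```python
import torch

# fp16 tops out at ~65504, so a dot product of moderately large activations
# overflows before the 1/sqrt(head_size) scaling is applied.
head_size = 64
norm_factor = head_size ** 0.5  # 8.0
q = torch.full((head_size,), 40.0, dtype=torch.float16)
k = torch.full((head_size,), 40.0, dtype=torch.float16)

late_scaled = (q * k).sum() / norm_factor     # 40*40*64 = 102400 -> inf in fp16
early_scaled = ((q / norm_factor) * k).sum()  # 5*40*64 = 12800, stays in range

print(torch.isinf(late_scaled).item())   # True
print(torch.isinf(early_scaled).item())  # False
```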
### Expected behavior
```shell
Code should run.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17452/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17451
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17451/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17451/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17451/events
|
https://github.com/huggingface/transformers/issues/17451
| 1,250,496,524
|
I_kwDOCUB6oc5KiRAM
| 17,451
|
May I just train T5 on a translation task from scratch without pretraining a language model?
|
{
"login": "520jefferson",
"id": 5691554,
"node_id": "MDQ6VXNlcjU2OTE1NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5691554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/520jefferson",
"html_url": "https://github.com/520jefferson",
"followers_url": "https://api.github.com/users/520jefferson/followers",
"following_url": "https://api.github.com/users/520jefferson/following{/other_user}",
"gists_url": "https://api.github.com/users/520jefferson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/520jefferson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/520jefferson/subscriptions",
"organizations_url": "https://api.github.com/users/520jefferson/orgs",
"repos_url": "https://api.github.com/users/520jefferson/repos",
"events_url": "https://api.github.com/users/520jefferson/events{/privacy}",
"received_events_url": "https://api.github.com/users/520jefferson/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @520jefferson,\r\n\r\nI understand your question as whether it's possible to fine-tune T5 from scratch and to **not** use a pretrained checkpoint.\r\n\r\nYes, this should definitely be possible, but I wouldn't really recommend it given the power of transfer learning. \r\n\r\nHere also some very nice explanation by @sgugger on how powerful transfer learning is that might be interesting: https://huggingface.co/course/chapter1/4?fw=pt#transfer-learning\r\n\r\nWhy not fine-tune a pretrained T5 model on translation?",
"Hey @patrickvonplaten \r\n\r\nI want to distill a big model (pytorch version) to t5 model (considering the FasterTransformer Backend\r\nhttps://github.com/triton-inference-server/fastertransformer_backend has provide origin T5 (not t5.1) triton backend optimization , this reasoning optimizaiton will be conducive to carrying more online traffic) , why i don't use transformer as the student model because i haven't find the pytorch version transformer with reasoning optimization and combining with triton. \r\n \r\nAnd the big model use bpe not sentencepiece, So the tokenizer should be load the bpe codes and the vocabs is differenct from the origin t5 model. Therefore i want to distill the big model to t5 model and use the vocab in the same time. \r\n\r\nSo I need to figure out two things:\r\n1, whether the t5 can be train in dialogue without pretrain which treat the t5 like transformer without pretrain and i haven't find a relate case finetune from scratch.\r\n2, how to set the tokenizer to just use bpe codes?\r\n\r\n",
"Sorry I'm a bit lost here @520jefferson, \r\n\r\nI don't fully understand what you want to do here, but I guess the target task is distillation? Should we maybe try to get help on the forum: https://discuss.huggingface.co/ for distillation? ",
"@patrickvonplaten \r\nI just need to finetune t5 from scratch without pretrain, and the tokenizer can just load vocab.txt (not json) or merges.txt (bpe codes).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"the tokenizer can be built by hand."
] | 1,653
| 1,656
| 1,656
|
NONE
| null |
### Feature request
May I use BPE in preprocessing and train a translation model from scratch without pretraining a language model?
@patrickvonplaten
### Motivation
I want to distill a big model into a T5 model, and the T5 vocab should be the same as the big model's.
### Your contribution
I can verify the process.
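If it helps, training from scratch just means building the model from a config instead of calling `from_pretrained` — a small sketch (the tiny sizes here are purely illustrative, not a recommended configuration):

```python
from transformers import T5Config, T5ForConditionalGeneration

# Randomly initialized T5: no pretrained checkpoint is loaded.
config = T5Config(vocab_size=1000, d_model=64, d_ff=128,
                  num_layers=2, num_heads=4)
model = T5ForConditionalGeneration(config)
print(model.config.num_layers)  # 2
```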
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17451/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17450
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17450/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17450/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17450/events
|
https://github.com/huggingface/transformers/issues/17450
| 1,250,446,370
|
I_kwDOCUB6oc5KiEwi
| 17,450
|
attention_mask hold float values in [0,1] in T5
|
{
"login": "pretidav",
"id": 23082930,
"node_id": "MDQ6VXNlcjIzMDgyOTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/23082930?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pretidav",
"html_url": "https://github.com/pretidav",
"followers_url": "https://api.github.com/users/pretidav/followers",
"following_url": "https://api.github.com/users/pretidav/following{/other_user}",
"gists_url": "https://api.github.com/users/pretidav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pretidav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pretidav/subscriptions",
"organizations_url": "https://api.github.com/users/pretidav/orgs",
"repos_url": "https://api.github.com/users/pretidav/repos",
"events_url": "https://api.github.com/users/pretidav/events{/privacy}",
"received_events_url": "https://api.github.com/users/pretidav/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @ydshieh that has given some thought to this in the past, I believe",
"Well, as far as I can recall, I didn't have this idea before. This looks like @pretidav want to provide manual (float) attention mask.\r\n\r\n- with the introduction of `dtype.min` in #17306 (or even with `-1e4` currently) as the large negative value for masking value, `0` will be replaced by `dtype.min`, but how about `0.5`? Use something like `0.5 * dtype.min` doesn't really make sense: if there is also a `1` in the (original) attention mask which is replaced by `0`, then `0.5 * dtype.min` won't get any attention, just like `dtype.min`. Use more complex transformation won't be good (potentially) for performance reason.\r\n- The number of places to be changed would be quite large (if we decide to do it)\r\n- In order to make the decision, I think it would be great if we can see this usage is proposed in some papers, or in some real world examples, that show the improvements (if we want to do it correctly)\r\n- Finally, I am not sure how a user would provide meaningful float attention mask values. Just using some heuristic to get hard-coded values in advance? I kinda feel that these soft values is the role of the attention probability through training.\r\n\r\ncc @patrickvonplaten @patil-suraj @sgugger \r\n\r\n",
"Thanks for the answer @ydshieh !\r\nFloat attention values could be provided from another model or a human annotator. For instance, a human annotator can try to \"force\" the model to pay more attention to some part of the input document, or some specific entities. It's just a way to introduce some external control over the model attention other than its own attention probability obtained during the training.\r\n\r\nIs there currently a way such a feature could be used in T5? \r\nIf I understand well, attn_mask is not the right feature to play with. ",
"@pretidav There is currently no such feature. As mentioned previously, attention mask is not good for this purpose, as the range of the **final (processed)** attention mask is `[-inf, 0]`. The `-inf` means no attention at all (masked), and `0` means no mask (so attend to it). \r\n\r\nThe attention mask is combined with the attention scores (computed using `query` and `keys`), whose range is **NOT** a fixed interval.\r\n\r\nSo from these facts, there is no clear & meaningful values to provide extra attention values (before `softmax`), as far as I can tell.\r\n\r\nA **manual** change, if one want to do it, could be done after the following line of attention probability computation (which has values between 0 and 1).\r\nhttps://github.com/huggingface/transformers/blob/8f46ac98498dd47701971064617e00d7e723a98e/src/transformers/models/t5/modeling_t5.py#L534-L536\r\n\r\nHowever, it depends on how you would like to combine `attn_weights` with extra attention values.",
"Agree with what @ydshieh said!\r\n\r\nThis is an interesting question but to me it seems like a niche use-case and as @ydshieh said, this would require a lot of changes. IMO for this use-case one can just copy-paste the T5 model and tweak it to support this. The models in `Transformers` are designed in such a way that users could just take the modeling files and tweak it for their purpose rather than supporting everything in the framework. Thanks :) ",
"Thank you very much for all the answers! I'll try to tweak the model. "
] | 1,653
| 1,654
| 1,654
|
NONE
| null |
### Feature request
Hello Everybody,
I was wondering whether the attention_mask input for T5 could be a float in [0, 1] instead of an integer, as in the documentation:
`attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked. `
Do you think passing something like the following (I write tokens for clarity, but they would be token ids)
`tokens = ['hello','how','are','you','pad','pad']
attention_mask = [0.5, 0.9, 0.2, 1, 0, 0] `
would attribute particular emphasis to several tokens relative to the others (still keeping components at 0 for pad tokens)?
Clearly the model requires fine-tuning, but I wonder if this different usage of attention_mask could harm it in some way…
As far as I see from [transformers/modeling_t5.py at main · huggingface/transformers · GitHub](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_t5.py)
It is not really clear to me how the attention_masks act in the attention blocks (see here for instance [transformers/modeling_t5.py at 8f46ac98498dd47701971064617e00d7e723a98e · huggingface/transformers · GitHub](https://github.com/huggingface/transformers/blob/8f46ac98498dd47701971064617e00d7e723a98e/src/transformers/models/t5/modeling_t5.py#L531)).
I was expecting some kind of "hard" attention, but as far as I can see it's a "soft" implementation that shifts the position_bias. How does this translate into removing the 'pad' token contribution from the attention (is a shift of "1", as in the original attention_mask, enough to ensure a reasonable suppression of pads)?
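For what it's worth, here is a minimal sketch of how a [0, 1] integer mask is typically converted into the additive bias that shifts the scores (following the usual `(1 - mask) * dtype_min` convention; the exact transformers internals may differ):

```python
import torch

def extended_attention_mask(mask: torch.Tensor, dtype=torch.float32) -> torch.Tensor:
    """Turn a [0, 1] mask into an additive bias: 0 where the model should
    attend, a huge negative number where it should not."""
    mask = mask[:, None, None, :].to(dtype)        # (batch, 1, 1, seq_len)
    return (1.0 - mask) * torch.finfo(dtype).min

mask = torch.tensor([[1, 1, 1, 1, 0, 0]])          # two pad tokens at the end
bias = extended_attention_mask(mask)
print(bias[0, 0, 0])  # [0, 0, 0, 0, min, min]: pads get ~zero softmax weight
```

This also shows why a mask value like 0.5 would not behave as one might hope: it would become `0.5 * dtype_min`, which is still effectively `-inf` after softmax.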
Any answer is very welcomed! Thank you!
### Motivation
This non-boolean usage of the mask could offer important improvements to the model.
### Your contribution
I am not an expert, so before doing any development I would like to hear about the feasibility (or if it is already supported) from developers.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17450/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17450/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17449
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17449/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17449/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17449/events
|
https://github.com/huggingface/transformers/pull/17449
| 1,250,427,672
|
PR_kwDOCUB6oc44kMV-
| 17,449
|
Improve notrainer examples
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
# What does this PR do?
1. In no-trainer examples, the `train_loss` being logged wasn't normalized and as such wasn't intuitive to understand. This also made it difficult to compare train loss between different tools, such as comparing the train loss from Trainer with that of Accelerate. This PR normalizes the `train_loss` per epoch to make it more intuitive and comparable.
2. Replaces HF AdamW with torch AdamW for NLP no-trainer examples. This prevents the corresponding warning from being displayed.
3. Fixing no-trainer examples so that the tracker run is created only for the main process; otherwise wandb will create `num_processes` runs with no data.
4. Converting `train_loss` from a tensor to a float so that it gets logged by the tensorboard tracker.
5. Fixing `run_ner_no_trainer.py` to correctly log `train_loss` in `all_results.json`.
6. Adding a `report_to` arg to let users specify their preferred tracker instead of all available trackers, which is the default. This prevents logging to trackers the user doesn't want.
7. In many no-trainer NLP tasks one can train a model from scratch, which means the user can bypass the `model_name_or_path` arg. However, it is set as required for all scripts, which throws an error when it isn't specified. Setting this arg to `required=False` in the corresponding examples resolves the error when training from scratch.
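Points 1 and 4 boil down to the following pattern (names are illustrative stand-ins, not the exact script code):

```python
import torch

# Accumulate per-batch losses over an epoch, then log the normalized
# average as a plain float (tensorboard-style trackers can't log tensors).
batch_losses = [torch.tensor(0.5), torch.tensor(0.3), torch.tensor(0.4)]  # stand-ins
total_loss = sum(batch_losses)
train_loss = (total_loss / len(batch_losses)).item()  # normalized float
print(train_loss)  # ~0.4
```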
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17449/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17449",
"html_url": "https://github.com/huggingface/transformers/pull/17449",
"diff_url": "https://github.com/huggingface/transformers/pull/17449.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17449.patch",
"merged_at": 1653676591000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17448
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17448/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17448/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17448/events
|
https://github.com/huggingface/transformers/issues/17448
| 1,250,422,454
|
I_kwDOCUB6oc5Kh-62
| 17,448
|
ImportError: cannot import name 'OptionalDependencyNotAvailable' from 'transformers.utils'
|
{
"login": "chengyjonathan",
"id": 37084761,
"node_id": "MDQ6VXNlcjM3MDg0NzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/37084761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chengyjonathan",
"html_url": "https://github.com/chengyjonathan",
"followers_url": "https://api.github.com/users/chengyjonathan/followers",
"following_url": "https://api.github.com/users/chengyjonathan/following{/other_user}",
"gists_url": "https://api.github.com/users/chengyjonathan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chengyjonathan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chengyjonathan/subscriptions",
"organizations_url": "https://api.github.com/users/chengyjonathan/orgs",
"repos_url": "https://api.github.com/users/chengyjonathan/repos",
"events_url": "https://api.github.com/users/chengyjonathan/events{/privacy}",
"received_events_url": "https://api.github.com/users/chengyjonathan/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[] | 1,653
| 1,653
| 1,653
|
NONE
| null |
### System Info
```shell
Cannot import any models from transformers
# Name Version Build Channel
aiohttp 3.8.1 pypi_0 pypi
aiosignal 1.2.0 pypi_0 pypi
async-timeout 4.0.2 pypi_0 pypi
attrs 21.4.0 pypi_0 pypi
bzip2 1.0.8 he774522_0
ca-certificates 2022.4.26 haa95532_0
certifi 2022.5.18.1 py310haa95532_0
charset-normalizer 2.0.12 pypi_0 pypi
colorama 0.4.4 pypi_0 pypi
cudatoolkit 11.3.1 h59b6b97_2
datasets 2.2.2 pypi_0 pypi
dill 0.3.4 pypi_0 pypi
filelock 3.7.0 pypi_0 pypi
frozenlist 1.3.0 pypi_0 pypi
fsspec 2022.5.0 pypi_0 pypi
huggingface-hub 0.7.0 pypi_0 pypi
idna 3.3 pypi_0 pypi
libffi 3.4.2 h604cdb4_1
multidict 6.0.2 pypi_0 pypi
multiprocess 0.70.12.2 pypi_0 pypi
numpy 1.22.4 pypi_0 pypi
openssl 1.1.1o h2bbff1b_0
packaging 21.3 pypi_0 pypi
pandas 1.4.2 pypi_0 pypi
pip 21.2.4 py310haa95532_0
pyarrow 8.0.0 pypi_0 pypi
pyparsing 3.0.9 pypi_0 pypi
python 3.10.4 hbb2ffb3_0
python-dateutil 2.8.2 pypi_0 pypi
pytz 2022.1 pypi_0 pypi
pyyaml 6.0 pypi_0 pypi
regex 2022.4.24 pypi_0 pypi
requests 2.27.1 pypi_0 pypi
responses 0.18.0 pypi_0 pypi
sentencepiece 0.1.96 pypi_0 pypi
setuptools 61.2.0 py310haa95532_0
six 1.16.0 pypi_0 pypi
sqlite 3.38.3 h2bbff1b_0
tk 8.6.11 h2bbff1b_1
tokenizers 0.12.1 pypi_0 pypi
torch 1.11.0 pypi_0 pypi
tqdm 4.64.0 pypi_0 pypi
transformers 4.19.2 pypi_0 pypi
typing-extensions 4.2.0 pypi_0 pypi
tzdata 2022a hda174b7_0
urllib3 1.26.9 pypi_0 pypi
vc 14.2 h21ff451_1
vs2015_runtime 14.27.29016 h5e58377_2
wheel 0.37.1 pyhd3eb1b0_0
wincertstore 0.2 py310haa95532_2
xxhash 3.0.0 pypi_0 pypi
xz 5.2.5 h8cc25b3_1
yarl 1.7.2 pypi_0 pypi
zlib 1.2.12 h8cc25b3_2
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. from transformers import AutoTokenizer
### Expected behavior
```shell
Able to import without error.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17448/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17447
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17447/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17447/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17447/events
|
https://github.com/huggingface/transformers/pull/17447
| 1,250,231,419
|
PR_kwDOCUB6oc44jjyr
| 17,447
|
More informative error message for DataCollatorForSeq2Seq
|
{
"login": "CakeCrusher",
"id": 37946988,
"node_id": "MDQ6VXNlcjM3OTQ2OTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/37946988?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CakeCrusher",
"html_url": "https://github.com/CakeCrusher",
"followers_url": "https://api.github.com/users/CakeCrusher/followers",
"following_url": "https://api.github.com/users/CakeCrusher/following{/other_user}",
"gists_url": "https://api.github.com/users/CakeCrusher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CakeCrusher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CakeCrusher/subscriptions",
"organizations_url": "https://api.github.com/users/CakeCrusher/orgs",
"repos_url": "https://api.github.com/users/CakeCrusher/repos",
"events_url": "https://api.github.com/users/CakeCrusher/events{/privacy}",
"received_events_url": "https://api.github.com/users/CakeCrusher/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@CakeCrusher, if it's programmable - won't it be better to actually validate the input shape explicitly, and assert if it's wrong - instead of piling up possible errors to an already long error message?",
"@stas00 agreed ill make a check for it",
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @stas00 , is the pr ok?",
"This PR is waiting for your answer here: https://github.com/huggingface/transformers/pull/17447#discussion_r883964302\r\n",
"@CakeCrusher, I think we lost each other here. Should we finish this PR?",
"@sgugger @stas00\r\n\r\nHi @stas00 sorry for the discontinuity, I am now able to focus and see this issue through.\r\n\r\nHere is an example demonstrating successful and erring inputs:\r\nhttps://colab.research.google.com/drive/16aLu6QrDSV_aUYRdpufl5E4iS08qkUGj?usp=sharing\r\n\r\nI then made the following changes to overcome excessive nesting (a list containing a single item):\r\nhttps://github.com/CakeCrusher/transformers/compare/main...lead_nesting_solution\r\n\r\nI understand the changes are pretty fundamental, but they work. I have yet to add the assert statement, since the nesting fix does the job forcefully. I was hoping to do an overarching PR, involving the new error message (or assert) and the fix (possibly parametrized so that it is not forced). What are your thoughts?",
"This is an interesting idea, but I'm concerned it might be (1) not backward compatible (2) I think it's best for the user to apply this function themselves. Perhaps if it's a useful util function we can provide it and assert with a message to use it instead?\r\n\r\nAnd to remind my initial suggestion was:\r\n\r\n- check if shape is wrong and raise a specific assert if it is wrong (with possible hints at how to fix it)\r\n\r\ne.g. the inputs shape is wrong, expecting a, but got b....\r\n\r\nwon't that be a clean solution?\r\n\r\nwe can then discuss with others if they feel your proposed util function would be a good match to add.",
"@stas00 \r\n\r\n> Perhaps if it's a useful util function we can provide it and assert with a message to use it instead?\r\n\r\nThat is an excellent idea.\r\n\r\nI will have it ready early next week with a test.\r\n\r\nDo you recommend I make a new PR for it or merge it to this one? ",
"Hi @stas00 , I submitted a [new PR](https://github.com/huggingface/transformers/pull/18119) for the aforementioned fixes. I have yet to add the test and proper docs. As for what I have so far please let me know what you think.\r\n\r\n\r\n(My git tree was a mess, so that was largely why it's a new PR sorry about that.)",
"Apologies for taking a long time to follow up, @CakeCrusher \r\n\r\nAs I suggested in the first place I think your suggestion to assert on invalid input nesting is great.\r\n\r\nI see you tried to move the helper util to `datasets` and it's not being welcomed there, as it's really a user's responsibility to prepare the data correctly.\r\n\r\nPerhaps we just stick to the assert part and trust the user to figure out how to fix it?\r\n\r\n@sgugger, are you ok with the assertion part of this PR on the deeply nested input? I'd guess that you too might be against the 2nd part of adding a helper util to remove excessive nesting as it's not generic enough.",
"No worries @stas00,\r\nYeah.. I understand if I have to give up on introducing the helper function on this PR. I'll see what what [lhoestq](https://github.com/lhoestq) ends up thinking about the datasets alternative.\r\n\r\nIn the meantime, I'll keep the assert independent. And maybe open a new PR for the helper function.",
"I must admit I do not understand what the problem is, since the notebook linked executes without any issue.",
"Sorry about that @sgugger the notebook was organized in a weird way. Now [the notebook](https://colab.research.google.com/drive/16aLu6QrDSV_aUYRdpufl5E4iS08qkUGj?usp=sharing) will raise the error.",
"I see. I've pointed out in #18119 where that error message should be updated."
] | 1,653
| 1,658
| 1,657
|
CONTRIBUTOR
| null |
# What does this PR do?
I ran into an error related to an incorrect shape of inputs when using `DataCollatorForSeq2Seq`. I learned that it had to do with the `BatchEncoding` class. I did not find the error message particularly helpful, as it does not mention anything about the input shape. Therefore I added an extra line to the error message to help guide anyone else who runs into this error.
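As a toy illustration of the explicit validation later discussed in this thread (names here are illustrative, not the library's API), a collator could reject label lists that are excessively nested before they ever reach the shape-sensitive code:

```python
# Hypothetical sketch: fail fast with an informative message when `labels`
# is a list of lists (excess nesting) instead of a flat list of token ids.
def check_label_nesting(features, key="labels"):
    for i, feature in enumerate(features):
        labels = feature.get(key)
        if isinstance(labels, list) and labels and all(isinstance(x, list) for x in labels):
            raise ValueError(
                f"Feature {i}: `{key}` is nested ({labels!r}); "
                "expected a flat list of token ids per example."
            )

check_label_nesting([{"labels": [1, 2, 3]}])  # flat labels pass silently
```

A check like this surfaces the shape problem at collation time with a pointed message, instead of a generic tensor-shape error deep inside the forward pass.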
Fixes #15505
@stas00
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17447/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17447",
"html_url": "https://github.com/huggingface/transformers/pull/17447",
"diff_url": "https://github.com/huggingface/transformers/pull/17447.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17447.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17446
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17446/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17446/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17446/events
|
https://github.com/huggingface/transformers/issues/17446
| 1,250,201,508
|
I_kwDOCUB6oc5KhI-k
| 17,446
|
Train Transformer XL language modeling with padding
|
{
"login": "StefanHeng",
"id": 43276957,
"node_id": "MDQ6VXNlcjQzMjc2OTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/43276957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StefanHeng",
"html_url": "https://github.com/StefanHeng",
"followers_url": "https://api.github.com/users/StefanHeng/followers",
"following_url": "https://api.github.com/users/StefanHeng/following{/other_user}",
"gists_url": "https://api.github.com/users/StefanHeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StefanHeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StefanHeng/subscriptions",
"organizations_url": "https://api.github.com/users/StefanHeng/orgs",
"repos_url": "https://api.github.com/users/StefanHeng/repos",
"events_url": "https://api.github.com/users/StefanHeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/StefanHeng/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
### System Info
```shell
`transformers` version `4.19.2`, `python` version `3.9.12`
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
I'm using `DataCollatorForLanguageModeling` **and padding** to train Transformer XL with LM head from scratch.
I'm not sure if the error is intended, i.e. does HuggingFace support training with padding tokens?
I'm not sure because I had an error with padding, but the forward pass for LM head seems to consider padding?
```python
if labels is not None:
losses = softmax_output.view(bsz, tgt_len - 1)
# Avoids from incorporating padding (-100) tokens into loss value
loss = losses[losses != 0].mean()
```
I don't find much support for this. I checked [this issue](https://github.com/huggingface/transformers/issues/586).
And I don't think it's relevant in the notebook [language_modeling_from_scratch](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch.ipynb) since all texts are concatenated.
Also, I'm using `n_clusters` == 0.
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
Traceback (most recent call last):
File "/Users/stefanhg/Documents/UMich/Research/Music with NLP/Symbolic-Music-Generation/musicnlp/trainer/train.py", line 407, in <module>
train_xl()
File "/Users/stefanhg/Documents/UMich/Research/Music with NLP/Symbolic-Music-Generation/musicnlp/trainer/train.py", line 406, in train_xl
trainer.train(ignore_keys_for_eval=ignore_keys_for_eval)
File "/Users/stefanhg/opt/anaconda3/envs/music-nlp/lib/python3.9/site-packages/transformers/trainer.py", line 1317, in train
return inner_training_loop(
File "/Users/stefanhg/opt/anaconda3/envs/music-nlp/lib/python3.9/site-packages/transformers/trainer.py", line 1554, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/Users/stefanhg/opt/anaconda3/envs/music-nlp/lib/python3.9/site-packages/transformers/trainer.py", line 2183, in training_step
loss = self.compute_loss(model, inputs)
File "/Users/stefanhg/Documents/UMich/Research/Music with NLP/Symbolic-Music-Generation/musicnlp/util/train/train_util_wrap.py", line 81, in compute_loss
outputs = model(**inputs)
File "/Users/stefanhg/opt/anaconda3/envs/music-nlp/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/stefanhg/Documents/UMich/Research/Music with NLP/Symbolic-Music-Generation/musicnlp/models/transformer_xl.py", line 189, in forward
softmax_output = self.crit(pred_hid, labels)
File "/Users/stefanhg/opt/anaconda3/envs/music-nlp/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/stefanhg/opt/anaconda3/envs/music-nlp/lib/python3.9/site-packages/transformers/models/transfo_xl/modeling_transfo_xl_utilities.py", line 112, in forward
out = -nn.functional.log_softmax(logit, dim=-1).gather(1, labels.unsqueeze(1)).squeeze(1)
RuntimeError: index -100 is out of bounds for dimension 1 with size 462
```
Here's my stack trace.
### Expected behavior
```shell
I hope the `ProjectedAdaptiveLogSoftmax` implementation can consider support ignoring padding tokens.
```
Looks like just filtering out the labels in adaptive softmax fixes it (and for this I need to skip reshaping the `losses` in the LM head forward):
```python
if self.n_clusters == 0:
logit = self._compute_logit(hidden, self.out_layers[0].weight, self.out_layers[0].bias, self.out_projs[0])
if labels is not None:
# ========================== Begin of modified ==========================
out = -nn.functional.log_softmax(logit, dim=-1).gather(1, labels[labels != -100].unsqueeze(1)).squeeze(1)
# ========================== End of modified ==========================
else:
out = nn.functional.log_softmax(logit, dim=-1)
```
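The masking this fix relies on can be sketched in plain Python (a toy illustration only, not the `transformers` implementation): per-token losses whose label is the conventional ignore index `-100` are simply excluded from the mean.

```python
# Hypothetical sketch of loss masking with the -100 ignore index.
def masked_mean_loss(per_token_losses, labels, ignore_index=-100):
    """Average per-token losses, skipping positions whose label is ignore_index."""
    kept = [l for l, y in zip(per_token_losses, labels) if y != ignore_index]
    if not kept:
        return 0.0  # every position was padding
    return sum(kept) / len(kept)

print(masked_mean_loss([0.5, 1.5, 2.0], [7, 3, -100]))  # 1.0
```

This mirrors what `nn.CrossEntropyLoss(ignore_index=-100)` does for standard LM heads; the adaptive softmax path in Transformer XL bypasses that mechanism, which is why the filtering has to happen manually here.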
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17446/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17444
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17444/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17444/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17444/events
|
https://github.com/huggingface/transformers/pull/17444
| 1,249,909,452
|
PR_kwDOCUB6oc44iegV
| 17,444
|
[WIP] Warning when passing padded input ids but no attention mask
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
closed
| false
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17444). All of your documentation changes will be reflected on that endpoint.",
"Relevant issues:\r\nhttps://github.com/huggingface/transformers/issues/4083\r\nhttps://github.com/huggingface/transformers/issues/278\r\nhttps://github.com/huggingface/transformers/issues/16136",
"I think the way you implemented it is clean and adds nice warnings. I agree with the idea behind it, and the better warnings we send, the better the models will perform for users.\r\n\r\nI think handling it like it is done here based off of configuration attribute is not going to work very well across models, however. I feel like having the method be configurable by passing optional bos/eos tokens would likely make the method more versatile to the models which do not conform to the default approach.",
"> I think handling it like it is done here based off of configuration attribute is not going to work very well across models, however. I feel like having the method be configurable by passing optional bos/eos tokens would likely make the method more versatile to the models which do not conform to the default approach.\r\n\r\nHmm, don't really agree here. Note that `pad_token_id`, `bos_token_id`, `eos_token_id`, `sep_token_id` **must** be present in every model's config since it's in `configuration_utils.py`. \r\nAlso we never pass any of the above attributes through the forward method, so one would only ever pass `self.config.pad_token_id` to the method. Wdyt @LysandreJik ? Also very curious to hear @sgugger's opinion here",
"Sounds good, I'm likely worrying for nothing then. Good for me like this, very easy to add kwargs afterwards anyway!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I think this would be an impactful addition! @ydshieh, would you be interested in continuing this PR?",
"> I think this would be an impactful addition! @ydshieh, would you be interested in continuing this PR?\r\n\r\nSure. I will take a look and see if there is anything blocking. ",
"You can search `elif input_ids is not None:` that is in the base model classes like `BertModel` (already done by @patrickvonplaten), `GPT2Model` etc.\r\n\r\nYou don't need to replace all of them - it would be super nice already for a few of the most used modes 🚀 Thank you!"
] | 1,653
| 1,702
| 1,702
|
MEMBER
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
One of the most common mistakes users make in Transformers IMO is that `input_ids` are padded, but no `attention_mask` is provided (we see many examples of this). As discussed multiple times, we **don't** want to infer the `attention_mask` automatically as this creates a lot of unmaintainable, "not-possible-to-deal-with" complexity.
A while ago, we discussed throwing a warning in this case, making sure it's done only once to not spam the user when calling the model multiple times. I'm not sure we found a good conclusion, but IMO it's important that we warn the user, as too many users (IMO) think the attention_mask is inferred from the padding tokens. This PR tries to solve this and shows how it'd be implemented for just BERT. We would have to implement it for all other models then as well. Would very much like to hear your opinion here @sgugger @LysandreJik @patil-suraj . Note that this PR will touch a lot of important functions / files, so it'd be very important to make the warning as clear as possible.
I do however have a strong conviction that we should display such a warning.
Now the warning function can display the following warning messages for a toy BERT example of passing just three input ids.
Possible warning messages:
1. Pad token present, no attention mask, eos, bos, sep all different from pad (that's **VERY** likely an error IMO):
**Displayed warning:**
```
The input IDs tensor([[0, 1, 1]]) contains the `pad_token_id` 0, but NO `attention_mask` is passed.
Padding the input IDs without passing an `attention_mask` leads to unexpected, possibly incorrect outputs.
```
2. Pad token present, no attention mask, eos or bos or sep same as pad:
**Displayed warning:**
```
The input IDs tensor([[0, 1, 1]]) contains the `pad_token_id` 0, but NO `attention_mask` is passed.
We strongly recommend passing an `attention_mask` to avoid possibly incorrectly computing the attention weights.
You can ignore this warning, if your `pad_token_id` 0 is identical to your `sep_token_id` 0 AND your input is NOT padded.
```
3. Pad token present, no attention mask, two or more of eos, bos, sep identical to pad (don't think this exists actually):
**Displayed warning:**
```
The input IDs tensor([[0, 1, 1]]) contains the `pad_token_id` 0, but NO `attention_mask` is passed.
We strongly recommend passing an `attention_mask` to avoid possibly incorrectly computing the attention weights.
You can ignore this warning, if your `pad_token_id` 0 is identical to your `bos_token_id` 0 AND your input is NOT padded.
We strongly recommend passing an `attention_mask` to avoid possibly incorrectly computing the attention weights.
You can ignore this warning, if your `pad_token_id` 0 is identical to your `sep_token_id` 0 AND your input is NOT padded.
```
4. Otherwise no warning.
Also note that the warning only appears at the first forward call.
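The core of case 1 above can be sketched in a few lines (an illustrative toy, not this PR's actual code; the real implementation also tracks a "warned once" flag and compares against `bos`/`eos`/`sep` ids):

```python
# Hypothetical sketch: detect padded input ids passed without an attention mask.
def padding_warning(input_ids, attention_mask, pad_token_id):
    """Return a warning string if pad tokens appear but no attention_mask is given."""
    if attention_mask is not None or pad_token_id is None:
        return None  # nothing to warn about
    if any(pad_token_id in row for row in input_ids):
        return (
            f"The input IDs contain the `pad_token_id` {pad_token_id}, "
            "but NO `attention_mask` is passed. Padding the input IDs without "
            "passing an `attention_mask` leads to unexpected, possibly incorrect outputs."
        )
    return None

print(padding_warning([[0, 1, 1]], None, 0) is not None)  # True: pad token, no mask
print(padding_warning([[2, 1, 1]], None, 0))              # None: no pad token present
```

The ambiguity the later cases handle comes from `pad_token_id` coinciding with `bos`/`eos`/`sep`, in which case a pad id in the input is not necessarily padding.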
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17444/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17444/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17444",
"html_url": "https://github.com/huggingface/transformers/pull/17444",
"diff_url": "https://github.com/huggingface/transformers/pull/17444.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17444.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17443
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17443/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17443/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17443/events
|
https://github.com/huggingface/transformers/pull/17443
| 1,249,849,193
|
PR_kwDOCUB6oc44iSD7
| 17,443
|
Add CodeGen model
|
{
"login": "rooa",
"id": 2957582,
"node_id": "MDQ6VXNlcjI5NTc1ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2957582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rooa",
"html_url": "https://github.com/rooa",
"followers_url": "https://api.github.com/users/rooa/followers",
"following_url": "https://api.github.com/users/rooa/following{/other_user}",
"gists_url": "https://api.github.com/users/rooa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rooa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rooa/subscriptions",
"organizations_url": "https://api.github.com/users/rooa/orgs",
"repos_url": "https://api.github.com/users/rooa/repos",
"events_url": "https://api.github.com/users/rooa/events{/privacy}",
"received_events_url": "https://api.github.com/users/rooa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @patil-suraj @rooa you should go fetch upstream on [your fork](https://github.com/rooa/transformers/tree/add_codegen). There were some test fixes that I think you are missing which is causing the red exes that no one likes to see. I actually would love to use this but I can't because this PR is not merged yet!",
"Merging now! Thanks a lot @rooa for working on this and being patient with the review and tests."
] | 1,653
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds [CodeGen](https://github.com/salesforce/codegen) PyTorch model.
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? ==> Discussed with @lvwerra and @patil-suraj.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@lvwerra @patil-suraj @loubnabnl
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17443/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17443/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17443",
"html_url": "https://github.com/huggingface/transformers/pull/17443",
"diff_url": "https://github.com/huggingface/transformers/pull/17443.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17443.patch",
"merged_at": 1656083438000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17442
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17442/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17442/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17442/events
|
https://github.com/huggingface/transformers/pull/17442
| 1,249,747,698
|
PR_kwDOCUB6oc44h8Zc
| 17,442
|
[Generate] Greedy Search, fix output scores from logits to scores
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@patil-suraj Will there be any support for raw logits instead logits that are processed? (see #17521 )"
] | 1,653
| 1,679
| 1,654
|
MEMBER
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
🚨🚨 **This PR can lead to silently changing values for users of `greedy_search` and `output_scores=True`. Please read the issue below** 🚨 🚨
Fixes https://github.com/huggingface/transformers/issues/17424
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17442/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17442",
"html_url": "https://github.com/huggingface/transformers/pull/17442",
"diff_url": "https://github.com/huggingface/transformers/pull/17442.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17442.patch",
"merged_at": 1654001989000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17441
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17441/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17441/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17441/events
|
https://github.com/huggingface/transformers/pull/17441
| 1,249,722,806
|
PR_kwDOCUB6oc44h3Fo
| 17,441
|
[OPT] Fix bos token id default opt
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Opening PRs for all OPT models online as well",
"CI errors are flaky"
] | 1,653
| 1,653
| 1,653
|
MEMBER
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/17431
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17441/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17441",
"html_url": "https://github.com/huggingface/transformers/pull/17441",
"diff_url": "https://github.com/huggingface/transformers/pull/17441.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17441.patch",
"merged_at": 1653582252000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17440
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17440/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17440/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17440/events
|
https://github.com/huggingface/transformers/pull/17440
| 1,249,574,849
|
PR_kwDOCUB6oc44hXqf
| 17,440
|
Pin protobuf that breaks TensorBoard in PyTorch
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Yes to rebuliding a docker image ASAP. I will merge this once all tests pass :-) ",
"_The documentation is not available anymore as the PR was closed or merged._",
"Python dependencies 🤦♀️"
] | 1,653
| 1,653
| 1,653
|
COLLABORATOR
| null |
# What does this PR do?
The recent release of Protobuf (4.21) has broken TensorBoard in PyTorch and thus multiple tests. This PR pins protobuf to fix said tests.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17440/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17440/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17440",
"html_url": "https://github.com/huggingface/transformers/pull/17440",
"diff_url": "https://github.com/huggingface/transformers/pull/17440.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17440.patch",
"merged_at": 1653573415000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17439
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17439/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17439/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17439/events
|
https://github.com/huggingface/transformers/pull/17439
| 1,249,536,777
|
PR_kwDOCUB6oc44hPo6
| 17,439
|
Fix model parallelism test
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,653
| 1,653
|
COLLABORATOR
| null |
# What does this PR do?
This fixes the model parallelism test for models whose config does not have a `num_hidden_layers` attribute, or if that attribute is a dict and not an int.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17439/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17439",
"html_url": "https://github.com/huggingface/transformers/pull/17439",
"diff_url": "https://github.com/huggingface/transformers/pull/17439.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17439.patch",
"merged_at": 1653573432000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17438
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17438/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17438/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17438/events
|
https://github.com/huggingface/transformers/pull/17438
| 1,249,293,565
|
PR_kwDOCUB6oc44gdGZ
| 17,438
|
[wip] testing doc-build
|
{
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
testing https://github.com/huggingface/doc-builder/pull/228
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17438/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17438",
"html_url": "https://github.com/huggingface/transformers/pull/17438",
"diff_url": "https://github.com/huggingface/transformers/pull/17438.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17438.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17437
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17437/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17437/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17437/events
|
https://github.com/huggingface/transformers/pull/17437
| 1,249,268,314
|
PR_kwDOCUB6oc44gX_4
| 17,437
|
OPT - Fix Softmax NaN in half precision mode
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @younesbelkada \r\n\r\n```\r\nexpanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1])\r\n```\r\nSo when running in `half precision`, `_expand_mask` will use `torch.finfo(dtype)` with `dtype = inputs_embeds.dtype = fp16` and the `min` is `-65536`.\r\n\r\nAm I missing anything here?\r\n\r\nIs `fp32.min` used unexpectedly instead of `fp16.min` in this particular issue?\r\n\r\nI have a PR #17306 for related issue. If using `-65536` has issue, then I need to hold on that PR to investigate.",
"Hi @ydshieh !\r\nI think that you are right, when running in half precision I have `-65530` and not `-3.24e+38` in the attention mask as I said. But even with this mask I get NaNs on the padded hidden states for opt-1.3b, and upcasting the input to fp32 and casting back to fp16 seems to solve the issue for now ",
"> Hi @ydshieh ! I think that you are right, when running in half precision I have `-65530` and not `-3.24e+38` in the attention mask as I said. But even with this mask I get NaNs on the padded hidden states for opt-1.3b, and upcasting the input to fp32 and casting back to fp16 seems to solve the issue for now\r\n\r\nLet me check - as if this is the case, the PR #17306 needs to find another way out 😢 ",
"> I get NaNs on the padded hidden states for opt-1.3b, \r\n\r\n@younesbelkada \r\n\r\n- Could you point me which line in OPTModel you got `NaN for padded hidden states`?\r\n- Did you use the generation script in the linked issue, or you just run the model with some input ids? If it is the later case, could you provide the code snippet 🙏 please?",
"- I got NaNs exactly here: https://github.com/huggingface/transformers/blob/8f46ac98498dd47701971064617e00d7e723a98e/src/transformers/models/opt/modeling_opt.py#L217 - to fix it you can just do ` attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len).float() + attention_mask` [here](https://github.com/huggingface/transformers/blob/8f46ac98498dd47701971064617e00d7e723a98e/src/transformers/models/opt/modeling_opt.py#L214) and then `attn_weights = nn.functional.softmax(attn_weights, dim=-1).half()` [here](https://github.com/huggingface/transformers/blob/8f46ac98498dd47701971064617e00d7e723a98e/src/transformers/models/opt/modeling_opt.py#L217)\r\n- Yes use the generation script provided in the issue, ie:\r\n```\r\nimport torch\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\n\r\n# I have tested and the error happens to opt-1.3b, opt-2.7b, opt-6.7b, and opt-13b.\r\n# opt-125m and opt-350m seems to work fine.\r\n# I haven't tested opt-30b.\r\nmodel_name = \"facebook/opt-1.3b\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)\r\ntokenizer.padding_side = \"left\"\r\n# It works when torch_dtype=torch.float32\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, use_cache=True)\r\nmodel = model.eval().to(\"cuda\")\r\n\r\nbatch = tokenizer(\r\n [\"Who are you?\", \"Joe Biden is the president of\"],\r\n padding=True, return_tensors=\"pt\"\r\n)\r\n\r\n# It produces NaN in the early layers for the first sequence.\r\n# I check the pattern, and NaN first appears in the padded token position.\r\ngreedy_output = model.generate(\r\n input_ids=batch[\"input_ids\"].to(\"cuda\"),\r\n attention_mask=batch[\"attention_mask\"].to(\"cuda\"),\r\n do_sample=False, top_k=0\r\n)\r\n```\r\nNote also that everything works fine when `torch_dtype` is set to `torch.float32` or `torch.bfloat16` ",
"@younesbelkada \r\n\r\nThe root cause is `-inf` is used here\r\n\r\nhttps://github.com/huggingface/transformers/blob/8f46ac98498dd47701971064617e00d7e723a98e/src/transformers/models/opt/modeling_opt.py#L64\r\n\r\nChange it to `mask = torch.full((tgt_len, tgt_len), torch.finfo(dtype).min)` **should** be fine.\r\n(+/- inf * 0.0 will result NaN ).\r\n\r\n### More details\r\n\r\nWith the above fix, there is still a minor issue. In \r\n\r\nhttps://github.com/huggingface/transformers/blob/8f46ac98498dd47701971064617e00d7e723a98e/src/transformers/models/opt/modeling_opt.py#L525\r\n\r\nfor batch index 0, we will see an `-inf`\r\n\r\n```\r\ntensor([[[[-65504., -inf, -65504., -65504., -65504., -65504., -65504.],\r\n [-65504., -65504., -65504., -65504., -65504., -65504., -65504.],\r\n [-65504., -65504., 0., -65504., -65504., -65504., -65504.],\r\n [-65504., -65504., 0., 0., -65504., -65504., -65504.],\r\n [-65504., -65504., 0., 0., 0., -65504., -65504.],\r\n [-65504., -65504., 0., 0., 0., 0., -65504.],\r\n [-65504., -65504., 0., 0., 0., 0., 0.]]],\r\n```\r\n\r\nThis is because we have `-65504` for causal mask + `-65504` due to (left) padding.\r\nRegarding this part, we need to discuss with the team.\r\n\r\nIn general, we shouldn't have or use `-inf` (the only safe place to use it is immediately before the softmax).\r\n",
"Great! My suggestion is to mix both - we can force the attention mask to use -65504 for fp16 + upcast in fp32 and cast it back to fp16 after softmax for sanity check and avoid possible overflow issues. - Wdyt?",
"@stephenroller @suchenzang have you seen something similar in your training / inference runs? \r\n\r\nAlso cc @patil-suraj - see issue. Would be nice to hear your opinion here",
"FYI, it can happen that during training you never use padding tokens. I may be mistaken but I know that for Bloom we do not train on padded batch inputs but on truncated sequences instead. \nUsually these issues can happen at inference time only!",
"Upcast to fp32 should never be required if masked tokens are masked with something that's not -inf. Upcast to fp32 is significant performance penalty. Single `-inf` value shouldn't be a problem as long as there are some non-zero values in the row, it would change output a little bit but that output is meaningless anyway, the whole row is masked out. ",
"Great thank you all for your comments and help! Following your advice I have added the changes proposed by @ydshieh - let me know if this works for you!",
"[This change](https://github.com/huggingface/transformers/blob/77162b94bddd51bb57c712e973e23eed2cd39971/src/transformers/models/opt/modeling_opt.py#L64) is also in #17306, but I am fine for a quick fix for `OPTModel`.\r\n\r\nI would still like to point out that, although it is not useful for real usage of the model, leaving non-zero large negative values mixed with `-inf` to mask a whole sequence is not good for testing/debugging purpose -> but this could be addressed in another PR.\r\n\r\n",
"> [This change](https://github.com/huggingface/transformers/blob/77162b94bddd51bb57c712e973e23eed2cd39971/src/transformers/models/opt/modeling_opt.py#L64) is also in #17306, but I am fine for a quick fix for `OPTModel`.\r\n> \r\n> I would still like to point out that, although it is not useful for real usage of the model, leaving non-zero large negative values mixed with `-inf` to mask a whole sequence is not good for testing/debugging purpose -> but this could be addressed in another PR.\r\n\r\nForgot to say, with current change, it's still possible to get `[-inf, -inf, dtype.min, dtype.min ...]` or `[-inf, -inf, -inf]` etc. after summing with the `attn_weights` (as mentioned, this depends the values in `attn_weights`). I will try to implement some processing in #17306 today.",
"We perform the upcast in our code, though we do it with softmax(dtype=torch.float32). It's very important.",
"That's an excellent point, Stephen! Thank you for that crucial reminder.\r\n\r\nIndeed, for pytorch ops that support accumulate dtype arg this approach makes things much more efficient than manual casting.\r\n\r\nI remember I discovered that when optimizing the `LabelSmoother`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/7999ec125fc31428ed6879bf01bb013483daf704/src/transformers/trainer_pt_utils.py#L481\r\n\r\nit made a huge difference.",
"> accumulate dtype arg this approach makes things much\r\n\r\nFor learning purpose, could you share why `using softmax(dtype=torch.float32)` is more efficient than explicit upcasting?",
"Because the op kernel does it automatically internally in a single operation by already accumulating in the correct dtype.\r\n\r\nWhen you do it in 2 steps: `op(...).to(dtype=...)`, 2 additional memory copying operations have to happen to perform the casting.\r\n\r\n@ngimel, did I explain that correctly? Thank you!\r\n\r\nand it should be simple to benchmark the 2 cases to see the difference.",
"@Chillee, would `nvfuser` fuse explicit casting into the op's accumulate dtype automatically?\r\n",
"My original understanding of the process is like:\r\n\r\n```\r\nattn_scores = attn_scores.to(torch.float32)\r\nattn_prob = nn.functional.softmax(attn_scores)\r\n```\r\n\r\nSo I think the correct way should be:\r\n```\r\nattn_prob = nn.functional.softmax(attn_scores, dtype=torch.float32)\r\n```\r\nright?\r\n\r\n### Another question regarding dtype\r\n\r\nAfter we get `attn_prob` in `float32`, should we cast it back to the target precision for the subsequential ops, like\r\n```\r\nattn_output = torch.bmm(attn_probs, value_states)\r\n```\r\nI am talking about the case where a user loads the models in fp16 and specify the inputs in fp16 too:\r\n```\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)\r\n```\r\n - If we don't cast `attn_probs` back to the target type (here `fp16`)\r\n - it will fail (if `value_states` is `fp16`) for some op like `torch.bmm`\r\n - or will propagate the type fp32 for some simple ops (like `+`)\r\n\r\n(I am not sure this is the correct/usual way to do inference in fp16, but this is what I see in the code snippet from the issue reporter)",
"I think the issue that this PR aims to address is not really about the upcast to float32. (@younesbelkada , right?)\r\n\r\nIt is mentioned in the PR description as a potential solution, but the original issue we want to address here comes from the fact that we get a sequence with all `-inf` as attention scores before `softmax`.\r\n\r\nMaybe it it better to move the discussion(s) regarding the upcasting to another issue/PR page.",
"That's correct, the underlying issue is that for a row full of `-inf` softmax (by definition) produces `nan` (it's 0/0), and ideally that shouldn't be a problem because those fully masked row shouldn't participate in loss computation, but apparently they do and corrupt other values",
 My original understanding">
"> My original understanding of the process is like:\r\n> \r\n> ```\r\n> attn_scores = attn_scores.to(torch.float32)\r\n> attn_prob = nn.functional.softmax(attn_scores)\r\n> ```\r\n> \r\n> So I think the correct way should be:\r\n> \r\n> ```\r\n> attn_prob = nn.functional.softmax(attn_scores, dtype=torch.float32)\r\n> ```\r\n> \r\n> right?\r\n\r\nIt's correct. Just be aware that not all ops support this.\r\n\r\n> ### Another question regarding dtype\r\n> \r\n> After we get `attn_prob` in `float32`, should we cast it back to the target precision for the subsequential ops, like\r\n> \r\n> ```\r\n> attn_output = torch.bmm(attn_probs, value_states)\r\n> ```\r\n> \r\n> I am talking about the case where a user loads the models in fp16 and specify the inputs in fp16 too:\r\n> \r\n> ```\r\n> model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)\r\n> ```\r\n> \r\n> * If we don't cast `attn_probs` back to the target type (here `fp16`)\r\n> \r\n> * it will fail (if `value_states` is `fp16`) for some op like `torch.bmm`\r\n> * or will propagate the type fp32 for some simple ops (like `+`)\r\n> \r\n> \r\n> (I am not sure this is the correct/usual way to do inference in fp16, but this is what I see in the code snippet from the issue reporter)\r\n\r\nYes, you definitely need to adjust the dtype to the one you expect.\r\n\r\nIn some cases it's enough to turn autocast off locally to have the whole ensemble automatically done in the right precision w/o any additional casting back and forth. For example see this workaround proposed for the t5 arch:\r\n\r\nhttps://github.com/huggingface/transformers/pull/10956/files\r\n\r\n```\r\n def forward(self, hidden_states):\r\n # many t5/mt5 models are trained in bfloat16 and don't do well under mixed precision (fp16).\r\n # It appears that it's enough to disable autocast for this FF layer to avoid inf/nan\r\n # problems for the whole model\r\n if torch.is_autocast_enabled():\r\n with torch.cuda.amp.autocast(enabled=False):\r\n return self._forward(hidden_states)\r\n else:\r\n return self._forward(hidden_states)\r\n```\r\n",
"Hi @ydshieh, I am down for both solutions. We can either merge this PR as a quick patch for OPT or wait for #17306 to be merged! \r\nI can also open another PR to move the whole discussion around the upcasting issue (I think we need to address since it is done in the original OPT pipeline if I understood it right) - let me know what works best for you ;) ! ",
"@younesbelkada #17306 still needs some more reviews. So let's just see which PR is approved earlier and merge as it is.",
"@younesbelkada \r\n\r\nSince #17306 won't be merged at this moment, I guess you can try something like\r\n(and see if the reviewers & pytorch experts like it 🙏 )\r\n\r\n```\r\n# change `-inf` to `dtype.min` to avoid `NaN` during `softmax`.\r\nattn_scores = torch.max(attn_scores, torch.finfo(attn_scores.dtyte).min)\r\n\r\nattn_prob = torch.nn.functional.softmax(attn_scores, ...) \r\n```",
"Hi all, \r\nI propose a fix in the latest commits as suggested by @ydshieh . To make it work I basically:\r\n1- pre-process the attention scores (suggestion by @ydshieh )\r\n2- upcast the softmax in fp32 and cast it back to the original `dtype` (for consistency with what is done in the original implementation)\r\nI also added a slow test to make sure these things do not happen in the future with OPT\r\n\r\nI can also confirm that all slow tests pass with this fix! Let me know what you think ;) \r\n\r\ncc @ydshieh @patrickvonplaten ",
"@patil-suraj could you also take a quick look? :-)",
"Just a quick question before we merge. With this fix this issue here is solved: https://github.com/huggingface/transformers/issues/17433 ? What do the generations now give with this fix? Also I'm wondering a bit whether this is rather just because the model weights might be incorrect see: https://github.com/huggingface/transformers/issues/17653 \r\n\r\nShould we maybe rather wait with this one until we have #17653 resolved? Or could we maybe run the examples of #17653 with this fix and see if we get better results?",
"@patrickvonplaten I can confirm that this implementation fixes #17433 - I added a test to make sure that this will never be produced again. The generations gave `Who are you? What do you want? What do you want? What do you` with this fix instead of `Who are you? <\\s> <\\s> <\\s> <\\s>` \r\nI think that we can merge this at least to fix this behavior + to make the OPT implementation consistent with the one from Meta, since we do not upcast the softmax to float32 in our implementation. \r\nI think the problems with #17653 are related to the TP merging strategy "
] | 1,653
| 1,664
| 1,656
|
CONTRIBUTOR
| null |
# What does this PR do?
Fix overflow / unstable operation issues when using large OPT models in half precision
- As it is done in [Megatron-DeepSpeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed/blob/908dc9cb4b9717707241eaf8b92a986b2e251218/megatron/model/fused_softmax.py#L205), for large models it appears that you have to first upcast the input to float32 before applying the Softmax function to avoid unexpected NaNs. This is because we use very large values (e.g. `-3.24e+38`) to mask the padded tokens. EDIT: it seems that we use correct values to mask padded tokens
- Linked issue: #17433
- We'll probably need to re-compute the logits for slow tests but I am not sure
cc @patrickvonplaten @ArthurZucker @ydshieh @stas00
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17437/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17437",
"html_url": "https://github.com/huggingface/transformers/pull/17437",
"diff_url": "https://github.com/huggingface/transformers/pull/17437.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17437.patch",
"merged_at": 1656522932000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17436
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17436/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17436/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17436/events
|
https://github.com/huggingface/transformers/pull/17436
| 1,249,251,277
|
PR_kwDOCUB6oc44gUf0
| 17,436
|
improve no-trainer examples
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Opened a new PR on the branch of main repo instead of fork #17449 "
] | 1,653
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
# What does this PR do?
1. In no-trainer examples, `train_loss` being logged wasn't normalized and as such wasn't intuitive to understand. This also made it difficult to compare train loss between different tools, such as comparing the train loss from Trainer with that of Accelerate. This PR normalizes the `train_loss` per epoch to make it more intuitive and comparable.
2. Replaces HF AdamW with torch AdamW for NLP no-trainer examples. This prevents the corresponding warning from being displayed.
3. Fixing no-trainer examples so that a tracker run is created only for the main process, else wandb will create `num_processes` runs with no data.
4. converting `train_loss` from tensor to float so that it gets logged in the tensorboard tracker
5. Fixing `run_ner_no_trainer.py` to correctly log `train_loss` in `all_results.json`
6. Adding `report_to` arg to enable users to specify a preferred tracker instead of all available trackers, which is the default option. This prevents logging to trackers that the user doesn't want.
7. In many no-trainer NLP tasks one can train a model from scratch, which means the user can bypass the `model_name_or_path` arg. However, it is set as required for all scripts, which throws an error when it isn't specified. Setting this arg to `required=False` in the corresponding examples resolves the error when training from scratch.
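The normalization in point 1 amounts to averaging the accumulated step losses over the epoch. A schematic, framework-agnostic sketch (the real scripts accumulate a loss tensor across steps with Accelerate; `epoch_train_loss` is a hypothetical helper):

```python
def epoch_train_loss(step_losses):
    # Average the per-step losses over the epoch so the logged value is
    # comparable across tools (e.g. with the loss reported by Trainer),
    # rather than a raw, length-dependent sum.
    if not step_losses:
        return 0.0
    return float(sum(step_losses)) / len(step_losses)
```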
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17436/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17436",
"html_url": "https://github.com/huggingface/transformers/pull/17436",
"diff_url": "https://github.com/huggingface/transformers/pull/17436.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17436.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17435
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17435/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17435/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17435/events
|
https://github.com/huggingface/transformers/pull/17435
| 1,249,246,541
|
PR_kwDOCUB6oc44gTiJ
| 17,435
|
Fix doc builder Dockerfile
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The goal of running it inside the dockerfile is to ensure it actually works before publishing the image, so that it doesn't fail at runtime. Do you know why it failed in the first place?",
"Thank you, I understand it better now. I will check why it fails inside docker build.",
"@LysandreJik \r\n\r\nThe problem comes from the fact that `$PR_NUMBER` is not defined in the docker image build (`doc-builder`). We can use `main` instead, right?\r\n\r\nFrom `--help`, I saw\r\n\r\n```\r\n--version VERSION Version of the documentation to generate. Will default to the version of the package module (using `main` for a version containing dev).\r\n```",
"Changed `pr_$PR_NUMBER` to `main`.",
"Changed to \r\n\r\n```\r\nRUN doc-builder build transformers transformers/docs/source/en --build_dir doc-build-dev --notebook_dir notebooks/transformers_doc --clean --version main\r\n```\r\nworks.\r\n\r\nJob run page\r\nhttps://github.com/huggingface/transformers/runs/6775059643?check_suite_focus=true"
] | 1,653
| 1,655
| 1,655
|
COLLABORATOR
| null |
# What does this PR do?
Fix the docker file in `transformers-doc-builder`.
~~(We don't need to run `doc-builder build` in the DockerFile, right? I think it is only for the CI runs.)~~
Currently, `Doc builder (Docker image build)` fails, see [this run](https://github.com/huggingface/transformers/actions/runs/2388105533).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17435/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17435",
"html_url": "https://github.com/huggingface/transformers/pull/17435",
"diff_url": "https://github.com/huggingface/transformers/pull/17435.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17435.patch",
"merged_at": 1655193528000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17434
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17434/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17434/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17434/events
|
https://github.com/huggingface/transformers/pull/17434
| 1,249,221,090
|
PR_kwDOCUB6oc44gOWT
| 17,434
|
Docker image build in parallel
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"From the job run page, I saw\r\n\r\n```\r\n#18 exporting to image\r\n#18 pushing layers 35.6s done\r\n#18 pushing manifest for docker.io/huggingface/transformers-all-latest-gpu:latest@sha256:d8523684a112bff61a2899a69e06e05e26c507778df4754454b95c3dcf244012\r\n#18 pushing manifest for docker.io/huggingface/transformers-all-latest-gpu:latest@sha256:d8523684a112bff61a2899a69e06e05e26c507778df4754454b95c3dcf244012 0.3s done\r\n#18 DONE 289.9s\r\nImageID\r\n sha256:08ed1b5cc8db313f116b58d86292b2e109f0552737088fba4a5c672012bca3ae\r\nDigest\r\n sha256:d8523684a112bff61a2899a69e06e05e26c507778df4754454b95c3dcf244012\r\n```\r\nso it looks fine to me. But I will run it again and verify the images on docker hub to make sure!"
] | 1,653
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
Remove `needs` in `.github/workflows/build-docker-images.yml`, as it can run in parallel now.
See this [run page](https://github.com/huggingface/transformers/actions/runs/2389099113) v.s. the [previous run page](https://github.com/huggingface/transformers/actions/runs/2388105533), with 14 mins. v.s. 40 mins.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17434/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17434",
"html_url": "https://github.com/huggingface/transformers/pull/17434",
"diff_url": "https://github.com/huggingface/transformers/pull/17434.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17434.patch",
"merged_at": 1654004343000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17433
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17433/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17433/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17433/events
|
https://github.com/huggingface/transformers/issues/17433
| 1,249,018,350
|
I_kwDOCUB6oc5KcoHu
| 17,433
|
OPT produce NaN during batched generation
|
{
"login": "shijie-wu",
"id": 2987758,
"node_id": "MDQ6VXNlcjI5ODc3NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2987758?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shijie-wu",
"html_url": "https://github.com/shijie-wu",
"followers_url": "https://api.github.com/users/shijie-wu/followers",
"following_url": "https://api.github.com/users/shijie-wu/following{/other_user}",
"gists_url": "https://api.github.com/users/shijie-wu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shijie-wu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shijie-wu/subscriptions",
"organizations_url": "https://api.github.com/users/shijie-wu/orgs",
"repos_url": "https://api.github.com/users/shijie-wu/repos",
"events_url": "https://api.github.com/users/shijie-wu/events{/privacy}",
"received_events_url": "https://api.github.com/users/shijie-wu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @shijie-wu ! Thanks for pointing out the issue,\r\nIt appears that in our current implementation we use a naive Softmax function that is applied to the attention scores. When passing the attention scores combined with the attention mask we use very large values to mask padding tokens such as `torch.inf` or `-3.24e+38` on the softmax function. (EDIT: see #17437 - we use correct padding values and not `-3.24e+38`)\r\nIt seems that this sometimes leads to unstable operations and results in having NaNs when using half precision mode, but only for large models. \r\nI think that the correct workaround is to upcast the attention scores to float32 before summing it with the attention mask, apply the Softmax then cast it back to the input dtype as it is done in [Megatron-DeepSpeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed/blob/908dc9cb4b9717707241eaf8b92a986b2e251218/megatron/model/fused_softmax.py#L204) for example. \r\nA proper fix will be addressed in #17437 but a quick and dirty solution would be to use `bfloat16` instead of `float16`. At least it worked with `opt-1.3b` but I don't know if it will work with larger models.\r\nLet me know if this helps!",
"Hi @younesbelkada ! Thank you for the quick response! I will follow https://github.com/huggingface/transformers/pull/17437. A fix for fp16 would be great as only A100 class support bf16 AFAIK.\r\n\r\nOut of curiosity, do you have any intuition on why it only impacts larger models but not smaller models? From https://github.com/huggingface/transformers/pull/17437, it seems to me that it would impact smaller models as well?",
"Thanks for the comment! \r\nWe will try to have a patch to fix that for fp16 asap I guess ;) Curious to know if the proposed PR will fix your issue (you can checkout the PR and build it from source if you have time)!\r\n\r\nRegarding your second question - I totally agree with you - it should also not work on small models. It is just an intuition but possibly the number of heads and/or hidden dimension are impacting that (since it is the only thing that differs between `opt-125m` and `opt-1.3b` in the first layers). I would wait for the team's comments to see if they have better intuition on that! ",
"Interesting! Thanks a lot for reporting this @shijie-wu \r\n\r\n@shephenroller @suchenzang do you have any insight here maybe?\r\n\r\nAlso cc @patil-suraj, I remember we had a similar problem with GPT-Neo/GPT-J no? Was the solution to force the last computation of the logits to be in fp32?",
"Hi @shijie-wu !\r\nAfter discussing with @ydshieh , it appears that this is completely independent of the model size but it just happened by luck that the logits before the softmax were negative only for large models - therefore causing an overflow that led to NaNs. \r\nI am quoting his answer here:\r\n\r\n> And back to your question about \"why only large model\":\r\nThis is about the weights and inputs. In the generation script provided in the issue, when I run it, in second time passing Attention Layer, there is some point we get [-16.x, -16.x, ....] as attn_weights , and attn_mask as [-65504, -65504, -inf, -inf, ...]\r\nThe 2 -65504 from left padding, -inf from causal mask .\r\nHowever in fp16, -65504 + -16 = -inf . So we get a batch index with all -inf as input to softmax , and get outputs NaN`\r\n\r\nVery interesting!",
"FYI (if you want to know more details)\r\n\r\nhttps://github.com/huggingface/transformers/pull/17306#issuecomment-1138660341",
"> Also cc @patil-suraj, I remember we had a similar problem with GPT-Neo/GPT-J no? Was the solution to force the last computation of the logits to be in fp32?\r\n\r\n@patrickvonplaten I dunno about GPT-Neo/GPT-J but can vouch for the fact that the same thing is happening with GPT-NeoX currently.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"The PR https://github.com/huggingface/transformers/pull/17437 has been merged, @shijie-wu could you confirm this fixed your issue?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
### System Info
* transformers==4.19.2
* PyTorch (GPU?): 1.11.0+cu102 (True)
* GPUs: single V100
### Who can help?
@LysandreJik, @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# I have tested and the error happens to opt-1.3b, opt-2.7b, opt-6.7b, and opt-13b.
# opt-125m and opt-350m seems to work fine.
# I haven't tested opt-30b.
model_name = "facebook/opt-1.3b"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
tokenizer.padding_side = "left"
# It works when torch_dtype=torch.float32
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model = model.eval().to("cuda")
batch = tokenizer(
["Who are you?", "Joe Biden is the president of"],
padding=True, return_tensors="pt"
)
# It produces NaN in the early layers for the first sequence.
# I check the pattern, and NaN first appears in the padded token position.
model.generate(
input_ids=batch["input_ids"].to("cuda"),
attention_mask=batch["attention_mask"].to("cuda"),
do_sample=True, max_new_tokens=32
)
```
### Expected behavior
The generation under fp16 should be close to fp32.
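The failure mode above can be demonstrated without a GPU: once every score in a row underflows to `-inf` after masking, the softmax over that row is 0/0 and returns NaN, whereas clamping the scores to the fp16 minimum keeps the result finite. A plain-Python sketch (`FP16_MIN` is the fp16 dtype minimum, `-65504`):

```python
import math

FP16_MIN = -65504.0  # torch.finfo(torch.float16).min

def softmax(xs):
    # Standard max-subtraction softmax. If every entry is -inf, the shift
    # is (-inf) - (-inf) = NaN and the whole row becomes NaN.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

all_masked = [float("-inf")] * 4              # padded row after fp16 masking
clamped = [max(x, FP16_MIN) for x in all_masked]

nan_row = softmax(all_masked)                  # every entry is NaN
safe_row = softmax(clamped)                    # uniform probabilities
```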
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17433/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17433/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17432
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17432/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17432/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17432/events
|
https://github.com/huggingface/transformers/issues/17432
| 1,249,016,882
|
I_kwDOCUB6oc5Kcnwy
| 17,432
|
Make all configs nicely readable
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] |
closed
| false
| null |
[] |
[
"in strong favor of this!",
"PR: https://github.com/huggingface/transformers/pull/17457"
] | 1,653
| 1,654
| 1,654
|
MEMBER
| null |
### Feature request
All configs nicely readable (tokenizers & feature extractor)
### Motivation
https://huggingface.co/facebook/opt-30b/discussions/1
### Your contribution
Happy to do it early next week - happy if someone else wants to take it over though!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17432/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17432/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17431
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17431/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17431/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17431/events
|
https://github.com/huggingface/transformers/issues/17431
| 1,248,944,934
|
I_kwDOCUB6oc5KcWMm
| 17,431
|
Mismatch of special token ids between config and tokenizer config
|
{
"login": "shijie-wu",
"id": 2987758,
"node_id": "MDQ6VXNlcjI5ODc3NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2987758?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shijie-wu",
"html_url": "https://github.com/shijie-wu",
"followers_url": "https://api.github.com/users/shijie-wu/followers",
"following_url": "https://api.github.com/users/shijie-wu/following{/other_user}",
"gists_url": "https://api.github.com/users/shijie-wu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shijie-wu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shijie-wu/subscriptions",
"organizations_url": "https://api.github.com/users/shijie-wu/orgs",
"repos_url": "https://api.github.com/users/shijie-wu/repos",
"events_url": "https://api.github.com/users/shijie-wu/events{/privacy}",
"received_events_url": "https://api.github.com/users/shijie-wu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Thanks for the report @shijie-wu ! That's indeed a bug in the model's config. Will update all of them now. The BOS token is identical to the EOS token and should therefore be 2 (=> the tokenizer has it correct here).",
"Also for the record, this is not a critical bug since in 99% of the times a user prompts OPT with something. This means the user passes a string through the tokenizer and then to the model:\r\n\r\n```py\r\ninput_ids = tokenizer(\"some prompt\", return_tensors=\"pt\").input_ids\r\nsequence = opt.generate(input_ids)\r\n```\r\n\r\nIn this case the tokenizer **always** correctly prepends the EOS token. The only time when the model config would cause a bug is if the user would generate from an empty prompt:\r\n\r\n```py\r\nsequence = opt.generate()\r\n```",
"Thanks for updating OPT!\r\n\r\nI understand this won't cause a bug in most cases. But hypothetically speaking, if users misconfigure `eos_token` in `model.config` and it doesn't match `tokenizer.eos_token`, it would cause the generation to be cut short silently. I understand having an assertion might not make sense but documenting it somewhere might be helpful?",
"> Also for the record, this is not a critical bug since in 99% of the times a user prompts OPT with something. This means the user passes a string through the tokenizer and then to the model:\r\n> \r\n> ```python\r\n> input_ids = tokenizer(\"some prompt\", return_tensors=\"pt\").input_ids\r\n> sequence = opt.generate(input_ids)\r\n> ```\r\n> \r\n> In this case the tokenizer **always** correctly prepends the EOS token. The only time when the model config would cause a bug is if the user would generate from an empty prompt:\r\n> \r\n> ```python\r\n> sequence = opt.generate()\r\n> ```\r\n\r\nHi, @patrickvonplaten I notice that the tokenizer of OPT models uses `</s>` for `eos_token`, `bos_token` and `unk_token` in `special_tokens_map`. Is it intended?\r\n```\r\n# transformers 4.20.1\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained('facebook/opt-1.3b') # all OPT models\r\nprint(tokenizer.special_tokens_map)\r\n# {'bos_token': '</s>', 'eos_token': '</s>', 'unk_token': '</s>', 'pad_token': '<pad>'}\r\n```\r\nAnother issue I found is that the vocab size in the tokenizer does not match the size of the embedding module of OPT models. The tokenizer has vocab size 50265 while the embedding table in OPT models has 50272.\r\n```\r\n# transformers 4.20.1\r\nfrom transformers import AutoModel, AutoTokenizer\r\nmodel = AutoModel.from_pretrained('facebook/opt-1.3b') # all OPT models\r\nprint(\"Embedding table: \", model.decoder.embed_tokens.weight.shape[0])\r\n# Embedding table: 50272\r\ntokenizer = AutoTokenizer.from_pretrained('facebook/opt-1.3b') # all OPT models\r\nprint(\"Vocab size:\", tokenizer.vocab_size)\r\n# Vocab size: 50265\r\n```\r\n\r\nJust to confirm, is it a bug or intended? Thanks.\r\n\r\n",
"Hey @git-xp, \r\n\r\nYes OPT indeed uses the same token for both `bos_token` and `eos_token` being `</s>`. \r\n\r\nThe `unk_token` should actually never really be produced by the tokenizer, since the tokenizer is based on byte-level Byte-Pair-Encoding and thus will always produce a valid token, no matter what the input (cc @SaulLu just to verify that what I'm saying here is correct)\r\nAlso note that OPT's tokenizer is fully based on GPT2's tokenizer which also uses the same token for all BOS, EOS and UNK.\r\n\r\nNow regarding the 2nd question, yes it's expected that the OPT model has more vocab entries in the model entry than the tokenizer has tokens. The final tokens of the model are simply never used (they've just been added so that the model has a weight matrix that's a better \"power of 2\" matrix - *i.e.* 50272 is divisible by 2**5 whereas 50265 is not divisible by 2 at all)",
"Your explanation that a byte-level tokenizer has no use as an unknown token is perfect @patrickvonplaten ! :+1: "
] | 1,653
| 1,661
| 1,653
|
CONTRIBUTOR
| null |
### System Info
```shell
main branch
```
### Who can help?
@SaulLu, @LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Example mismatch:
* OPT-* has `bos_token_id` set to 0 in `config.json` while `bos_token` is set to `</s>` (which corresponds to `bos_token_id` 2) in `tokenizer_config.json`. As a result, `model.config.bos_token_id != tokenizer.bos_token_id`. This might cause a subtle bug during generation, as `.generate` loads special token ids from `model.config` by default (I don't think it will cause a bug for OPT, but it might if `eos_token` is mismatched)
https://github.com/huggingface/transformers/blob/8f46ac98498dd47701971064617e00d7e723a98e/src/transformers/generation_utils.py#L1123-L1134
* Others models might have similar issues.
### Expected behavior
Ideally we would have a single source of truth for special token ids; if not, we might want to have some assertions to catch any mismatch, or document this potential pitfall. I understand the tokenizer and model are decoupled, so it might be hard to introduce such an assertion in the library, and I am not sure how realistic it is to have some sort of unit tests for mismatches across the model zoo.
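As a rough, plain-Python illustration of the kind of assertion suggested above (the helper name and dict layout are made up for this sketch; the ids mirror the OPT values quoted in the reproduction):

```python
def find_special_token_mismatches(model_config, tokenizer_config):
    """Return the special-token attributes whose ids disagree between a
    model config and a tokenizer config (both given as plain dicts).

    Attributes that are None/absent on either side are skipped, since
    optional tokens may legitimately be unset.
    """
    mismatches = []
    for name in ("bos_token_id", "eos_token_id", "pad_token_id"):
        model_id = model_config.get(name)
        tokenizer_id = tokenizer_config.get(name)
        if model_id is not None and tokenizer_id is not None and model_id != tokenizer_id:
            mismatches.append(name)
    return mismatches

# The OPT-* case from this issue: config.json sets bos_token_id to 0,
# while the tokenizer maps its bos_token "</s>" to id 2.
print(find_special_token_mismatches(
    {"bos_token_id": 0, "eos_token_id": 2, "pad_token_id": 1},
    {"bos_token_id": 2, "eos_token_id": 2, "pad_token_id": 1},
))  # ['bos_token_id']
```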
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17431/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17430
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17430/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17430/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17430/events
|
https://github.com/huggingface/transformers/issues/17430
| 1,248,935,332
|
I_kwDOCUB6oc5KcT2k
| 17,430
|
Logits size does not match vocabulary size when fine-tuning Hubert large with pyctcdecode
|
{
"login": "changyeli",
"id": 9058204,
"node_id": "MDQ6VXNlcjkwNTgyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9058204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/changyeli",
"html_url": "https://github.com/changyeli",
"followers_url": "https://api.github.com/users/changyeli/followers",
"following_url": "https://api.github.com/users/changyeli/following{/other_user}",
"gists_url": "https://api.github.com/users/changyeli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/changyeli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/changyeli/subscriptions",
"organizations_url": "https://api.github.com/users/changyeli/orgs",
"repos_url": "https://api.github.com/users/changyeli/repos",
"events_url": "https://api.github.com/users/changyeli/events{/privacy}",
"received_events_url": "https://api.github.com/users/changyeli/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @changyeli,\r\n\r\nI sadly cannot rerun your code to reproduce the error. Could you try to send a **minimal**, **fully reproducible** code snippet?\r\nE.g. I don't have access to `f\"../{ngram}gram_correct.arpa\"`",
"Hey @patrickvonplaten unfortunately it's a protected corpus so I can't upload the full file here. Will random/first 20 lines from this file work in this case?",
"Please don't upload the whole `corpus` - I can try to help, if it's just a some dummy examples. It would be amazing if you could try to make the reproducible code snippet to run as fast as possible and to be as short as possible",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,657
| 1,657
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.13.0-44-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hello, I have an issue very similar [to this one](https://github.com/huggingface/transformers/issues/15392) for Hubert large: I got this logits-size ValueError when fine-tuning the Hubert model with pyctcdecode.
I tried the [previous comment on that issue](https://github.com/huggingface/transformers/issues/15392#issuecomment-1024905216); setting both `eos_token` and `bos_token` to `None` did not work and returned the same error.
Here is the code snippet I used for a single audio file processing and debugging
```python
processor = Wav2Vec2Processor.from_pretrained(
"facebook/hubert-large-ls960-ft",
eos_token=None, bos_token=None)
tokenizer_vocab_dict = processor.tokenizer.get_vocab()
tokenizer_vocab_lowercase = {k.lower(): v for k,v in tokenizer_vocab_dict.items()}
with open("../vocab/vocab.json", "w", encoding="utf-8") as f:
f.write(json.dumps(tokenizer_vocab_lowercase, ensure_ascii=False))
processor.tokenizer = Wav2Vec2CTCTokenizer("../vocab/vocab.json")
processor.save_pretrained("../processor-lm")
ngram = 3
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2CTCTokenizer, Wav2Vec2ProcessorWithLM
cus_processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft")
vocab_dict = cus_processor.tokenizer.get_vocab()
sorted_vocab_dict = {
k.lower(): v for k, v in sorted(vocab_dict.items(), key=lambda item: item[1])}
decoder = build_ctcdecoder(
labels=list(sorted_vocab_dict.keys()),
kenlm_model_path=f"../{ngram}gram_correct.arpa",
)
processor = Wav2Vec2ProcessorWithLM(
feature_extractor=cus_processor.feature_extractor,
tokenizer=cus_processor.tokenizer,
decoder=decoder
)
model = AutoModelForCTC.from_pretrained(
"facebook/hubert-large-ls960-ft",
ctc_loss_reduction="mean",
pad_token_id=processor.tokenizer.pad_token_id)
model.freeze_feature_encoder()
# test run
s = "sentence from audio file"
audio_input, sample_rate = sf.read("audio_loc")
inputs = processor(
audio_input, sampling_rate=sample_rate, return_tensors="pt")
with processor.as_target_processor():
labels = np.asarray(processor(s, padding=True).input_ids)
print(f"target processor logit shape: {labels.shape}")
with torch.no_grad():
logits = model(**inputs).logits
print(f"logit shape returned by the model: {logits.shape}")
transcription = processor.batch_decode(logits.numpy()).text[0]
text = processor.decode(labels)
```
### Expected behavior
```python
with processor.as_target_processor():
labels = np.asarray(processor(s, padding=True).input_ids)
```
`labels` should be a vector of size 32; it can then be sent to the `map_to_result` and `compute_metrics` functions mentioned in [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) with some changes listed below.
```python
def map_to_result(batch):
"""
batchfy and map the hidden states into transcript
:param batch: _description_
:type batch: _type_
"""
#model.to("cuda")
inputs = processor(
batch["speech"],
sampling_rate=batch["sampling_rate"],
return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
batch["pred_str"] = processor.batch_decode(logits.numpy()).text[0]
batch["text"] = processor.decode(batch["labels"])
return batch
def compute_metrics(pred):
"""
batchfy and compute the WER metrics
:param pred: _description_
:type pred: _type_
:return: _description_
:rtype: _type_
"""
wer_metric = load_metric("wer")
transcription = processor.batch_decode(pred.predictions.numpy())
pred_str = transcription.text[0]
# we do not want to group tokens when computing the metrics
label_str = processor.decode(pred.label_ids.numpy())
wer = wer_metric.compute(predictions=pred_str, references=label_str)
return {"wer": wer}
```
But it returned a vector of size 45, so `text = processor.decode(np.asarray(labels))` raised the ValueError about the mismatched logits size. As a result, `map_to_result` and `compute_metrics` also cannot be run during the fine-tuning process. I was wondering if this needs a similar fix to the one mentioned in [this issue](https://github.com/huggingface/transformers/issues/15392). If not, do you have any suggestions or comments on solving this issue? Thanks in advance.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17430/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17429
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17429/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17429/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17429/events
|
https://github.com/huggingface/transformers/issues/17429
| 1,248,701,914
|
I_kwDOCUB6oc5Kba3a
| 17,429
|
raise ValueError("You have to specify either input_ids or inputs_embeds")
|
{
"login": "Ngheissari",
"id": 83084391,
"node_id": "MDQ6VXNlcjgzMDg0Mzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/83084391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ngheissari",
"html_url": "https://github.com/Ngheissari",
"followers_url": "https://api.github.com/users/Ngheissari/followers",
"following_url": "https://api.github.com/users/Ngheissari/following{/other_user}",
"gists_url": "https://api.github.com/users/Ngheissari/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ngheissari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ngheissari/subscriptions",
"organizations_url": "https://api.github.com/users/Ngheissari/orgs",
"repos_url": "https://api.github.com/users/Ngheissari/repos",
"events_url": "https://api.github.com/users/Ngheissari/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ngheissari/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
},
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thanks for reporting, I'll take a look at this",
"Hi, @Ngheissari \r\n\r\nFor `VisionEncoderDecoderModel`, we have to provide the following as inputs:\r\n- `pixel_values`\r\n- either `decoder_input_ids` or `labels`\r\n\r\nIn your code snippet, you only prepare `pixel_values`; that's why the error occurs. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Thanks for looking into this @ydshieh! Closing this issue."
] | 1,653
| 1,703
| 1,659
|
NONE
| null |
### System Info
Hi,
I keep getting this error:
raise ValueError("You have to specify either input_ids or inputs_embeds")
ValueError: You have to specify either input_ids or inputs_embeds
### Sample code :
```python
# Initializing a ViT & BERT style configuration
config_encoder = ViTConfig()
config_decoder = BertConfig()
config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
# Initializing a ViTBert model from a ViT & bert-base-uncased style configurations
model = VisionEncoderDecoderModel(config=config)
# Accessing the model configuration
config_encoder = model.config.encoder
config_decoder = model.config.decoder
# set decoder config to causal lm
config_decoder.is_decoder = True
config_decoder.add_cross_attention = True
# Saving the model, including its configuration
model.save_pretrained("my-model")
# loading model and config from pretrained folder
encoder_decoder_config = VisionEncoderDecoderConfig.from_pretrained("my-model")
model = VisionEncoderDecoderModel.from_pretrained("my-model", config=encoder_decoder_config)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224")
# load image from the IAM dataset
url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
```
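Per the answer in this thread, the forward call also needs decoder inputs. A hedged sketch of a call that satisfies the check, using a randomly initialized model as in the snippet above (the tensors are dummy placeholders, not real data):

```python
import torch
from transformers import (BertConfig, ViTConfig,
                          VisionEncoderDecoderConfig, VisionEncoderDecoderModel)

# Randomly initialized ViT+BERT model, as in the snippet above (no download needed)
config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(
    ViTConfig(), BertConfig())
model = VisionEncoderDecoderModel(config=config)
model.eval()

pixel_values = torch.randn(1, 3, 224, 224)       # dummy image batch
decoder_input_ids = torch.tensor([[101, 2023]])  # dummy decoder token ids

with torch.no_grad():
    # Passing decoder_input_ids (or labels) alongside pixel_values avoids the
    # "You have to specify either input_ids or inputs_embeds" error
    outputs = model(pixel_values=pixel_values,
                    decoder_input_ids=decoder_input_ids)
print(outputs.logits.shape)  # (batch, decoder_seq_len, decoder_vocab_size)
```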
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
code is above from the samples provided.
### Expected behavior
```shell
I get this error :
raise ValueError("You have to specify either input_ids or inputs_embeds")
ValueError: You have to specify either input_ids or inputs_embeds
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17429/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17428
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17428/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17428/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17428/events
|
https://github.com/huggingface/transformers/pull/17428
| 1,248,683,892
|
PR_kwDOCUB6oc44ebaf
| 17,428
|
Disk offload fix
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
This PR fixes the disk offloading for pretrained models (requires latest accelerate main branch) and adds a test. The test passes locally for GPT-2, GPT-J, OPT and T5 (the only models where it's activated right now).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17428/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17428",
"html_url": "https://github.com/huggingface/transformers/pull/17428",
"diff_url": "https://github.com/huggingface/transformers/pull/17428.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17428.patch",
"merged_at": 1654002978000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17427
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17427/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17427/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17427/events
|
https://github.com/huggingface/transformers/pull/17427
| 1,248,652,018
|
PR_kwDOCUB6oc44eUp3
| 17,427
|
Add TF ResNet model
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Swapping @sgugger and @LysandreJik as Lysandre's off and adding @Rocketknight1 for the TF side. ",
"I'm seeing a failure in `test_keras_fit` - it looks like the outputs are different depending on whether the labels are passed in the input dict or separately. That might actually have nothing to do with the labels and instead be caused by some random differences in the model outputs, though - maybe the `training` flag isn't being passed correctly so layers like dropout are still being run in training mode during eval time? Alternatively, maybe the tolerances we use for NLP models are just too strict for this one?",
"Please also incorporate the updates made in #17731 ",
"> I'm seeing a failure in test_keras_fit - it looks like the outputs are different depending on whether the labels are passed in the input dict or separately. That might actually have nothing to do with the labels and instead be caused by some random differences in the model outputs, though - maybe the training flag isn't being passed correctly so layers like dropout are still being run in training mode during eval time? Alternatively, maybe the tolerances we use for NLP models are just too strict for this one?\r\n\r\n@Rocketknight1 Digging into this - I believe this is because of the batch norm layers. Every time the layer is called it updates its `moving_mean` and `moving_variance` parameters. During training, the batches are normalised based on the batch stats, which will be exactly the same for both fit calls, because the data isn't shuffled. And we see this - the training loss for the two histories in `test_keras_fit` are exactly the same. However, at inference the batches are normalised based on the `moving_mean` and `moving_var` params. I'm not really sure how to address this. @ydshieh have we handled anything like this with tests before? \r\n\r\nWeirdly, the test was passing before. I'm guessing just a fluke? ",
"Ahhh, of course! I had thought that running a single iteration of training with a learning rate of 0 would leave the weights unchanged, but that isn't true for `BatchNorm`, because `BatchNorm` weights aren't updated by gradient descent. The test was broken and we only got away with it because NLP models generally don't use `BatchNorm`. I'll fix it tomorrow!",
"@sgugger Sorry - I didn't mean to re-request for you as you'd already approved! ",
"@NielsRogge @Rocketknight1 friendly nudge - let me know if there's any other changes or if I'm good to merge :) "
] | 1,653
| 1,656
| 1,656
|
COLLABORATOR
| null |
Adds a TensorFlow implementation of the ResNet model + associated tests.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17427/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17427",
"html_url": "https://github.com/huggingface/transformers/pull/17427",
"diff_url": "https://github.com/huggingface/transformers/pull/17427.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17427.patch",
"merged_at": 1656928756000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17426
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17426/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17426/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17426/events
|
https://github.com/huggingface/transformers/pull/17426
| 1,248,633,822
|
PR_kwDOCUB6oc44eQt6
| 17,426
|
TF: GPT-2 generation supports left-padding
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@patrickvonplaten made the changes we talked about 👍 ~~There are a couple of tests failing, but I don't think they are related to these changes (like torch tests)~~ ",
"Cool ! Looks good to me, if possible it'd be great if @sgugger could take a quick look here since the discussed logic of how to handle \"automatic creation\" of the `attention_mask` is a bit universal in Transformers.",
"@sgugger, in short we have the following situation for the automatic attention mask creation. \r\n\r\nWe **never** do this in the forward pass, but such a feature has long been implemented in PyTorch's generate: https://github.com/huggingface/transformers/blob/d156898f3b9b2c990e5963f5030a7143d57921a2/src/transformers/generation_utils.py#L490\r\n\r\nSo we won't be able to change that back in PyTorch (except for a major version). For now we do the following in PyTorch, which handles the attention_mask creation correctly in 99% of the cases:\r\n- If the user doesn't provide the attention_mask **and** the padding token is in the input_ids **and** the padding token is not equal to the EOS token, we create an attention_mask automatically\r\n\r\nThis doesn't cover the edge case where the user forwards both padding tokens and eos tokens and they are the same. I think that edge case is really an edge case, but overall we should nudge the user to **always** provide an attention_mask when doing generate in batches. \r\n\r\n=> As a conclusion, we've now copied the PT logic 1-to-1 to TF generate & added a warning. After this PR is merged we should also add this warning to PT IMO. \r\n\r\nDoes this sound good to you? ",
"Related: https://github.com/huggingface/transformers/pull/17444",
"Sounds good to me @patrickvonplaten !",
"Feel free to merge whenever @gante !",
"@gante King of TF `generate`!"
] | 1,653
| 1,654
| 1,654
|
MEMBER
| null |
# What does this PR do?
This PR does two things:
1. Enables left-padding with GPT-2 generation.
- It was working before only with XLA and was left as a TODO;
- 🚨 Naturally, tests had to be changed. In the batched tests, the shortest sequence is now different, as a consequence of the correct processing of the left-padding;
- Because we now have non-XLA left-padding, the XLA/non-XLA equivalence tests for GPT-2 now have two entries with different lengths;
- An additional test was added, to ensure the output is the same regardless of left-padding.
2. Fix minor issues and TODOs in TF generate. In particular, I'd highlight the following:
- All generated arrays are initialized with the `pad_token_id`, as opposed to with `0`. This was already present in `beam_search`, as one test caught it there;
- Corrects the number of iterations in greedy search and sample -- it was one iteration short, resulting in outputs of `max_length-1` when length was the constraint (also an argument in favor by contradiction: if this change resulted in too many iterations, we would be attempting to write out of bounds of the `TensorArray`, which isn't the case)
___________________
Locally run slow tests: GPT-2, T5, BART, RAG, Encoder_Decoder, Vision_Encoder_Decoder, Speech2Text
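As a plain-Python illustration of what left-padding means here (this helper is not the library implementation, just a sketch of how a left-padded batch relates to its attention mask):

```python
def left_pad(sequences, pad_token_id):
    """Left-pad a batch of token-id lists to equal length and build the
    matching attention mask (0 over padding, 1 over real tokens)."""
    max_len = max(len(s) for s in sequences)
    input_ids, attention_mask = [], []
    for s in sequences:
        n_pad = max_len - len(s)
        input_ids.append([pad_token_id] * n_pad + s)
        attention_mask.append([0] * n_pad + [1] * len(s))
    return input_ids, attention_mask

ids, mask = left_pad([[5, 6, 7], [8]], pad_token_id=0)
print(ids)   # [[5, 6, 7], [0, 0, 8]]
print(mask)  # [[1, 1, 1], [0, 0, 1]]
```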
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17426/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17426",
"html_url": "https://github.com/huggingface/transformers/pull/17426",
"diff_url": "https://github.com/huggingface/transformers/pull/17426.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17426.patch",
"merged_at": 1654002405000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17425
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17425/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17425/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17425/events
|
https://github.com/huggingface/transformers/issues/17425
| 1,248,631,378
|
I_kwDOCUB6oc5KbJpS
| 17,425
|
The tokenizer config for OPT-30B is missing a pad token
|
{
"login": "aninrusimha",
"id": 30733039,
"node_id": "MDQ6VXNlcjMwNzMzMDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/30733039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aninrusimha",
"html_url": "https://github.com/aninrusimha",
"followers_url": "https://api.github.com/users/aninrusimha/followers",
"following_url": "https://api.github.com/users/aninrusimha/following{/other_user}",
"gists_url": "https://api.github.com/users/aninrusimha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aninrusimha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aninrusimha/subscriptions",
"organizations_url": "https://api.github.com/users/aninrusimha/orgs",
"repos_url": "https://api.github.com/users/aninrusimha/repos",
"events_url": "https://api.github.com/users/aninrusimha/events{/privacy}",
"received_events_url": "https://api.github.com/users/aninrusimha/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"That's an important bug and completely on me! Thanks a mille for spotting it @aninrusimha !",
"Fixed it - https://huggingface.co/facebook/opt-30b/discussions/1"
] | 1,653
| 1,653
| 1,653
|
NONE
| null |
### System Info
```shell
Version 4.20.0.dev0, built from source
Issue is in https://huggingface.co/facebook/opt-30b/blob/main/tokenizer_config.json
```
### Who can help?
@patrickvonplaten has his name in the code :)
Discovered when testing the OPT models on various datasets.
https://huggingface.co/facebook/opt-30b/blob/main/tokenizer_config.json is missing a padding token
https://huggingface.co/facebook/opt-13b/blob/main/tokenizer_config.json looks to have the real config for opt-13b?
https://huggingface.co/facebook/opt-13b/blob/main/tokenizer_config.json
{"errors": "replace", "unk_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": false, "add_bos_token": true, "special_tokens_map_file": null, "name_or_path": "patrickvonplaten/opt-30b", "tokenizer_class": "GPT2Tokenizer"}
https://huggingface.co/facebook/opt-30b/blob/main/tokenizer_config.json
{"errors": "replace", "bos_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": false, "special_tokens_map_file": null, "name_or_path": "patrickvonplaten/opt_gpt2_tokenizer", "tokenizer_class": "GPT2Tokenizer"}
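The missing key is visible when the two configs are compared directly. A minimal check (the blobs below are abbreviated to the token-related keys from the configs quoted above):

```python
import json

# Abbreviated versions of the two tokenizer_config.json blobs quoted above,
# keeping only the token-related keys.
opt_13b = json.loads('{"pad_token": {"content": "<pad>"}, "eos_token": {"content": "</s>"}}')
opt_30b = json.loads('{"bos_token": {"content": "<s>"}, "eos_token": {"content": "</s>"}}')

assert "pad_token" in opt_13b       # the 13B config defines a pad token
assert "pad_token" not in opt_30b   # the 30B config does not -> padding fails
```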
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following code snippet
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from datasets import load_dataset
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="validation")
dataset = [s['text'] for s in dataset if s['text'] != '']
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-30b", use_fast=False)
encoded = tokenizer(dataset,
return_tensors="pt",
padding=True)
```
You will see an error because the tokenizer lacks a padding token.
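A common interim workaround (until the hosted config is fixed) is to fall back to the EOS token when no pad token is set. `DummyTokenizer` below is a hypothetical stand-in so the sketch runs offline; with `transformers`, the same check applies to the object returned by `AutoTokenizer.from_pretrained`:

```python
class DummyTokenizer:
    """Hypothetical stand-in for a tokenizer whose config lacks a pad token."""
    def __init__(self):
        self.eos_token = "</s>"
        self.pad_token = None

tokenizer = DummyTokenizer()
if tokenizer.pad_token is None:
    # Reuse the EOS token for padding; padded positions are masked out anyway.
    tokenizer.pad_token = tokenizer.eos_token

assert tokenizer.pad_token == "</s>"
```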
### Expected behavior
```shell
The OPT-30B tokenizer has a padding token and pads the input.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17425/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17425/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17424
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17424/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17424/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17424/events
|
https://github.com/huggingface/transformers/issues/17424
| 1,248,617,298
|
I_kwDOCUB6oc5KbGNS
| 17,424
|
Inconsistent behavior in generate when output_scores=True
|
{
"login": "shijie-wu",
"id": 2987758,
"node_id": "MDQ6VXNlcjI5ODc3NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2987758?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shijie-wu",
"html_url": "https://github.com/shijie-wu",
"followers_url": "https://api.github.com/users/shijie-wu/followers",
"following_url": "https://api.github.com/users/shijie-wu/following{/other_user}",
"gists_url": "https://api.github.com/users/shijie-wu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shijie-wu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shijie-wu/subscriptions",
"organizations_url": "https://api.github.com/users/shijie-wu/orgs",
"repos_url": "https://api.github.com/users/shijie-wu/repos",
"events_url": "https://api.github.com/users/shijie-wu/events{/privacy}",
"received_events_url": "https://api.github.com/users/shijie-wu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Great find @shijie-wu,\r\n\r\nWe've settled on outputting the processed scores since those are the ones that determine the next token, e.g. argmax and sample is taken on those scores. Given the name of the flag (`output_scores`), I think this makes the most sense. \r\nWill open a PR to fix `greedy_search` here. \r\n\r\nIt's a good point that people might need the \"raw scores\" though. I think it's sensible to output the output logits of the model in this case as this would be the most understandable & consistent across generation methods. E.g. every LM model outputs logits which is the \"rawest\" score, so I'd be fine with adding a `output_logits=True/False` flag for this. \r\nWhat do you think @patil-suraj @gante @shijie-wu ?",
"@patrickvonplaten regarding flag for the logits: on paper yes... but we are starting to get many boolean flags to control the output of internal variables (related issue: https://github.com/huggingface/transformers/issues/17016, where it is requested the output of past key values). I wonder whether there is a better way to collect and expose the internals of generate for advanced uses 🤔 ",
"For testing purpose, especially for PT/TF generation equivalence test, I think it would be better to be able to return the raw scores from the models --> so we can identify which parts get wrong if any test failure occur.\r\n(But I understand that we have a lot of flags in `generate` already.)",
"Having `output_logits=True/False` flag for raw logits sounds good. In terms of too many flags in `generate`, we could have something like `output_flags: Set[ModelInternal]=set([\"logits\", \"scores\"])`?",
"Just a general comment that may seem obvious to some but I feel like it's always good to restate common options when dealing with such issues (rampant too many options + enabling users to do powerful things),\r\nI don't intend to say that any idea should be applied, just those are my go to options when dealing with such issues, and might provide insights to you on how to deal with this ! \r\n\r\n#Idea number 1:\r\n - If you have too many arguments, usually some combinations do no make any sense. For instance here ( output_logits=True with output_scores=False, don't make any sense, you're not outputting scores so why `output_logits` value would be of any interest). Having invalid, bogus combinations is a great place for fusing two arguments into 1 that's an enum. For instance `output_scores: [\"none\", \"logits\", \"scores\"] (and keep False, None, True for BC) `. Now you can see that there's no way to express the previous bogus combination.\r\n \r\n #Idea number 2:\r\n - Grouping arguments is a good option too, since users are usually likely to touch more some arguments than others. Some users are really interested in looking at the scores, while some are much more interested in the generation options like `top_k` or `decoder_input_ids`. Having some form of groups makes things easier: `generate(input_ids, logits_options, model_options,return_options )`.It's super important to be sure that the groups are extremely clear (so users don't have to question where option X lives). Even better options for power users is exposing directly some objects like `LogitsProcessor` or `StoppingCriteria` (enables full freedom). \r\n \r\n #Idea number 3:\r\n \r\nIn general for power users wanting to access internals, I think, enabling tons of options to flag what needs to be outputted is just asking for general computing as parameters.\r\nExposing the internals seem like a better option.\r\nFor instance one could add a `LogitsProcessor` so see the raw models scores (and at each step at that !) and manually save them himself. It **is** a bit of work, but then the user is empowered to save exactly what he wants without relying on our code to enable his option.\r\n\r\n\r\n#Idea number 4:\r\n\r\nIt's OK to say no, more is not always better.",
"Thank you for sharing the options! Option 3 seems to be the fastest way to enable returning raw logits without any code change. However, I just go though the relevant path. It seems the user provided `logits_processor` is appended to an new instance of `LogitsProcessorList`. As a result, user cannot get the raw logits using the current implementation even with a custom `LogitsProcessor`. I might be missing something. \r\n\r\nIMO, callbacks like custom `LogitsProcessor` seems to be the best way to enable advance usage while keeping the main `generate` code clean."
] | 1,653
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
### System Info
main branch
### Who can help?
@patrickvonplaten, @Narsil, @gante
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In `generate` when `output_scores=True`, the behavior is inconsistent. In `greedy_search` mode, the scores are raw logits https://github.com/huggingface/transformers/blob/740a1574f1d95fb81f063bdda9f4c27abea7f04b/src/transformers/generation_utils.py#L1690-L1695
but in `sample` mode (and various beam search modes), the scores are processed logits https://github.com/huggingface/transformers/blob/740a1574f1d95fb81f063bdda9f4c27abea7f04b/src/transformers/generation_utils.py#L1945-L1954
### Expected behavior
In `generate` when `output_scores=True`, the returned scores should be consistent. It could either be raw logits or the processed logits. While for my usecase, I only need raw logits. There might be some usecases which require the processed logits. So there're multiple options:
1. Return raw logits when `output_scores=True`
2. Return processed logits when `output_scores=True`
3. Return processed logits when `output_scores=True`, and raw logits when `output_raw_scores=True`
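The callback approach suggested in the discussion (a custom `LogitsProcessor` that records the raw scores before any processing) can be sketched offline with a minimal stand-in. `ScoreRecorder` mirrors the `(input_ids, scores)` call signature of the real `LogitsProcessor` API but has no transformers dependency:

```python
class ScoreRecorder:
    """Minimal stand-in for a transformers LogitsProcessor: records the raw
    scores it receives at each generation step and returns them unchanged,
    so downstream processing (and generation itself) is unaffected."""
    def __init__(self):
        self.raw_scores = []

    def __call__(self, input_ids, scores):
        self.raw_scores.append(list(scores))
        return scores

recorder = ScoreRecorder()
step_scores = [0.1, -2.3, 4.2]
out = recorder(None, step_scores)

assert out is step_scores                        # scores pass through untouched
assert recorder.raw_scores == [[0.1, -2.3, 4.2]]  # raw copy captured per step
```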
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17424/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17423
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17423/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17423/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17423/events
|
https://github.com/huggingface/transformers/pull/17423
| 1,248,608,309
|
PR_kwDOCUB6oc44eLRr
| 17,423
|
Wav2vec2 finetuning shared file system
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,653
| 1,653
|
MEMBER
| null |
# What does this PR do?
Make wav2vec2 fine-tuning script more robust when dealing with multi-node / shared file systems (interesting edge case :sweat_smile: )
Fixes #17412
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17423/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17423/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17423",
"html_url": "https://github.com/huggingface/transformers/pull/17423",
"diff_url": "https://github.com/huggingface/transformers/pull/17423.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17423.patch",
"merged_at": 1653509083000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17422
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17422/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17422/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17422/events
|
https://github.com/huggingface/transformers/issues/17422
| 1,248,585,326
|
I_kwDOCUB6oc5Ka-Zu
| 17,422
|
XGLM onnx support
|
{
"login": "FrankHeijden",
"id": 22407829,
"node_id": "MDQ6VXNlcjIyNDA3ODI5",
"avatar_url": "https://avatars.githubusercontent.com/u/22407829?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FrankHeijden",
"html_url": "https://github.com/FrankHeijden",
"followers_url": "https://api.github.com/users/FrankHeijden/followers",
"following_url": "https://api.github.com/users/FrankHeijden/following{/other_user}",
"gists_url": "https://api.github.com/users/FrankHeijden/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FrankHeijden/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FrankHeijden/subscriptions",
"organizations_url": "https://api.github.com/users/FrankHeijden/orgs",
"repos_url": "https://api.github.com/users/FrankHeijden/repos",
"events_url": "https://api.github.com/users/FrankHeijden/events{/privacy}",
"received_events_url": "https://api.github.com/users/FrankHeijden/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @lewtun",
"Hey @FrankHeijden, indeed this architecture is not yet supported in the ONNX exporter. If you'd like to have a go at it yourself, you can follow [this guide](https://huggingface.co/docs/transformers/v4.19.2/en/serialization#exporting-a-model-for-an-unsupported-architecture) and use the `BartOnxxConfig` as a template to work from (I think this model should be similar)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,656
| 1,656
|
NONE
| null |
### Feature request
I am trying to export an XGLM-based model using the following command, but I receive an error saying that the ONNX exporter does not support XGLM-based models:
```
./venv/bin/python -m transformers.onnx --model=facebook/incoder-1B onnx/
```
Error:
```
KeyError: "xglm is not supported yet. Only ['albert', 'bart', 'beit', 'bert', 'big-bird', 'bigbird-pegasus', 'blenderbot', 'blenderbot-small', 'camembert', 'convbert', 'data2vec-text', 'deit', 'distilbert', 'electra', 'flaubert', 'gpt2', 'gptj', 'gpt-neo', 'ibert', 'layoutlm', 'marian', 'mbart', 'mobilebert', 'm2m-100', 'roberta', 'roformer', 't5', 'vit', 'xlm-roberta'] are supported. If you want to support xglm please propose a PR or open up an issue."
```
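Per the guide linked in the replies, adding support mostly amounts to declaring the model's dynamic input axes in an `OnnxConfig` subclass. The sketch below shows only that declaration shape; the input names and axes are assumptions modeled on decoder-style configs such as `BartOnnxConfig`, not a final API:

```python
from collections import OrderedDict

# Hypothetical dynamic-axis declaration that an XGLMOnnxConfig.inputs
# property might return: batch and sequence dimensions are left dynamic
# so the exported graph accepts variable-sized inputs.
common_inputs = OrderedDict(
    [
        ("input_ids", {0: "batch", 1: "sequence"}),
        ("attention_mask", {0: "batch", 1: "sequence"}),
    ]
)

assert list(common_inputs) == ["input_ids", "attention_mask"]
assert common_inputs["input_ids"][1] == "sequence"
```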
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17422/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17421
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17421/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17421/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17421/events
|
https://github.com/huggingface/transformers/pull/17421
| 1,248,490,030
|
PR_kwDOCUB6oc44dxXG
| 17,421
|
Add link to Hub PR docs in model cards
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,653
| 1,653
|
MEMBER
| null |
# What does this PR do?
This PR updates the model card guide to point to the new Hub PR feature. I couldn't find the docs on https://huggingface.co/docs/hub/main, so I decided to link to the raw file for now.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17421/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17421",
"html_url": "https://github.com/huggingface/transformers/pull/17421",
"diff_url": "https://github.com/huggingface/transformers/pull/17421.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17421.patch",
"merged_at": 1653503936000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17420
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17420/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17420/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17420/events
|
https://github.com/huggingface/transformers/pull/17420
| 1,248,482,482
|
PR_kwDOCUB6oc44dvnB
| 17,420
|
Add Gated-SiLU to T5
|
{
"login": "DanielHesslow",
"id": 9974388,
"node_id": "MDQ6VXNlcjk5NzQzODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9974388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DanielHesslow",
"html_url": "https://github.com/DanielHesslow",
"followers_url": "https://api.github.com/users/DanielHesslow/followers",
"following_url": "https://api.github.com/users/DanielHesslow/following{/other_user}",
"gists_url": "https://api.github.com/users/DanielHesslow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DanielHesslow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DanielHesslow/subscriptions",
"organizations_url": "https://api.github.com/users/DanielHesslow/orgs",
"repos_url": "https://api.github.com/users/DanielHesslow/repos",
"events_url": "https://api.github.com/users/DanielHesslow/events{/privacy}",
"received_events_url": "https://api.github.com/users/DanielHesslow/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hmm, yeah so I think I understand what you want to do, but I really don't understand why.\r\n\r\nfor back compatibility we want:\r\n- non-gated activation function if you specify `feed_forward_proj='relu'`\r\n- gated activation function if you specify `feed_forward_proj='gated-gelu'`\r\nBoth my current version and with your modifications this is handled correctly.\r\n\r\nFor new activation functions:\r\nmy version:\r\n- you get the activation function you specify in `feed_forward_proj`\r\n- you get a gated activation function if you specify `is_gated=True`\r\n\r\nyour version:\r\n- you always get a gated activation function if you specify `dense_act_fn`\r\n\r\nI don't see why it's better to 1. change the parameter name where you specify the activation function 2. not support new non-gated activation functions. \r\n",
"```\r\n- you always get a gated activation function if you specify dense_act_fn\r\n```\r\nno that was not the logic. The logic was to only get a \"gated\" feed forward when you specify `feed_forward_proj=\"gated-gelu\"`, but maybe that's too complicated then here actually. \r\n\r\nNew (better) idea maybe:\r\nHow about we just add a new `feed_forward_proj=\"gated-silu\"` and then you extract if the model should be gated or not with:\r\n```py\r\nis_gated = feed_forward_proj.split(\"-\")[0] == \"gated\"\r\n```\r\nand the activation function with:\r\n```py\r\nact_fn = feed_forward_proj.split(\"-\")[-1]\r\n```\r\n\r\nmaybe that's the cleanest actually",
"This way no dup code, no need for an additional config attribute and it's fairly clean",
"Alright, should be mostly in order now. I do agree that it's a bit cleaner to not introduce more new parameters.\r\n\r\nGetting a few (as far as I can tell) unrelated tests erroring out, with protobuff problems:\r\n\r\n```\r\nE TypeError: Descriptors cannot not be created directly.\r\nE If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.\r\nE If you cannot immediately regenerate your protos, some other possible workarounds are:\r\nE 1. Downgrade the protobuf package to 3.20.x or lower.\r\nE 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).\r\n```\r\n\r\nBut otherwise I guess things are ok. ",
"@DanielHesslow - great the solution works well for me - thanks for making the changes. Left 1 suggestion to improve the error message a bit, but besides that all good to me.",
"@DanielHesslow,\r\n\r\nThe failing CI tests are because of TF releasing a new protobuffer version which broke our CI. You could solve this by rebasing your branch to main (or just pull main into your branch).\r\n\r\nOnce your branch is up to date, the CI tests should work again :-)\r\n\r\n```\r\ngit pull origin main\r\ngit push\r\n```\r\n\r\nThanks a lot for your work here!",
"Okay, fixed the error message and rebased onto main, so all should be good now I believe. ",
"Merging now"
] | 1,653
| 1,654
| 1,654
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds a gated SiLU activation to the T5 model in order to support the recently released UL2 model: https://github.com/google-research/google-research/tree/master/ul2
@patrickvonplaten, @patil-suraj
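The parsing scheme settled on in the review discussion above, deriving both the gating flag and the activation name from a single `feed_forward_proj` string, can be sketched as follows (mirroring the `split("-")` logic quoted in the comments):

```python
def parse_feed_forward_proj(feed_forward_proj: str):
    """Derive (activation name, gated?) from a single config string,
    e.g. 'gated-silu' -> ('silu', True), 'relu' -> ('relu', False)."""
    parts = feed_forward_proj.split("-")
    is_gated = parts[0] == "gated"
    act_fn = parts[-1]
    return act_fn, is_gated

assert parse_feed_forward_proj("gated-silu") == ("silu", True)
assert parse_feed_forward_proj("gated-gelu") == ("gelu", True)
assert parse_feed_forward_proj("relu") == ("relu", False)
```

This keeps backward compatibility (`"relu"` and `"gated-gelu"` behave as before) without adding a second config attribute.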
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17420/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17420/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17420",
"html_url": "https://github.com/huggingface/transformers/pull/17420",
"diff_url": "https://github.com/huggingface/transformers/pull/17420.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17420.patch",
"merged_at": 1654246597000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17419
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17419/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17419/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17419/events
|
https://github.com/huggingface/transformers/pull/17419
| 1,248,298,739
|
PR_kwDOCUB6oc44dMIb
| 17,419
|
fix link in performance docs
|
{
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,653
| 1,653
|
MEMBER
| null |
This PR fixes the link from `perf_train_gpu_single` to `perf_train_gpu_one` in the `performance.mdx` doc.
Thanks for reporting @stas00!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17419/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17419",
"html_url": "https://github.com/huggingface/transformers/pull/17419",
"diff_url": "https://github.com/huggingface/transformers/pull/17419.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17419.patch",
"merged_at": 1653504883000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17418
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17418/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17418/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17418/events
|
https://github.com/huggingface/transformers/issues/17418
| 1,248,232,631
|
I_kwDOCUB6oc5KZoS3
| 17,418
|
DEIT -Some weights of the model checkpoint at facebook/deit-base-patch16-224 were not used when initializing DeiTMode
|
{
"login": "kanlions",
"id": 13251893,
"node_id": "MDQ6VXNlcjEzMjUxODkz",
"avatar_url": "https://avatars.githubusercontent.com/u/13251893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kanlions",
"html_url": "https://github.com/kanlions",
"followers_url": "https://api.github.com/users/kanlions/followers",
"following_url": "https://api.github.com/users/kanlions/following{/other_user}",
"gists_url": "https://api.github.com/users/kanlions/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kanlions/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kanlions/subscriptions",
"organizations_url": "https://api.github.com/users/kanlions/orgs",
"repos_url": "https://api.github.com/users/kanlions/repos",
"events_url": "https://api.github.com/users/kanlions/events{/privacy}",
"received_events_url": "https://api.github.com/users/kanlions/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nThat's because the checkpoint you are loading ([facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224)) needs to be loaded in a `ViTModel`/`ViTForImageClassification` rather than a `DeiTModel`. As explained in the [docs](https://huggingface.co/docs/transformers/model_doc/deit) of DeiT, the authors also trained more efficient ViT models:\r\n\r\n> The authors of DeiT also released more efficiently trained ViT models, which you can directly plug into [ViTModel](https://huggingface.co/docs/transformers/v4.19.2/en/model_doc/vit#transformers.ViTModel) or [ViTForImageClassification](https://huggingface.co/docs/transformers/v4.19.2/en/model_doc/vit#transformers.ViTForImageClassification). Techniques like data augmentation, optimization, and regularization were used in order to simulate training on a much larger dataset (while only using ImageNet-1k for pre-training). There are 4 variants available (in 3 different sizes): facebook/deit-tiny-patch16-224, facebook/deit-small-patch16-224, facebook/deit-base-patch16-224 and facebook/deit-base-patch16-384. Note that one should use [DeiTFeatureExtractor](https://huggingface.co/docs/transformers/v4.19.2/en/model_doc/deit#transformers.DeiTFeatureExtractor) in order to prepare images for the model.",
"@NielsRogge \r\n\r\nThank you very much for responding and I have got my mistake but still I have a confusion. Because Initially I started with 'facebook/deit-base-distilled-patch16-224' from the tutorial mentioned\r\nhttps://huggingface.co/docs/transformers/v4.19.2/en/model_doc/deit#transformers.DeiTFeatureExtractor\r\n\r\nStill I get this issue. I understand I am not doing any classification so I get some warning but still I am not able to comprehend this. Any direction will be appreciated. Thanks in advance\r\n\r\n\r\nSome weights of the model checkpoint at facebook/deit-base-distilled-patch16-224 were not used when initializing DeiTModel: ['distillation_classifier.bias', 'cls_classifier.weight', 'distillation_classifier.weight', 'cls_classifier.bias']\r\n- This IS expected if you are initializing DeiTModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing DeiTModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of DeiTModel were not initialized from the model checkpoint at facebook/deit-base-distilled-patch16-224 and are newly initialized: ['deit.pooler.dense.weight', 'deit.pooler.dense.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi,\r\n\r\nWhen initializing a `DeiTModel`, it won't include the heads on top. For that, you'll need to instantiate a `DeiTForImageClassification` or `DeiTForImageClassificationWithTeacher` model."
] | 1,653
| 1,656
| 1,656
|
NONE
| null |
### System Info
```shell
Nvidia 3080
Windows 11
```
### Who can help?
@NielsRogge
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I did the following:

```python
import torch
from transformers import DeiTFeatureExtractor, DeiTModel

feature_extractor = DeiTFeatureExtractor.from_pretrained("facebook/deit-base-patch16-224")
model = DeiTModel.from_pretrained("facebook/deit-base-patch16-224")

# im_ref is the input image (defined elsewhere)
inputs_ref = feature_extractor(images=im_ref, return_tensors="pt")
with torch.no_grad():
    outputs_ref = model(**inputs_ref)
last_hidden_states_ref = outputs_ref.last_hidden_state
```
The warning I get is this, followed by a big list of layers:
```
You are using a model of type vit to instantiate a model of type deit. This is not supported for all configurations of models and can yield errors.
Some weights of the model checkpoint at facebook/deit-base-patch16-224 were not used when initializing DeiTModel:
```
The images are simple scenes only.
### Expected behavior
```shell
Is this warning something I should take seriously? I only need the weights from a pretrained model.
The warning is 'You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference'
I am new to the Hugging Face community and I appreciate any help. I tried to follow the guidelines; please let me know if more information is needed.
```
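Based on the maintainer's replies above, the distinction between the two checkpoint families can be sketched as a small helper. The helper name is hypothetical (it is not a `transformers` API); it only encodes the rule from the reply: plain `facebook/deit-*-patch16-*` checkpoints are ViT-architecture models, while the `-distilled-` checkpoints need the DeiT classes.

```python
# Hypothetical helper (not part of transformers): pick the class family for a
# DeiT-series checkpoint. Only the "-distilled-" checkpoints carry the
# distillation token and need DeiTModel / DeiTForImageClassificationWithTeacher;
# the rest load cleanly into ViTModel / ViTForImageClassification.
def model_family_for_checkpoint(checkpoint_name: str) -> str:
    if "distilled" in checkpoint_name:
        return "DeiT"  # e.g. DeiTForImageClassificationWithTeacher
    return "ViT"  # e.g. ViTModel / ViTForImageClassification
```

Loading `facebook/deit-base-patch16-224` through the ViT classes avoids the "model of type vit to instantiate a model of type deit" warning entirely.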
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17418/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17417
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17417/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17417/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17417/events
|
https://github.com/huggingface/transformers/pull/17417
| 1,248,192,563
|
PR_kwDOCUB6oc44c16T
| 17,417
|
Use latest stable PyTorch/DeepSpeed for Push & Scheduled CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@stas00 \r\n\r\nRegarding `transformers-pytorch-gpu`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/7e46ec71371b5e705522821e741d7c0dac910859/docker/transformers-pytorch-gpu/Dockerfile#L17\r\n\r\nHere we only upgrade `torch`, but not `torchvision` and `torchaudio`. Those two are installed in a previous step\r\n\r\nhttps://github.com/huggingface/transformers/blob/7e46ec71371b5e705522821e741d7c0dac910859/docker/transformers-pytorch-gpu/Dockerfile#L12\r\n\r\nI am not sure if we need to upgrade all 3 modules at the same time. PyTorch installation instructions always install these 3 at the same time, so I guess yes ??\r\n",
"Yes, usually it's the easiest to always handle all 3 packages as a single package to avoid incompatibility conflicts down the road. \r\n\r\nI first tried to \"optimize\" and only install the other 2 when it was needed, but later I switched to always installing the 3 together. the other 2 are tiny compared to the main package.",
"> Yes, usually it's the easiest to always handle all 3 packages as a single package to avoid incompatibility conflicts down the road.\r\n> \r\n> I first tried to \"optimize\" and only install the other 2 when it was needed, but later I switched to always installing the 3 together. the other 2 are tiny compared to the main package.\r\n\r\nOK, I will try to get it done. It's a bit tricky, as here we are using an argument `$PYTORCH` so we can specify the torch version.\r\n(although we only use the default value to get the latest stable version for now.)",
"[Just FYI] an update:\r\n\r\nI am running the tests before merge. So far only a subset of tests is run. I got some issues\r\n\r\nPyTorch pipelines (single-gpu)]\r\nhttps://github.com/huggingface/transformers/runs/6728867682?check_suite_focus=true\r\n\r\nTorch CUDA test (multi GPUs)\r\nhttps://github.com/huggingface/transformers/runs/6728868779?check_suite_focus=true\r\n\r\nI will try to re-run them later, also will wait the scheduled CIs during this weekend in order to compare.\r\n",
"Feel free to also spawn dummy machines to help you out if it helps getting it merged quicker",
"I am going to merge now.\r\n\r\n----------------------------------\r\n\r\n@stas00 \r\n\r\n- `intel_extension_for_pytorch` is added\r\n - the version still uses the approach so far `$(python3 -c \"from torch import version; print(version.__version__.split('+')[0])\")` \r\n - I have to remove `pip uninstall -y` that was in your original patch. The whole line wasn't working.\r\n\r\nFor the versions, let's discuss in #17586\r\n\r\n----------------------------------\r\n@stas00\r\n\r\nI observed with the latest stable PyTorch/DeepSpeed, `test_can_resume_training_normal_zero2_fp16` takes quite long to run the first time: (On push CI, it will timeout)\r\n\r\nFirst Run\r\n```\r\n63.46s call tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_normal_zero2_fp16\r\n10.97s call tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_normal_zero3_fp16\r\n```\r\nSecond Run\r\n```\r\n18.17s call tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_normal_zero2_fp16\r\n11.01s call tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_normal_zero3_fp16\r\n```\r\n\r\nWith previous setting (PyTorch 1.9 + DeepSpeed Recompiled)\r\n```\r\n6.16s call tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_normal_zero2_fp16\r\n2.83s call tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_normal_zero3_fp16\r\n```\r\n",
"> I observed with the latest stable PyTorch/DeepSpeed, test_can_resume_training_normal_zero2_fp16 takes quite long to run the first time: (On push CI, it will timeout)\r\n\r\nThe very first deepspeed test using deepspeed JIT install will have the overhead of building deepspeed, which takes about 1min - depending on the hardware.\r\n\r\nThis doesn't happen if deepspeed was prebuilt before tests were run.\r\n\r\nIs it possible that this test happens to be the first one to run?",
"@stas00 \r\n\r\n(This is for the stable release of `DeepSpeed` + `PyTorch`)\r\n\r\nHere is the job run page\r\nhttps://github.com/huggingface/transformers/runs/6761224960?check_suite_focus=true\r\n\r\nThe tests are ran in the following order \r\n\r\n```\r\n# The following 3 are OK\r\ntests/deepspeed/test_deepspeed.py::CoreIntegrationDeepSpeed::test_init_zero3_fp16 \r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_errors_zero2_fp16 \r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_errors_zero3_fp16 \r\n\r\n# This one timed out\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_normal_zero2_fp16\r\n\r\n# This one is OK\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_normal_zero3_fp16 \r\n\r\ntests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_config_object \r\n...\r\n```\r\n\r\nDumb question: Is stable release of `DeepSpeed` == pre-built ?",
"You can see that it was indeed building deepspeed during that test's run, see: https://github.com/huggingface/transformers/runs/6761224960?check_suite_focus=true#step:6:334\r\n\r\nso need to either have a longer timeout or always prebuild deepspeed.\r\n\r\n> Dumb question: Is stable release of DeepSpeed == pre-built ?\r\n\r\nHope the following makes the whole situation loud and clear:\r\n\r\n### What is being built:\r\n\r\n* stable release install: `pip install deepspeed==0.6.5`\r\n* bleed/master install: `pip install git+https://github.com/microsoft/DeepSpeed` (or `git clone ...; pip install -e .`)\r\n\r\n### How is it being built:\r\n\r\n* `pip install deepspeed` JIT build - this will build deepspeed the first time it's used. pip just installs the source files here.\r\n* `DS_BUILD_CPU_ADAM=1 DS_BUILD_AIO=1 DS_BUILD_UTILS=1 pip install -e . --global-option=\"build_ext\" --global-option=\"-j8\"` - this is the prebuiding - so that the first time it's used it's already ready to use",
"Thank you @stas00 , thankfully I get better understanding of the terminology now!\r\n\r\nI will pre-build `DeepSpeed` so it will be indeed ready to be speedy!"
] | 1,653
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
(**As there are a few PRs waiting for review, feel free to postpone the review for this PR if things get confusing at this moment**)
Currently:
- scheduled CI uses latest stable PyTorch (OK) + nightly DeepSpeed (Not OK)
- push CI uses PyTorch 1.9 (Not OK) + latest stable DeepSpeed (OK)
This PR fixes it by using the latest stable PyTorch + DeepSpeed for both push and scheduled CIs
(**Let me run a dummy test before merge** 🙏 )
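The discussion around this PR converges on treating `torch`, `torchvision` and `torchaudio` as a single unit when upgrading. A hypothetical sketch of that pinning logic (the helper is an illustration, not code from this PR or its Dockerfiles):

```python
# Hypothetical helper (assumption, not part of this PR): build the pip command
# that upgrades the three PyTorch packages together, mirroring the advice to
# always install them as one unit to avoid incompatibility conflicts.
def torch_upgrade_command(torch_version: str = "") -> str:
    pin = f"=={torch_version}" if torch_version else ""
    # torchvision/torchaudio are left unpinned so pip resolves the releases
    # compatible with the requested torch version.
    return f"python3 -m pip install -U torch{pin} torchvision torchaudio"
```

With an empty version argument this reproduces the "latest stable" case the CI uses by default.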
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17417/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17417",
"html_url": "https://github.com/huggingface/transformers/pull/17417",
"diff_url": "https://github.com/huggingface/transformers/pull/17417.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17417.patch",
"merged_at": 1654595585000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17416
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17416/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17416/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17416/events
|
https://github.com/huggingface/transformers/pull/17416
| 1,248,104,062
|
PR_kwDOCUB6oc44ci7Z
| 17,416
|
Update AutoTokenizer.from_pretrained documentation examples
|
{
"login": "c00k1ez",
"id": 16941854,
"node_id": "MDQ6VXNlcjE2OTQxODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/16941854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/c00k1ez",
"html_url": "https://github.com/c00k1ez",
"followers_url": "https://api.github.com/users/c00k1ez/followers",
"following_url": "https://api.github.com/users/c00k1ez/following{/other_user}",
"gists_url": "https://api.github.com/users/c00k1ez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/c00k1ez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/c00k1ez/subscriptions",
"organizations_url": "https://api.github.com/users/c00k1ez/orgs",
"repos_url": "https://api.github.com/users/c00k1ez/repos",
"events_url": "https://api.github.com/users/c00k1ez/events{/privacy}",
"received_events_url": "https://api.github.com/users/c00k1ez/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17391
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@SaulLu
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17416/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17416/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17416",
"html_url": "https://github.com/huggingface/transformers/pull/17416",
"diff_url": "https://github.com/huggingface/transformers/pull/17416.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17416.patch",
"merged_at": 1653492950000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17415
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17415/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17415/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17415/events
|
https://github.com/huggingface/transformers/pull/17415
| 1,248,097,055
|
PR_kwDOCUB6oc44chZD
| 17,415
|
Fix a typo in `Trainer` (remove parenthesis)
|
{
"login": "mikcnt",
"id": 11929535,
"node_id": "MDQ6VXNlcjExOTI5NTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/11929535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mikcnt",
"html_url": "https://github.com/mikcnt",
"followers_url": "https://api.github.com/users/mikcnt/followers",
"following_url": "https://api.github.com/users/mikcnt/following{/other_user}",
"gists_url": "https://api.github.com/users/mikcnt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mikcnt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mikcnt/subscriptions",
"organizations_url": "https://api.github.com/users/mikcnt/orgs",
"repos_url": "https://api.github.com/users/mikcnt/repos",
"events_url": "https://api.github.com/users/mikcnt/events{/privacy}",
"received_events_url": "https://api.github.com/users/mikcnt/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17415/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17415",
"html_url": "https://github.com/huggingface/transformers/pull/17415",
"diff_url": "https://github.com/huggingface/transformers/pull/17415.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17415.patch",
"merged_at": 1653981692000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17414
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17414/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17414/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17414/events
|
https://github.com/huggingface/transformers/issues/17414
| 1,248,085,138
|
I_kwDOCUB6oc5KZESS
| 17,414
|
Different behaviours for `tf/flax` and `pt` on `generate(max_length = len of input id)`
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"@ArthurZucker small tip, it's nicer to show code + error message as follows:\r\n\r\n```py\r\n>>> from transformers import GPT2LMHeadModel, TFGPT2LMHeadModel, GPT2Tokenizer\r\n\r\n>>> pt_model = GPT2LMHeadModel.from_pretrained('gpt2')\r\n>>> tf_model = TFGPT2LMHeadModel.from_pretrained('gpt2')\r\n\r\n>>> tokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\n\r\n>>> text = \"Today is a beautiful day and I want to thank\"\r\n\r\n>>> pt_input_ids = tokenizer(text,return_tensors = 'pt').input_ids\r\n>>> tf_input_ids = tokenizer(text,return_tensors = 'tf').input_ids\r\n\r\n>>> pt_output = model.generate(pt_input_ids,max_length = 10)\r\n```\r\n```\r\nSetting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\r\nInput length of input_ids is 10, but ``max_length`` is set to 10. This can lead to unexpected behavior. You should consider increasing ``config.max_length`` or ``max_length``.\r\n```\r\n```py\r\n>>> tokenizer.batch_decode(pt_output,skip_special_tokens = True)\r\n```\r\n```\r\n[\"Today is a beautiful day and I want to thank everyone\"]\r\n```\r\n```py\r\n>>> tf_output = model.generate(tf_input_ids,max_length = 10)\r\n```\r\n```\r\nSetting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"transformers/src/transformers/generation_tf_utils.py\", line 569, in generate\r\n return self._generate(\r\n File \"transformers/src/transformers/generation_tf_utils.py\", line 1543, in _generate\r\n raise ValueError(\r\nValueError: The context has 10 number of tokens, but `max_length` is only 10. Please make sure that `max_length` is bigger than the number of tokens, by setting either `generate(max_length=...,...)` or `config.max_length = ...`\r\n```\r\n",
"cc @patil-suraj @gante @Narsil \r\n\r\nThink we should discuss on how to correct this. Currently we see a different behavior between PyTorch and Tensorflow.\r\n\r\n@patil-suraj @gante any ideas on how to go about it (without breaking backwards comp?)",
"Hi there! [Tensorflow](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_tf_utils.py#L1980) (and [FLAX](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_flax_utils.py#L439)) have to pre-allocate the output arrays with length=`max_length`, to be compatible with XLA. That implies that even if the exception above didn't exist, it would fail when writing into these arrays if we allow it to enter the generation loop.\r\n\r\nI see three options:\r\n1. Do nothing: annoying because of the different outputs;\r\n2. Upgrade the severity of PT from a warning to an exception: annoying because API/pipeline users might start getting exceptions where things were running before;\r\n3. On the three frameworks, return the first `max_length` tokens when the input is longer than `max_length`: not fully backward compatible, but probably the most correct exception-free behavior, as the request is for a output with length=`max_length`.\r\n\r\nWDYT?",
"Think we can follow 2. and go from warning to deprecation warning stating that it'll lead to an error in future versions (it is indeed a weird PT behavior). @patil-suraj what do you think?",
"> Upgrade the severity of PT from a warning to an exception: annoying because API/pipeline users might start getting exceptions where things were running before;\r\n\r\n`max_length` is impossible to use for pipeline users because there is no way they can know how many tokens are being used by their `string`. The option that's actually controllable is `max_new_tokens` since it means the same thing in both the case of `encoder-decoder` and `decoder-only`, AND it means something similar for all models (ByT5 does require more new tokens than GPT2 for same string length but at least it does not depend on the string users send).\r\n\r\nMaybe `pipeline` could absorb the cost if it makes more sense (personally I think `max_length` is always hard to deal with in `generate` but it has been here for a long time, so probably not going away. It's just that `max_length = max_new_tokens_length + input_ids.shape[0] if decoder_only else max_length = max_new_tokens + decoder_start_ids` and that knowledge is not trivial to understand)\r\n\r\nFor instance `gpt2` has max_length =50 which is quite small compared to its `512` capacity: https://huggingface.co/gpt2/blob/main/config.json. So enforcing an error is likely to trigger issues (making `max_length=512` is not ideal either tbh).\r\n\r\n\r\ntldr; I would like to propose option 4:\r\n\r\n4- Move away from `max_length` and towards `max_new_tokens` (`max_length` can take precedence because of BC). That makes arguments orthogonal in terms of catching exceptions. `max_new_tokens<=0` raises exception regardless of input. Allocation can still be done correctly. See `max_length` calculation from `input_ids` or `decoder_input_ids`.\r\nBasically we discard the entire class of error since now the arguments don't depend on each other like `max_length` and `input_ids` do. We can still raise a warning if `max_new_tokens + input_ids.shape[0] > max_model_length` and simply truncate the command.",
"Thanks a lot for all the important background info here @Narsil ! \r\n\r\nThink it'll be impossible to replace `max_length` with `max_new_tokens` and change the `max_length` default (it's been 20 since a long time in `configuration_utils.py` and changing any of this behavior would be a massive backward breaking behavior). \r\n\r\nHowever, I think what we could do is to give `max_new_tokens` priority over `max_length` if it's passed. Maybe we could then add a safeguard in pipelines that checks if `input_ids.shape[0] >= max_length` then `max_new_tokens = input_ids.shape[0] + 1` is passed with a warning and then `max_new_tokens` is being given priority over `max_length` (maybe we need to do this also slowly with warning and then change).\r\n\r\nHowever, I feel like this should be handled by the pipelines. What do you think @Narsil ?\r\n\r\nI very much don't like how it's currently done in PyTorch's `generate`, which silently adds +1 to `max_length` instead of throwing an error -> it should really throw an error IMO. So I'd like to escalate this to a nice error message sometime soon (the latest in v5) (option 2. of @gante)\r\n\r\nWhat do you think @gante @Narsil @patil-suraj ",
"> However, I feel like this should be handled by the pipelines. What do you think @Narsil ?\r\n\r\nAs I said, yes, `pipeline` can very much swallow the difference. It's really important for pipelines because users don't even have access to their current length. \r\n\r\n> Think it'll be impossible to replace max_length with max_new_tokens and change the max_length default (it's been 20 since a long time in configuration_utils.py and changing any of this behavior would be a massive backward breaking behavior).\r\n\r\nI know, we can't break BC; I just want to emphasize that some pipeline usage (and possibly `generate` usage too) would break instead of working if we upgrade that to a hard error.\r\n\r\nI was merely trying to point out another option which tries to avoid hard errors altogether.\r\n\r\nIf you're OK with moving to a hard error, I will make the necessary adjustments in `pipeline` (which might already be done, actually, I don't remember)",
"It's an important error, maybe let's jump on a quick offline call for it. @patil-suraj @gante - also curious to hear your thoughts on it here",
"> However, I think what we could do is to give `max_new_tokens` priority over `max_length` if it's passed.\r\n\r\nI like this idea! There would be no change for existing users, and new users could benefit from a clearer argument. However, we should be aware that `max_new_tokens` can be sneaky and result in more tokens than what the model can handle if `input_ids` is large enough -- we should build proper exceptions from the start.\r\n\r\nAs for the original issue (`input_ids.shape[0] >= max_length`), we can start with a warning and then move it into an exception, pointing at the `max_new_tokens` argument.",
"Just had a call with @Narsil . \r\n\r\nWe agreed on the following:\r\n\r\n- 1. TF, Flax, PT should have the same behavior regarding `max_length` and `max_new_tokens`\r\n- 2. `max_new_tokens` should be favored over `max_length`, meaning that if both are provided then `max_new_tokens` should be used\r\n- 3. PT should escalate the warning when `input_ids.shape[0] >= max_length` to a `\"This will lead to an error in a future version\"` warning\r\n- 4. generation pipelines should absorb the case when `input_ids.shape[0] >= max_length` by just passing `max\r\n\r\nWhat do you think @gante @patil-suraj - good for you?\r\n\r\nThis leads to a couple of PRs we should open:\r\n\r\n1. @gante could you maybe add `max_new_tokens` logic to Flax and TF generate?\r\n2. Escalate the warning in PT and add a deprecation warning -> @gante could you maybe also open a PR for this?\r\n3. I could take care of changing the docs to advertise `max_new_tokens` instead of `max_length`.\r\n4. @Narsil could you maybe make sure that the PT pipelines correctly absorb the use case when `input_ids.shape[0] >= max_length`?\r\n\r\nWould this work for you?",
"> @Narsil could you maybe make sure that the PT pipelines correctly absorb the use case when input_ids.shape[0] >= max_length?\r\n\r\nYes !",
"Sounds good 👍 ",
"Cool! I'll take care of the docs then :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi, is this issue still exists guys? ",
"With the recent merge of #18018, it should not exist anymore. 😉 ",
"Let's close the issue then,"
] | 1,653
| 1,665
| 1,665
|
COLLABORATOR
| null |
### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.13.0.dev20220521 (False)
- Tensorflow version (GPU?): 2.9.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.4.2 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@patrickvonplaten @LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Simply try the `generate` function in the PyTorch model and the `tf` model. I stumbled upon this issue while working on the OPT model tests. In PyTorch, even if the `max_length` argument is smaller than or equal to the length of the input sequence, a token is still generated.
The following example using GPT2 is quite clear. The source of the bug is from [generation_utils](https://github.com/huggingface/transformers/blob/56b35ce3ebeb1edb53ef98b3ad3557f79ce788e2/src/transformers/generation_utils.py#L1217) which only throws a warning in `pytorch` while throwing an error in both Flax and TF.
Not sure how this should be approached, but IMO we should probably adapt the `tf/flax` code to throw the same warning (if it generates a single token like in pytorch).
```python
>>> from transformers import GPT2LMHeadModel, TFGPT2LMHeadModel, GPT2Tokenizer
>>> pt_model = GPT2LMHeadModel.from_pretrained('gpt2')
>>> tf_model = TFGPT2LMHeadModel.from_pretrained('gpt2')
>>> tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
>>> text = "Today is a beautiful day and I want to thank"
>>> pt_input_ids = tokenizer(text,return_tensors = 'pt').input_ids
>>> tf_input_ids = tokenizer(text,return_tensors = 'tf').input_ids
>>> pt_output = pt_model.generate(pt_input_ids,max_length = 10)
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
Input length of input_ids is 10, but ``max_length`` is set to 10. This can lead to unexpected behavior. You should consider increasing ``config.max_length`` or ``max_length``.
>>> tokenizer.batch_decode(pt_output,skip_special_tokens = True)
["Today is a beautiful day and I want to thank everyone"]
>>> tf_output = tf_model.generate(tf_input_ids,max_length = 10)
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "transformers/src/transformers/generation_tf_utils.py", line 569, in generate
return self._generate(
File "transformers/src/transformers/generation_tf_utils.py", line 1543, in _generate
raise ValueError(
ValueError: The context has 10 number of tokens, but `max_length` is only 10. Please make sure that `max_length` is bigger than the number of tokens, by setting either `generate(max_length=...,...)` or `config.max_length = ...`
```
### Expected behavior
```shell
Input length of input_ids is 10, but ``max_length`` is set to 10. This can lead to unexpected behavior. You should consider increasing ``config.max_length`` or ``max_length``.
```
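The arithmetic behind the eventual fix discussed in the comments (favor `max_new_tokens` over `max_length`, and escalate the too-short `max_length` warning to an error) can be sketched in a few lines. This is only an illustrative helper, not the actual `generate` implementation; the name `resolve_max_length` is made up for this sketch:

```python
def resolve_max_length(input_length, max_length=None, max_new_tokens=None,
                       default_max_length=20):
    """Sketch of the argument-resolution rule discussed in this thread."""
    if max_new_tokens is not None:
        # Orthogonal to the prompt: valid whenever it is positive,
        # no matter how long `input_ids` is.
        if max_new_tokens <= 0:
            raise ValueError("`max_new_tokens` must be a positive integer.")
        return input_length + max_new_tokens
    max_length = default_max_length if max_length is None else max_length
    if input_length >= max_length:
        # PyTorch currently only warns here and generates one token anyway;
        # TF and Flax raise. The proposal is to raise consistently.
        raise ValueError(
            f"Input length ({input_length}) is >= `max_length` ({max_length}). "
            "Consider using `max_new_tokens` instead."
        )
    return max_length
```

With the 10-token GPT-2 prompt above, `max_length=10` would raise under this rule, while `max_new_tokens=1` resolves to an overall length of 11 with no dependence on the prompt length.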
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17414/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17413
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17413/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17413/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17413/events
|
https://github.com/huggingface/transformers/pull/17413
| 1,247,973,493
|
PR_kwDOCUB6oc44cGuj
| 17,413
|
[WIP]Add splinter test tokenization file
|
{
"login": "farahdian",
"id": 72678356,
"node_id": "MDQ6VXNlcjcyNjc4MzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/72678356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/farahdian",
"html_url": "https://github.com/farahdian",
"followers_url": "https://api.github.com/users/farahdian/followers",
"following_url": "https://api.github.com/users/farahdian/following{/other_user}",
"gists_url": "https://api.github.com/users/farahdian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/farahdian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/farahdian/subscriptions",
"organizations_url": "https://api.github.com/users/farahdian/orgs",
"repos_url": "https://api.github.com/users/farahdian/repos",
"events_url": "https://api.github.com/users/farahdian/events{/privacy}",
"received_events_url": "https://api.github.com/users/farahdian/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @farahdian , thank you very much for your contribution. I see that several tests have failed, is this still a work in progress? ",
"> Hi @farahdian , thank you very much for your contribution. I see that several tests have failed, is this still a work in progress?\r\n\r\nYup a work in progress, but will appreciate some guidance and will be inspecting the failed tests. Sorry for any confusion!",
"Ok top! I'd be happy to give you a hand. \r\n\r\nI think in your case it would be great if the title of the PR started with `[WIP]` and a first failing test that you can fix is style of the files by running the `make fixup` command locally (cf the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests)).\r\n\r\nFor the rest of the tests that fail, could you tell me more about what is obscure for you?",
"Many thanks. I've tried to run ```make fixup``` but this error keeps coming up:\r\n\r\n```\r\nmake : The term 'make' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.\r\nAt line:1 char:1\r\n+ make fixup\r\n+ ~~~~\r\n + CategoryInfo : ObjectNotFound: (make:String) [], CommandNotFoundException\r\n + FullyQualifiedErrorId : CommandNotFoundException\r\n```\r\n\r\nI've tried using ```python -m``` alongside but then it returns this:\r\n```\r\nusage: __main__.py [-h] {project,get} ...\r\n__main__.py: error: invalid choice: 'fixup' (choose from 'project', 'get')\r\n```\r\n\r\nThink this may be related to why I've been having some issues running tests locally... appreciate you having a look!",
"From your error message, what I understand is that you don't have the `make` command installed on your computer. \r\n\r\n([source](https://www.computerhope.com/unix/umake.htm))\r\n> On Unix-like operating systems, make is a utility for building and maintaining groups of programs (and other types of files) from source code.\r\n\r\nDepending on your OS, you'll probably have an alternative to install it. For example on [Windows you can use WSL.](https://github.com/Microsoft/WSL/issues/2073)",
"Hi @farahdian ,\r\n\r\nJust a quick message to see how you're doing with adding the tests on your end. :relaxed: ",
"> Hi @farahdian ,\r\n> \r\n> Just a quick message to see how you're doing with adding the tests on your end. ☺️\r\n\r\nThanks for checking up on me! \r\n\r\nI'm struggling a bit with this and I'm not sure how to proceed... I've been trying to use WSL and it seems like I'm coming close but this error appears.\r\n\r\n```make: *** No rule to make target 'fixup'. Stop.```",
"Hi @farahdian ,\r\n\r\nThanks for the update. What is your working directory when you run the `make fixup` command?",
"> Hi @farahdian ,\r\n> \r\n> Thanks for the update. What is your working directory when you run the `make fixup` command?\r\n\r\n```transformers/tests/splinter```",
"I see, you need to run it from the root repository `transformers/` where the [Makefile](https://github.com/huggingface/transformers/blob/main/Makefile) lives :blush: ",
"> I see, you need to run it from the root repository `transformers/` where the [Makefile](https://github.com/huggingface/transformers/blob/main/Makefile) lives 😊\r\n\r\nUnfortunately, when I run `make fixup` from the root repo this error then comes up:\r\n```\r\nmake: python: Command not found\r\nNo library .py files were modified\r\npython utils/custom_init_isort.py\r\nmake: python: Command not found\r\nmake: *** [Makefile:56: extra_style_checks] Error 127\r\n```",
"This error suggests that `python` isn't installed. I guess that you'll get the same error if you run `python --version` (which isn't specific to `transformers`)",
"Hi @farahdian, \r\n\r\nHow things are going for you? :slightly_smiling_face: ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Sorry for the delay, I think I reached beyond my capabilities with this one. Hope this can be passed on to another contributor",
"Thank you for keeping us informed"
] | 1,653
| 1,664
| 1,664
|
NONE
| null |
# What does this PR do?
This PR adds a test tokenization file for Splinter. It inherits from the BERT tokenizer.
Contributes fixes to issue https://github.com/huggingface/transformers/issues/16627
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
@SaulLu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17413/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17413",
"html_url": "https://github.com/huggingface/transformers/pull/17413",
"diff_url": "https://github.com/huggingface/transformers/pull/17413.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17413.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17412
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17412/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17412/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17412/events
|
https://github.com/huggingface/transformers/issues/17412
| 1,247,910,740
|
I_kwDOCUB6oc5KYZtU
| 17,412
|
wav2vec2 multi-node training problems in a shared file system
|
{
"login": "gullabi",
"id": 40303490,
"node_id": "MDQ6VXNlcjQwMzAzNDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/40303490?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gullabi",
"html_url": "https://github.com/gullabi",
"followers_url": "https://api.github.com/users/gullabi/followers",
"following_url": "https://api.github.com/users/gullabi/following{/other_user}",
"gists_url": "https://api.github.com/users/gullabi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gullabi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gullabi/subscriptions",
"organizations_url": "https://api.github.com/users/gullabi/orgs",
"repos_url": "https://api.github.com/users/gullabi/repos",
"events_url": "https://api.github.com/users/gullabi/events{/privacy}",
"received_events_url": "https://api.github.com/users/gullabi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"When instantiating a metric in a distributed setup, you need to specify it:\r\n```python\r\nload_metric(..., process_id=rank, num_process=total_world_size)\r\n```\r\nthis way there will be no collision on the files used to store the predictions and references used to compute the metric.\r\n\r\n(and this makes me think one should rename `num_process` to `num_processes` or something like that)",
"Thank you for the quick responses and fixes.\r\n\r\nAlthough you have closed the issue, I want to document the results. With the changes I did in the `load_metric` the processes errored out with the message:\r\n\r\n```\r\nValueError: Error in _init_writer: another metric instance is already using the local cache file at /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/default_experiment-8-rdv.lock. Please specify an experiment_id (currently: default_experiment) to avoid collision between distributed metric instances.\r\n```\r\nI understand that `experiment_id` is a string and it can be anything. I just made the change but I won't get the results for a while since I am in the job queue.",
"Should we maybe move this (still open) issue to datasets @gullabi @lhoestq ?",
"It might be a good idea but let me check the results of the experiment. If it is working as intended there is no reason to move it I think. But if we notice any problem, I will let you know and we can move the discussion to datasets. thanks!",
"So I am back, with more problems. Now might be a good idea to move the issue to datasets. I am still running into problems. I am continuing here but you will let me know if I need to do something.\r\n\r\nHere are the changes I did to the `run_speech_recognition_ctc.py`\r\n```\r\n process_id=int(os.environ[\"RANK\"])\r\n num_process=int(os.environ[\"WORLD_SIZE\"])\r\n eval_metrics = {metric: load_metric(metric,\r\n process_id=process_id,\r\n num_process=num_process,\r\n experiment_id=\"slurm\")\r\n for metric in data_args.eval_metrics}\r\n```\r\nFor the test I am executing the world size is 4, with 2 GPUs in 2 nodes. However the process is not finding the necessary lock files \r\n```\r\n File \"/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py\", line 841, in <module>\r\n main()\r\n File \"/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py\", line 792, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py\", line 1497, in train\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)\r\n File \"/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py\", line 1624, in _maybe_log_save_evaluate\r\n metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)\r\n File \"/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py\", line 2291, in evaluate\r\n metric_key_prefix=metric_key_prefix,\r\n File \"/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py\", line 2535, in evaluation_loop\r\n metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))\r\n File 
\"/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py\", line 742, in compute_metrics\r\n metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()}\r\n File \"/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py\", line 742, in <dictcomp>\r\n metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()}\r\n File \"/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py\", line 419, in compute\r\n self.add_batch(**inputs)\r\n File \"/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py\", line 465, in add_batch\r\n self._init_writer()\r\n File \"/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py\", line 552, in _init_writer\r\n self._check_rendez_vous() # wait for master to be ready and to let everyone go\r\n File \"/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py\", line 342, in _check_rendez_vous\r\n ) from None\r\nValueError: Expected to find locked file /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow.lock from process 3 but it doesn't exist.\r\n```\r\n\r\nWhen I look at the cache directory, I can see all the lock files in 
principle:\r\n```\r\n/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow\r\n/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow.lock\r\n/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-1.arrow\r\n/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-1.arrow.lock\r\n/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-2.arrow\r\n/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-2.arrow.lock\r\n/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-3.arrow\r\n/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-3.arrow.lock\r\n/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-rdv.lock\r\n```\r\nI appreciate any help regarding this, thanks! @lhoestq ",
"After fixing the parts related to `datasets` we ran into another problem with the `run_speech_recognition_ctc.py` script. Maybe a question for @patrickvonplaten sorry to bother you with a mention. \r\n\r\nWhen we increase the number of nodes, we are getting a `JSONDecodeError` for `preprocessor_config.json`. Checking the file, we see that it is fine. We suspect that the nodes are trying to read a file that is currently being written. In order to solve the problem we put `local=False` to the `main_process_first` contexts, but it didn't help. We are putting the snippet which we think is causing the problem, plus the logs in the end\r\n\r\n```\r\n with training_args.main_process_first(local=False, desc=\"dataset map preprocessing\"):\r\n vectorized_datasets = raw_datasets.map(\r\n prepare_dataset,\r\n remove_columns=next(iter(raw_datasets.values())).column_names,\r\n num_proc=num_workers,\r\n desc=\"preprocess datasets\",\r\n )\r\n\r\n def is_audio_in_length_range(length):\r\n return length > min_input_length and length < max_input_length\r\n\r\n # filter data that is shorter than min_input_length\r\n vectorized_datasets = vectorized_datasets.filter(\r\n is_audio_in_length_range,\r\n num_proc=num_workers,\r\n input_columns=[\"input_length\"],\r\n )\r\n```\r\nand the log (sorry it's a jumbled mess since many nodes are writing at the same time):\r\n```\r\n^Mpreprocess datasets #11: 0%| | 0/12020 [00:00<?, ?ex/s]ESC[AESC[AESC[AESC[AESC[AESC[AESC[AESC[AESC[AESC[AESC[AFeature extractor saved in wav2vec2-xls-r-300m-ca_new/preprocessor_con\r\nfig.json\r\nSpecial tokens file saved in wav2vec2-xls-r-300m-ca_new/special_tokens_map.json\r\nadded tokens file saved in wav2vec2-xls-r-300m-ca_new/added_tokens.json\r\ntokenizer config file saved in wav2vec2-xls-r-300m-ca_new/tokenizer_config.json\r\nSpecial tokens file saved in wav2vec2-xls-r-300m-ca_new/special_tokens_map.json\r\nConfiguration saved in 
wav2vec2-xls-r-300m-ca_new/config.json\r\n/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py:761: FutureWarning: Loading a processor from a feature extractor config that does not include a `processo\r\nr_class` attribute is deprecated and will be removed in v5. Please add the following attribute to your `preprocessor_config.json` file to suppress this warning: `'processor_class': 'Wav2Vec2P\r\nrocessor'`\r\n FutureWarning,\r\n/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py:761: FutureWarning: Loading a processor from a feature extractor config that does not include a `processo\r\nr_class` attribute is deprecated and will be removed in v5. Please add the following attribute to your `preprocessor_config.json` file to suppress this warning: `'processor_class': 'Wav2Vec2P\r\nrocessor'`\r\n FutureWarning,\r\n/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/models/wav2vec2/processing_wav2vec2.py:58: FutureWarning: Loading a tokenizer inside\r\n Wav2Vec2Processor from a config that does not include a `tokenizer_class` attribute is deprecated and will be removed in v5. Please add `'tokenizer_class': 'Wav2Vec2CTCTokenizer'` attribute to\r\n either your `config.json` or `tokenizer_config.json` file to suppress this warning: \r\n FutureWarning,\r\n/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/models/wav2vec2/processing_wav2vec2.py:58: FutureWarning: Loading a tokenizer inside\r\n Wav2Vec2Processor from a config that does not include a `tokenizer_class` attribute is deprecated and will be removed in v5. 
Please add `'tokenizer_class': 'Wav2Vec2CTCTokenizer'` attribute to\r\n either your `config.json` or `tokenizer_config.json` file to suppress this warning: \r\n FutureWarning,\r\nTraceback (most recent call last):\r\nTraceback (most recent call last):\r\n File \"/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/feature_extraction_utils.py\", line 454, in get_feature_extractor_dict\r\n File \"/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/feature_extraction_utils.py\", line 454, in get_feature_extractor_dict\r\nFeature extractor saved in wav2vec2-xls-r-300m-ca_new/preprocessor_config.json\r\n feature_extractor_dict = json.loads(text) \r\nfeature_extractor_dict = json.loads(text) File \"/apps/PYTHON/3.7.4/INTEL/lib/python3.7/json/__init__.py\", line 348, in loads\r\n\r\n File \"/apps/PYTHON/3.7.4/INTEL/lib/python3.7/json/__init__.py\", line 348, in loads\r\n return _default_decoder.decode(s)\r\n File \"/apps/PYTHON/3.7.4/INTEL/lib/python3.7/json/decoder.py\", line 337, in decode\r\n return _default_decoder.decode(s)\r\n File \"/apps/PYTHON/3.7.4/INTEL/lib/python3.7/json/decoder.py\", line 337, in decode\r\nadded tokens file saved in wav2vec2-xls-r-300m-ca_new/added_tokens.json\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n File \"/apps/PYTHON/3.7.4/INTEL/lib/python3.7/json/decoder.py\", line 355, in raw_decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n File \"/apps/PYTHON/3.7.4/INTEL/lib/python3.7/json/decoder.py\", line 355, in raw_decode\r\n raise JSONDecodeError(\"Expecting value\", s, err.value) from None\r\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py\", line 841, in <module>\r\n 
raise JSONDecodeError(\"Expecting value\", s, err.value) from None\r\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py\", line 841, in <module>\r\n^Mpreprocess datasets #5: 3%|▎ | 414/12021 [00:01<00:23, 495.45ex/s]ESC[AESC[AESC[AESC[AESC[A\r\n\r\n main()\r\n File \"/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py\", line 763, in main\r\n main()\r\n File \"/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py\", line 763, in main\r\n processor = Wav2Vec2Processor.from_pretrained(training_args.output_dir)\r\n File \"/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/models/wav2vec2/processing_wav2vec2.py\", line 61, in from_pretrained\r\n processor = Wav2Vec2Processor.from_pretrained(training_args.output_dir)\r\n File \"/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/models/wav2vec2/processing_wav2vec2.py\", line 61, in from_pretrained\r\n feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(pretrained_model_name_or_path, **kwargs)\r\n File \"/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/feature_extraction_utils.py\", line 308, in from_pretrained\r\n feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(pretrained_model_name_or_path, **kwargs)\r\n File \"/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/feature_extraction_utils.py\", line 308, in from_pretrained\r\n feature_extractor_dict, kwargs = cls.get_feature_extractor_dict(pretrained_model_name_or_path, **kwargs)\r\n File 
\"/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/feature_extraction_utils.py\", line 458, in get_feature_extractor_dict\r\n feature_extractor_dict, kwargs = cls.get_feature_extractor_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/feature_extraction_utils.py\", line 458, in get_feature_extractor_dict\r\n f\"It looks like the config file at '{resolved_feature_extractor_file}' is not a valid JSON file.\"\r\nOSError: It looks like the config file at 'wav2vec2-xls-r-300m-ca_new/preprocessor_config.json' is not a valid JSON file.\r\ntokenizer config file saved in wav2vec2-xls-r-300m-ca_new/tokenizer_config.json\r\n f\"It looks like the config file at '{resolved_feature_extractor_file}' is not a valid JSON file.\"\r\nOSError: It looks like the config file at 'wav2vec2-xls-r-300m-ca_new/preprocessor_config.json' is not a valid JSON file.\r\nFeature extractor saved in wav2vec2-xls-r-300m-ca_new/preprocessor_config.json\r\nConfiguration saved in wav2vec2-xls-r-300m-ca_new/config.json\r\nSpecial tokens file saved in wav2vec2-xls-r-300m-ca_new/special_tokens_map.json\r\ntokenizer config file saved in wav2vec2-xls-r-300m-ca_new/tokenizer_config.json\r\nadded tokens file saved in wav2vec2-xls-r-300m-ca_new/added_tokens.json\r\nloading feature extractor configuration file wav2vec2-xls-r-300m-ca_new/preprocessor_config.json\r\nloading configuration file wav2vec2-xls-r-300m-ca_new/config.json\r\n```",
"Hmm, the file should not be written - I guess what might happen here is that one node is much much faster then another node and already wants read a file that has not been created in the previous step yet. \r\n\r\nCould you do the following:\r\n- Simply run the script on one node to correctly write the tokenizer and feature processor config jsons\r\n- Then pass all those files to the script on multi-node so that the in multi-node no config is being written at all? ",
"Thanks Patrick. For the second step, do I need to skip the data preprocessing also? If so, is there a way to load the preprocessed files directly from the cache?\r\n\r\nA bit ashamed to say, but I have been looking at the script to try to skip the data preprocessing phase, and did not manage to do it. In the output I see warnings saying that the preprocessed data is being loaded from the cache (for various steps), but I see the tqdm progress bar and the process takes a long time, so I am assuming at least some nodes are doing the preprocessing. I don't know if this is the usual behavior. ",
"In principle I have avoided this error by putting a sleep after the preprocessing and before loading the feature processing config json; not the most elegant solution.\r\n\r\nAfterwards we ran into other unrelated problems, but in principle for now the script seems to be working. I am giving feedback just in case it is useful for someone else. ",
"Hey @gullabi,\r\n\r\nExactly: in the second step, you should be able to fully skip the dataset processing. In short, I'd strongly advise creating the whole tokenizer file before doing the multi-node run; then this `if-branch`: https://github.com/huggingface/transformers/blob/da27c4b398e607c161451f335367ad666de08497/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L500 should never be run, which means that no files should be written **and** read at the same time. \r\n\r\nHowever if `sleep` works for you, I think that's also totally fine. So far we've never had a case where multi-node + shared file-system was used for the examples, so this issue here serves as a great readme guide for future such use cases :-)",
"Thanks for the suggestion, and I am glad that the issue is useful for something. For the sake of completeness, I would like to give feedback. Although your suggestion sped up the preprocessing (I used the `--tokenizer_name_or_path` cli parameter), it does not directly solve the problem we were facing. \r\n\r\nThe problem was at the step where the feature extractor, config and tokenizer files are written, just before the training starts. The errors were pointing specifically to loading the feature extractor config right before the training. Realizing that the problem is simultaneous reading and writing by multiple nodes on the same file system, the problematic write was here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/da27c4b398e607c161451f335367ad666de08497/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L675\r\n\r\nSo I put a sleep right after this `if is_main_process` block. "
] | 1,653
| 1,656
| 1,653
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-4.18.0-147.8.1.el8_1.x86_64-x86_64-with-centos-8.1.1911-Core
- Python version: 3.7.4
- Huggingface_hub version: 0.1.2
- PyTorch version (GPU?): 1.9.0+rocm4.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: distributed
```
### Who can help?
@patrickvonplaten, @anton-l, @lhoestq
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior:
1. clone [this huggingface model](https://huggingface.co/PereLluis13/wav2vec2-xls-r-300m-ca-lm) in order to run the custom `run_speech_recognition_ctc.py` script
2. Setup the `venv` according to the requirements of the model file plus `datasets==2.0.0`, `transformers==4.18.0` and `torch==1.9.0`
3. Launch the runner in a distributed environment which has a shared file system for two nodes, preferably with SLURM. Example [here](https://gist.github.com/gullabi/3f66094caa8db1c1e615dd35bd67ec71)
The processes fail with the error:
```
Traceback (most recent call last):
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 816, in <module>
main()
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 560, in main
os.remove(vocab_file)
FileNotFoundError: [Errno 2] No such file or directory: 'wav2vec2-xls-r-300m-ca_dist/vocab.json'
```
Both nodes see the `vocab_file` and try to delete it at the same time; since they are on a shared file system, the second deletion fails and the training aborts.
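For illustration, a race-tolerant removal (our own sketch, not the script's actual code; the helper name is hypothetical) would simply tolerate the file already being gone when another node wins the race:

```python
import os
import tempfile

def remove_if_exists(path: str) -> bool:
    """Remove `path`, tolerating the race where another process
    (e.g. a different node on a shared file system) removed it first."""
    try:
        os.remove(path)
        return True
    except FileNotFoundError:
        return False

# Demo: the second removal no longer raises.
fd, path = tempfile.mkstemp()
os.close(fd)
print(remove_if_exists(path))  # True: this caller deleted the file
print(remove_if_exists(path))  # False: already gone, no exception
```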
As further information, when the `os.remove` is escaped via
```python
with training_args.main_process_first():
if training_args.overwrite_output_dir and os.path.isfile(vocab_file):
try:
os.remove(vocab_file)
except Exception as e:
logger.info(e)
```
the runner trains the model successfully until the first checkpoint. However, during the evaluation just before saving the checkpoint to the file system this error occurs:
```
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/
run_speech_recognition_ctc.py", line 819, in <module>
main()
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/
run_speech_recognition_ctc.py", line 770, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/
lib/python3.7/site-packages/transformers/trainer.py", line 1497, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch,
ignore_keys_for_eval)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/
lib/python3.7/site-packages/transformers/trainer.py", line 1624, in
_maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/
lib/python3.7/site-packages/transformers/trainer.py", line 2291, in evaluate
metric_key_prefix=metric_key_prefix,
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/
lib/python3.7/site-packages/transformers/trainer.py", line 2535, in
evaluation_loop
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds,
label_ids=all_labels))
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/
run_speech_recognition_ctc.py", line 720, in compute_metrics
metrics = {k: v.compute(predictions=pred_str, references=label_str) for k,
v in eval_metrics.items()}
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/
run_speech_recognition_ctc.py", line 720, in <dictcomp>
metrics = {k: v.compute(predictions=pred_str, references=label_str) for k,
v in eval_metrics.items()}
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/
lib/python3.7/site-packages/datasets/metric.py", line 444, in compute
os.remove(file_path)
FileNotFoundError: [Errno 2] No such file or directory: '/home/bsc88/bsc88474
/.cache/huggingface/metrics/wer/default/default_experiment-1-0.arrow'
```
This is presumably because the metric evaluation is done on all nodes; since they are on a shared file system, removal of the cached evaluation files creates a conflict.
In principle, the transformers library has a context manager, `main_process_first`, which when `local=False` is passed ensures that only the main node of the multi-node setup executes the tasks first. The metric calculation is not within this context, and we are not sure whether (apart from the `os.remove(vocab.json)` problem) the solution is to add the context [here](https://github.com/huggingface/transformers/blob/71e602725b90f63f404109bae9f72cbdf755477b/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L659).
Since this issue is also related to the filelock processes within the `datasets` library, we also included @lhoestq as a person who can help.
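To make the ordering that `main_process_first` provides concrete, here is a minimal stand-alone sketch (our own re-implementation for illustration only; the real context manager in `TrainingArguments` additionally distinguishes local vs. global main process and synchronizes via `torch.distributed.barrier`):

```python
from contextlib import contextmanager

@contextmanager
def main_process_first(is_main: bool, barrier):
    """The main process runs the body first; the others wait at the
    barrier and only then run the body (e.g. reading what main wrote)."""
    if not is_main:
        barrier()        # wait until the main process is done
    try:
        yield
    finally:
        if is_main:
            barrier()    # release the waiting processes

# Single-process demo with a recording fake barrier.
calls = []
with main_process_first(is_main=True, barrier=lambda: calls.append("barrier")):
    calls.append("work")
print(calls)  # ['work', 'barrier']: main works first, then releases the others
```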
### Expected behavior
```shell
The training process runs successfully without producing any errors concerning the write/delete process conflicts or any other error related to the file locks.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17412/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17411
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17411/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17411/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17411/events
|
https://github.com/huggingface/transformers/issues/17411
| 1,247,714,625
|
I_kwDOCUB6oc5KXp1B
| 17,411
|
can't run (TF)BartForConditionalGeneration.generation on GPU, it's speed very very very slow
|
{
"login": "TheHonestBob",
"id": 58240629,
"node_id": "MDQ6VXNlcjU4MjQwNjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/58240629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheHonestBob",
"html_url": "https://github.com/TheHonestBob",
"followers_url": "https://api.github.com/users/TheHonestBob/followers",
"following_url": "https://api.github.com/users/TheHonestBob/following{/other_user}",
"gists_url": "https://api.github.com/users/TheHonestBob/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheHonestBob/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheHonestBob/subscriptions",
"organizations_url": "https://api.github.com/users/TheHonestBob/orgs",
"repos_url": "https://api.github.com/users/TheHonestBob/repos",
"events_url": "https://api.github.com/users/TheHonestBob/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheHonestBob/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hey @TheHonestBob 👋 We are aware of the generate speed problems with TensorFlow, and will be releasing an update very soon. It is not a bug, but rather how Eager Execution works, sadly. Stay tuned 🤞 ",
"> Hey @TheHonestBob 👋 We are aware of the generate speed problems with TensorFlow, and will be releasing an update very soon. It is not a bug, but rather how Eager Execution works, sadly. Stay tuned 🤞\r\n\r\nthanks for your reply,what can I do before update to solve it.",
"My advice would be to go with the PyTorch version, if performance is a bottleneck to you and you need something working in the next ~2 weeks. If you can afford to wait ~2 weeks, then you can have a look at the guides we are writing up at the moment :) ",
"> My advice would be to go with the PyTorch version, if performance is a bottleneck to you and you need something working in the next ~2 weeks. If you can afford to wait ~2 weeks, then you can have a look at the guides we are writing up at the moment :)\r\n\r\nOK, I will continue to pay attention no it",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@TheHonestBob -- some of the functionality to speed up has been merged recently. If you try running a modified version of your script and you have a GPU, you will see it is much much faster.\r\n\r\n```python\r\nimport tensorflow as tf\r\nfrom transformers import BertTokenizer, TFBartForConditionalGeneration\r\ntokenizer = BertTokenizer.from_pretrained(\"fnlp/bart-base-chinese\")\r\nmodel = TFBartForConditionalGeneration.from_pretrained(\"fnlp/bart-base-chinese\", from_pt=True)\r\nbatch_data = ['北京是[MASK]的首都']*64\r\nxla_generate = tf.function(model.generate, jit_compile=True)\r\nfor i in range(20):\r\n batch_dict = tokenizer.batch_encode_plus(batch_data, return_token_type_ids=False, return_tensors='tf')\r\n result = xla_generate(**batch_dict, max_length=20, no_repeat_ngram_size=0, num_beams=1)\r\n result = tokenizer.batch_decode(result, skip_special_tokens=True)\r\n print(result)\r\n```\r\n\r\nTo enable bigger values of `num_beams`, which should increase the quality of the generation, [this PR](https://github.com/huggingface/transformers/pull/17857) has to be merged first :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@TheHonestBob The newest release (v4.21) fixes this issue. Check our recent blog post -- https://huggingface.co/blog/tf-xla-generate",
"> @TheHonestBob The newest release (v4.21) fixes this issue. Check our recent blog post -- https://huggingface.co/blog/tf-xla-generate\r\n\r\nthanks a lot, I'll try it"
] | 1,653
| 1,659
| 1,659
|
NONE
| null |
### System Info
```shell
transformers==4.19
tensorflow-gpu==2.3
torch==1.11
```
### Who can help?
@patil-suraj, @patrickvonplaten, @Narsil, @gante, @Rocketknight1
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import BertTokenizer, TFBartForConditionalGeneration

tokenizer = BertTokenizer.from_pretrained("fnlp/bart-base-chinese")
model = TFBartForConditionalGeneration.from_pretrained("fnlp/bart-base-chinese", from_pt=True)
batch_data = ['北京是[MASK]的首都'] * 64
for i in range(20):
    batch_dict = tokenizer.batch_encode_plus(batch_data, return_token_type_ids=False, return_tensors='tf')
    result = model.generate(**batch_dict, max_length=20)
    result = tokenizer.batch_decode(result, skip_special_tokens=True)
    print(result)
```
### Expected behavior
```shell
1. When I run `CUDA_VISIBLE_DEVICES=1 python test.py`, GPU memory is used, GPU utilization is almost zero, generate speed is very slow, and CPU utilization is 100%.
2. When I replace TFBartForConditionalGeneration with BartForConditionalGeneration, GPU memory is used, GPU utilization is almost zero, CPU utilization is greater than 100%, and speed is normal; this means generate runs on the CPU, not the GPU.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17411/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17411/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17410
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17410/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17410/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17410/events
|
https://github.com/huggingface/transformers/issues/17410
| 1,247,488,389
|
I_kwDOCUB6oc5KWymF
| 17,410
|
TFBartForConditionalGeneration.generate is very slow, but BartForConditionalGeneration.generate is not
|
{
"login": "TheHonestBob",
"id": 58240629,
"node_id": "MDQ6VXNlcjU4MjQwNjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/58240629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheHonestBob",
"html_url": "https://github.com/TheHonestBob",
"followers_url": "https://api.github.com/users/TheHonestBob/followers",
"following_url": "https://api.github.com/users/TheHonestBob/following{/other_user}",
"gists_url": "https://api.github.com/users/TheHonestBob/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheHonestBob/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheHonestBob/subscriptions",
"organizations_url": "https://api.github.com/users/TheHonestBob/orgs",
"repos_url": "https://api.github.com/users/TheHonestBob/repos",
"events_url": "https://api.github.com/users/TheHonestBob/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheHonestBob/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,653
| 1,653
| 1,653
|
NONE
| null |
When I use TFBartForConditionalGeneration.generate it is very slow, but BartForConditionalGeneration.generate is fine. Any suggestions for me? Thanks a lot.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17410/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17409
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17409/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17409/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17409/events
|
https://github.com/huggingface/transformers/pull/17409
| 1,247,389,890
|
PR_kwDOCUB6oc44aG5z
| 17,409
|
fix layoutlmv2 doc page
|
{
"login": "garyhlai",
"id": 22721482,
"node_id": "MDQ6VXNlcjIyNzIxNDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/22721482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/garyhlai",
"html_url": "https://github.com/garyhlai",
"followers_url": "https://api.github.com/users/garyhlai/followers",
"following_url": "https://api.github.com/users/garyhlai/following{/other_user}",
"gists_url": "https://api.github.com/users/garyhlai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/garyhlai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/garyhlai/subscriptions",
"organizations_url": "https://api.github.com/users/garyhlai/orgs",
"repos_url": "https://api.github.com/users/garyhlai/repos",
"events_url": "https://api.github.com/users/garyhlai/events{/privacy}",
"received_events_url": "https://api.github.com/users/garyhlai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17409). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,657
| 1,657
|
CONTRIBUTOR
| null |
# What does this PR do?
Quick follow-up PR to #17168 to address @NielsRogge's comments.
Clarify that the `torchvision` and `tesseract` packages are optional dependencies for LayoutLMv2.
## Who can review?
@sgugger @NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17409/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17409",
"html_url": "https://github.com/huggingface/transformers/pull/17409",
"diff_url": "https://github.com/huggingface/transformers/pull/17409.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17409.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17408
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17408/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17408/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17408/events
|
https://github.com/huggingface/transformers/pull/17408
| 1,247,161,883
|
PR_kwDOCUB6oc44ZVJj
| 17,408
|
Make check_init script more robust and clean inits
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,653
| 1,653
|
COLLABORATOR
| null |
# What does this PR do?
This PR was triggered by the inits deactivating the formatter (see `wav2_vec2_with_lm` below), which was a bit sad. In addition, `check_inits.py` was unable to parse `_import_structure` when it is initialized in one line; this PR addresses that and cleans many model inits.
HuBERT also contained some reference to Wav2Vec2FeatureExtractor, which should not be the case. This PR cleans that up as well.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17408/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17408",
"html_url": "https://github.com/huggingface/transformers/pull/17408",
"diff_url": "https://github.com/huggingface/transformers/pull/17408.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17408.patch",
"merged_at": 1653477837000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17407
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17407/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17407/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17407/events
|
https://github.com/huggingface/transformers/pull/17407
| 1,247,103,616
|
PR_kwDOCUB6oc44ZIrp
| 17,407
|
Fix README localizer script
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,653
| 1,653
|
COLLABORATOR
| null |
# What does this PR do?
Currently, the script that updates the localized READMEs does not remove duplicates. This PR fixes that.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17407/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17407",
"html_url": "https://github.com/huggingface/transformers/pull/17407",
"diff_url": "https://github.com/huggingface/transformers/pull/17407.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17407.patch",
"merged_at": 1653477821000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17406
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17406/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17406/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17406/events
|
https://github.com/huggingface/transformers/issues/17406
| 1,247,100,364
|
I_kwDOCUB6oc5KVT3M
| 17,406
|
NotebookProgressCallback doesn't work properly in Databricks notebooks; it should either be fixed or removed from Trainer automatically in a Databricks runtime
|
{
"login": "Ashvio",
"id": 11758693,
"node_id": "MDQ6VXNlcjExNzU4Njkz",
"avatar_url": "https://avatars.githubusercontent.com/u/11758693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ashvio",
"html_url": "https://github.com/Ashvio",
"followers_url": "https://api.github.com/users/Ashvio/followers",
"following_url": "https://api.github.com/users/Ashvio/following{/other_user}",
"gists_url": "https://api.github.com/users/Ashvio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ashvio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ashvio/subscriptions",
"organizations_url": "https://api.github.com/users/Ashvio/orgs",
"repos_url": "https://api.github.com/users/Ashvio/repos",
"events_url": "https://api.github.com/users/Ashvio/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ashvio/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Do you know which test we could do to easily detect if we are in a `Databrick` runtime?",
"> Do you know which test we could do to easily detect if we are in a `Databrick` runtime?\r\n\r\nFrom stackoverflow:\r\n\r\n```\r\ndef isRunningInDatabricks(): Boolean = \r\n sys.env.contains(\"DATABRICKS_RUNTIME_VERSION\")\r\n```\r\n\r\n",
"`sys.env` is not something that exists. Did you mean `os.environ`?",
"Could you try if the PR mentioned above does solve your problem?",
"Hi, just want to add, since I experienced the same issue in the past.\r\n\r\nI believe the reason why the HTML produced by `NotebookProgressCallback` is not displayed properly is because Databricks (with runtime version prior to 11.0) is not using IPython kernel to execute the Python code.\r\n\r\nThere was a guide how to set Databricks to use IPython kernel. And in my experience, when this is set, the evaluation result table produced by `NotebookProgressCallback` is displayed properly. \r\n\r\nhttps://web.archive.org/web/20211227103927/https://docs.microsoft.com/en-us/azure/databricks/notebooks/ipython-kernel\r\n\r\nMost users, however, I believe will use the default setting i.e. not overriding Databricks default setting to specifically use IPython kernel. Therefore, the changes in this [commit](https://github.com/huggingface/transformers/pull/17496) looks good.\r\n\r\nHowever, in the most recent Databricks runtime version 11.0, IPython kernel is now the default Python code execution engine. Therefore, the HTML produced by `NotebookProgressCallback` I believe can be displayed **correctly** by default in Databricks runtime 11.x\r\n\r\nhttps://docs.microsoft.com/en-us/azure/databricks/notebooks/ipython-kernel\r\n\r\nI suggest, in addition to checking if this environment variable `DATABRICKS_RUNTIME_VERSION` is set, we should also check the version. If the version is 11.x, I believe it is ok to use the `NotebookProgressCallback`. It can show the table HTML output properly in my test.\r\n\r\n\r\n\r\n",
"If you want to make a PR with the adjustment, I'll be happy to look at it!",
"> If you want to make a PR with the adjustment, I'll be happy to look at it!\r\n\r\nhttps://github.com/huggingface/transformers/pull/17988\r\n\r\nThanks"
] | 1,653
| 1,656
| 1,654
|
NONE
| null |
https://github.com/huggingface/transformers/blob/71e602725b90f63f404109bae9f72cbdf755477b/src/transformers/utils/notebook.py#L269
Hey all,
Using Databricks for training, the default Trainer behavior automatically adds NotebookProgressCallback for Databricks notebooks, but the Databricks UI currently does not display the output properly: it just prints `<IPython.core.display.HTML object>` over and over. This is likely an issue on Databricks' end, so I recommend not adding this callback when the transformers library can detect a Databricks runtime rather than a Jupyter/Google Colab notebook. I also think there should be an easier way to delete specific callbacks; it took a long time of tracing and reading source code to figure out the root cause. I am circumventing the issue for now by popping the callback from the trainer's callback handler list, but that is not a good pattern.
Thanks!
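A runtime check along the lines suggested in the comments above could look like the following (a sketch only; the environment variable name comes from the thread, and the version cutoff reflects the comment that Databricks runtime 11.x defaults to the IPython kernel):

```python
import os

def is_databricks() -> bool:
    """Databricks sets DATABRICKS_RUNTIME_VERSION in the environment."""
    return "DATABRICKS_RUNTIME_VERSION" in os.environ

def databricks_renders_html() -> bool:
    """Runtime 11.x and later defaults to the IPython kernel,
    so the callback's HTML output should display correctly there."""
    version = os.environ.get("DATABRICKS_RUNTIME_VERSION", "")
    major = version.split(".", 1)[0]
    return major.isdigit() and int(major) >= 11

# Demo with a simulated environment.
os.environ["DATABRICKS_RUNTIME_VERSION"] = "11.0"
print(is_databricks(), databricks_renders_html())  # True True
```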
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17406/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17405
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17405/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17405/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17405/events
|
https://github.com/huggingface/transformers/issues/17405
| 1,247,095,456
|
I_kwDOCUB6oc5KVSqg
| 17,405
|
Unable to instantiate ImageGPTFeatureExtractor
|
{
"login": "aleSuglia",
"id": 1479733,
"node_id": "MDQ6VXNlcjE0Nzk3MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aleSuglia",
"html_url": "https://github.com/aleSuglia",
"followers_url": "https://api.github.com/users/aleSuglia/followers",
"following_url": "https://api.github.com/users/aleSuglia/following{/other_user}",
"gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions",
"organizations_url": "https://api.github.com/users/aleSuglia/orgs",
"repos_url": "https://api.github.com/users/aleSuglia/repos",
"events_url": "https://api.github.com/users/aleSuglia/events{/privacy}",
"received_events_url": "https://api.github.com/users/aleSuglia/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This is because you do not have `Pillow` installed. We have fixed the error message in #17289 (will be in the next release but is not in 4.19.2) to let you know explicitly you should do `pip install pillow`.",
"Thanks @sgugger but unfortunately installing Pillow didn't fix it for me. While debugging, I can see that the method `feature_extractor_class_from_name` fails to return the correct class and returns `None` instead.\r\n\r\nI guess it's because in my local installation `FEATURE_EXTRACTOR_MAPPING_NAMES` is defined as follows:\r\n\r\n```python\r\nFEATURE_EXTRACTOR_MAPPING_NAMES = OrderedDict(\r\n [\r\n (\"beit\", \"BeitFeatureExtractor\"),\r\n (\"detr\", \"DetrFeatureExtractor\"),\r\n (\"deit\", \"DeiTFeatureExtractor\"),\r\n (\"hubert\", \"Wav2Vec2FeatureExtractor\"),\r\n (\"speech_to_text\", \"Speech2TextFeatureExtractor\"),\r\n (\"vit\", \"ViTFeatureExtractor\"),\r\n (\"wav2vec2\", \"Wav2Vec2FeatureExtractor\"),\r\n (\"detr\", \"DetrFeatureExtractor\"),\r\n (\"layoutlmv2\", \"LayoutLMv2FeatureExtractor\"),\r\n (\"clip\", \"CLIPFeatureExtractor\"),\r\n (\"flava\", \"FlavaFeatureExtractor\"),\r\n (\"perceiver\", \"PerceiverFeatureExtractor\"),\r\n (\"swin\", \"ViTFeatureExtractor\"),\r\n (\"vit_mae\", \"ViTFeatureExtractor\"),\r\n (\"segformer\", \"SegformerFeatureExtractor\"),\r\n (\"convnext\", \"ConvNextFeatureExtractor\"),\r\n (\"van\", \"ConvNextFeatureExtractor\"),\r\n (\"resnet\", \"ConvNextFeatureExtractor\"),\r\n (\"regnet\", \"ConvNextFeatureExtractor\"),\r\n (\"poolformer\", \"PoolFormerFeatureExtractor\"),\r\n (\"maskformer\", \"MaskFormerFeatureExtractor\"),\r\n (\"data2vec-audio\", \"Wav2Vec2FeatureExtractor\"),\r\n (\"data2vec-vision\", \"BeitFeatureExtractor\"),\r\n (\"dpt\", \"DPTFeatureExtractor\"),\r\n (\"glpn\", \"GLPNFeatureExtractor\"),\r\n (\"yolos\", \"YolosFeatureExtractor\"),\r\n ]\r\n)\r\n```\r\nSo it's clearly missing `imagegpt` and this justifies the fail. Any ideas? 
\r\n\r\nI verified that my version of Transformers is the following:\r\n\r\n```\r\nName: transformers\r\nVersion: 4.19.2\r\nSummary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow\r\nHome-page: https://github.com/huggingface/transformers\r\nAuthor: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)\r\nAuthor-email: transformers@huggingface.co\r\nLicense: Apache\r\nLocation: /Users/as2180/workspace/perceptual-simulator/.venv/lib/python3.9/site-packages\r\nRequires: filelock, huggingface-hub, numpy, packaging, pyyaml, regex, requests, tokenizers, tqdm\r\n```\r\n\r\nInstead if I install the version from Github, I can get the correct file. Is the wheel on pip up to date?",
"Hi,\r\n\r\nThis was fixed yesterday (#16871) so you need to install Transformers from source:\r\n\r\n```\r\npip install git+https://github.com/huggingface/transformers.git\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
### System Info
```shell
- `transformers` version: 4.19.2
- Platform: macOS-12.3.1-arm64-arm-64bit
- Python version: 3.9.9
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@NielsRogge @sgugger
Looks like the `ImageGPTFeatureExtractor` is among the feature extractors supported on the main branch, but I cannot resolve it with the latest version available on PyPI.
I can see it's available here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/feature_extraction_auto.py#L53
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code:
```
extractor = AutoFeatureExtractor.from_pretrained("openai/imagegpt-small")
```
Stacktrace:
```
feature_extractor_class = feature_extractor_class_from_name(feature_extractor_class)
> return feature_extractor_class.from_dict(config_dict, **kwargs)
E AttributeError: 'NoneType' object has no attribute 'from_dict'
```
### Expected behavior
```shell
Loads feature extractor without any exception.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17405/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17404
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17404/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17404/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17404/events
|
https://github.com/huggingface/transformers/issues/17404
| 1,247,067,318
|
I_kwDOCUB6oc5KVLy2
| 17,404
|
No 'Translation template'
|
{
"login": "mfumanelli",
"id": 53374883,
"node_id": "MDQ6VXNlcjUzMzc0ODgz",
"avatar_url": "https://avatars.githubusercontent.com/u/53374883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfumanelli",
"html_url": "https://github.com/mfumanelli",
"followers_url": "https://api.github.com/users/mfumanelli/followers",
"following_url": "https://api.github.com/users/mfumanelli/following{/other_user}",
"gists_url": "https://api.github.com/users/mfumanelli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfumanelli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfumanelli/subscriptions",
"organizations_url": "https://api.github.com/users/mfumanelli/orgs",
"repos_url": "https://api.github.com/users/mfumanelli/repos",
"events_url": "https://api.github.com/users/mfumanelli/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfumanelli/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @mfumanelli! Thank you for your issue 🤗. Would the translation be for Italian? @sgugger @osanseviero, would this be a step we would wish to pursue?\r\n\r\nIf that is the case, @mfumanelli, you can use the format in issues #15947 and #16824. Particularly:\r\n- Use informal language.\r\n- Use inclusive language; eg. not letting know any gender and rather talking about \"the people\".\r\n\r\n\r\n",
"Yes @omarespejel, in case it would be for Italian. Perfect for the two suggestions ☺️.\r\n\r\nThen I'll wait to see if it's something you want to pursue at the moment or not, thanks!",
"Yes! Let's do this for Italian and any other language the community would like to help translating :fire: \r\n\r\n@omarespejel, do we have a smaller list of documents that need to get translated that are higher priority? I think Get Started section + Tutorial section is the most important, but I might be wrong",
"@osanseviero I agree that we can start with `Get Started` + `Tutorial` sections. \r\n\r\n@mfumanelli then we can go ahead with opening an issue for Italian following #15947 and #16824 🚀. Thank you for opening this venue. Do you know Italian-speaking communities or individuals interested in collaborating?\r\n\r\nThese would be the first docs to translate:\r\n\r\n### Get Started section\r\n- [] [quicktour.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/quicktour.mdx). \r\n- [] [installation.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/installation.mdx). \r\n\r\n### Tutorial section\r\n- [] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/pipeline_tutorial.mdx) \r\n- [] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) \r\n- [] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/preprocessing.mdx) \r\n- [] [training.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/training.mdx) \r\n- [] [accelerate.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/accelerate.mdx) \r\n- [] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/model_sharing.mdx)\r\n- [] [multilingual.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/multilingual.mdx) ",
"If you agree as a first step I will shortly make a PR to add the file \"Translation template\" to the transformers/.github/ISSUE_TEMPLATE folder. So that anyone who wants to translate into other languages can follow the [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md), and select the \"Translation template\" from the \"New issue\" button 🤗.\r\n\r\nI will then open a dedicated issue for Italian so that other people can collaborate. I will ask among my contacts if others want to help with the translation 🌈 ",
"Thank you @mfumanelli! That's a great idea. We were actually reviewing the translation template in this issue #17028. It's a great time to discuss it further if we want to allow the community to translate into different languages. WDYT @sgugger?\r\n\r\nI think that in the meantime we can start with the Italian question. WDYT @mfumanelli? While we discuss in #17028.\r\n\r\nWow reaching your contacts would be amazing! Also, count on our support to reach for Italian-speaking contributors in our community 🤗\r\n\r\n",
"Perfect, we can proceed with the opening of the issue in Italian then. Thanks! 🤗🌈🌈\r\n\r\ncan I proceed or would you prefer to open it? I don't know if I open it if others can edit it over time to add the various contributors",
"Sure @mfumanelli you can open it! Thank you 🤗\r\n\r\nI can edit it if when necessary, no problem with that. Also please let me know if you have any doubt 🚀",
"Thanks @omarespejel! I created it, you can find it here [#17459](https://github.com/huggingface/transformers/issues/17459). If you agree we can close this issue 🌈",
"Agreed! Thank you @mfumanelli 🤗. On Monday I will send a tweet directed to the Italian-speaking community that wants to contribute to #17459 🇮🇹. I will let you know,"
] | 1,653
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
Hi!
Following the [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md), I realised that there is no 'Translation template' when I try to open an issue.
I would like to try to fix the bug myself by creating the template from: [#15947](https://github.com/huggingface/transformers/issues/15947) and [#16824](https://github.com/huggingface/transformers/issues/16824), unless the file with the template already exists and is simply in some other folder.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17404/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17403
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17403/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17403/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17403/events
|
https://github.com/huggingface/transformers/issues/17403
| 1,246,937,364
|
I_kwDOCUB6oc5KUsEU
| 17,403
|
Error in TAPAS Tokenizer
|
{
"login": "shivangibithel",
"id": 19774925,
"node_id": "MDQ6VXNlcjE5Nzc0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/19774925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shivangibithel",
"html_url": "https://github.com/shivangibithel",
"followers_url": "https://api.github.com/users/shivangibithel/followers",
"following_url": "https://api.github.com/users/shivangibithel/following{/other_user}",
"gists_url": "https://api.github.com/users/shivangibithel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shivangibithel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shivangibithel/subscriptions",
"organizations_url": "https://api.github.com/users/shivangibithel/orgs",
"repos_url": "https://api.github.com/users/shivangibithel/repos",
"events_url": "https://api.github.com/users/shivangibithel/events{/privacy}",
"received_events_url": "https://api.github.com/users/shivangibithel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hello @shivangibithel,\r\n\r\nYou are entering the following loop:\r\nhttps://github.com/huggingface/transformers/blob/71e602725b90f63f404109bae9f72cbdf755477b/src/transformers/tokenization_utils.py#L508-L514\r\n\r\nbecause `do_lower_case` is a parameter that needs to be filled in when initializing the tokenizer (or redefined in `from_pretrained`) but not in the `__call__` method. I'm also taking this opportunity to highlight that `do_basic_tokenize` is also a parameter that can't be changed in the `__call__` method.\r\n\r\nI hope this will help you! :relaxed: \r\n\r\nI also take this opportunity to share [some tips](https://github.com/huggingface/transformers/blob/main/ISSUES.md#the-github-issues) that would help us a lot to read your issue quickly :slightly_smiling_face: . ",
"Closing this issue due to inactivity :slightly_smiling_face: "
] | 1,653
| 1,655
| 1,655
|
NONE
| null |
### System Info
```shell
Google Colab CPU
```
### Who can help?
@SaulLu @NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
# TODO: should this be in the base class?
https://github.com/huggingface/transformers/blob/3f936df66287f557c6528912a9a68d7850913b9b/src/transformers/tokenization_utils.py#L507
The function above raises an error, even though this code path should not be reached.
```python
import pandas as pd
import requests
from bs4 import BeautifulSoup
from transformers import TapasTokenizer, TapasModel

tokenizer = TapasTokenizer.from_pretrained("google/tapas-base")
model = TapasModel.from_pretrained("google/tapas-base")

response = requests.get("https://en.wikipedia.org/wiki/2017_EFL_Trophy_Final")
soup = BeautifulSoup(response.text, "html.parser")
indiatable = soup.find("table", {"class": "wikitable"})
if indiatable:
    df = pd.read_html(str(indiatable))
    df = pd.DataFrame(df[0])
    df = df.astype(str, errors="ignore")
    queries = ["2017 EFL Trophy Final"]
    # This call fails: do_lower_case / do_basic_tokenize are not accepted
    # by __call__ (they are tokenizer constructor arguments).
    inputs = tokenizer(table=df, queries=queries, do_lower_case=False,
                       do_basic_tokenize=False, padding="max_length",
                       return_tensors="pt", truncation=True)
```

### Expected behavior
```shell
This attribute is set false explicitly, thus should not be called
```
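For reference, a hedged sketch of the corrected usage, following the maintainer's reply above: the two flags are passed to `from_pretrained` instead of the call. The load itself requires `transformers` (plus `torch` for `return_tensors="pt"`), so it is wrapped in a function rather than executed here.

```python
def build_tapas_inputs(table, queries):
    # Sketch only: tokenizer options that cannot be changed per call
    # are supplied when the tokenizer is created, not in __call__.
    from transformers import TapasTokenizer
    tokenizer = TapasTokenizer.from_pretrained(
        "google/tapas-base", do_lower_case=False, do_basic_tokenize=False
    )
    return tokenizer(
        table=table,
        queries=queries,
        padding="max_length",
        return_tensors="pt",
        truncation=True,
    )
```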
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17403/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17402
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17402/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17402/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17402/events
|
https://github.com/huggingface/transformers/issues/17402
| 1,246,924,606
|
I_kwDOCUB6oc5KUo8-
| 17,402
|
illegal hardware instruction
|
{
"login": "Jasperty",
"id": 37020799,
"node_id": "MDQ6VXNlcjM3MDIwNzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/37020799?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jasperty",
"html_url": "https://github.com/Jasperty",
"followers_url": "https://api.github.com/users/Jasperty/followers",
"following_url": "https://api.github.com/users/Jasperty/following{/other_user}",
"gists_url": "https://api.github.com/users/Jasperty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jasperty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jasperty/subscriptions",
"organizations_url": "https://api.github.com/users/Jasperty/orgs",
"repos_url": "https://api.github.com/users/Jasperty/repos",
"events_url": "https://api.github.com/users/Jasperty/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jasperty/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I think this is due to you installation of tensorflow and not necessarily the library . \r\nDid you try installing with \r\n```\r\nconda install -c apple tensorflow-deps\r\npython -m pip install tensorflow-macos\r\npython -m pip install tensorflow-metal\r\n```\r\n\r\nAlso it is recommended to install using [MiniForge](https://github.com/conda-forge/miniforge#miniforge3)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I'm having the same issue. It happened with the same code recently after installing Big Sur.",
"I am still getting this in macOS. "
] | 1,653
| 1,695
| 1,656
|
NONE
| null |
### System Info
```shell
I use a MacBook Pro and installed transformers with pip, but I get this error:
>>> from transformers import DistilBertConfig
zsh: illegal hardware instruction  python
Could you help me?
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
zsh: illegal hardware instruction python
### Expected behavior
```shell
zsh: illegal hardware instruction python
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17402/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17401
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17401/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17401/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17401/events
|
https://github.com/huggingface/transformers/pull/17401
| 1,246,917,965
|
PR_kwDOCUB6oc44Yg0V
| 17,401
|
Add test for new model parallelism features
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,653
| 1,653
|
COLLABORATOR
| null |
# What does this PR do?
This PR adds common tests for the new model parallelism/CPU offload features. Those are activated for models having a `_no_split_modules` attribute, so for now GPT-2, GPT-J, OPT and T5. The tests are only run on GPU and multi-GPU (so the CI won't catch any failure on the PR) but they all pass locally for me.
In passing I hit two blockers which this PR fixes:
- the ability to pass `max_memory` directly into `from_pretrained` to limit the memory used in `device_map="auto"`
- the CPU offload wasn't working with T5 because of some `device` taken from the model parameters instead of the input.
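A hedged sketch of the `max_memory` usage this PR enables (model name and memory budgets are illustrative assumptions; the actual load requires `transformers` + `accelerate` and is wrapped in a function rather than executed):

```python
# Per-device memory budgets for device_map="auto"; keys are GPU indices
# or "cpu", values are human-readable sizes.
MAX_MEMORY = {0: "10GiB", "cpu": "30GiB"}

def load_with_offload(model_name: str = "t5-large"):
    # Sketch only: max_memory is now forwarded by from_pretrained to the
    # automatic device-map computation, capping usage per device.
    from transformers import AutoModelForSeq2SeqLM
    return AutoModelForSeq2SeqLM.from_pretrained(
        model_name,
        device_map="auto",      # dispatch layers across GPU 0 and CPU
        max_memory=MAX_MEMORY,  # limit memory used on each device
    )
```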
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17401/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17401",
"html_url": "https://github.com/huggingface/transformers/pull/17401",
"diff_url": "https://github.com/huggingface/transformers/pull/17401.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17401.patch",
"merged_at": 1653490287000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17400
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17400/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17400/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17400/events
|
https://github.com/huggingface/transformers/pull/17400
| 1,246,916,386
|
PR_kwDOCUB6oc44YgeS
| 17,400
|
Bump tensorflow from 2.8.0 to 2.8.1 in /examples/research_projects/decision_transformer
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
Bumps [tensorflow](https://github.com/tensorflow/tensorflow) from 2.8.0 to 2.8.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/tensorflow/tensorflow/releases">tensorflow's releases</a>.</em></p>
<blockquote>
<h2>TensorFlow 2.8.1</h2>
<h1>Release 2.8.1</h1>
<p>This releases introduces several vulnerability fixes:</p>
<ul>
<li>Fixes a code injection in <code>saved_model_cli</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29216">CVE-2022-29216</a>)</li>
<li>Fixes a missing validation which causes <code>TensorSummaryV2</code> to crash (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29193">CVE-2022-29193</a>)</li>
<li>Fixes a missing validation which crashes <code>QuantizeAndDequantizeV4Grad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29192">CVE-2022-29192</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>DeleteSessionTensor</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29194">CVE-2022-29194</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>GetSessionTensor</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29191">CVE-2022-29191</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>StagePeek</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29195">CVE-2022-29195</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>UnsortedSegmentJoin</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29197">CVE-2022-29197</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>LoadAndRemapMatrix</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29199">CVE-2022-29199</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>SparseTensorToCSRSparseMatrix</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29198">CVE-2022-29198</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>LSTMBlockCell</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29200">CVE-2022-29200</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>Conv3DBackpropFilterV2</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29196">CVE-2022-29196</a>)</li>
<li>Fixes a <code>CHECK</code> failure in depthwise ops via overflows (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41197">CVE-2021-41197</a>)</li>
<li>Fixes issues arising from undefined behavior stemming from users supplying invalid resource handles (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29207">CVE-2022-29207</a>)</li>
<li>Fixes a segfault due to missing support for quantized types (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29205">CVE-2022-29205</a>)</li>
<li>Fixes a missing validation which results in undefined behavior in <code>SparseTensorDenseAdd</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29206">CVE-2022-29206</a>)</li>
<li>Fixes a missing validation which results in undefined behavior in <code>QuantizedConv2D</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29201">CVE-2022-29201</a>)</li>
<li>Fixes an integer overflow in <code>SpaceToBatchND</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29203">CVE-2022-29203</a>)</li>
<li>Fixes a segfault and OOB write due to incomplete validation in <code>EditDistance</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29208">CVE-2022-29208</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>Conv3DBackpropFilterV2</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29204">CVE-2022-29204</a>)</li>
<li>Fixes a denial of service in <code>tf.ragged.constant</code> due to lack of validation (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29202">CVE-2022-29202</a>)</li>
<li>Fixes a segfault when <code>tf.histogram_fixed_width</code> is called with NaN values (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29211">CVE-2022-29211</a>)</li>
<li>Fixes a core dump when loading TFLite models with quantization (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29212">CVE-2022-29212</a>)</li>
<li>Fixes crashes stemming from incomplete validation in signal ops (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29213">CVE-2022-29213</a>)</li>
<li>Fixes a type confusion leading to <code>CHECK</code>-failure based denial of service (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29209">CVE-2022-29209</a>)</li>
<li>Fixes a heap buffer overflow due to incorrect hash function (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29210">CVE-2022-29210</a>)</li>
<li>Updates <code>curl</code> to <code>7.83.1</code> to handle (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-22576">CVE-2022-22576</a>, (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27774">CVE-2022-27774</a>, (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27775">CVE-2022-27775</a>, (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27776">CVE-2022-27776</a>, (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27778">CVE-2022-27778</a>, (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27779">CVE-2022-27779</a>, (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27780">CVE-2022-27780</a>, (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27781">CVE-2022-27781</a>, (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-27782">CVE-2022-27782</a> and (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2022-30115">CVE-2022-30115</a></li>
<li>Updates <code>zlib</code> to <code>1.2.12</code> after <code>1.2.11</code> was pulled due to <a href="https://www.openwall.com/lists/oss-security/2022/03/28/1">security issue</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md">tensorflow's changelog</a>.</em></p>
<blockquote>
<h1>Release 2.8.1</h1>
<p>This release introduces several vulnerability fixes:</p>
<ul>
<li>Fixes a code injection in <code>saved_model_cli</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29216">CVE-2022-29216</a>)</li>
<li>Fixes a missing validation which causes <code>TensorSummaryV2</code> to crash (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29193">CVE-2022-29193</a>)</li>
<li>Fixes a missing validation which crashes <code>QuantizeAndDequantizeV4Grad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29192">CVE-2022-29192</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>DeleteSessionTensor</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29194">CVE-2022-29194</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>GetSessionTensor</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29191">CVE-2022-29191</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>StagePeek</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29195">CVE-2022-29195</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>UnsortedSegmentJoin</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29197">CVE-2022-29197</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>LoadAndRemapMatrix</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29199">CVE-2022-29199</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>SparseTensorToCSRSparseMatrix</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29198">CVE-2022-29198</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>LSTMBlockCell</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29200">CVE-2022-29200</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>Conv3DBackpropFilterV2</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29196">CVE-2022-29196</a>)</li>
<li>Fixes a <code>CHECK</code> failure in depthwise ops via overflows (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41197">CVE-2021-41197</a>)</li>
<li>Fixes issues arising from undefined behavior stemming from users supplying invalid resource handles (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29207">CVE-2022-29207</a>)</li>
<li>Fixes a segfault due to missing support for quantized types (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29205">CVE-2022-29205</a>)</li>
<li>Fixes a missing validation which results in undefined behavior in <code>SparseTensorDenseAdd</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29206">CVE-2022-29206</a>)</li>
<li>Fixes a missing validation which results in undefined behavior in <code>QuantizedConv2D</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29201">CVE-2022-29201</a>)</li>
<li>Fixes an integer overflow in <code>SpaceToBatchND</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29203">CVE-2022-29203</a>)</li>
<li>Fixes a segfault and OOB write due to incomplete validation in <code>EditDistance</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29208">CVE-2022-29208</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>Conv3DBackpropFilterV2</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29204">CVE-2022-29204</a>)</li>
<li>Fixes a denial of service in <code>tf.ragged.constant</code> due to lack of validation (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29202">CVE-2022-29202</a>)</li>
<li>Fixes a segfault when <code>tf.histogram_fixed_width</code> is called with NaN values (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29211">CVE-2022-29211</a>)</li>
<li>Fixes a core dump when loading TFLite models with quantization (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29212">CVE-2022-29212</a>)</li>
<li>Fixes crashes stemming from incomplete validation in signal ops (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29213">CVE-2022-29213</a>)</li>
<li>Fixes a type confusion leading to <code>CHECK</code>-failure based denial of service (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29209">CVE-2022-29209</a>)</li>
<li>Fixes a heap buffer overflow due to incorrect hash function (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29210">CVE-2022-29210</a>)</li>
<li>Updates <code>curl</code> to <code>7.83.1</code> to handle <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-22576">CVE-2022-22576</a>, <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27774">CVE-2022-27774</a>, <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27775">CVE-2022-27775</a>, <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27776">CVE-2022-27776</a>, <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27778">CVE-2022-27778</a>, <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27779">CVE-2022-27779</a>, <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27780">CVE-2022-27780</a>, <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27781">CVE-2022-27781</a>, <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27782">CVE-2022-27782</a> and <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-30115">CVE-2022-30115</a></li>
<li>Updates <code>zlib</code> to <code>1.2.12</code> after <code>1.2.11</code> was pulled due to <a href="https://www.openwall.com/lists/oss-security/2022/03/28/1">security issue</a></li>
</ul>
<h1>Release 2.7.2</h1>
<p>This release introduces several vulnerability fixes:</p>
<ul>
<li>Fixes a code injection in <code>saved_model_cli</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29216">CVE-2022-29216</a>)</li>
<li>Fixes a missing validation which causes <code>TensorSummaryV2</code> to crash (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29193">CVE-2022-29193</a>)</li>
<li>Fixes a missing validation which crashes <code>QuantizeAndDequantizeV4Grad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29192">CVE-2022-29192</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>DeleteSessionTensor</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29194">CVE-2022-29194</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>GetSessionTensor</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29191">CVE-2022-29191</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>StagePeek</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29195">CVE-2022-29195</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>UnsortedSegmentJoin</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29197">CVE-2022-29197</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>LoadAndRemapMatrix</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29199">CVE-2022-29199</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>SparseTensorToCSRSparseMatrix</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29198">CVE-2022-29198</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>LSTMBlockCell</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29200">CVE-2022-29200</a>)</li>
<li>Fixes a missing validation which causes denial of service via <code>Conv3DBackpropFilterV2</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29196">CVE-2022-29196</a>)</li>
<li>Fixes a <code>CHECK</code> failure in depthwise ops via overflows (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41197">CVE-2021-41197</a>)</li>
<li>Fixes issues arising from undefined behavior stemming from users supplying invalid resource handles (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29207">CVE-2022-29207</a>)</li>
<li>Fixes a segfault due to missing support for quantized types (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29205">CVE-2022-29205</a>)</li>
<li>Fixes a missing validation which results in undefined behavior in <code>SparseTensorDenseAdd</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29206">CVE-2022-29206</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/tensorflow/tensorflow/commit/0516d4d8bced506cae97dc3cb45dbd2fe4311f26"><code>0516d4d</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/56035">#56035</a> from tensorflow-jenkins/relnotes-2.8.1-4205</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/25faa9f51698b743af7f66304efa2d412a15427a"><code>25faa9f</code></a> Update RELEASE.md</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/0d75d6ad32402c939ca29b73de47ea2b2b3a03d2"><code>0d75d6a</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/56074">#56074</a> from tensorflow/fix-r2.8-build</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/b82dff5267ac2b6bac124d24929b2b4a891338a8"><code>b82dff5</code></a> Install dep as user</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/1e7468765d5aac6208b3df06dea8747aea2dd7d5"><code>1e74687</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/56071">#56071</a> from tensorflow/fix-r2.8-build</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/86bbc7004f0631a194ae1ea48f8f6b69811cdb84"><code>86bbc70</code></a> Another attempt at fixing <a href="https://github-redirect.dependabot.com/pypa/setuptools/issues/3293">pypa/setuptools#3293</a></li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/fd5fbebf32a09030ea30f2324ed2276b104e3c9c"><code>fd5fbeb</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/56068">#56068</a> from tensorflow/mm-cp-52488e5072f6fe44411d70c6af09e...</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/bdb80bc3de2f412cf27747f9e68d93f5a69283ce"><code>bdb80bc</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/56060">#56060</a> from yongtang:curl-7.83.1</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/3f8784c87c7647b11683e8b7a21b355e03a570b4"><code>3f8784c</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/56064">#56064</a> from tensorflow/mihaimaruseac-patch-1</li>
<li><a href="https://github.com/tensorflow/tensorflow/commit/0da453f218b4ed7c53fa2b5a0fcb5b272944fbb3"><code>0da453f</code></a> Fix pip install ordering</li>
<li>Additional commits viewable in <a href="https://github.com/tensorflow/tensorflow/compare/v2.8.0...v2.8.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17400/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17400",
"html_url": "https://github.com/huggingface/transformers/pull/17400",
"diff_url": "https://github.com/huggingface/transformers/pull/17400.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17400.patch",
"merged_at": 1653435416000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17399
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17399/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17399/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17399/events
|
https://github.com/huggingface/transformers/issues/17399
| 1,246,769,470
|
I_kwDOCUB6oc5KUDE-
| 17,399
|
[RFC] Scan & Gradient checkpointing in Flax
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
| null |
[] |
[
"I'm not sure you would need both versions within a same script (scan and unscanned, or with and without checkpointing which affects only training anyway).\r\n\r\nThen maybe you could just add it directly as an arg to `model.from_pretrained(..., scan=False, gradient_checkpointing=False)`\r\n\r\nYou would just have to use some naming conventions on your params to see if you need to scan/unscan when loading a checkpoint.",
"Suppose you have a training script, it would be useful to be able to use `scan` and `remat` during training for faster compile times and larger batch sizes, and then switch to `unscan` and no `remat` during eval for faster inference?",
"I'm not sure it would be worth it:\r\n* Most of the time evaluation is relatively fast\r\n* You would have to reformat your parameters each time between eval and train, potentially leading to memory fragmentation",
"Hey @patrickvonplaten, I'm keen to get gradient checkpointing working in JAX for [long-t5](https://huggingface.co/google/long-t5-tglobal-xl/tree/main). If this is not on the cards to be added soon happy to work on a PR for it if that works with you all?",
"Hey @KMFODA! There's a PR that is close to being merged: https://github.com/huggingface/transformers/pull/17843 I'll let you know once it's complete, and you can copy the logic across to Flax T5 in a new PR if that sounds good to you!"
] | 1,653
| 1,656
| null |
MEMBER
| null |
### Feature request
We should add scan and remat (gradient checkpointing) to the most important Flax/JAX models (BERT, GPT2, OPT, T5, BART, Wav2Vec2).
### Motivation
Scan allows for much faster compilation and memory savings and `remat` is the equivalent of `gradient_checkpointing` in PyTorch.
@sanchit-gandhi already uses both features in the Flax Seq2Seq Speech project - see: https://github.com/sanchit-gandhi/seq2seq-speech so it'd be quite trivial to get them working.
**Implementation details:**
Given that both `scan` and `remat` are not related to the model architecture, they should IMO **not** be in the model's config (we made this mistake in PyTorch and don't want to repeat it here).
I would advocate for the following API:
```python
model = FlaxBertForMaskedLM.from_pretrained("bert-base-cased")
model.scan() # or model.scan_enable()
model.unscan() # or model.scan_disable()
```
and
```python
model = FlaxBertForMaskedLM.from_pretrained("bert-base-cased")
model.gradient_checkpoint_enable()
model.gradient_checkpoint_disable()
```
As can be seen here: https://github.com/sanchit-gandhi/seq2seq-speech/blob/b28d0c25c8fad0f9ffa6707f91f7aba320d44a4b/models/modeling_flax_wav2vec2.py#L504
We'll need to re-initialize the `flax.linen.Module` inside the model. However, this should be fine since it just means that we do
```
self.module = self.module_class(config=config, dtype=dtype, use_scan=True, **kwargs)
self._is_scan_enabled = True
```
similar to this line: https://github.com/huggingface/transformers/blob/71e602725b90f63f404109bae9f72cbdf755477b/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py#L868
We can see along the PR how much logic can reside in `modeling_flax_utils.py` and how much would go into the specific models, *e.g.* `modeling_flax_wav2vec2.py`.
The same API / logic could be used for the `gradient_checkpointing`.
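For intuition, here is a hedged pure-Python sketch of what `scan` does *semantically* when applied to a stack of identical transformer layers (this is an illustration only, not the Flax/XLA implementation): the step function is traced once and folded over the per-layer parameters, threading the hidden state as a carry.

```python
def scan(f, init, xs):
    """Pure-Python model of jax.lax.scan semantics: fold f over xs,
    threading a carry and collecting each step's output."""
    carry = init
    ys = []
    for x in xs:
        carry, y = f(carry, x)
        ys.append(y)
    return carry, ys

# Toy example: each element of the list stands in for one layer's params,
# the carry for the hidden state flowing through the stack.
final, hiddens = scan(lambda h, w: (h * w, h * w), 1.0, [2.0, 3.0, 4.0])
```

Because the step function is compiled once rather than unrolled per layer, the scanned form is what gives the faster compile times mentioned above.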
### Your contribution
Happy to give this implementation a shot with @sanchit-gandhi and @patil-suraj .
Also would love to hear feedback from @borisdayma @marcvanzee about the API
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17399/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/17398
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17398/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17398/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17398/events
|
https://github.com/huggingface/transformers/pull/17398
| 1,246,660,597
|
PR_kwDOCUB6oc44Xpcp
| 17,398
|
typo IBERT in __repr__ quant_mode
|
{
"login": "scratchmex",
"id": 4014888,
"node_id": "MDQ6VXNlcjQwMTQ4ODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4014888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scratchmex",
"html_url": "https://github.com/scratchmex",
"followers_url": "https://api.github.com/users/scratchmex/followers",
"following_url": "https://api.github.com/users/scratchmex/following{/other_user}",
"gists_url": "https://api.github.com/users/scratchmex/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scratchmex/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scratchmex/subscriptions",
"organizations_url": "https://api.github.com/users/scratchmex/orgs",
"repos_url": "https://api.github.com/users/scratchmex/repos",
"events_url": "https://api.github.com/users/scratchmex/events{/privacy}",
"received_events_url": "https://api.github.com/users/scratchmex/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@LysandreJik @kssteven418 ",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17397
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17398/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17398",
"html_url": "https://github.com/huggingface/transformers/pull/17398",
"diff_url": "https://github.com/huggingface/transformers/pull/17398.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17398.patch",
"merged_at": 1653983290000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17397
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17397/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17397/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17397/events
|
https://github.com/huggingface/transformers/issues/17397
| 1,246,656,307
|
I_kwDOCUB6oc5KTncz
| 17,397
|
typo IBERT in `__repr__`
|
{
"login": "scratchmex",
"id": 4014888,
"node_id": "MDQ6VXNlcjQwMTQ4ODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4014888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scratchmex",
"html_url": "https://github.com/scratchmex",
"followers_url": "https://api.github.com/users/scratchmex/followers",
"following_url": "https://api.github.com/users/scratchmex/following{/other_user}",
"gists_url": "https://api.github.com/users/scratchmex/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scratchmex/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scratchmex/subscriptions",
"organizations_url": "https://api.github.com/users/scratchmex/orgs",
"repos_url": "https://api.github.com/users/scratchmex/repos",
"events_url": "https://api.github.com/users/scratchmex/events{/privacy}",
"received_events_url": "https://api.github.com/users/scratchmex/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,653
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
It should be `quant_mode: {self.quant_mode}` here:
https://github.com/huggingface/transformers/blob/71e602725b90f63f404109bae9f72cbdf755477b/src/transformers/models/ibert/quant_modules.py#L150-L155
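A minimal stand-in illustrating the fix (the class here is a hypothetical simplification of the linked `quant_modules.py`, not its real signature): the attribute must be interpolated inside the f-string, instead of printing the literal text `quant_mode`.

```python
class QuantAct:
    """Simplified stand-in for IBERT's QuantAct, showing the __repr__ fix."""

    def __init__(self, quant_mode):
        self.quant_mode = quant_mode

    def __repr__(self):
        # Interpolate the instance attribute, not the literal string.
        return f"{self.__class__.__name__}(quant_mode: {self.quant_mode})"

r = repr(QuantAct("symmetric"))
```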
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17397/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17396
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17396/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17396/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17396/events
|
https://github.com/huggingface/transformers/issues/17396
| 1,246,607,254
|
I_kwDOCUB6oc5KTbeW
| 17,396
|
check min version
|
{
"login": "milad1378yz",
"id": 62007769,
"node_id": "MDQ6VXNlcjYyMDA3NzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/62007769?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/milad1378yz",
"html_url": "https://github.com/milad1378yz",
"followers_url": "https://api.github.com/users/milad1378yz/followers",
"following_url": "https://api.github.com/users/milad1378yz/following{/other_user}",
"gists_url": "https://api.github.com/users/milad1378yz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/milad1378yz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/milad1378yz/subscriptions",
"organizations_url": "https://api.github.com/users/milad1378yz/orgs",
"repos_url": "https://api.github.com/users/milad1378yz/repos",
"events_url": "https://api.github.com/users/milad1378yz/events{/privacy}",
"received_events_url": "https://api.github.com/users/milad1378yz/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hello! The `4.20.dev0` version means that it's the current `main` branch. The recommended way to run these examples is to clone the repository.\r\n\r\nSee the following note: https://github.com/huggingface/transformers/tree/main/examples#important-note",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,656
| 1,656
|
NONE
| null |
### System Info
```shell
Hi, in transformers/examples/pytorch/text-classification/run_glue.py, line 50, there is a line of code:
'''
check_min_version("4.20.0.dev0")
'''
and there is no 4.20.0 release, so I think it should be corrected.
```
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
python transformers/examples/pytorch/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--train_file ./train.csv \
--validation_file ./test.csv \
--do_train \
--do_eval \
--evaluation_strategy epoch \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 5e-5 \
--max_seq_length 128 \
--num_train_epochs 2 \
--seed 2021\
--output_dir /yazdani/tmp/imdb/
### Expected behavior
```shell
change this part to
'''
check_min_version("4.18.0.dev0")
'''
```
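For context, a hedged sketch of the kind of check this line performs (this is an illustration, not the actual `transformers.utils.check_min_version` implementation): parse the dotted version, ignore a trailing `.devN` suffix, and raise if the installed version is too old — which is why running the `main`-branch example scripts against an older release fails here.

```python
def parse(version):
    """Split a dotted version into an int tuple, dropping a trailing '.devN'."""
    parts = []
    for p in version.split("."):
        if p.startswith("dev"):
            break
        parts.append(int(p))
    return tuple(parts)

def check_min_version(min_version, installed_version):
    """Raise if installed_version is older than min_version."""
    if parse(installed_version) < parse(min_version):
        raise RuntimeError(
            f"This example requires transformers>={min_version}, "
            f"but found {installed_version}."
        )

check_min_version("4.18.0.dev0", "4.19.2")  # passes silently
```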
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17396/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17395
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17395/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17395/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17395/events
|
https://github.com/huggingface/transformers/pull/17395
| 1,246,573,406
|
PR_kwDOCUB6oc44XW-6
| 17,395
|
Fix expected value for OPT test `test_inference_no_head`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,653
| 1,653
|
COLLABORATOR
| null |
# What does this PR do?
- Update the expected value in the test `OPTModelIntegrationTests.test_inference_no_head` to have more precision
- Lower `atol` to `5e-5`
On a GPU VM, the test has to be run with TF32 disabled (or without TF32 support).
See: https://pytorch.org/docs/stable/notes/cuda.html
Related discussion: #16588
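For readers hitting this on Ampere-class GPUs, TF32 can be turned off globally before running the test. This is a configuration snippet following the PyTorch CUDA notes linked above:

```python
import torch

# Disable TF32 for matmuls and cuDNN convolutions so float32 results
# match reference values computed without TF32.
torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.allow_tf32 = False
```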
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17395/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17395",
"html_url": "https://github.com/huggingface/transformers/pull/17395",
"diff_url": "https://github.com/huggingface/transformers/pull/17395.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17395.patch",
"merged_at": 1653470346000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17394
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17394/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17394/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17394/events
|
https://github.com/huggingface/transformers/issues/17394
| 1,246,536,553
|
I_kwDOCUB6oc5KTKNp
| 17,394
|
Inconsistency multiple mask in fill-mask
|
{
"login": "mo6zes",
"id": 10004251,
"node_id": "MDQ6VXNlcjEwMDA0MjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/10004251?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mo6zes",
"html_url": "https://github.com/mo6zes",
"followers_url": "https://api.github.com/users/mo6zes/followers",
"following_url": "https://api.github.com/users/mo6zes/following{/other_user}",
"gists_url": "https://api.github.com/users/mo6zes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mo6zes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mo6zes/subscriptions",
"organizations_url": "https://api.github.com/users/mo6zes/orgs",
"repos_url": "https://api.github.com/users/mo6zes/repos",
"events_url": "https://api.github.com/users/mo6zes/events{/privacy}",
"received_events_url": "https://api.github.com/users/mo6zes/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"nvm it seems deliberate https://github.com/huggingface/transformers/blob/374a2f693f75305eded1a2bb7a7e452f0ab33fad/src/transformers/pipelines/fill_mask.py#L137-L140"
] | 1,653
| 1,653
| 1,653
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.14.1
- Platform: Darwin-21.5.0-x86_64-i386-64bit
- Python version: 3.6.13
- PyTorch version (GPU?): 1.10.2 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
pipe = pipeline("fill-mask", top_k=1)
pipe("It is <mask>.")
=> [{..., 'sequence': 'It is true.'}]
pipe("It is <mask> <mask>.")
=> [[{..., 'sequence': '<s>It is very<mask>.</s>'}], [{..., 'sequence': '<s>It is<mask>orable.</s>'}]]
```
### Expected behavior
I would expect that the "sequence" does not include `<s>` and `</s>` tokens. It also seems to remove whitespace before the `<mask>` tokens left in the result, but I believe that does not make a difference for the tokenizer.
What I would expect:
```shell
pipe = pipeline("fill-mask", top_k=1)
pipe("It is <mask>.")
=> [{..., 'sequence': 'It is true.'}]
pipe("It is <mask><mask>.")
=> [[{..., 'sequence': 'It is very <mask>.'}], [{..., 'sequence': 'It is <mask>orable.'}]]
```
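If one does want the cleaned-up form today, a minimal post-processing sketch can strip the tokens from the returned strings (assumption: RoBERTa-style `<s>`/`</s>` special tokens are the only ones to remove):

```python
# Minimal post-processing sketch: strip RoBERTa-style special tokens
# from the "sequence" strings returned by the fill-mask pipeline.
SPECIAL_TOKENS = ("<s>", "</s>")

def strip_special(sequence: str) -> str:
    for token in SPECIAL_TOKENS:
        sequence = sequence.replace(token, "")
    return sequence.strip()

print(strip_special("<s>It is very<mask>.</s>"))  # It is very<mask>.
```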
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17394/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17393
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17393/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17393/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17393/events
|
https://github.com/huggingface/transformers/pull/17393
| 1,246,474,179
|
PR_kwDOCUB6oc44XCsT
| 17,393
|
Fx support for multiple model architectures
|
{
"login": "michaelbenayoun",
"id": 25418079,
"node_id": "MDQ6VXNlcjI1NDE4MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25418079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelbenayoun",
"html_url": "https://github.com/michaelbenayoun",
"followers_url": "https://api.github.com/users/michaelbenayoun/followers",
"following_url": "https://api.github.com/users/michaelbenayoun/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelbenayoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelbenayoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelbenayoun/subscriptions",
"organizations_url": "https://api.github.com/users/michaelbenayoun/orgs",
"repos_url": "https://api.github.com/users/michaelbenayoun/repos",
"events_url": "https://api.github.com/users/michaelbenayoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelbenayoun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@michaelbenayoun \r\n```\r\nimport inspect\r\n\r\nimport transformers.utils.fx as fx\r\nfrom transformers import *\r\n\r\nmodel = LayoutLMForMaskedLM(LayoutLMConfig())\r\n\r\ninput_names = model.dummy_inputs.keys()\r\nsig = inspect.signature(model.forward)\r\nconcrete_args = {p.name: p.default for p in sig.parameters.values() if p.name not in input_names}\r\n\r\nhf_tracer = fx.HFTracer()\r\n\r\nhf_tracer.trace(model, concrete_args=concrete_args)\r\n```\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/pbelevich/PycharmProjects/PiPPy/test/hf_test3.py\", line 14, in <module>\r\n hf_tracer.trace(model, concrete_args=concrete_args)\r\n File \"/Users/pbelevich/PycharmProjects/pbelevich-transformers/src/transformers/utils/fx.py\", line 877, in trace\r\n self.graph = super().trace(root, concrete_args=concrete_args)\r\n File \"/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py\", line 587, in trace\r\n self.create_node('output', 'output', (self.create_arg(fn(*args)),), {},\r\n File \"/Users/pbelevich/PycharmProjects/pbelevich-transformers/src/transformers/models/layoutlm/modeling_layoutlm.py\", line 935, in forward\r\n outputs = self.layoutlm(\r\n File \"/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py\", line 577, in module_call_wrapper\r\n return self.call_module(mod, forward, args, kwargs)\r\n File \"/Users/pbelevich/PycharmProjects/pbelevich-transformers/src/transformers/utils/fx.py\", line 834, in call_module\r\n return super().call_module(m, forward, args, kwargs)\r\n File \"/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py\", line 372, in call_module\r\n return forward(*args, **kwargs)\r\n File \"/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py\", line 573, in forward\r\n return _orig_module_call(mod, *args, **kwargs)\r\n File 
\"/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/Users/pbelevich/PycharmProjects/pbelevich-transformers/src/transformers/models/layoutlm/modeling_layoutlm.py\", line 803, in forward\r\n bbox = torch.zeros(tuple(list(input_shape) + [4]), dtype=torch.long, device=device)\r\n File \"/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/fx/proxy.py\", line 260, in __iter__\r\n return self.tracer.iter(self)\r\n File \"/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/fx/proxy.py\", line 169, in iter\r\n raise TraceError('Proxy object cannot be iterated. This can be '\r\ntorch.fx.proxy.TraceError: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors\r\n\r\nProcess finished with exit code 1\r\n```",
"@michaelbenayoun \r\n```\r\nimport inspect\r\n\r\nimport transformers.utils.fx as fx\r\nfrom transformers import *\r\n\r\nmodel = Speech2TextForConditionalGeneration(Speech2TextConfig())\r\n\r\ninput_names = model.dummy_inputs.keys()\r\nsig = inspect.signature(model.forward)\r\nconcrete_args = {p.name: p.default for p in sig.parameters.values() if p.name not in input_names}\r\n\r\nhf_tracer = fx.HFTracer()\r\n\r\nhf_tracer.trace(model, concrete_args=concrete_args)\r\n```\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/pbelevich/PycharmProjects/PiPPy/test/hf_test4.py\", line 14, in <module>\r\n hf_tracer.trace(model, concrete_args=concrete_args)\r\n File \"/Users/pbelevich/PycharmProjects/pbelevich-transformers/src/transformers/utils/fx.py\", line 877, in trace\r\n self.graph = super().trace(root, concrete_args=concrete_args)\r\n File \"/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py\", line 587, in trace\r\n self.create_node('output', 'output', (self.create_arg(fn(*args)),), {},\r\n File \"/Users/pbelevich/PycharmProjects/pbelevich-transformers/src/transformers/models/speech_to_text/modeling_speech_to_text.py\", line 1349, in forward\r\n outputs = self.model(\r\n File \"/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py\", line 577, in module_call_wrapper\r\n return self.call_module(mod, forward, args, kwargs)\r\n File \"/Users/pbelevich/PycharmProjects/pbelevich-transformers/src/transformers/utils/fx.py\", line 834, in call_module\r\n return super().call_module(m, forward, args, kwargs)\r\n File \"/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py\", line 372, in call_module\r\n return forward(*args, **kwargs)\r\n File \"/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py\", line 573, in forward\r\n return _orig_module_call(mod, *args, **kwargs)\r\n File 
\"/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/Users/pbelevich/PycharmProjects/pbelevich-transformers/src/transformers/models/speech_to_text/modeling_speech_to_text.py\", line 1193, in forward\r\n encoder_outputs = self.encoder(\r\n File \"/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py\", line 577, in module_call_wrapper\r\n return self.call_module(mod, forward, args, kwargs)\r\n File \"/Users/pbelevich/PycharmProjects/pbelevich-transformers/src/transformers/utils/fx.py\", line 834, in call_module\r\n return super().call_module(m, forward, args, kwargs)\r\n File \"/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py\", line 372, in call_module\r\n return forward(*args, **kwargs)\r\n File \"/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py\", line 573, in forward\r\n return _orig_module_call(mod, *args, **kwargs)\r\n File \"/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/Users/pbelevich/PycharmProjects/pbelevich-transformers/src/transformers/models/speech_to_text/modeling_speech_to_text.py\", line 770, in forward\r\n inputs_embeds = self.conv(input_features)\r\n File \"/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py\", line 577, in module_call_wrapper\r\n return self.call_module(mod, forward, args, kwargs)\r\n File \"/Users/pbelevich/PycharmProjects/pbelevich-transformers/src/transformers/utils/fx.py\", line 834, in call_module\r\n return super().call_module(m, forward, args, kwargs)\r\n File \"/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py\", line 372, in call_module\r\n return 
forward(*args, **kwargs)\r\n File \"/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py\", line 573, in forward\r\n return _orig_module_call(mod, *args, **kwargs)\r\n File \"/Users/pbelevich/miniconda3/envs/PiPPy/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/Users/pbelevich/PycharmProjects/pbelevich-transformers/src/transformers/models/speech_to_text/modeling_speech_to_text.py\", line 124, in forward\r\n hidden_states = input_features.transpose(1, 2).contiguous() # -> B x (C x D) x T\r\nAttributeError: 'NoneType' object has no attribute 'transpose'\r\n\r\nProcess finished with exit code 1\r\n```",
"@pbelevich Thanks for raising those issues!\r\n\r\n- About LayoutLM, I just pushed a fix that should solve the issue.\r\n- About Speech2Text, I don't think that this is an issue, it's just that the dummy inputs for this model are wrong... it creates `input_ids` but it should create something else since this model does not have `input_ids` as inputs... I added a check in the `symbolic_trace` function (and not `HFTracer.trace`), that will test if the `input_names` passed are correct for the model we want to trace."
] | 1,653
| 1,653
| 1,653
|
MEMBER
| null |
# What does this PR do?
This PR adds torch.fx tracing support for the following model architectures:
- BART
- mBART
- Marian
- M2M100
- Blenderbot
- Blenderbot Small
- Pegasus
- PLBart
- XGLM
- Speech2Text
- Speech2Text2
- OPT
- CLIP
- TrOCR
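The tracing entry point builds `concrete_args` from the model's `forward` signature, as in the reproduction snippets in the comments above. A self-contained sketch of that step with a dummy signature (assumption: `input_names` would normally come from `model.dummy_inputs.keys()`):

```python
import inspect

# Dummy forward standing in for a model's forward method.
def forward(input_ids=None, attention_mask=None, labels=None, use_cache=True):
    pass

input_names = {"input_ids", "attention_mask"}
sig = inspect.signature(forward)
# Every parameter not fed a real tensor becomes a concrete (fixed) arg.
concrete_args = {
    p.name: p.default for p in sig.parameters.values() if p.name not in input_names
}
print(concrete_args)  # {'labels': None, 'use_cache': True}
```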
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17393/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17393",
"html_url": "https://github.com/huggingface/transformers/pull/17393",
"diff_url": "https://github.com/huggingface/transformers/pull/17393.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17393.patch",
"merged_at": 1653984176000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17392
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17392/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17392/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17392/events
|
https://github.com/huggingface/transformers/issues/17392
| 1,246,445,567
|
I_kwDOCUB6oc5KSz__
| 17,392
|
[Deepspeed alternative] PatrickStar
|
{
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@flozi00, \r\n\r\nThank you for the heads up about this framework.\r\n\r\nI read the overview https://github.com/Tencent/PatrickStar/blob/master/INSIDE.md and it looks like they are trying to solve problems that have been solved in Deepspeed many moons ago - perhaps they started working on this project long time ago and are referring to a really old deepspeed version? e.g. param offload has been implemented long time ago there. And of course CPU offload implements prefetching, which happens in parallel with compute.\r\n\r\nI will try to find time to read their paper: https://arxiv.org/abs/2108.05818 to understand what innovation it has proposed with the chunked memory management.\r\n\r\nBefore doing an integration probably the first good step would be to try to reproduce their benchmarks and the current deepspeed side by side to compare the performance and see if it's indeed offering an improvement.",
"To update - after you shared this a few weeks ago I've requested with the deepspeed devs to look into PatrickStar and see if they could Match the performance - that way we don't need to add a complicated support for another framework. Let's see what unfolds. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@flozi00 @stas00 Thanks for your attention on PatrickStar. \r\nI would like to clarify that the comparison results of PatrickStar and DeepSpeed is repeatly verified by many users.\r\nThe results in the paper are still valid in May 2022 with the latest version DeepSpeed at that time. \r\nAlso, the PatrickStar idea has already been integrated into ColossalAI and played a key role.\r\nI strongly believe that PatrickStar's design, or part of it, will inherently accelerate the training of large models. I'm open to any help with using Patrickstar, either integrated into DeepSpeed or huggingface transformers.",
"Thank you for your commentary and willingness to contribute, @feifeibear \r\n\r\n> I strongly believe that PatrickStar's design, or part of it, will inherently accelerate the training of large models. I'm open to any help with using Patrickstar, either integrated into DeepSpeed or huggingface transformers.\r\n\r\nI think it'd be amazing to have it integrated into Deepspeed. I'm tagging @tjruwase (Deepspeed) on this suggestion. Perhaps let's start a new DS-specific thread at https://github.com/microsoft/DeepSpeed/issues?\r\n\r\n> Also, the PatrickStar idea has already been integrated into ColossalAI and played a key role.\r\n\r\nIndeed, we are discussing the CAI integration here: https://github.com/huggingface/transformers/issues/18624\r\n",
"Thanks! I will keep an eye on the CAI issue! And feel free to contact me if I can help!"
] | 1,653
| 1,660
| 1,659
|
CONTRIBUTOR
| null |
### Feature request
https://github.com/Tencent/PatrickStar
Adding PatrickStar as an alternative to DeepSpeed
### Motivation
I think it could be interesting to benchmark it against DeepSpeed.
In their README they write that it is faster than DeepSpeed ZeRO-3.
But they also write that gradient accumulation is not possible with the library.
Pinging @stas00 for interest?
### Your contribution
I could give it a try integrating it into the trainer
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17392/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/17392/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17391
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17391/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17391/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17391/events
|
https://github.com/huggingface/transformers/issues/17391
| 1,246,390,573
|
I_kwDOCUB6oc5KSmkt
| 17,391
|
AutoTokenizer _batch_encode_plus method doesn't have add_prefix_space argument
|
{
"login": "c00k1ez",
"id": 16941854,
"node_id": "MDQ6VXNlcjE2OTQxODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/16941854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/c00k1ez",
"html_url": "https://github.com/c00k1ez",
"followers_url": "https://api.github.com/users/c00k1ez/followers",
"following_url": "https://api.github.com/users/c00k1ez/following{/other_user}",
"gists_url": "https://api.github.com/users/c00k1ez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/c00k1ez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/c00k1ez/subscriptions",
"organizations_url": "https://api.github.com/users/c00k1ez/orgs",
"repos_url": "https://api.github.com/users/c00k1ez/repos",
"events_url": "https://api.github.com/users/c00k1ez/events{/privacy}",
"received_events_url": "https://api.github.com/users/c00k1ez/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @c00k1ez , \r\n\r\nI think I can guess a little confusion, this argument must be specified during the initialization of the tokenizer or redefined when using `from_pretrained` but it won't work in the `__call__` method. \r\n\r\nIf any documentation has misled you, I would be very grateful if you would share it with us! :pray: (and even better that you propose an improvement in PR :smile: )\r\n\r\nHere is the snippet which will give what you expect:\r\n```python\r\ntokenizer = transformers.AutoTokenizer.from_pretrained('roberta-base', add_prefix_space=True)\r\ninput_ids = tokenizer('test string').data[\"input_ids\"]\r\n```",
"Thank you for your answer, it becomes much clearer!\r\nI think it seems great to add smth like \r\n```python\r\n>>> # Download vocabulary from huggingface.co and define model-specific arguments\r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"roberta-base\", add_prefix_space=True)\r\n``` \r\nto\r\n\r\nhttps://github.com/huggingface/transformers/blob/71e602725b90f63f404109bae9f72cbdf755477b/src/transformers/models/auto/tokenization_auto.py#L476\r\n\r\nWhat do u think?",
"It makes sense to me! Do you want to open a PR with this proposal?",
"Yeah, no problem",
"> Hi @c00k1ez ,\r\n> \r\n> I think I can guess a little confusion, this argument must be specified during the initialization of the tokenizer or redefined when using `from_pretrained` but it won't work in the `__call__` method.\r\n> \r\n> If any documentation has misled you, I would be very grateful if you would share it with us! 🙏 (and even better that you propose an improvement in PR 😄 )\r\n> \r\n> Here is the snippet which will give what you expect:\r\n> \r\n> ```python\r\n> tokenizer = transformers.AutoTokenizer.from_pretrained('roberta-base', add_prefix_space=True)\r\n> input_ids = tokenizer('test string').data[\"input_ids\"]\r\n> ```\r\n\r\nHi @SaulLu, some documentation which I found misleading in relation to this: https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig.bad_words_ids(List[List[int]],\r\nHuggingface Docs > Transformers > Text Generation > GenerationConfig > Params > bad_words_ids\r\n\r\n\"**bad_words_ids**(List[List[int]], optional) — List of token ids that are not allowed to be generated. In order to get the token ids of the words that should not appear in the generated text, use tokenizer(bad_words, add_prefix_space=True, add_special_tokens=False).input_ids.\"\r\n\r\nHere, it seems like the docs are telling us to use add_prefix_space=True in the `__call__` method.",
"Thanks for your feedback! Let me ping @ArthurZucker who is now the person supervising the tokenizers in transformers."
] | 1,653
| 1,687
| 1,653
|
CONTRIBUTOR
| null |
### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: macOS-12.3.1-arm64-arm-64bit
- Python version: 3.8.12
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@SaulLu @LysandreJik
Hi, I just noticed that the `AutoTokenizer._batch_encode_plus` method doesn't have an `add_prefix_space` argument if I init it from the `roberta-base` model.
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
tokenizer = transformers.AutoTokenizer.from_pretrained('roberta-base')
input_ids = tokenizer('test string', add_prefix_space=True).data["input_ids"]
# Output: TypeError: _batch_encode_plus() got an unexpected keyword argument 'add_prefix_space'
```
### Expected behavior
```shell
tokenizer = transformers.AutoTokenizer.from_pretrained('roberta-base')
input_ids = tokenizer('test string', add_prefix_space=True).data["input_ids"]
# Output: >>> input_ids [0, 1296, 6755, 2]
```
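As clarified in the comments, the argument belongs in `from_pretrained`, not in `__call__`. Conceptually, all it does is ensure a leading space so the first word is tokenized like a mid-sentence word in byte-level BPE. A toy illustration (assumption: heavily simplified; the real behavior is configured in the fast-tokenizer backend at initialization time):

```python
# Toy illustration of add_prefix_space for byte-level BPE tokenizers.
# Assumption: simplified; real tokenizers apply this at init, not per call.
def maybe_prefix(text: str, add_prefix_space: bool) -> str:
    if add_prefix_space and text and not text.startswith(" "):
        return " " + text
    return text

print(repr(maybe_prefix("test string", True)))   # ' test string'
print(repr(maybe_prefix("test string", False)))  # 'test string'
```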
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17391/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17391/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17390
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17390/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17390/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17390/events
|
https://github.com/huggingface/transformers/issues/17390
| 1,246,196,872
|
I_kwDOCUB6oc5KR3SI
| 17,390
|
Allow creation of tokenizer from a vocab dictionary
|
{
"login": "itaihay",
"id": 3392524,
"node_id": "MDQ6VXNlcjMzOTI1MjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3392524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itaihay",
"html_url": "https://github.com/itaihay",
"followers_url": "https://api.github.com/users/itaihay/followers",
"following_url": "https://api.github.com/users/itaihay/following{/other_user}",
"gists_url": "https://api.github.com/users/itaihay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itaihay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itaihay/subscriptions",
"organizations_url": "https://api.github.com/users/itaihay/orgs",
"repos_url": "https://api.github.com/users/itaihay/repos",
"events_url": "https://api.github.com/users/itaihay/events{/privacy}",
"received_events_url": "https://api.github.com/users/itaihay/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,656
| 1,656
|
NONE
| null |
### Feature request
Tokenizers that need a vocab.json file are expected to be created in this manner:
`transformers.Wav2Vec2CTCTokenizer(vocab_file="path/to/vocab.json")`
The file is then read with a `json.load` call.
I'm suggesting an optional `vocab` parameter that could be passed instead.
```
vocab= {"a":0, "b":1,......}
tokenizer = transformers.Wav2Vec2CTCTokenizer(vocab=vocab)
# Would also be possible
transformers.Wav2Vec2CTCTokenizer(vocab_file="path/to/vocab.json")
```
### Motivation
Remove the necessity to clutter the disk with a vocab file and allow a dynamic vocab-creation process
### Your contribution
I could implement this if it seems like a good addition
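Until something like this lands, a small helper can bridge the gap by round-tripping through a temporary file. This is a sketch; `tokenizer_from_vocab` is a hypothetical name, not a transformers API:

```python
import json
import os
import tempfile

def tokenizer_from_vocab(vocab: dict, tokenizer_cls, **kwargs):
    """Hypothetical helper: dump an in-memory vocab dict to a temporary
    JSON file and construct the tokenizer from that file.

    Assumption: tokenizer_cls reads vocab_file eagerly in __init__,
    so the file can be deleted right after construction.
    """
    fd, path = tempfile.mkstemp(suffix=".json")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(vocab, f)
        return tokenizer_cls(vocab_file=path, **kwargs)
    finally:
        os.unlink(path)
```

Usage would look like `tokenizer_from_vocab({"a": 0, "b": 1}, transformers.Wav2Vec2CTCTokenizer)`.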
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17390/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17389
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17389/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17389/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17389/events
|
https://github.com/huggingface/transformers/issues/17389
| 1,246,131,414
|
I_kwDOCUB6oc5KRnTW
| 17,389
|
OPT-350M Throws Error On Load after Finetuning
|
{
"login": "Leli1024",
"id": 33652168,
"node_id": "MDQ6VXNlcjMzNjUyMTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/33652168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Leli1024",
"html_url": "https://github.com/Leli1024",
"followers_url": "https://api.github.com/users/Leli1024/followers",
"following_url": "https://api.github.com/users/Leli1024/following{/other_user}",
"gists_url": "https://api.github.com/users/Leli1024/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Leli1024/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Leli1024/subscriptions",
"organizations_url": "https://api.github.com/users/Leli1024/orgs",
"repos_url": "https://api.github.com/users/Leli1024/repos",
"events_url": "https://api.github.com/users/Leli1024/events{/privacy}",
"received_events_url": "https://api.github.com/users/Leli1024/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"facing same error, unable to load after finetuning. Any update ?",
"Ping @patrickvonplaten , but also cc @younesbelkada and @ArthurZucker .",
"On it 👍",
"@Leli1024 @omerarshad If you don't mind and have some time, maybe you can try with the latest dev build?\r\n\r\nIf you clone the repo, you can do it like `pip install --upgrade -e .[dev]`.\r\n(There are some minor fixes since then, I didn't check if they are related)",
"Not sure if it is related but It is possible that you have used a version of transformers before merging this PR #17225 ",
"> @Leli1024 @omerarshad If you don't mind and have some time, maybe you can try with the latest dev build?\r\n> \r\n> If you clone the repo, you can do it like `pip install --upgrade -e .[dev]`. (There are some minor fixes since then, I didn't check if they are related)\r\n\r\nThis totally worked thank you!!!\r\nAlso not to be pedantic but I needed to remove '[dev]' from the command to run it. Just thought I should let anyone else having trouble with it know",
"> > @Leli1024 @omerarshad If you don't mind and have some time, maybe you can try with the latest dev build?\r\n> > If you clone the repo, you can do it like `pip install --upgrade -e .[dev]`. (There are some minor fixes since then, I didn't check if they are related)\r\n> \r\n> This totally worked thank you!!!\r\n\r\nGreat!",
"So building from source worked? or is the patch released?",
"> So building from source worked? or is the patch released?\r\n\r\nBuilding from source",
"I'm experiencing this issue when I try to use the Inference API to test a `facebook/opt-350m` model fine tuned using transformers 4.19.3, 4.19.4, or 4.20.0, and even when I install directly from git like this:\r\n\r\n```sh\r\npython -m pip install git+https://github.com/huggingface/transformers\r\n```\r\n\r\nThe error I'm seeing is identical to above:\r\n\r\n> Error(s) in loading state_dict for OPTForCausalLM: size mismatch for lm_head.weight: copying a param with shape torch.Size([50272, 512]) from checkpoint, the shape in current model is torch.Size([50272, 1024]).\r\n\r\nIf I download the model to my machine and run it using a pipeline, then it works - it just seems to be an issue for the Inference API.\r\n\r\nHere are the package versions I'm using:\r\n\r\n- Transformers 4.20.0\r\n- Pytorch 1.11.0+cu102\r\n- Datasets 2.2.2\r\n- Tokenizers 0.12.1",
"Hey, could you provide an example script to help us reproduce the error? ",
"This seems to be able to reproduce it for me:\r\n\r\n```python\r\nimport pathlib\r\n\r\nfrom datasets import DatasetDict\r\nfrom transformers import (\r\n AutoModelForCausalLM,\r\n AutoTokenizer,\r\n default_data_collator,\r\n Trainer,\r\n TrainingArguments,\r\n)\r\n\r\nHUGGINGFACE_API_KEY = \"...\"\r\n\r\n\r\nif __name__ == \"__main__\":\r\n tokenizer = AutoTokenizer.from_pretrained(\"facebook/opt-350m\")\r\n model = AutoModelForCausalLM.from_pretrained(\"facebook/opt-350m\")\r\n\r\n training_args = TrainingArguments(\r\n output_dir=\"/tmp/model\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=1,\r\n per_device_train_batch_size=1,\r\n per_device_eval_batch_size=1,\r\n push_to_hub=True,\r\n hub_strategy=\"end\",\r\n hub_model_id=\"17389\",\r\n hub_token=HUGGINGFACE_API_KEY,\r\n )\r\n\r\n path = pathlib.Path(\"/tmp/data/dataset.txt\")\r\n path.parent.mkdir(exist_ok=True)\r\n with path.open(\"w\") as fp:\r\n for _ in range(10):\r\n fp.write(\"Hello, world\\n\")\r\n\r\n def encode(batch):\r\n encodings = tokenizer(batch[\"text\"], padding=\"max_length\", truncation=True)\r\n encodings[\"labels\"] = encodings[\"input_ids\"].copy()\r\n return encodings\r\n\r\n dataset = DatasetDict.from_text(\r\n {\"train\": path.as_posix(), \"validation\": path.as_posix()}\r\n ).map(\r\n encode,\r\n remove_columns=\"text\",\r\n )\r\n\r\n trainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=dataset[\"train\"],\r\n eval_dataset=dataset[\"validation\"],\r\n data_collator=default_data_collator,\r\n )\r\n trainer.train()\r\n trainer.save_model()\r\n\r\n```\r\n\r\nJust ran this on my machine and the resulting model is here: https://huggingface.co/dhorgan/17389",
"Hi @ArthurZucker, have you had any luck with this? I tried running the example code above again today with v4.20.1 after #17785 was merged, but nothing seems to have changed. The new model is here, if you're interested: https://huggingface.co/dhorgan/17389-test-fix",
"Hey! Yeah I know where the bug is from! The inference API is not up to date with the main branch of transformers! @Narsil is the one handling that but he is in holiday! Gotta wait for a bit 😀\n",
"Hi @donaghhorgan ,\r\n\r\nYou are not including the `tokenizer` in your `Trainer` so it is **not** saved in your model: https://huggingface.co/dhorgan/17389-test-fix/tree/main\r\n\r\nYou can fix this by simply doing `tokenizer.save_pretrained('....')` and uploading it or doing `Trainer(tokenizer=tokenizer)` (I think, I don't use `Trainer` that often personnally but I have seen that being suggested and working).\r\n\r\nAnyhow, you can check the failure by doing.\r\n\r\n```python\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"dhorgan/17389-test-fix\")\r\n```\r\nIt should crash (becuase no tokenizer files are there)",
"That's great, thanks @Narsil! It's all working for me here now."
] | 1,653
| 1,657
| 1,653
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.19.0
- Platform: macOS-12.3.1-arm64-i386-64bit
- Python version: 3.8.13
- Huggingface_hub version: 0.2.1
- PyTorch version (GPU?): 1.10.2 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
## 🐛 Bug
When the OPT-350M variant is fine-tuned with Hugging Face Transformers, the resulting model raises the following error when loaded:
```
model = OPTForCausalLM.from_pretrained(model_path)
RuntimeError: Error(s) in loading state_dict for OPTForCausalLM:
size mismatch for lm_head.weight: copying a param with shape torch.Size([50272, 512]) from checkpoint, the shape in current model is torch.Size([50272, 1024]).
```
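For anyone hitting the same error, it can help to compare the parameter shapes saved in the checkpoint against those of a freshly built model before calling `load_state_dict`. The helper below is a minimal, framework-free sketch — `find_shape_mismatches` is a hypothetical name, not a transformers API, and the shapes are the ones from the traceback above:

```python
def find_shape_mismatches(checkpoint_shapes, model_shapes):
    """Return {param_name: (checkpoint_shape, model_shape)} for every shared
    parameter whose shapes disagree."""
    return {
        name: (checkpoint_shapes[name], model_shapes[name])
        for name in checkpoint_shapes.keys() & model_shapes.keys()
        if checkpoint_shapes[name] != model_shapes[name]
    }


# Shapes taken from the error message above: opt-350m projects its 1024-d
# hidden states down to a 512-d embedding space, so the tied lm_head weight
# in the checkpoint is (50272, 512), not (50272, 1024).
ckpt = {"lm_head.weight": (50272, 512), "model.decoder.embed_tokens.weight": (50272, 512)}
model = {"lm_head.weight": (50272, 1024), "model.decoder.embed_tokens.weight": (50272, 512)}
print(find_shape_mismatches(ckpt, model))
```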
## Code to load model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed, OPTForCausalLM
import torch


def generate_text(model, tokenizer, prompt):
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10)
    texts = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
    return texts


# path = "facebook/opt-350m"  # the base checkpoint loads fine
path = "opt/model_ckpts"  # the fine-tuned checkpoint triggers the size mismatch

model = OPTForCausalLM.from_pretrained(path)
tokenizer = AutoTokenizer.from_pretrained(path, use_fast=False)

prompt = "The woman worked as a"
print(generate_text(model, tokenizer, prompt))
```
## Training Code
```python
import torch as th
from dataset import get_examples, GSMDataset
from transformers import GPT2Tokenizer, GPT2LMHeadModel
from transformers import GPT2Config, AdamW
from transformers import get_scheduler
from tqdm.auto import tqdm
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer, OPTModel, OPTConfig, OPTForCausalLM
import torch

model = OPTForCausalLM.from_pretrained("facebook/opt-350m")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m", use_fast=False)

try:
    model = OPTForCausalLM.from_pretrained("model_ckpts")
    print("model loaded")
except Exception as e:
    print(e)

train_examples = get_examples("train")
train_dset = GSMDataset(tokenizer, train_examples)
device = th.device("cuda")
model.to(device)
model.train()

train_loader = DataLoader(train_dset, batch_size=4, shuffle=True)
optim = AdamW(model.parameters(), lr=1e-5)

num_epochs = 10
num_training_steps = num_epochs * len(train_loader)
lr_scheduler = get_scheduler(
    "linear",
    optimizer=optim,
    num_warmup_steps=0,
    num_training_steps=num_training_steps,
)

pbar = tqdm(range(num_training_steps))
for epoch in range(num_epochs):
    for batch in train_loader:
        optim.zero_grad()
        batch = {k: v.to(device) for k, v in batch.items()}
        outputs = model(**batch, labels=batch["input_ids"])
        loss = outputs[0]
        loss.backward()
        optim.step()
        lr_scheduler.step()
        pbar.update(1)
        pbar.set_description(f"train_loss: {loss.item():.5f}")

model.save_pretrained("model_ckpts/")
```
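For reference, the `"linear"` schedule used in the training loop above ramps the learning-rate multiplier from 0 to 1 over the warmup steps, then decays it linearly to 0 over the remaining steps. A plain-Python sketch of that multiplier (it mirrors what I understand `get_scheduler("linear", ...)` to do, but `linear_lr_lambda` itself is a hypothetical helper, not a transformers function):

```python
def linear_lr_lambda(step, num_warmup_steps, num_training_steps):
    """LR multiplier: ramp 0 -> 1 over warmup, then decay 1 -> 0 over the rest."""
    if step < num_warmup_steps:
        return step / max(1, num_warmup_steps)
    return max(
        0.0,
        (num_training_steps - step) / max(1, num_training_steps - num_warmup_steps),
    )


# With num_warmup_steps=0 (as in the script above), the multiplier simply
# decays from 1.0 at step 0 to 0.0 at the final step.
print([linear_lr_lambda(s, 0, 10) for s in (0, 5, 10)])
```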
## Dataset module
```python
import json
import os
import re

import torch as th


def read_jsonl(path: str):
    with open(path) as fh:
        return [json.loads(line) for line in fh.readlines() if line]


def get_examples(split):
    path = os.path.join("data/", f"{split}.jsonl")
    examples = read_jsonl(path)
    # examples = examples[0:100]
    for ex in examples:
        ex.update(question=ex["question"] + "\n")
        ex.update(answer=ex["answer"] + "<|endoftext|>")
    print(f"{len(examples)} {split} examples")
    return examples


ANS_RE = re.compile(r"#### (\-?[0-9\.\,]+)")
INVALID_ANS = "[invalid]"


def extract_answer(completion):
    match = ANS_RE.search(completion)
    if match:
        match_str = match.group(1).strip()
        match_str = match_str.replace(",", "")
        return match_str
    else:
        return INVALID_ANS


def is_correct(model_completion, gt_example):
    gt_answer = extract_answer(gt_example["answer"])
    assert gt_answer != INVALID_ANS
    return extract_answer(model_completion) == gt_answer


class GSMDataset(th.utils.data.Dataset):
    def __init__(self, tokenizer, examples, loss_on_prefix=True):
        self.examples = examples
        self.qns = [ex["question"] for ex in self.examples]
        self.ans = [ex["answer"] for ex in self.examples]
        self.qns = tokenizer(self.qns, padding=False)
        self.ans = tokenizer(self.ans, padding=False)
        self.loss_on_prefix = loss_on_prefix
        self.max_len = max(
            [
                len(self.qns["input_ids"][i]) + len(self.ans["input_ids"][i])
                for i in range(len(self.examples))
            ]
        )
        print(f"Max tokens: {self.max_len}")

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        qn_tokens = self.qns["input_ids"][idx]
        ans_tokens = self.ans["input_ids"][idx]
        pad_tokens = [0] * (self.max_len - len(qn_tokens) - len(ans_tokens))
        tokens = qn_tokens + ans_tokens + pad_tokens
        mask = (
            ([int(self.loss_on_prefix)] * len(qn_tokens))
            + ([1] * len(ans_tokens))
            + ([0] * len(pad_tokens))
        )
        tokens = th.tensor(tokens)
        mask = th.tensor(mask)
        return dict(input_ids=tokens, attention_mask=mask)
```
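The `__getitem__` logic above right-pads each question/answer pair to the length of the longest example and builds a mask that zeroes out the padding (and, if `loss_on_prefix` is False, the question prefix too). A standalone sketch of just that padding logic, using plain lists instead of tensors (`pad_and_mask` is a hypothetical helper name for illustration):

```python
def pad_and_mask(qn_tokens, ans_tokens, max_len, loss_on_prefix=True):
    """Right-pad question+answer token ids to max_len and build the matching mask."""
    pad = [0] * (max_len - len(qn_tokens) - len(ans_tokens))
    tokens = qn_tokens + ans_tokens + pad
    mask = (
        [int(loss_on_prefix)] * len(qn_tokens)  # question prefix: 1 only if it counts
        + [1] * len(ans_tokens)                 # answer tokens always count
        + [0] * len(pad)                        # padding never counts
    )
    return tokens, mask


print(pad_and_mask([11, 12], [13], max_len=5))
```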
### Expected behavior
```shell
Expected model to load
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17389/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17388
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17388/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17388/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17388/events
|
https://github.com/huggingface/transformers/pull/17388
| 1,246,126,255
|
PR_kwDOCUB6oc44V9J6
| 17,388
|
Opt in flax and tf
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Should we close the other PR? Let me know once it's ready for a review :-)",
"Superseeds https://github.com/huggingface/transformers/pull/17227 and https://github.com/huggingface/transformers/pull/17226",
"Cool, very nice job @ArthurZucker ! \r\n\r\nCould you as a final safety guard also add TFOPT and FlaxOPT to the documentation test suite? \r\n\r\nSee: https://github.com/huggingface/transformers/tree/main/docs#docstring-testing",
"Can I merge @LysandreJik @sgugger ? (failing test are not related to OPT) ",
"@patil-suraj could you quickly check Flax and maybe @gante go over TF OPT?",
"Thanks all for the reviews 😄 🥳 "
] | 1,653
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
Adds support for OPT in both Flax and TF
## Who can review?
@patrickvonplaten, @LysandreJik @younesbelkada @patil-suraj @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17388/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17388/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17388",
"html_url": "https://github.com/huggingface/transformers/pull/17388",
"diff_url": "https://github.com/huggingface/transformers/pull/17388.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17388.patch",
"merged_at": 1654015282000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17387
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17387/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17387/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17387/events
|
https://github.com/huggingface/transformers/pull/17387
| 1,245,923,550
|
PR_kwDOCUB6oc44VSfv
| 17,387
|
Add Google's Trillson Audio Classification Model
|
{
"login": "vumichien",
"id": 31467068,
"node_id": "MDQ6VXNlcjMxNDY3MDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/31467068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vumichien",
"html_url": "https://github.com/vumichien",
"followers_url": "https://api.github.com/users/vumichien/followers",
"following_url": "https://api.github.com/users/vumichien/following{/other_user}",
"gists_url": "https://api.github.com/users/vumichien/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vumichien/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vumichien/subscriptions",
"organizations_url": "https://api.github.com/users/vumichien/orgs",
"repos_url": "https://api.github.com/users/vumichien/repos",
"events_url": "https://api.github.com/users/vumichien/events{/privacy}",
"received_events_url": "https://api.github.com/users/vumichien/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
| null |
[] |
[
"Still WIP (maybe we can link the google colab here if you want @vumichien :-) )",
"Thanks @patrickvonplaten, This is the current link of colab notebook I am working on https://colab.research.google.com/drive/1lFGDVNgtXXyuvM4J-pBmsPj43LW9VlpQ#scrollTo=t-icrfOL60dT to load all TF weights of trillsson3 into Efficientnetv2bS PyTorch model\r\n\r\nCurrent process:\r\n- Check the output of each layer between the original TF and PT model\r\n",
"Hey @vumichien,\r\n\r\nSorry to have dropped the ball here. I will be off for the two weeks ahead, but will take a look at the PR once I'm back!",
"Hi @patrickvonplaten,\r\nI am swamped finishing my job this month before the summer holiday. \r\nI just want to update the current process, I have successfully loaded the TF trillson weight to PyTorch model (you can check in the notebook here https://colab.research.google.com/drive/1lFGDVNgtXXyuvM4J-pBmsPj43LW9VlpQ#scrollTo=M4k3zhZcypUH)\r\nIn the next step, I think I will write the script in the correct format of the Transformers library. However if possible, could you describe in detail or give necessary notes for the next step?",
"Very cool @vumichien ! \r\nI'd suggest to copy-paste the PyTorch model and preprocessing as defined in your colab to `modeling_trillson_efficient.py` in Transformers and in a first step add a couple slow integration tests to make sure that when refactoring your code the outputs stay correct. \r\nMore specifically, it'd be very useful to add a couple of those tests: https://github.com/huggingface/transformers/blob/e54a1b49aa6268c484625c6374f952f318914743/tests/models/bert/test_modeling_bert.py#L585 (maybe 2 or 3 given that the model is relatively complex)\r\nOnce this works we have a safety mechanism that we can always fall back to that ensures the model works correctly.\r\n\r\nThen the refactor starts. \r\n- 1) Move the pre-processing out into a new file `feature_extraction_trillson_efficient.py` and make the model code pretty.\r\nMaking the code pretty means that it should adhere to Transformers style (e.g. the model should inherit from a `PretrainedModel` class, the naming should be similar to, e.g. BERT.) You can find more information about this here: https://huggingface.co/docs/transformers/add_new_model#stepbystep-recipe-to-add-a-model-to-transformers\r\n- 2) Then the next step is to improve the weight names and to write a conversion script that automatically converts the old checkpoint names to the new PyTorch ones (Point 6. here: https://huggingface.co/docs/transformers/add_new_model#stepbystep-recipe-to-add-a-model-to-transformers) \r\n- 3) Once that works it's again a bit more refactoring and then we're good to go #18511 \r\n- 4) If you want I can also allocate you with a GPU to run some fine-tuning experiments\r\n\r\nOverall, really amazing work & really sorry that I was so unavailable to help you. But now that the most difficult part is solved I think the rest is easy :-) \r\nGenerally, I'd suggest to just adapt this PR to include the newest version of your model + tests and then I'm more than happy to also directly comment in the PR\r\n\r\n",
"Hi @patrickvonplaten, I just upload the new version of Trillsson model, would you mind checking on it?",
"> Hi @patrickvonplaten, I just upload the new version of Trillsson model, would you mind checking on it?\r\n\r\nVery cool! Does it work just like the original model? In this case should we try fine-tuning it on emotion recognition? ",
"I have checked the embedding outputs, they are the same as the TF model in the original repo. I think we could try fine-tuning with emotion recognition by adding one dense classifier layer on top of embedding outputs. Do you have any suggestions which dataset we could use for fine-tuning? \r\nCan I leverage the Trainer for the fine-tuning? I don't know whether the current version code of Trillsson_efficient is good enough to use with Trainer.",
"Hey @vumichien,\r\n\r\nThat's a very nice idea! Could we try to set up this fine-tuning example for key word spotting with trillson?\r\nhttps://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification#single-gpu\r\n\r\nI think you only need to add one head layer for it. \r\nLet me know if you need access to a GPU (think google colab pro should be enough though) or help with how to start training! Think you can just follow the example :-) ",
"I have started fine-tuning the classifier Trillsson model, it works on the CPU. However, when I tried to fine-tune on GPU, it threw the error `RuntimeError: CUDA error: unspecified launch failure`. I have checked the whole process, I think it's because of the data loader but it's hard to debug. Do you have any experience in dealing with this error? This is the script I used to fine-tune https://github.com/vumichien/transformers/blob/add_trillson_effecient/src/transformers/models/trillsson_efficient/run_audio_classification.py",
"Hmm usually datasets should not be related in any way to the GPU. Are you fine-tuning on Google Colab? We could try to debug together on Google colab :-) ",
"@patrickvonplaten \r\nI made the notebook in Colab for fine-tuning Trillsson models here https://colab.research.google.com/drive/1q51cxmpa_MtCd6LG6Jj4rLrvFIyj9uDq#scrollTo=8qcyuhPkShQd. It would be great if you have time to check it.",
"Hey @vumichien,\r\n\r\nThe notebook looks very nice! Does it work? Can you train it in a google colab? Otherwise happy to give you access to a GPU for a week if you'd like :-) ",
"Sorry, It still doesn't work on GPU. When I try to fine-tune the model on GPU it throws the error `RuntimeError: CUDA error: unspecified launch failure`. It works on CPU and shows the warning `E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_NOT_INITIALIZED: initialization error` but I didn't figure out where it came from yet.",
"This is the result after training 5 epochs on CPU, the eval loss reduces but eval accuracy doesn't change as much. I think we should do more experiments with different hyperparameters like learning_rate and batch_size https://huggingface.co/vumichien/trillsson3-ft-keyword-spotting",
"Hi @patrickvonplaten \r\nI think I can solve the problem of training with GPU. The problem is with the dataloader multiprocess (the error is CUDA cannot be initialized), CUDA is initialized beforehand so the dataloader cannot reinitialize CUDA in multiprocess (I think because the TensorFlow code part did it) so I have to set the dataloader_num_workers to 0 to solve this problem. \r\nThe result is this (https://huggingface.co/vumichien/trillsson3-ft-keyword-spotting-6/tensorboard). The eval accuracy is low (0.62) and unchanged after training with 20 epochs. I think training with Colab is good enough and I will try with other learning rates and max_length_seconds.\r\nDo you have any suggestions? ",
"Very cool that training now works! Hmm, not 100% sure what could help the training here - the training curves looks very smooth. Gently nudging/pinging the author here (cc @joel-shor) - any ideas what could help the training? Do the training curves look reasonable?",
"I have found and fixed the bug when transposing the shape of the input audio array after preprocessing. Now the results look very good, we could achieve around 91% accuracy after 5 epochs. I have done several training experiments, and the results are here https://wandb.ai/chienvu/trillsson-finetune-emotion. ",
"Hey @vumichien,\r\n\r\nGreat job - that's very cool! I think we can now just make the tests green and merge the model then :-) \r\nAlso cc @ArthurZucker @sanchit-gandhi could you maybe help @vumichien with the final steps to merge this PR? \r\n\r\nIf possible it would be great if we could add an example code to https://github.com/huggingface/transformers/blob/main/examples/pytorch/audio-classification/README.md that showcases how to train the trillson audio classification model for emotion recognition! >90% is def better than any model we have so far so we should feature this prominently :-) ",
"Hey @sanchit-gandhi @ArthurZucker. Thank you very much for helping me.\r\nHowever, when I try to use the feature extractor code from Whisper, the results are not the same.\r\n\r\nThe first one is mel_filter from [Whisper](https://github.com/huggingface/transformers/blob/0d4c45c585fadd0d9339061feda0f22fce04c57d/src/transformers/models/whisper/feature_extraction_whisper.py#L86)\r\nand [Trillsson](https://github.com/vumichien/transformers/blob/caa21806add107b6e8acab737a8d35ce74a6e0cf/src/transformers/models/trillsson_efficient/feature_extraction_trillsson_efficient.py#L166-L172) are not the same. I have tried with sr=16000, n_mels=80, max_mel=7500.0, and min_mel=125.0. I think I should implement the original code from Tensorflow in Numpy (pls correct me if my assumption is wrong)\r\n\r\nhttps://github.com/tensorflow/tensorflow/blob/359c3cdfc5fabac82b3c70b3b6de2b0a8c16874f/tensorflow/python/ops/signal/mel_ops.py#L89-L215\r\n\r\nThe second one is the STFT features from [Whisper](https://github.com/huggingface/transformers/blob/0d4c45c585fadd0d9339061feda0f22fce04c57d/src/transformers/models/whisper/feature_extraction_whisper.py#L207) and from [Trillsson](https://github.com/vumichien/transformers/blob/caa21806add107b6e8acab737a8d35ce74a6e0cf/src/transformers/models/trillsson_efficient/feature_extraction_trillsson_efficient.py#L157) are also not the same. I have tried with n_fft = 512, sampling_rate = 16000, n_mels=80, and window_length = 400, hop_length =160, the shape output from Whisper is (400, 257) compared to TF is (398, 257). \r\n\r\n",
"Hey @vumichien! If the implementations of the Mel-filter and STFT are _inherently_ different between the NumPy Whisper code and the Tensorflow Trillsson code, then you are entirely correct in that we should re-implement it. Have you managed to determine _where_ in the code the two implementations deviate? Perhaps if you pin it down we could assert whether it's a configuration issue, or inherent implementation difference. If it's the former, we won't have to re-implement. If it's the latter, we will!\r\n\r\nMaybe one other thing you could try is using the PyTorch implementation of the log-Mel feature extractor from Speech2Text: https://github.com/huggingface/transformers/blob/c3a93d8d821bc1df3601ba858e7385eada8db3a5/src/transformers/models/speech_to_text/feature_extraction_speech_to_text.py#L80\r\nIf we've got a PyTorch model, having a PyTorch dependency in the feature extractor is ok IMO. However, we'll have to re-implement it in NumPy though should someone wish to add the TF/JAX port of Trillsson later down the line.",
"BTW since I took care of the Mel and STFT with whisper, I will try having a look 🤗",
"Maybe @ArthurZucker you could share the reasoning behind why you opted for a **NumPy** feature extractor for Whisper rather than a **PyTorch** one as we have in Speech2Text? We can then decide whether we need a NumPy / PyTorch one for Trillsson. IMO a NumPy one is more \"future proof\" should we wish to add TF / JAX implementations of Trillsson, and consequently this would be my preference.",
"Hey @ArthurZucker! If you get the chance it would be awesome to hear your thoughts with regards to a PyTorch feature extractor (see above 👆)",
"Hey! Sorry for the late reply! \nIt's mostly to remove the dependency on `PyTorch` as we were sure that the `tf` model would be implemented. \n\nI guess that it's not that hard of a constraint. I'll have a look ! Sorry for the long wait 🙇",
"Great! So I guess it boils down to the question of whether we anticipate this model will be added in TF or Flax? I probably stand-by my previously stated preference from NumPy in this regard - given the performance of this model for audio classification, I don't see any reason why this model won't be added in either of the other frameworks in due course!",
"Hey @vumichien,\r\n\r\nI'm going back on my previous comment! My advice would be to try using the PyTorch feature extractor from speech-to-text:\r\n\r\nhttps://github.com/huggingface/transformers/blob/cbbeca3d1733aa7c9b443af5ff231a5affcd8a1e/src/transformers/models/speech_to_text/feature_extraction_speech_to_text.py#L33\r\n\r\nIf this PyTorch feature extractor matches the current TF one, we can just use a PyTorch feature extractor for purpose of this PR. If in due course we add the TF or Flax model, we can switch it out for a NumPy one. This should be much faster for you than implementing in NumPy from scratch.\r\n\r\nIf, however, the PyTorch feature extractor does not match the current TF one, we can implement the feature extractor in NumPy from the get-go with the help of @ArthurZucker!\r\n\r\nYou can check quickly if the implementations match just by importing the Speech2TextFeatureExtractor:\r\n```python\r\nfrom transformers import Speech2TextFeatureExtractor\r\n\r\nfeature_extractor = Speech2TextFeatureExtractor.from_pretrained(\"facebook/s2t-small-librispeech-asr\")\r\n\r\ninputs = ...\r\n\r\ninput_features = feature_extractor(inputs).input_features[0]\r\n```\r\nthen check `input_features` against your current TF implementation",
"Hi @sanchit-gandhi, thank you for your advice.\r\nI will test PyTorch feature extractor from speech-to-text",
"Hey @sanchit-gandhi @ArthurZucker \r\nI have tested the Speech2TextFeatureExtractor and the output is not the same as the Trillsson TF feature extractor (it's much closer to WhisperFeatureExtractor but we need to modify it a bit)\r\nIn order to implement the Trillsson TF feature extractor in NumPy, we need to implement two functions: [tf.signal.linear_to_mel_weight_matrix](https://www.tensorflow.org/api_docs/python/tf/signal/linear_to_mel_weight_matrix) (This function follows the [Hidden Markov Model Toolkit (HTK)](http://htk.eng.cam.ac.uk/) convention) and [tf.signal.stft](https://www.tensorflow.org/api_docs/python/tf/signal/stft). It's not a hard task but it takes time to check everything carefully.",
"Okay! Tell me if I can be of help or if you are stuck! "
] | 1,653
| 1,676
| null |
CONTRIBUTOR
| null |
# What does this PR do?
Add Google's Trillsson Audio Classification Model #17339
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17387/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17387/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17387",
"html_url": "https://github.com/huggingface/transformers/pull/17387",
"diff_url": "https://github.com/huggingface/transformers/pull/17387.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17387.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17386
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17386/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17386/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17386/events
|
https://github.com/huggingface/transformers/pull/17386
| 1,245,810,037
|
PR_kwDOCUB6oc44U63g
| 17,386
|
Add FP16 Support for SageMaker Model Parallel
|
{
"login": "haohanchen-aws",
"id": 54413235,
"node_id": "MDQ6VXNlcjU0NDEzMjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/54413235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haohanchen-aws",
"html_url": "https://github.com/haohanchen-aws",
"followers_url": "https://api.github.com/users/haohanchen-aws/followers",
"following_url": "https://api.github.com/users/haohanchen-aws/following{/other_user}",
"gists_url": "https://api.github.com/users/haohanchen-aws/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haohanchen-aws/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haohanchen-aws/subscriptions",
"organizations_url": "https://api.github.com/users/haohanchen-aws/orgs",
"repos_url": "https://api.github.com/users/haohanchen-aws/repos",
"events_url": "https://api.github.com/users/haohanchen-aws/events{/privacy}",
"received_events_url": "https://api.github.com/users/haohanchen-aws/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@philschmid Please review or suggest review, thanks!",
"Can you just run `make style` on your branch to fix the code quality issue? Thanks!"
] | 1,653
| 1,655
| 1,655
|
CONTRIBUTOR
| null |
(This PR is still pending some changes/fixes)
This PR adds support for SageMaker Model Parallel with FP16.
- To enable fp16 with SMMP, the user needs to add `fp16: True` to `SM_HP_MP_PARAMETERS`. When there is a mismatch between `SM_HP_MP_PARAMETERS` and the trainer args, a warning is logged and `SM_HP_MP_PARAMETERS` is used as the source of truth.
- Removed AMP-related logic for SMMP, as SMMP manages its own half precision.
- Added `clip_master_grads` for grad clipping
- Some minor fixes
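The mismatch-resolution rule in the first bullet can be sketched in plain Python (a hypothetical illustration, not the actual `Trainer` code; the helper name `resolve_fp16_setting` is made up):

```python
import json
import os
import warnings

def resolve_fp16_setting(trainer_fp16: bool) -> bool:
    """Hypothetical sketch of the rule above: on a mismatch, the value in
    SM_HP_MP_PARAMETERS wins over the trainer argument."""
    # SageMaker passes model-parallel options as a JSON string in this env var.
    mp_params = json.loads(os.environ.get("SM_HP_MP_PARAMETERS", "{}"))
    smp_fp16 = bool(mp_params.get("fp16", False))
    if smp_fp16 != trainer_fp16:
        warnings.warn(
            f"fp16 mismatch: SM_HP_MP_PARAMETERS={smp_fp16}, trainer args={trainer_fp16}; "
            "using SM_HP_MP_PARAMETERS as the source of truth."
        )
    return smp_fp16

os.environ["SM_HP_MP_PARAMETERS"] = json.dumps({"fp16": True})
print(resolve_fp16_setting(trainer_fp16=False))  # True, with a warning
```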
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17386/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17386",
"html_url": "https://github.com/huggingface/transformers/pull/17386",
"diff_url": "https://github.com/huggingface/transformers/pull/17386.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17386.patch",
"merged_at": 1655142326000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17385
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17385/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17385/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17385/events
|
https://github.com/huggingface/transformers/issues/17385
| 1,245,697,352
|
I_kwDOCUB6oc5KP9VI
| 17,385
|
Same sequence gets different token probabilities depending on whether it's generated from sampling or beam search
|
{
"login": "hacobe",
"id": 91226467,
"node_id": "MDQ6VXNlcjkxMjI2NDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/91226467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hacobe",
"html_url": "https://github.com/hacobe",
"followers_url": "https://api.github.com/users/hacobe/followers",
"following_url": "https://api.github.com/users/hacobe/following{/other_user}",
"gists_url": "https://api.github.com/users/hacobe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hacobe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hacobe/subscriptions",
"organizations_url": "https://api.github.com/users/hacobe/orgs",
"repos_url": "https://api.github.com/users/hacobe/repos",
"events_url": "https://api.github.com/users/hacobe/events{/privacy}",
"received_events_url": "https://api.github.com/users/hacobe/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @hacobe,\r\n\r\nI'm not sure this is a bug. Note that beam_search is very much different from sample, see: https://huggingface.co/blog/how-to-generate\r\n\r\nWhy should they give the same probabilities?",
"Hi @patrickvonplaten,\r\n\r\nThe probability for a token is the output of the softmax layer for that token conditioning on the sequence generated so far. The way we generate a sequence changes if we use beam search instead of sampling, but the definition of the probability does not change. The probability is a function of the model you're using and the sequence generated so far. If you're using the same model and you've happened to generate the same sequence (and hyperparameters like length penalty are the same), then the probabilities should be the same.\r\n\r\nTake the example under the \"Beam search\" heading in that blog post. The top beam search sequence is (\"The\", \"dog\", \"has\"). It starts with the prompt \"The\". The token \"dog\" has a probability of 0.4 conditional on the sequence (\"The\"). The token \"has\" has a probability of 0.9 conditional on the sequence (\"The\", \"dog\").\r\n\r\nNow suppose we sample from the same model starting with the prompt \"The\". What is the probability of the selecting the token \"dog\" at this step? It still has a probability of 0.4, because we're using the same model and conditioning on the same sequence (\"The\"). Suppose we happen to sample \"dog\". Then the sequence is (\"The\", \"dog\"). What is the probability of selecting the token \"has\" at this step? Again, the token \"has\" will have a probability of 0.9, because we're using the same model and conditioning on the same sequence (\"The\", \"dog\").\r\n\r\nBy the way, thanks for all your work on the token probabilities! I think it's an important feature (both OpenAI and fairseq return token probabilities in their APIs). It's a key input for uncertainty estimation and error detection.",
"Hey @hacobe,\r\n\r\nNote that for beam search we sample from `current_beam_scores + log_prob_of_token` whereas for sampling we just sample from prob_of_token.\r\n\r\nFor beam search see here: \r\nhttps://github.com/huggingface/transformers/blob/5c17918fe4cda80dae5b7ec8f0b2d23a813c4a05/src/transformers/generation_utils.py#L2225\r\n\r\nFor sampling see here: \r\nhttps://github.com/huggingface/transformers/blob/5c17918fe4cda80dae5b7ec8f0b2d23a813c4a05/src/transformers/generation_utils.py#L1974",
"Hi @patrickvonplaten,\r\n\r\nBy \"for beam search we sample from `current_beam_scores + log_prob_of_token`\", do you mean that beam search selects sequences based on `current_beam_scores + log_prob_of_token`? Beam search is deterministic. It does not involve sampling. `current_beam_scores + log_prob_of_token` is the log sequence probability used to select the top k sequences at each step in beam search. I'm comparing the transition beam scores given by `compute_transition_beam_scores`, which I interpret as the log token (not sequence) probabilities, to the log token (not sequence) probabilities from sampling. If you think I'm still missing something, I can dig into the code when I get some time. Thanks for your help!",
"Hi @patrickvonplaten,\r\nI didn't realize top_k is set to 50 by default. When I change top_k = 0, then I get the same probabilities as expected."
] | 1,653
| 1,654
| 1,654
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.11.0+cu113 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?:No
```
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I first generate a sequence using beam search with T5. I then generate a sequence using sampling set to a particular seed that happens to get the same sequence as the beam search decoding. I then calculate the token probabilities for both sequences and find that they differ despite being the same sequences.
This colab reproduces the behavior:
https://colab.research.google.com/drive/1vLmUfqYdKVo1z2Ztv2V2sQ29nDCYNbFK?usp=sharing
### Expected behavior
```shell
A sequence generated from a model using beam search and the same sequence generated from the same model from sampling should have the same token probabilities provided that they have the same hyperparameters (e.g., the length penalty should be the same for both).
```
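As the discussion comments above conclude, the discrepancy comes from sampling's default `top_k=50`: top-k filtering renormalizes the kept probabilities before a token is drawn. A toy pure-Python sketch (not using `transformers`; the logits are made up) of why the reported per-token probability can differ even for the same model and prefix:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def top_k_renormalized(probs, k):
    """Keep the k most likely tokens and renormalize -- effectively what
    top_k sampling does before drawing a token."""
    kept = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

# Hypothetical next-token logits for a fixed model + prefix:
logits = [2.0, 1.0, 0.5, -1.0]
probs = softmax(logits)

# Beam search scores each token with the full (log-)softmax ...
full = {i: p for i, p in enumerate(probs)}
# ... while sampling with top_k=2 renormalizes over the top 2 tokens only,
# so the reported probability of the same chosen token differs:
filtered = top_k_renormalized(probs, k=2)
print(full[0] < filtered[0])  # True: renormalizing raises the kept tokens' mass
```

With `top_k=0` (no filtering), both decoding strategies score tokens from the same full softmax, which matches the resolution reported in the final comment.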
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17385/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17384
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17384/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17384/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17384/events
|
https://github.com/huggingface/transformers/pull/17384
| 1,245,629,935
|
PR_kwDOCUB6oc44UUDT
| 17,384
|
Print more library versions in CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@stas00 OK for me to use a script, on it!",
"@stas00 \r\n\r\nJust pushed a quick version. It is now named `utils/print_env.py`, which tries to print information if a library could be imported.\r\nProbably need a better job name than `GPU visibility`, but it shows something like\r\n\r\n<img width=\"512\" alt=\"Screenshot 2022-05-24 215339\" src=\"https://user-images.githubusercontent.com/2521628/170120952-2be2f4d4-9061-43a4-9d51-8ebdb92650b2.png\">\r\n\r\n",
"Looks great, @ydshieh \r\n\r\nPerhaps let's just turn warnings off to keep the SNR high?"
] | 1,653
| 1,654
| 1,654
|
COLLABORATOR
| null |
# What does this PR do?
- Print more library versions in CI jobs
- `transformers`, `PyTorch`, `TensorFlow`, `DeepSpeed`, etc.
- easier to access without the need to open several tabs
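The version-collection step can be sketched as follows (a hypothetical stand-in for the `utils/print_env.py` script mentioned in the comments; the helper name is made up):

```python
import importlib
import sys

def collect_versions(libraries):
    """Try to import each library and record its version; anything that is
    not installed is reported instead of raising."""
    versions = {"python": sys.version.split()[0]}
    for name in libraries:
        try:
            module = importlib.import_module(name)
            versions[name] = getattr(module, "__version__", "unknown")
        except ImportError:
            versions[name] = "not installed"
    return versions

for lib, version in collect_versions(["json", "torch", "tensorflow"]).items():
    print(f"{lib}: {version}")
```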
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17384/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17384",
"html_url": "https://github.com/huggingface/transformers/pull/17384",
"diff_url": "https://github.com/huggingface/transformers/pull/17384.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17384.patch",
"merged_at": 1654158256000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17383
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17383/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17383/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17383/events
|
https://github.com/huggingface/transformers/pull/17383
| 1,245,629,786
|
PR_kwDOCUB6oc44UUBR
| 17,383
|
Add FP16 Support for SageMaker Model Parallel
|
{
"login": "haohanchen-aws",
"id": 54413235,
"node_id": "MDQ6VXNlcjU0NDEzMjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/54413235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haohanchen-aws",
"html_url": "https://github.com/haohanchen-aws",
"followers_url": "https://api.github.com/users/haohanchen-aws/followers",
"following_url": "https://api.github.com/users/haohanchen-aws/following{/other_user}",
"gists_url": "https://api.github.com/users/haohanchen-aws/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haohanchen-aws/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haohanchen-aws/subscriptions",
"organizations_url": "https://api.github.com/users/haohanchen-aws/orgs",
"repos_url": "https://api.github.com/users/haohanchen-aws/repos",
"events_url": "https://api.github.com/users/haohanchen-aws/events{/privacy}",
"received_events_url": "https://api.github.com/users/haohanchen-aws/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
# What does this PR do?
(This PR is still pending some changes/fixes)
This PR adds support for SageMaker Model Parallel with FP16.
- To enable fp16 with SMMP, the user needs to add `fp16: True` to `SM_HP_MP_PARAMETERS`. When there is a mismatch between `SM_HP_MP_PARAMETERS` and the trainer args, a warning is logged and `SM_HP_MP_PARAMETERS` is used as the source of truth.
- Only call `unscale_` for `pp_rank` 0 at the beginning, as `scaler._scale` would initially be `None` for `pp_rank` > 0.
- Added `clip_master_grads` for grad clipping
- Some minor fixes
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17383/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17383",
"html_url": "https://github.com/huggingface/transformers/pull/17383",
"diff_url": "https://github.com/huggingface/transformers/pull/17383.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17383.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17382
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17382/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17382/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17382/events
|
https://github.com/huggingface/transformers/pull/17382
| 1,245,602,117
|
PR_kwDOCUB6oc44UOKO
| 17,382
|
Add support for `device_map="auto"` to OPT
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17382). All of your documentation changes will be reflected on that endpoint."
] | 1,653
| 1,653
| 1,653
|
COLLABORATOR
| null |
# What does this PR do?
I forgot to have OPT in the initial list of models supporting `device_map="auto"` (alongside GPT-J, GPT-2 and T5). This PR takes care of it.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17382/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17382",
"html_url": "https://github.com/huggingface/transformers/pull/17382",
"diff_url": "https://github.com/huggingface/transformers/pull/17382.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17382.patch",
"merged_at": 1653333952000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17381
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17381/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17381/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17381/events
|
https://github.com/huggingface/transformers/pull/17381
| 1,245,203,917
|
PR_kwDOCUB6oc44S4VF
| 17,381
|
Fix Comet ML integration
|
{
"login": "mxschmdt",
"id": 4904985,
"node_id": "MDQ6VXNlcjQ5MDQ5ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4904985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxschmdt",
"html_url": "https://github.com/mxschmdt",
"followers_url": "https://api.github.com/users/mxschmdt/followers",
"following_url": "https://api.github.com/users/mxschmdt/following{/other_user}",
"gists_url": "https://api.github.com/users/mxschmdt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxschmdt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxschmdt/subscriptions",
"organizations_url": "https://api.github.com/users/mxschmdt/orgs",
"repos_url": "https://api.github.com/users/mxschmdt/repos",
"events_url": "https://api.github.com/users/mxschmdt/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxschmdt/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes an issue where the callback function `on_train_end` crashed if Comet ML integration was used but `experiment` was `None` after training (e.g. because the environment variable `COMET_MODE` was set to `DISABLE`).
Python snippet for testing (crashes before fix is applied):
```python
import os
from datasets import Dataset
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, Trainer, TrainingArguments
# disable comet_ml logging
os.environ['COMET_MODE'] = 'DISABLE'
# create dummy dataset for training
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
inputs = tokenizer(["Who likes Pizza?"], ["Dave likes pizza"])
inputs['start_positions'] = [[6]]
inputs['end_positions'] = [[6]]
dataset = Dataset.from_dict(inputs)
# create trainer
trainer = Trainer(model=AutoModelForQuestionAnswering.from_pretrained('bert-base-uncased'), args=TrainingArguments(output_dir='tmp', max_steps=1), train_dataset=dataset)
trainer.train()
```
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17381/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17381",
"html_url": "https://github.com/huggingface/transformers/pull/17381",
"diff_url": "https://github.com/huggingface/transformers/pull/17381.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17381.patch",
"merged_at": 1653316990000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17380
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17380/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17380/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17380/events
|
https://github.com/huggingface/transformers/pull/17380
| 1,244,873,655
|
PR_kwDOCUB6oc44Rws9
| 17,380
|
Clean up CLIP tests
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR cleans up some tests of CLIP. See #17024 for more info.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17380/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17380",
"html_url": "https://github.com/huggingface/transformers/pull/17380",
"diff_url": "https://github.com/huggingface/transformers/pull/17380.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17380.patch",
"merged_at": 1653396686000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17379
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17379/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17379/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17379/events
|
https://github.com/huggingface/transformers/pull/17379
| 1,244,673,276
|
PR_kwDOCUB6oc44RF7Z
| 17,379
|
Add missing comment quotes
|
{
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Sorry for letting this fall through the cracks, just merged it!"
] | 1,653
| 1,656
| 1,656
|
CONTRIBUTOR
| null |
# What does this PR do?
This minor fix adds missing quote marks around some explanatory comments in the "new model" tokenizer cookiecutter template.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
`blame` suggests @LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17379/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17379",
"html_url": "https://github.com/huggingface/transformers/pull/17379",
"diff_url": "https://github.com/huggingface/transformers/pull/17379.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17379.patch",
"merged_at": 1656497796000
}
|
https://api.github.com/repos/huggingface/transformers/issues/17378
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17378/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17378/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17378/events
|
https://github.com/huggingface/transformers/pull/17378
| 1,243,975,181
|
PR_kwDOCUB6oc44O6hc
| 17,378
|
TF: Correct XLA generation with GPT-2 and T5
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"For future reference: the most recent commit, where T5 was adapted to discard padded past values (similarly to the FLAX implementation), works correctly numerically. However, when compiling to XLA, we get the following `NotImplementedError`:\r\n\r\n<img width=\"1511\" alt=\"Screenshot 2022-05-21 at 14 17 40\" src=\"https://user-images.githubusercontent.com/12240844/169653445-25892c4b-3380-475b-ae24-861e840a9f05.png\">\r\n\r\n\r\nBecause of that, I'll try a new strategy: the model receives as input the sliced past, without padding.",
"_The documentation is not available anymore as the PR was closed or merged._",
"The previous commit had a different approach (feed to the model the unpadded past), but resulted in the exact same exception. Both are related to a dynamic-size slicing of the past.",
"superceded by https://github.com/huggingface/transformers/pull/17426, which grabbed the good changes from this PR. T5 needs further 🔍 "
] | 1,653
| 1,657
| 1,653
|
MEMBER
| null |
# What does this PR do?
(WIP)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17378/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17378",
"html_url": "https://github.com/huggingface/transformers/pull/17378",
"diff_url": "https://github.com/huggingface/transformers/pull/17378.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17378.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17377
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17377/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17377/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17377/events
|
https://github.com/huggingface/transformers/pull/17377
| 1,243,901,441
|
PR_kwDOCUB6oc44OsqT
| 17,377
|
Fix the wrong sample-rate of random tokens
|
{
"login": "t-zhong",
"id": 68187072,
"node_id": "MDQ6VXNlcjY4MTg3MDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/68187072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/t-zhong",
"html_url": "https://github.com/t-zhong",
"followers_url": "https://api.github.com/users/t-zhong/followers",
"following_url": "https://api.github.com/users/t-zhong/following{/other_user}",
"gists_url": "https://api.github.com/users/t-zhong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/t-zhong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/t-zhong/subscriptions",
"organizations_url": "https://api.github.com/users/t-zhong/orgs",
"repos_url": "https://api.github.com/users/t-zhong/repos",
"events_url": "https://api.github.com/users/t-zhong/events{/privacy}",
"received_events_url": "https://api.github.com/users/t-zhong/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17377). All of your documentation changes will be reflected on that endpoint.",
"Hello! Why is it wrong?",
"> Hello! Why is it wrong?\r\n\r\nSorry for late! we should replace 10% masked input tokens with random word, but the code means replacing 10% from the remaining tokens not replaced with [MASK] token. So, we only replace 0.2 * 0.1 tokens, it should be 0.2 * 0.5 ?!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,659
| 1,659
|
NONE
| null |
# What does this PR do?
Fix the wrong sample-rate of random tokens from `0.1` to `0.5` in the `DataCollatorForLanguageModeling` and `DataCollatorForWholeWordMask`.
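For reference, the usual 80/10/10 scheme applies the random-token rate conditionally: after 80% of the selected tokens become `[MASK]`, a conditional rate of 0.5 on the remainder yields 10% of the selected tokens overall. A small sketch of that arithmetic (illustrative rates, not a copy of the collator source):

```python
# Conditional-rate arithmetic behind the 80/10/10 masking scheme.
# Of the tokens selected for MLM:
p_replace_with_mask = 0.8      # 80% -> [MASK]
p_random_given_rest = 0.5      # conditional rate applied to the other 20%

p_random_overall = (1 - p_replace_with_mask) * p_random_given_rest
p_keep_overall = (1 - p_replace_with_mask) * (1 - p_random_given_rest)
# ~0.10 replaced with a random token, ~0.10 kept unchanged
```

So a conditional rate of 0.5 on the remaining 20%, not 0.1, is what produces the intended 10% random replacements overall.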
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17377/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17377",
"html_url": "https://github.com/huggingface/transformers/pull/17377",
"diff_url": "https://github.com/huggingface/transformers/pull/17377.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17377.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17376
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17376/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17376/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17376/events
|
https://github.com/huggingface/transformers/pull/17376
| 1,243,887,666
|
PR_kwDOCUB6oc44OqI-
| 17,376
|
Fix the wrong sample-rate of random tokens
|
{
"login": "t-zhong",
"id": 68187072,
"node_id": "MDQ6VXNlcjY4MTg3MDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/68187072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/t-zhong",
"html_url": "https://github.com/t-zhong",
"followers_url": "https://api.github.com/users/t-zhong/followers",
"following_url": "https://api.github.com/users/t-zhong/following{/other_user}",
"gists_url": "https://api.github.com/users/t-zhong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/t-zhong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/t-zhong/subscriptions",
"organizations_url": "https://api.github.com/users/t-zhong/orgs",
"repos_url": "https://api.github.com/users/t-zhong/repos",
"events_url": "https://api.github.com/users/t-zhong/events{/privacy}",
"received_events_url": "https://api.github.com/users/t-zhong/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653
| 1,653
| 1,653
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17376/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17376",
"html_url": "https://github.com/huggingface/transformers/pull/17376",
"diff_url": "https://github.com/huggingface/transformers/pull/17376.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17376.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17375
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17375/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17375/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17375/events
|
https://github.com/huggingface/transformers/issues/17375
| 1,243,813,461
|
I_kwDOCUB6oc5KIxZV
| 17,375
|
'lm_head.weight' is improperly not initialized when loading BART weights into BartForCausalLM
|
{
"login": "nbravulapalli",
"id": 87538360,
"node_id": "MDQ6VXNlcjg3NTM4MzYw",
"avatar_url": "https://avatars.githubusercontent.com/u/87538360?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nbravulapalli",
"html_url": "https://github.com/nbravulapalli",
"followers_url": "https://api.github.com/users/nbravulapalli/followers",
"following_url": "https://api.github.com/users/nbravulapalli/following{/other_user}",
"gists_url": "https://api.github.com/users/nbravulapalli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nbravulapalli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbravulapalli/subscriptions",
"organizations_url": "https://api.github.com/users/nbravulapalli/orgs",
"repos_url": "https://api.github.com/users/nbravulapalli/repos",
"events_url": "https://api.github.com/users/nbravulapalli/events{/privacy}",
"received_events_url": "https://api.github.com/users/nbravulapalli/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @nbravulapalli,\r\n\r\nThanks for the report. The model is actually correctly initialized, the reason you see this error message is simply because these lines:\r\nhttps://github.com/huggingface/transformers/blob/56f50590d5a9ac881db9ee1753f4642cf3d33d28/src/transformers/models/bart/modeling_bart.py#L1275\r\nare preset for `BartForConditionalGeneration` but not for `BartForCausalLM`\r\nhttps://github.com/huggingface/transformers/blob/56f50590d5a9ac881db9ee1753f4642cf3d33d28/src/transformers/models/bart/modeling_bart.py#L1700\r\n\r\nThe model should however be correctly initialized. \r\n\r\nDo you mind opening a PR to fix it? :-)\r\n",
"Thank you for your reply @patrickvonplaten! I will take a shot at the PR, but I have two clarifying questions:\r\n\r\n1) If I understand correctly, the LM head is properly initialized for both BartForConditionalGeneration and BartForCausalLM, but with BartForConditionalGeneration the error message is suppressed?\r\n\r\nThis is a separate question:\r\n2) When I evaluate `bartMod.config.add_cross_attention` (a BartForCausalLM object) I get `False`. However, the model structure for `bartMod` includes\r\n`(encoder_attn): BartAttention(`\r\n`(k_proj): Linear(in_features=768, out_features=768, bias=True)`\r\n`(v_proj): Linear(in_features=768, out_features=768, bias=True)`\r\n`(q_proj): Linear(in_features=768, out_features=768, bias=True)`\r\n`(out_proj): Linear(in_features=768, out_features=768, bias=True))`\r\nwhich I assumed was the cross-attention layer designed for the decoder query matrix and the encoder key, value matrices.\r\n\r\na) Is this cross-attention layer not actually present in the CausalLM model, and this layer is improperly displayed? If so, does this mean that the CausalLM model doesn't actually work out of the box (since the built-in cross-attention layers are removed), and requires finetuning to be used for CausalLM?\r\n\r\nb) If this cross-attention layer is actually present in the CausalLM model (and this layer is properly displayed), then how is this cross-attention layer still working even without being able to receive the encoder key, value matrices at inference time?",
"1. Yes, note it's a warning message that is suppressed not a error message\r\n\r\na) Yes BartForCausalLM won't work out of the box exactly because the cross attention layers are removed\r\nb) It's not present in BartForCausalLM :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,656
| 1,656
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.11.0+cu113 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@patrickvonplaten
@patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Input: `bart = BartForCausalLM.from_pretrained('facebook/bart-base')`
Output: `Some weights of BartForCausalLM were not initialized from the model checkpoint at facebook/bart-base and are newly initialized: ['lm_head.weight']`
Input: `bart2 = BartForConditionalGeneration.from_pretrained('facebook/bart-base')`
Result: The LM Head for the encoder decoder model is properly initialized from the Bart-Base checkpoint
### Expected behavior
For both model configurations using `facebook/bart-base`, the LM Head layer has the same dimensions (`(lm_head): Linear(in_features=768, out_features=50265, bias=False)`). However, for BartForCausalLM, the LM head is randomly initialized, while for BartForConditionalGeneration, the LM head is properly initialized from the `facebook/bart-base` checkpoint.
Isn't this incorrect?
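For context on the warning: it can arise purely from checkpoint key bookkeeping when `lm_head.weight` is tied to the shared embeddings, since the checkpoint then only stores the embedding tensor. A hypothetical sketch of such missing-key logic (function and key names are illustrative, not the actual `from_pretrained` code):

```python
def missing_keys(model_keys, checkpoint_keys, tied_to=None):
    """Report model keys absent from a checkpoint, optionally resolving
    ties (key -> key it shares storage with) before the lookup."""
    tied_to = tied_to or {}
    return [k for k in model_keys
            if tied_to.get(k, k) not in checkpoint_keys]

ckpt = {"model.decoder.embed_tokens.weight"}
keys = ["model.decoder.embed_tokens.weight", "lm_head.weight"]

# Without knowing the tie, lm_head.weight looks missing -> warning printed.
naive = missing_keys(keys, ckpt)
# With the tie declared, nothing is reported as missing.
tied = missing_keys(keys, ckpt,
                    {"lm_head.weight": "model.decoder.embed_tokens.weight"})
```

This is consistent with the maintainer's answer: the weights end up correct either way; only the reporting differs between the two classes.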
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17375/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17374
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17374/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17374/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17374/events
|
https://github.com/huggingface/transformers/issues/17374
| 1,243,775,822
|
I_kwDOCUB6oc5KIoNO
| 17,374
|
fill-mask target for full words not enabled?
|
{
"login": "i-am-neo",
"id": 102043285,
"node_id": "U_kgDOBhUOlQ",
"avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/i-am-neo",
"html_url": "https://github.com/i-am-neo",
"followers_url": "https://api.github.com/users/i-am-neo/followers",
"following_url": "https://api.github.com/users/i-am-neo/following{/other_user}",
"gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions",
"organizations_url": "https://api.github.com/users/i-am-neo/orgs",
"repos_url": "https://api.github.com/users/i-am-neo/repos",
"events_url": "https://api.github.com/users/i-am-neo/events{/privacy}",
"received_events_url": "https://api.github.com/users/i-am-neo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"hi @i-am-neo ,\r\n\r\nFill-mask works at a token level, not words, so you cannot use targets which are multi token. Since `damnation` seems to no exist directly in your vocabulary, it uses the closes `1-token` element it finds `damn`. You cannot unfortunately have `fill-mask` work with varying number of holes/tokens. You could use 2 masks instead of one for instance, but then you will need to logic \"fuse\" those two tokens which might not correspond to a single word.",
"Thanks @Narsil . I had thought so. No plans to allow full words and regex in your roadmap?",
"It's not something that fits the current `pipeline` model (at least in the default settings).\r\n\r\n`pipeline` is aimed to make ML model usable without any ML specific knowledge, BUT never hiding any complexities it induces.\r\n\r\nIn this particular part, `fill-mask` model, do work on a token level, and trying to do `word-level` really requires some custom strategies (how many tokens is your word? Do you want to handle multiple size of tokens ?). How to resolve in case of multi tokens (since multi tokens will give you independent token probabilities, and not grouped probabilities). \r\n\r\nSince it is a non trivial problem, we decide to not do it on behalf of users and give an output that is much closer to what the original model does. If simple strategies can be implemented maybe we can add them as opt-in parameters, but so far nothing is being worked on as far as I know. PRs are more than welcome.\r\n\r\nIf you want more background for instance, this PR might be valuable to read (and the linked PRs too); https://github.com/huggingface/transformers/pull/10222\r\n\r\nI would like to point out `zero-shot-classification` which although not being the same pipeline we have seen being used in a similar fashion, which might suit your needs.\r\n\r\nside note: An easy start solution for regexp is to fetch all tokens in the vocabulary that start with your prefix and use them as targets `targets=[word for word in tokenizer.get_vocab() if word.startswith(\"X\")]` for instance. It's not all possible english words, but at least all possible elements of the vocabulary that will work.",
"I hear you @Narsil, it sure is non-trivial.\r\n\r\nIn my case, I would like a large-enough LM (for example, Roberta-large) to generate word candidates to start with, given some regex as hints/constraints, _without knowing in advance what the best candidates are, except for those hints_. My thinking is that the candidates the LM generates would more or less already fit into the context given to the model. Multiple candidates would be ranked post-fill by their scores.\r\n\r\nRe `zero-shot-classification`, the trouble is without knowing in advance what the correct/best candidates are, it's more difficult to work it in.",
"> In my case, I would like a large-enough LM (for example, Roberta-large) to generate word candidates to start with, given some regex as hints/constraints, without knowing in advance what the best candidates are, except for those hints. \r\n\r\nI think there would be a **lot of value** to be able to do that, but AFAIK there's no simple way to do that with bert-like models. I think the biggest culprit is that models are trained to give independant probabilities, and not joint ones. Solving it might require an entire new training objective.\r\n\r\n`This house is <mask> and <mask>`:\r\n\r\nDisjoint probabilities: (big: 50%, red: 50) (big: 50%, red: 50%)\r\nJoint probabilities: ( (big, red, 50%) , (red, big, 50%) ) . (Btu then (big, big = 0% for instance, which is allowed in disjoint probabilities)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,653
| 1,656
| 1,656
|
NONE
| null |
### System Info
```shell
- `transformers` version: 4.19.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.11.0+cu113 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@Narsil and @LysandreJik (?)
How can one use Roberta for fill-mask to get the **full** word candidate and its "full" score for Roberta-large? Open to workaround solutions.
My example:
`sentence = f"Nitzsch argues against the doctrine of the annihilation of the wicked, regards the teaching of Scripture about eternal {nlp.tokenizer.mask_token} as hypothetical."`
Notebook [here](https://colab.research.google.com/drive/12QrU5SC7kHsM0gekzjLXDJXptAkdSnuq?usp=sharing).
Using pipeline, the output I get is:
`The specified target token ` damnation` does not exist in the model vocabulary. Replacing with `Ġdamn`.`
Thanks.
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
See notebook above.
### Expected behavior
```shell
I expect to see "damnation" with its score.
```
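For what it's worth, if one goes the two-mask route suggested in the discussion, the per-mask distributions could be fused under an independence assumption. A hypothetical sketch (the helper name and toy probabilities are mine, not part of the pipeline API):

```python
import math
from itertools import product

def fuse_mask_predictions(per_mask_topk):
    """Combine independent per-mask token distributions into joint
    candidates, scored by the product of the token probabilities."""
    combos = product(*(d.items() for d in per_mask_topk))
    scored = [
        (" ".join(token for token, _ in combo),
         math.prod(prob for _, prob in combo))
        for combo in combos
    ]
    return sorted(scored, key=lambda pair: -pair[1])

# Toy top-k distributions for two <mask> positions.
ranked = fuse_mask_predictions(
    [{"big": 0.6, "red": 0.4}, {"red": 0.7, "big": 0.3}]
)
# Best joint candidate under independence: "big red".
```

As noted in the thread, the product of independent probabilities is only an approximation of a joint score; the model was never trained to produce joint distributions over multiple masks.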
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17374/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/17373
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17373/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17373/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17373/events
|
https://github.com/huggingface/transformers/pull/17373
| 1,243,630,301
|
PR_kwDOCUB6oc44N0b2
| 17,373
|
[WIP] [deepspeed] from_pretrained deal with ignore_mismatched_sizes
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17373). All of your documentation changes will be reflected on that endpoint.",
"After creating a test I discovered it breaks on tied variables since they get ignored in `model.named_parameters` - so back to the drawing table."
] | 1,653
| 1,660
| null |
CONTRIBUTOR
| null |
An attempt to fix the issue reported https://github.com/huggingface/transformers/issues/17336
Fixes: https://github.com/huggingface/transformers/issues/17336
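As the follow-up comment notes, tied weights appear only once in `model.named_parameters()`, which is what broke the first fix attempt. A minimal pure-Python sketch of that deduplication behavior (the class and names are illustrative stand-ins for the PyTorch machinery):

```python
class Parameter:
    """Stand-in for torch.nn.Parameter; object identity is what matters."""
    def __init__(self, label):
        self.label = label

def named_parameters(pairs):
    """Yield (name, param) pairs, skipping parameters already yielded,
    mirroring how tied weights show up only once in
    torch.nn.Module.named_parameters()."""
    seen = set()
    for name, param in pairs:
        if id(param) in seen:
            continue
        seen.add(id(param))
        yield name, param

shared = Parameter("shared")
pairs = [("model.shared.weight", shared),
         ("lm_head.weight", shared),          # tied to model.shared.weight
         ("final_logits_bias", Parameter("bias"))]
visible = [name for name, _ in named_parameters(pairs)]
# "lm_head.weight" never appears, so any fix that iterates
# named_parameters() silently skips the tied head.
```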
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17373/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/17373",
"html_url": "https://github.com/huggingface/transformers/pull/17373",
"diff_url": "https://github.com/huggingface/transformers/pull/17373.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/17373.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/17372
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/17372/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/17372/comments
|
https://api.github.com/repos/huggingface/transformers/issues/17372/events
|
https://github.com/huggingface/transformers/issues/17372
| 1,243,588,151
|
I_kwDOCUB6oc5KH6Y3
| 17,372
|
Text2TextGeneration Pipeline : Batch size and num_return_sequences are not working together
|
{
"login": "ierezell",
"id": 30974685,
"node_id": "MDQ6VXNlcjMwOTc0Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/30974685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ierezell",
"html_url": "https://github.com/ierezell",
"followers_url": "https://api.github.com/users/ierezell/followers",
"following_url": "https://api.github.com/users/ierezell/following{/other_user}",
"gists_url": "https://api.github.com/users/ierezell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ierezell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ierezell/subscriptions",
"organizations_url": "https://api.github.com/users/ierezell/orgs",
"repos_url": "https://api.github.com/users/ierezell/repos",
"events_url": "https://api.github.com/users/ierezell/events{/privacy}",
"received_events_url": "https://api.github.com/users/ierezell/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Solved with huggingface 4.19.2. \r\n\r\nSorry for all the fuss. Maybe it will help someone someday. "
] | 1,653
| 1,653
| 1,653
|
CONTRIBUTOR
| null |
### System Info
```shell
- `transformers` version: 4.16.2
- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.31
- Python version: 3.9.12
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
### Who can help?
Hello @Narsil,
When using a pipeline, I wanted to speed up generation and therefore used the `batch_size` parameter.
In a `Text2TextGenerationPipeline` with `num_return_sequences=1` everything works fine, and I get a 3x speedup with a `batch_size` of 8!
However, I would like to use `num_return_sequences > 1`. Setting it still yields the same number of outputs as inputs (rather than `num_return_sequences` outputs per input). After investigating, I realized that `Text2TextGenerationPipeline.__call__` keeps only the first result per input via `[res[0] for res in results]`, so I removed that step in a custom subclass to get `num_return_sequences * len(inputs)` outputs:
```python
from typing import Literal

from transformers import Text2TextGenerationPipeline


class MultipleText2TextGenerationPipeline(Text2TextGenerationPipeline):
    def __call__(self, *args, **kwargs) -> list[str]:
        # Call Pipeline.__call__ directly, skipping the `[res[0] for res in results]`
        # post-processing done in Text2TextGenerationPipeline.__call__.
        result: list[list[dict[Literal["generated_text"], str]]] = super(
            Text2TextGenerationPipeline, self
        ).__call__(*args, **kwargs)
        flatten_results: list[str] = []
        for result_list in result:
            for result_dict in result_list:
                flatten_results.append(result_dict["generated_text"].replace("question: ", ""))
        return flatten_results
```
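The flattening step the subclass performs can be sketched in isolation, with a hypothetical nested output and no model required (the `results` values here are made up for illustration):

```python
# Hypothetical nested pipeline output: one inner list per input text,
# one dict per returned sequence (num_return_sequences = 2 here).
results = [
    [{"generated_text": "question: A?"}, {"generated_text": "question: B?"}],
    [{"generated_text": "question: C?"}, {"generated_text": "question: D?"}],
]

# Flatten to one string per generated sequence, stripping the task prefix.
flattened = [
    d["generated_text"].replace("question: ", "")
    for inner in results
    for d in inner
]
print(flattened)       # ['A?', 'B?', 'C?', 'D?']
print(len(flattened))  # len(results) * num_return_sequences == 4
```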
However, using `batch_size` together with `num_return_sequences > 1` leads to the wrong number of outputs, e.g. 24 outputs with `batch_size=8` and `num_return_sequences=3`. Without batching the output is correct: I get 60 sentences for 20 input utterances with `num_return_sequences=3`, but not when `batch_size > 1`.
Thanks in advance for any help,
Have a great day.
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from time import perf_counter
from typing import Literal

from transformers import Text2TextGenerationPipeline
from transformers.modeling_utils import PreTrainedModel
from transformers.models.auto.modeling_auto import AutoModelForSeq2SeqLM
from transformers.models.auto.tokenization_auto import AutoTokenizer
from transformers.tokenization_utils import PreTrainedTokenizer


class MultipleText2TextGenerationPipeline(Text2TextGenerationPipeline):
    def __call__(self, *args, **kwargs) -> list[str]:
        # Call Pipeline.__call__ directly, skipping the `[res[0] for res in results]`
        # post-processing done in Text2TextGenerationPipeline.__call__.
        result: list[list[dict[Literal["generated_text"], str]]] = super(
            Text2TextGenerationPipeline, self
        ).__call__(*args, **kwargs)
        flatten_results: list[str] = []
        for result_list in result:
            for result_dict in result_list:
                flatten_results.append(result_dict["generated_text"].replace("question: ", ""))
        return flatten_results


tokenizer: PreTrainedTokenizer = AutoTokenizer.from_pretrained(
    "mrm8488/t5-base-finetuned-question-generation-ap"
)
model: PreTrainedModel = AutoModelForSeq2SeqLM.from_pretrained(
    "mrm8488/t5-base-finetuned-question-generation-ap"
)
pipeline = MultipleText2TextGenerationPipeline(model=model, tokenizer=tokenizer, device=0)

input_texts = [
    f"answer: {ans} context: I like to eat bananas in the morning"
    for ans in ["I", "bananas", "morning", "yes", "no"]
]

DEFAULT_GENERATOR_OPTIONS = {
    "max_length": 128,
    "min_length": 2,
    "early_stopping": True,
    "num_beams": 3,
    "temperature": 1.0,
    "top_k": 0,
    "top_p": 0.92,
    "repetition_penalty": 2.0,
    "length_penalty": 1.0,
}

start = perf_counter()
print(f"expecting {len(input_texts)} got:", end=" ")
print(len(pipeline(input_texts, **DEFAULT_GENERATOR_OPTIONS, num_return_sequences=1)))  # correct
print(perf_counter() - start)

start = perf_counter()
print(f"expecting {len(input_texts) * 3} got:", end=" ")
print(len(pipeline(input_texts, **DEFAULT_GENERATOR_OPTIONS, num_return_sequences=3)))  # correct
print(perf_counter() - start)

start = perf_counter()
print(f"expecting {len(input_texts) * 3} got:", end=" ")
print(len(pipeline(input_texts, **DEFAULT_GENERATOR_OPTIONS, num_return_sequences=3, batch_size=8)))  # wrong count (bug)
print(perf_counter() - start)
```
### Expected behavior
```shell
When `num_return_sequences` is set and the input is a list of strings, the pipeline should return a list of length `len(inputs) * num_return_sequences`.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/17372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/17372/timeline
|
completed
| null | null |