| url stringlengths 62–66 | repository_url stringclasses 1 value | labels_url stringlengths 76–80 | comments_url stringlengths 71–75 | events_url stringlengths 69–73 | html_url stringlengths 50–56 | id int64 377M–2.15B | node_id stringlengths 18–32 | number int64 1–29.2k | title stringlengths 1–487 | user dict | labels list | state stringclasses 2 values | locked bool 2 classes | assignee dict | assignees list | comments list | created_at int64 1.54k–1.71k | updated_at int64 1.54k–1.71k | closed_at int64 1.54k–1.71k ⌀ | author_association stringclasses 4 values | active_lock_reason stringclasses 2 values | body stringlengths 0–234k ⌀ | reactions dict | timeline_url stringlengths 71–75 | state_reason stringclasses 3 values | draft bool 2 classes | pull_request dict |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/20985
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20985/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20985/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20985/events
|
https://github.com/huggingface/transformers/pull/20985
| 1,517,540,244
|
PR_kwDOCUB6oc5GjHWL
| 20,985
|
Added mask_time_prob and mask_time_length arguments to wav2vec2 pretraining script
|
{
"login": "mpierrau",
"id": 56202367,
"node_id": "MDQ6VXNlcjU2MjAyMzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/56202367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mpierrau",
"html_url": "https://github.com/mpierrau",
"followers_url": "https://api.github.com/users/mpierrau/followers",
"following_url": "https://api.github.com/users/mpierrau/following{/other_user}",
"gists_url": "https://api.github.com/users/mpierrau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mpierrau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mpierrau/subscriptions",
"organizations_url": "https://api.github.com/users/mpierrau/orgs",
"repos_url": "https://api.github.com/users/mpierrau/repos",
"events_url": "https://api.github.com/users/mpierrau/events{/privacy}",
"received_events_url": "https://api.github.com/users/mpierrau/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sanchit-gandhi "
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
This PR relates to [PR 19997](https://github.com/huggingface/transformers/pull/19997), in which I messed up the PR by forgetting the `--force` flag when pushing. Hopefully this PR is done correctly.
@sanchit-gandhi @sgugger @patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20985/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20985",
"html_url": "https://github.com/huggingface/transformers/pull/20985",
"diff_url": "https://github.com/huggingface/transformers/pull/20985.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20985.patch",
"merged_at": 1672935896000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20984
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20984/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20984/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20984/events
|
https://github.com/huggingface/transformers/pull/20984
| 1,517,502,176
|
PR_kwDOCUB6oc5Gi_FT
| 20,984
|
Ignore errors when deleting old checkpoints in trainer
|
{
"login": "akrogager",
"id": 98160708,
"node_id": "U_kgDOBdnQRA",
"avatar_url": "https://avatars.githubusercontent.com/u/98160708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akrogager",
"html_url": "https://github.com/akrogager",
"followers_url": "https://api.github.com/users/akrogager/followers",
"following_url": "https://api.github.com/users/akrogager/following{/other_user}",
"gists_url": "https://api.github.com/users/akrogager/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akrogager/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akrogager/subscriptions",
"organizations_url": "https://api.github.com/users/akrogager/orgs",
"repos_url": "https://api.github.com/users/akrogager/repos",
"events_url": "https://api.github.com/users/akrogager/events{/privacy}",
"received_events_url": "https://api.github.com/users/akrogager/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #17265
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20984/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20984",
"html_url": "https://github.com/huggingface/transformers/pull/20984",
"diff_url": "https://github.com/huggingface/transformers/pull/20984.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20984.patch",
"merged_at": 1672758659000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20983
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20983/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20983/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20983/events
|
https://github.com/huggingface/transformers/pull/20983
| 1,517,432,118
|
PR_kwDOCUB6oc5GiwCY
| 20,983
|
Add DETA
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @alaradirik this PR is in a ready state, except for 2 things:\r\n\r\n- [x] whether or not we leverage torchvision's `batched_nms` => the CI is currently failing because this library is not installed. Will also ask for @sgugger and @LysandreJik's opinion here\r\n- [ ] the `post_process_object_detection` method might require an in-depth look",
"There is no problem with the model requiring torchvision to be installed. We have many models with specific dependencies, some of which you ported yourself ;-).\r\nJust protect the import between `if is_torchvision_available()` and have a the first line in the init of the models be a `require_backends([\"torchvision\"])`.",
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger I've addressed all comments, except for adding support for the custom kernel.\r\n\r\nCould we perhaps add support for the custom kernel for the 3 models (Mask2Former, OneFormer and DETA) in a separate PR?",
"In this case, remove the code trying to load the custom kernels in the modeling file and we can add it back in the PR that will deal with custom kernels.",
"@sgugger ok, feel free to approve :)",
"Failing test is unrelated/flaky, merging."
] | 1,672
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds [DETA](https://github.com/jozhang97/DETA/issues/3). DETA is a slight change to Deformable DETR: it uses traditional IoU-based assignment instead of the Hungarian matching used in the original DETR, and incorporates NMS (non-maximum suppression) in the postprocessing.
Note: this model has a `torchvision` dependency for NMS; a usage sketch follows the to-do list below.
To do:
- [x] transfer checkpoints
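For reference, a minimal sketch of how torchvision's per-class NMS works; the tensors here are illustrative, not the actual DETA post-processing code:

```python
import torch
from torchvision.ops import batched_nms

# Illustrative inputs, not DETA internals: boxes in (x1, y1, x2, y2)
# format with one score and one class label per box.
boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0],
                      [1.0, 1.0, 11.0, 11.0],
                      [50.0, 50.0, 60.0, 60.0]])
scores = torch.tensor([0.9, 0.8, 0.7])
labels = torch.tensor([0, 0, 1])

# batched_nms suppresses overlaps per class: boxes with different labels
# never suppress each other. It returns the indices of the kept boxes.
keep = batched_nms(boxes, scores, labels, iou_threshold=0.5)
print(keep)  # tensor([0, 2]): box 1 overlaps box 0 with IoU > 0.5
```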
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20983/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20983",
"html_url": "https://github.com/huggingface/transformers/pull/20983",
"diff_url": "https://github.com/huggingface/transformers/pull/20983.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20983.patch",
"merged_at": 1675158191000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20982
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20982/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20982/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20982/events
|
https://github.com/huggingface/transformers/pull/20982
| 1,517,423,559
|
PR_kwDOCUB6oc5GiuM6
| 20,982
|
[WIP] Avoid Null CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20982). All of your documentation changes will be reflected on that endpoint."
] | 1,672
| 1,672
| 1,672
|
COLLABORATOR
| null |
# What does this PR do?
[WIP] Avoid Null CI
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20982/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20982/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20982",
"html_url": "https://github.com/huggingface/transformers/pull/20982",
"diff_url": "https://github.com/huggingface/transformers/pull/20982.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20982.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20981
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20981/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20981/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20981/events
|
https://github.com/huggingface/transformers/pull/20981
| 1,517,418,094
|
PR_kwDOCUB6oc5GitB5
| 20,981
|
Avoid CI runs under users' own CircleCI personal account
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20981). All of your documentation changes will be reflected on that endpoint.",
"@ydshieh Unfortunately, I encountered the problem you mentioned last night and I don't know how to solve it. The link provided in your MR ( [https://support.circleci.com/hc/en-us/articles/360008097173-Troubleshooting-why-pull-requests-are-not-triggering-jobs-on-my-organization- ](https://support.circleci.com/hc/en-us/articles/360008097173-Troubleshooting-why-pull-requests-are-not-triggering-jobs-on-my-organization-) ) has expired. Can you tell me how to solve this problem.? This is my PR [#25519](https://github.com/huggingface/transformers/pull/25519 ) After I submitted it, the circle ci began to execute automatically, but this issue occurred afterwards. \r\n\r\n "
] | 1,672
| 1,692
| 1,672
|
COLLABORATOR
| null |
# What does this PR do?
Sometimes the tests run under a user's own CircleCI account rather than ours, and our tests are not run since that account doesn't have access to our resources. One example is on [this PR](https://github.com/huggingface/transformers/pull/20479#issuecomment-1369690668), where the "real" tests were not run.
**We can make the new job `check_circleci_user` required (too) - once this PR is merged into `main`**
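As a rough illustration of the kind of check such a job can run (not necessarily the actual implementation in this PR), a script can fail fast when the pipeline runs under a personal account, using CircleCI's built-in `CIRCLE_PROJECT_USERNAME` environment variable:

```python
import os
import sys

# CIRCLE_PROJECT_USERNAME is a built-in CircleCI environment variable
# naming the org/user that owns the project the pipeline runs under.
# Failing loudly beats silently skipping the real test suite.
if os.environ.get("CIRCLE_PROJECT_USERNAME") != "huggingface":
    print("CI is running under a personal CircleCI account; real tests will not run.")
    sys.exit(1)
```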
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20981/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20981/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20981",
"html_url": "https://github.com/huggingface/transformers/pull/20981",
"diff_url": "https://github.com/huggingface/transformers/pull/20981.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20981.patch",
"merged_at": 1672759178000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20980
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20980/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20980/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20980/events
|
https://github.com/huggingface/transformers/pull/20980
| 1,517,394,642
|
PR_kwDOCUB6oc5Gin_y
| 20,980
|
Improve OWL-ViT postprocessing
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
- Adds a `post_process_object_detection` method to OWL-ViT with the same functionality as the other object detection post-processing methods (thresholding, different target sizes for each image in the batch); a usage sketch follows below.
- Updates the zero-shot-object-detection pipeline to use the new method.
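A minimal usage sketch of the new method, assuming the current `transformers` processor API; the checkpoint, placeholder image, and threshold are illustrative:

```python
import torch
from PIL import Image
from transformers import OwlViTForObjectDetection, OwlViTProcessor

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

image = Image.new("RGB", (640, 480))  # stand-in for a real image
texts = [["a photo of a cat", "a photo of a dog"]]
inputs = processor(text=texts, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One (height, width) per image in the batch; boxes are rescaled to these
# sizes and detections scoring below the threshold are dropped.
target_sizes = torch.tensor([[480, 640]])
results = processor.post_process_object_detection(
    outputs, threshold=0.1, target_sizes=target_sizes
)
print(results[0]["boxes"].shape, results[0]["scores"].shape, results[0]["labels"].shape)
```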
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20980/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20980",
"html_url": "https://github.com/huggingface/transformers/pull/20980",
"diff_url": "https://github.com/huggingface/transformers/pull/20980.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20980.patch",
"merged_at": 1672763109000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20979
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20979/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20979/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20979/events
|
https://github.com/huggingface/transformers/pull/20979
| 1,517,387,845
|
PR_kwDOCUB6oc5GimiE
| 20,979
|
Improve OWL-ViT postprocessing
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
- Adds a `post_process_object_detection` method to OWL-ViT with the same functionality as the other object detection post-processing methods (thresholding, different target sizes for each image in the batch).
- Updates the zero-shot-object-detection pipeline to use the new method.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20979/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20979",
"html_url": "https://github.com/huggingface/transformers/pull/20979",
"diff_url": "https://github.com/huggingface/transformers/pull/20979.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20979.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20978
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20978/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20978/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20978/events
|
https://github.com/huggingface/transformers/pull/20978
| 1,517,378,227
|
PR_kwDOCUB6oc5GikYO
| 20,978
|
Adding Support for Mixed Precision in Accelerator
|
{
"login": "BiEchi",
"id": 60613238,
"node_id": "MDQ6VXNlcjYwNjEzMjM4",
"avatar_url": "https://avatars.githubusercontent.com/u/60613238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BiEchi",
"html_url": "https://github.com/BiEchi",
"followers_url": "https://api.github.com/users/BiEchi/followers",
"following_url": "https://api.github.com/users/BiEchi/following{/other_user}",
"gists_url": "https://api.github.com/users/BiEchi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BiEchi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BiEchi/subscriptions",
"organizations_url": "https://api.github.com/users/BiEchi/orgs",
"repos_url": "https://api.github.com/users/BiEchi/repos",
"events_url": "https://api.github.com/users/BiEchi/events{/privacy}",
"received_events_url": "https://api.github.com/users/BiEchi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20978). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,676
| 1,676
|
NONE
| null |
There's a bug in the code: we check `accelerator.use_fp16`, but the flag can never be `True` because we never pass it in. I've added the support by passing in the fp16 flag; a sketch of the idea follows below.
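A minimal sketch of wiring the flag through to Accelerate; whether this goes through the legacy `fp16` argument or the newer `mixed_precision` argument depends on the Accelerate version, so treat this as an assumption rather than the exact code in this PR:

```python
import argparse

from accelerate import Accelerator

parser = argparse.ArgumentParser()
parser.add_argument("--fp16", action="store_true", help="enable fp16 mixed precision")
args = parser.parse_args()

# The flag has to actually reach the Accelerator; constructing it with
# default arguments leaves mixed precision permanently disabled.
accelerator = Accelerator(mixed_precision="fp16" if args.fp16 else "no")
print(accelerator.mixed_precision)
```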
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20978/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20978",
"html_url": "https://github.com/huggingface/transformers/pull/20978",
"diff_url": "https://github.com/huggingface/transformers/pull/20978.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20978.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20977
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20977/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20977/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20977/events
|
https://github.com/huggingface/transformers/pull/20977
| 1,517,255,543
|
PR_kwDOCUB6oc5GiJxj
| 20,977
|
Fix post_process_object_detection method descriptions
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the descriptions of all affected model methods (`post_process_object_detection` and the deprecated `post_process` methods), which inaccurately state that the methods return bounding boxes in the format expected by the COCO API (x_center, y_center, w, h), when they actually use the (x1, y1, x2, y2) format.
I will open a separate PR to add an option to return the bounding boxes in the COCO API format; a conversion sketch follows below.
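For clarity, a small sketch of converting between the two conventions; `corners_to_coco` is a hypothetical helper for illustration, not code from this PR:

```python
import torch

def corners_to_coco(boxes: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: convert (x1, y1, x2, y2) corner boxes to the
    (x_center, y_center, w, h) format described above."""
    x1, y1, x2, y2 = boxes.unbind(-1)
    w, h = x2 - x1, y2 - y1
    return torch.stack([x1 + w / 2, y1 + h / 2, w, h], dim=-1)

print(corners_to_coco(torch.tensor([[10.0, 20.0, 30.0, 60.0]])))
# tensor([[20., 40., 20., 40.]])
```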
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20977/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20977",
"html_url": "https://github.com/huggingface/transformers/pull/20977",
"diff_url": "https://github.com/huggingface/transformers/pull/20977.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20977.patch",
"merged_at": 1672750562000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20976
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20976/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20976/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20976/events
|
https://github.com/huggingface/transformers/pull/20976
| 1,517,195,385
|
PR_kwDOCUB6oc5Gh84F
| 20,976
|
Exclude the madeup words from M2M100Tokenizer.vocab_size
|
{
"login": "guillaumekln",
"id": 4805513,
"node_id": "MDQ6VXNlcjQ4MDU1MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4805513?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guillaumekln",
"html_url": "https://github.com/guillaumekln",
"followers_url": "https://api.github.com/users/guillaumekln/followers",
"following_url": "https://api.github.com/users/guillaumekln/following{/other_user}",
"gists_url": "https://api.github.com/users/guillaumekln/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guillaumekln/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guillaumekln/subscriptions",
"organizations_url": "https://api.github.com/users/guillaumekln/orgs",
"repos_url": "https://api.github.com/users/guillaumekln/repos",
"events_url": "https://api.github.com/users/guillaumekln/events{/privacy}",
"received_events_url": "https://api.github.com/users/guillaumekln/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @ArthurZucker, do you have some time to review this PR? Thanks!"
] | 1,672
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
The `<unk>` token has an incorrect ID in `M2M100Tokenizer.get_vocab`:
```python
>>> tokenizer = transformers.M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
>>> tokenizer.convert_tokens_to_ids("<unk>")
3
>>> tokenizer.get_vocab()["<unk>"]
128111
```
The reason is that the vocabulary is defined like this:
```python
vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
```
but `_convert_id_to_token` converts the "madeup words" to `<unk>`.
We can fix this issue by excluding the "madeup words" from the vocabulary size, which is consistent with how other tokenizers such as `NllbTokenizer` work; a sketch of the idea follows below.
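A simplified sketch of the intended behavior (the function and parameter names are illustrative, not the exact diff in this PR):

```python
def build_vocab(convert_ids_to_tokens, encoder_size: int, num_madeup_words: int) -> dict:
    # Before the fix, the range also covered the madeup-word ids; each of
    # them decodes to <unk>, so the comprehension kept overwriting
    # vocab["<unk>"] with the last madeup-word id.
    vocab_size = encoder_size  # madeup words excluded, as in NllbTokenizer
    return {convert_ids_to_tokens(i): i for i in range(vocab_size)}
```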
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20976/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20976",
"html_url": "https://github.com/huggingface/transformers/pull/20976",
"diff_url": "https://github.com/huggingface/transformers/pull/20976.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20976.patch",
"merged_at": 1675865947000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20975
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20975/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20975/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20975/events
|
https://github.com/huggingface/transformers/pull/20975
| 1,517,087,916
|
PR_kwDOCUB6oc5GhlxH
| 20,975
|
Fix type casting in compute_segments
|
{
"login": "achsvg",
"id": 3223219,
"node_id": "MDQ6VXNlcjMyMjMyMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3223219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/achsvg",
"html_url": "https://github.com/achsvg",
"followers_url": "https://api.github.com/users/achsvg/followers",
"following_url": "https://api.github.com/users/achsvg/following{/other_user}",
"gists_url": "https://api.github.com/users/achsvg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/achsvg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/achsvg/subscriptions",
"organizations_url": "https://api.github.com/users/achsvg/orgs",
"repos_url": "https://api.github.com/users/achsvg/repos",
"events_url": "https://api.github.com/users/achsvg/events{/privacy}",
"received_events_url": "https://api.github.com/users/achsvg/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
NONE
| null |
# What does this PR do?
Fix a bug in the `compute_segments` function of `feature_extraction_detr.py`: make sure the shape is an integer (it could be a float).
If the shape is a float, the call fails with:
```
zeros() received an invalid combination of arguments - got (tuple, device=torch.device, dtype=torch.dtype), but expected one of:
* (tuple of ints size, *, tuple of names names, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
* (tuple of ints size, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
```
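A minimal sketch of the failure and the cast that avoids it (the shape values are illustrative):

```python
import torch

height, width = 480.0, 640.0  # e.g. the result of a float rescaling step

# torch.zeros rejects a float shape tuple with exactly the error quoted
# above; casting the dimensions to int first makes the call valid.
segmentation = torch.zeros((int(height), int(width)), dtype=torch.int32)
print(segmentation.shape)  # torch.Size([480, 640])
```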
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20975/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20975",
"html_url": "https://github.com/huggingface/transformers/pull/20975",
"diff_url": "https://github.com/huggingface/transformers/pull/20975.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20975.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20974
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20974/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20974/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20974/events
|
https://github.com/huggingface/transformers/pull/20974
| 1,517,055,805
|
PR_kwDOCUB6oc5Ghe7k
| 20,974
|
Add perf numbers for perf_train_cpu
|
{
"login": "jianan-gu",
"id": 83276252,
"node_id": "MDQ6VXNlcjgzMjc2MjUy",
"avatar_url": "https://avatars.githubusercontent.com/u/83276252?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jianan-gu",
"html_url": "https://github.com/jianan-gu",
"followers_url": "https://api.github.com/users/jianan-gu/followers",
"following_url": "https://api.github.com/users/jianan-gu/following{/other_user}",
"gists_url": "https://api.github.com/users/jianan-gu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jianan-gu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jianan-gu/subscriptions",
"organizations_url": "https://api.github.com/users/jianan-gu/orgs",
"repos_url": "https://api.github.com/users/jianan-gu/repos",
"events_url": "https://api.github.com/users/jianan-gu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jianan-gu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sgugger Could you please review this PR? Thanks!",
"_The documentation is not available anymore as the PR was closed or merged._",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @sgugger, \r\nFor the absolute time number to be shown as public in the doc, can we directly mention and link this [public blog](https://huggingface.co/blog/intel-sapphire-rapids) for a reference in this perf_train_cpu doc instead of providing ours? (to simplify this work and avoid extra internal review procedures)\r\n\r\nThis public blog guides users to run through CPU training and provides realistic results, which is a very practical reference.\r\n\r\nThanks.\r\n",
"Yes, we can definitely link to the blog post instead of the picture.",
"> Yes, we can definitely link to the blog post instead of the picture.\r\nHi, @sgugger \r\nHave refined this PR to add the link to this blog post as a practice example in the doc.\r\nThanks!",
"Thanks again for your contribution!"
] | 1,672
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
As mentioned in https://github.com/huggingface/transformers/pull/17138, we are adding some perf numbers in the doc.
cc @sywangyi @liangan1
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20974/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20974",
"html_url": "https://github.com/huggingface/transformers/pull/20974",
"diff_url": "https://github.com/huggingface/transformers/pull/20974.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20974.patch",
"merged_at": 1675693243000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20973
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20973/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20973/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20973/events
|
https://github.com/huggingface/transformers/issues/20973
| 1,517,043,249
|
I_kwDOCUB6oc5abD4x
| 20,973
|
Pipeline to support batch inference
|
{
"login": "maiiabocharova",
"id": 71256026,
"node_id": "MDQ6VXNlcjcxMjU2MDI2",
"avatar_url": "https://avatars.githubusercontent.com/u/71256026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maiiabocharova",
"html_url": "https://github.com/maiiabocharova",
"followers_url": "https://api.github.com/users/maiiabocharova/followers",
"following_url": "https://api.github.com/users/maiiabocharova/following{/other_user}",
"gists_url": "https://api.github.com/users/maiiabocharova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maiiabocharova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maiiabocharova/subscriptions",
"organizations_url": "https://api.github.com/users/maiiabocharova/orgs",
"repos_url": "https://api.github.com/users/maiiabocharova/repos",
"events_url": "https://api.github.com/users/maiiabocharova/events{/privacy}",
"received_events_url": "https://api.github.com/users/maiiabocharova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Narsil ",
"Hi @maiiabocharova doesn't this work already out of the box? \r\n\r\n```python\r\nimport torch\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline(\r\n \"ner\",\r\n device=0 if torch.cuda.is_available() else -1,\r\n aggregation_strategy=\"average\",\r\n batch_size=16,\r\n)\r\n\r\n\r\noriginal_fn = pipe.model.forward\r\nCOUNT = 0\r\n\r\n\r\ndef new_forward(*args, **kwargs):\r\n global COUNT\r\n COUNT += 1\r\n return original_fn(*args, **kwargs)\r\n\r\n\r\npipe.model.forward = new_forward\r\n\r\n\r\ndef data():\r\n for i in range(20):\r\n yield \"I live in New york\"\r\n\r\n\r\nfor out in pipe(data()):\r\n print(out)\r\n\r\nprint(f\"Forward called {COUNT} times\")\r\n```\r\n\r\nThis works, no ?",
"Sorry, probably I was looking into wrong source code\r\n```python\r\nfor i, sentence in enumerate(_inputs):\r\n\r\n # Manage correct placement of the tensors\r\n with self.device_placement():\r\n\r\n tokens = self.tokenizer(\r\n sentence,\r\n return_attention_mask=False,\r\n return_tensors=self.framework,\r\n truncation=True,\r\n return_special_tokens_mask=True,\r\n return_offsets_mapping=self.tokenizer.is_fast,\r\n )\r\n if self.tokenizer.is_fast:\r\n offset_mapping = tokens.pop(\"offset_mapping\").cpu().numpy()[0]\r\n elif offset_mappings:\r\n offset_mapping = offset_mappings[i]\r\n else:\r\n offset_mapping = None\r\n\r\n special_tokens_mask = tokens.pop(\"special_tokens_mask\").cpu().numpy()[0]\r\n```\r\n\r\nBut actually when I modified this part into \r\n```python\r\nfor start_index in range(0, len(sentences), batch_size):\r\n sentences_batch = sentences[start_index:start_index+batch_size]\r\n with self.device_placement():\r\n\r\n tokens = self.tokenizer(\r\n sentences_batch,\r\n return_attention_mask=False,\r\n return_tensors=self.framework,\r\n truncation=True,\r\n padding='longest',\r\n return_special_tokens_mask=True,\r\n return_offsets_mapping=self.tokenizer.is_fast,\r\n )\r\n if self.tokenizer.is_fast:\r\n offset_mapping_batch = tokens.pop(\"offset_mapping\").cpu().numpy()\r\n special_tokens_mask_batch = tokens.pop(\"special_tokens_mask\").cpu().numpy()\r\n with torch.no_grad():\r\n tokens = self.ensure_tensor_on_device(**tokens)\r\n entities_batch = self.model(**tokens)[0].cpu().numpy()\r\n input_ids_batch = tokens[\"input_ids\"].cpu().numpy()\r\n scores_batch = np.exp(entities_batch) / np.exp(entities_batch).sum(-1, keepdims=True)\r\n```\r\nPipeline started working 3x faster\r\n\r\nP.S. Yes, you are right! I am sorry, maybe I was using also the old version of the library. \r\nSorry once again!",
"Maybe an older version indeed. \r\n\r\nAlso the batching mecanism is not really transparent in the pipeline code, it's meant to be relatively orthogonal (because making it explicit had too many drawbacks, like code duplication, and it was really hard to support more complex use cases)."
] | 1,672
| 1,673
| 1,673
|
NONE
| null |
### Feature request
Thank you for the awesome framework!
For my work I wanted to use `transformers.pipelines.token_classification.TokenClassificationPipeline` in batch mode, since it is much faster on GPU, but I wanted to keep all the functionality for grouping entities.
So I want to suggest something like this:
```python
nlp = pipeline(
    "ner",
    model=model,
    tokenizer=tokenizer,
    device=0 if torch.cuda.is_available() else -1,
    aggregation_strategy="average",
    batch_size=16,
)
```
### Motivation
I implemented it for myself and think it would be cool to have this functionality "out of the box" for the community to enjoy the speed-up. (And it really gives a huge speed-up.)
### Your contribution
I am willing to contribute and implement this change for the TokenClassification task (TextClassification and FeatureExtraction should be pretty much the same). I have not worked with the other pipelines, so I am not sure how batching is implemented there, but I am willing to try and contribute.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20973/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20972
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20972/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20972/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20972/events
|
https://github.com/huggingface/transformers/issues/20972
| 1,516,971,279
|
I_kwDOCUB6oc5aayUP
| 20,972
|
Some issues on summarization example
|
{
"login": "Shentao-YANG",
"id": 22757892,
"node_id": "MDQ6VXNlcjIyNzU3ODky",
"avatar_url": "https://avatars.githubusercontent.com/u/22757892?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shentao-YANG",
"html_url": "https://github.com/Shentao-YANG",
"followers_url": "https://api.github.com/users/Shentao-YANG/followers",
"following_url": "https://api.github.com/users/Shentao-YANG/following{/other_user}",
"gists_url": "https://api.github.com/users/Shentao-YANG/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shentao-YANG/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shentao-YANG/subscriptions",
"organizations_url": "https://api.github.com/users/Shentao-YANG/orgs",
"repos_url": "https://api.github.com/users/Shentao-YANG/repos",
"events_url": "https://api.github.com/users/Shentao-YANG/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shentao-YANG/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,676
| 1,676
|
NONE
| null |
@patil-suraj
Thanks for the beautiful example in `main/examples/pytorch/summarization`. I do, however, have the following issues with the `run_summarization_no_trainer.py` file.
1. The flag `--max_length` seems to be unused.
2. The check `check_min_version("4.26.0.dev0")` seems ahead of the current release, `4.25.1`.
3. This file does not have the `test_file` and `max_train/eval/predict_samples` flags that `run_summarization.py` has.
4. It would also be helpful to add gradient norm clipping (a sketch follows below).
Thanks for your help.
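On point 4, a rough sketch of what gradient norm clipping typically looks like in an `accelerate`-based loop like this example's, assuming a single-process CPU run; the model, data, and `max_norm` value are illustrative:

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model, optimizer = accelerator.prepare(model, optimizer)

x, y = torch.randn(8, 4), torch.randn(8, 1)
loss = torch.nn.functional.mse_loss(model(x), y)

accelerator.backward(loss)
# Clip after backward and before the optimizer step; under mixed
# precision, Accelerate unscales the gradients before clipping.
accelerator.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
optimizer.zero_grad()
```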
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20972/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20972/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20971
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20971/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20971/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20971/events
|
https://github.com/huggingface/transformers/pull/20971
| 1,516,966,171
|
PR_kwDOCUB6oc5GhL2L
| 20,971
|
[run_clm example] add torch_dtype option for model load.
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@yao-matrix @jiqing-feng @sgugger please help to review",
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger done, add other torch dtypes per review comment"
] | 1,672
| 1,673
| 1,672
|
CONTRIBUTOR
| null |
For the BLOOM 175B model, peak memory is reduced by about 350 GB for inference, since the BLOOM weights on the model hub are stored in bfloat16.
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
# What does this PR do?
Reduces the peak memory for BLOOM inference by adding a `torch_dtype` option when loading the model (see the sketch below).
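A minimal sketch of the loading path this enables (model name and dtype are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM

# Load the checkpoint directly in bfloat16 instead of upcasting to float32,
# which roughly halves peak host memory for BLOOM-sized models.
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom", torch_dtype=torch.bfloat16
)
```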
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- trainer: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20971/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20971",
"html_url": "https://github.com/huggingface/transformers/pull/20971",
"diff_url": "https://github.com/huggingface/transformers/pull/20971.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20971.patch",
"merged_at": 1672756392000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20970
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20970/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20970/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20970/events
|
https://github.com/huggingface/transformers/pull/20970
| 1,516,911,794
|
PR_kwDOCUB6oc5GhAYg
| 20,970
|
Make the attention_head_size in distilbert an object attribute
|
{
"login": "KarlFelixJoehnk",
"id": 49342884,
"node_id": "MDQ6VXNlcjQ5MzQyODg0",
"avatar_url": "https://avatars.githubusercontent.com/u/49342884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KarlFelixJoehnk",
"html_url": "https://github.com/KarlFelixJoehnk",
"followers_url": "https://api.github.com/users/KarlFelixJoehnk/followers",
"following_url": "https://api.github.com/users/KarlFelixJoehnk/following{/other_user}",
"gists_url": "https://api.github.com/users/KarlFelixJoehnk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KarlFelixJoehnk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KarlFelixJoehnk/subscriptions",
"organizations_url": "https://api.github.com/users/KarlFelixJoehnk/orgs",
"repos_url": "https://api.github.com/users/KarlFelixJoehnk/repos",
"events_url": "https://api.github.com/users/KarlFelixJoehnk/events{/privacy}",
"received_events_url": "https://api.github.com/users/KarlFelixJoehnk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for your PR. Could you just run `make style` on your branch to fix the quality issue?\r\n\r\nHi @sgugger, thanks for the quick approval. Just fixed the code style",
"Thanks again for your contribution!"
] | 1,672
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
It moves the `attention_head_size` in the DistilBERT model to be an object attribute. This is necessary if you want to use the DistilBERT model in the nn_pruning library. It will also benefit anyone who needs to access the `attention_head_size` attribute from an instance of a DistilBERT model (see the sketch below). This change is consistent with other transformer models in this library (see BERT https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py#L253 or BART https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/modeling_bart.py#L157)
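A minimal sketch of the access pattern this enables (the module path shown is for `DistilBertModel` and should be treated as illustrative):
```python
from transformers import DistilBertModel

model = DistilBertModel.from_pretrained("distilbert-base-uncased")
# With this change, the per-head dimension is readable on each attention
# block instead of being a local variable inside forward().
head_size = model.transformer.layer[0].attention.attention_head_size
```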
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20970/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20970",
"html_url": "https://github.com/huggingface/transformers/pull/20970",
"diff_url": "https://github.com/huggingface/transformers/pull/20970.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20970.patch",
"merged_at": 1673284636000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20969
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20969/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20969/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20969/events
|
https://github.com/huggingface/transformers/pull/20969
| 1,516,742,748
|
PR_kwDOCUB6oc5Ggdxu
| 20,969
|
Support turning off the model uploading in ClearML
|
{
"login": "david1542",
"id": 9879252,
"node_id": "MDQ6VXNlcjk4NzkyNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9879252?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/david1542",
"html_url": "https://github.com/david1542",
"followers_url": "https://api.github.com/users/david1542/followers",
"following_url": "https://api.github.com/users/david1542/following{/other_user}",
"gists_url": "https://api.github.com/users/david1542/gists{/gist_id}",
"starred_url": "https://api.github.com/users/david1542/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david1542/subscriptions",
"organizations_url": "https://api.github.com/users/david1542/orgs",
"repos_url": "https://api.github.com/users/david1542/repos",
"events_url": "https://api.github.com/users/david1542/events{/privacy}",
"received_events_url": "https://api.github.com/users/david1542/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @sgugger :) I modified the docstring, as you suggested. Can you please have a look? 🙏",
"Code looks good but it seems there is an issue with your CircleCI permissions, the tests won't run.\r\nCould you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?",
"@sgugger I re-authenticated to CircleCI, and it seems that the CI passed :)\r\nCan you have a look and approve? :)",
"@sgugger Thanks for the feedback. I accepted your change :)",
"Thanks for your contribution!"
] | 1,672
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
Supports turning off model uploading in ClearML using an environment variable called `CLEARML_LOG_MODEL` (see the sketch below).
Fixes #20889
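A minimal usage sketch (the accepted truthy/falsy values are assumed to follow the usual transformers env-var conventions; check the docstring for the exact set):
```python
import os

# Disable model checkpoint uploads from the ClearML callback while keeping
# metric logging enabled. Must be set before the Trainer is created.
os.environ["CLEARML_LOG_MODEL"] = "FALSE"
```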
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20969/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20969",
"html_url": "https://github.com/huggingface/transformers/pull/20969",
"diff_url": "https://github.com/huggingface/transformers/pull/20969.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20969.patch",
"merged_at": 1673007740000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20968
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20968/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20968/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20968/events
|
https://github.com/huggingface/transformers/pull/20968
| 1,516,441,607
|
PR_kwDOCUB6oc5GfdVx
| 20,968
|
Graphormer model for Graph Classification
|
{
"login": "clefourrier",
"id": 22726840,
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clefourrier",
"html_url": "https://github.com/clefourrier",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sgugger Sorry for not having put my PR in draft and thank you so much for your comments! \r\n\r\nI'll take them into account, keep working on this, and ping you back when I have cleaner code?",
"No worries at all, and yes!",
"@sgugger I think the code is better now, if you want to take a look.\r\n\r\nI have several questions:\r\n- ~should I add more documentation?~\r\n- how can I express the fact that this model does not use a tokenizer, but a collator, so that it appears in the doc ?\r\n- I'm having trouble with the common test suites: for the inputs, for example, we embed nodes and edges of graphs using two different embedding layers > what should \"get_input_embeddings\" return then? A concatenation of both? Similarly, `input_ids` has no equivalent in our case, as our inputs are both `input_nodes` and `input_edges`, so I'm a bit stuck on what to do here \r\n\r\nEdit: talked to @LysandreJik, will edit tests (though they'll be model specific atm) and doc will stay minimal for now",
"@clefourrier\r\n\r\nIt seems there is an issue with your CircleCI permissions, the tests are not run.\r\nCould you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?\r\n",
"Just did, nothing changed, and I don't have the permission to manually trigger the pipeline on the [CI webpage](https://app.circleci.com/pipelines/github/huggingface/transformers?branch=pull%2F20968).\r\n\r\nDo you have other ideas of things I could try? :hugs:",
"> Just did, nothing changed\r\n\r\nyou can do\r\n```bash\r\ngit commit --allow-empty -m \"Empty commit to trigger CI\"\r\ngit push\r\n```\r\n\r\n> I don't have the permission to manually trigger the pipeline\r\n\r\nThis usually means you opened that job run page without login.",
"Cool, a commit seems to have solved it! :pray: @ydshieh !\r\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh All tests for my model are passing - thank you for your pointers! \r\n",
"Thank you so much for all the time spent reviewing @sgugger @ydshieh ! :pray: \r\nI hope this time it's finally up to par ^^\r\n\r\nQuick question: once merged, how long till it is in a release of transformers? I have a blog post on Graph Classification with this model, when should I plan on publishing it/communicating about it?",
"There should be a release of Transformers next week by the way.",
"@clefourrier \r\n\r\nThere are a few tests failed on daily CI. See [here](https://github.com/huggingface/transformers/actions/runs/3964181931/jobs/6792864795).\r\n\r\nI can also help, but I have one question: Does `GraphormerModel` require all the following inputs to be specified?\r\n\r\n```python\r\n def forward(\r\n ...\r\n input_nodes,\r\n input_edges,\r\n attn_bias,\r\n in_degree,\r\n out_degree,\r\n spatial_pos,\r\n attn_edge_type,\r\n```",
"Hi @clefourrier!\r\n\r\nWhen you get some time, could you take a look on the following failed tests:\r\n```\r\ntests/models/graphormer/test_modeling_graphormer.py::GraphormerModelTest::test_model_from_pretrained\r\ntests/models/graphormer/test_modeling_graphormer.py::GraphormerModelIntegrationTest::test_inference_graph_classification\r\n```\r\nwhich seems related to the missing or wrong checkpoint link/path.\r\n\r\nRegarding the other 3 failed tests (`GraphormerModelTest::test_torchscript_xxx`), I can work on them, but I need a bit of context regarding the inputs for this model 🙏 , see my comment above. Thank you :-)",
"@ydshieh Hi! Back from vacations! :wave: \r\n\r\n- Edit: The checkpoint is now here:https://huggingface.co/clefourrier/graphormer-base-pcqm4mv2 (The problem likely came from the wrong dash type, changed the path)\r\n- The model needs as inputs all the inputs you mentioned, which are generated during data preprocessing. If you need more information on the model, I wrote a blog post (still a PR atm https://github.com/huggingface/blog/pull/781) which describes inputs and use and feel free to ask any questions which could help!",
"@ydshieh Opened a new PR #21367 to manage the checkpoint path problems. "
] | 1,672
| 1,675
| 1,674
|
MEMBER
| null |
# What does this PR do?
Adds the Graphormer model for graph classification in Transformers.
Done:
- [x] Architecture ported
- [x] Collator (the model has no tokenizer) and preprocessing
- [x] Test results against original implementation, to make sure they are within precision range. Edit: exactly same results :fire:
- [x] Add checkpoints and make sure they load properly
- [x] Update test suite
- [x] Add model card for the checkpoints (https://huggingface.co/clefourrier/pcqm4mv2_graphormer_base, https://huggingface.co/clefourrier/pcqm4mv1_graphormer_base)
- [x] Update doc
## Dependencies
Cython - this could be ported to pure Python, but preprocessing would be considerably slower, as would collation if preprocessing is done on the fly.
Linked to #20962
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. (Discussed with Thom on Slack)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20968/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20968",
"html_url": "https://github.com/huggingface/transformers/pull/20968",
"diff_url": "https://github.com/huggingface/transformers/pull/20968.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20968.patch",
"merged_at": 1674151560000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20967
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20967/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20967/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20967/events
|
https://github.com/huggingface/transformers/pull/20967
| 1,516,435,775
|
PR_kwDOCUB6oc5GfcCt
| 20,967
|
Fix past CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,673
| 1,673
|
COLLABORATOR
| null |
# What does this PR do?
I tried to launch Past CI (the 2nd round) after #20861, but some more fixes are required: the Past CI images don't install some optional dependencies, and we need more decorators to skip tests when those dependencies are not installed (see the sketch below).
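For illustration, the skip guards look roughly like this (a sketch using an existing decorator from `transformers.testing_utils`):
```python
from transformers.testing_utils import require_torch


# Skipped automatically on images where torch is not installed,
# instead of failing with an ImportError.
@require_torch
def test_forward_pass():
    import torch

    assert torch.ones(2).sum().item() == 2.0
```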
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20967/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20967",
"html_url": "https://github.com/huggingface/transformers/pull/20967",
"diff_url": "https://github.com/huggingface/transformers/pull/20967.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20967.patch",
"merged_at": 1673543062000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20966
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20966/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20966/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20966/events
|
https://github.com/huggingface/transformers/pull/20966
| 1,516,306,361
|
PR_kwDOCUB6oc5Ge_4Y
| 20,966
|
TF: serializable hubert
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,684
| 1,673
|
MEMBER
| null |
# What does this PR do?
Fixes #20954 -- some handling for dynamic shapes was missing in Wav2Vec2/Hubert (see the sketch of the pattern below).
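A minimal sketch of the general pattern, not the actual diff: serializable TF code has to read sequence dimensions via `tf.shape`, which stays valid when the static dimension is `None`:
```python
import tensorflow as tf


def pad_to_multiple(hidden_states, multiple):
    # tf.shape works with dynamic (None) dimensions at trace time, whereas
    # hidden_states.shape[1] may be None and break graph-mode code.
    seq_len = tf.shape(hidden_states)[1]
    pad = (-seq_len) % multiple
    paddings = [[0, 0], [0, pad], [0, 0]]
    return tf.pad(hidden_states, paddings)
```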
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20966/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20966",
"html_url": "https://github.com/huggingface/transformers/pull/20966",
"diff_url": "https://github.com/huggingface/transformers/pull/20966.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20966.patch",
"merged_at": 1673960858000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20964
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20964/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20964/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20964/events
|
https://github.com/huggingface/transformers/pull/20964
| 1,516,250,947
|
PR_kwDOCUB6oc5Gez1v
| 20,964
|
Generate: delete unused TF `_reorder_cache`
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,684
| 1,672
|
MEMBER
| null |
# What does this PR do?
Starts off the year with my favorite task: deleting unused code 🥳 The deleted private function (`_reorder_cache`) is no longer used due to the removal of the non-XLA-compatible generate functions (#20927)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20964/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20964",
"html_url": "https://github.com/huggingface/transformers/pull/20964",
"diff_url": "https://github.com/huggingface/transformers/pull/20964.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20964.patch",
"merged_at": 1672743297000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20963
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20963/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20963/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20963/events
|
https://github.com/huggingface/transformers/issues/20963
| 1,516,237,743
|
I_kwDOCUB6oc5aX_Ov
| 20,963
|
[i18n-<languageCode>] Translating docs to <languageName>
|
{
"login": "7565A",
"id": 98552486,
"node_id": "U_kgDOBd_Kpg",
"avatar_url": "https://avatars.githubusercontent.com/u/98552486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/7565A",
"html_url": "https://github.com/7565A",
"followers_url": "https://api.github.com/users/7565A/followers",
"following_url": "https://api.github.com/users/7565A/following{/other_user}",
"gists_url": "https://api.github.com/users/7565A/gists{/gist_id}",
"starred_url": "https://api.github.com/users/7565A/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/7565A/subscriptions",
"organizations_url": "https://api.github.com/users/7565A/orgs",
"repos_url": "https://api.github.com/users/7565A/repos",
"events_url": "https://api.github.com/users/7565A/events{/privacy}",
"received_events_url": "https://api.github.com/users/7565A/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
closed
| false
| null |
[] |
[] | 1,672
| 1,672
| 1,672
|
NONE
| null |
<!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through)
- [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx).
## Tutorial section
- [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx)
- [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx)
- [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx)
- [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx)
- [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx)
- [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx)
- [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx)
<!--
Keep on adding more as you go 🔥
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20963/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20963/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20962
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20962/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20962/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20962/events
|
https://github.com/huggingface/transformers/issues/20962
| 1,516,214,173
|
I_kwDOCUB6oc5aX5ed
| 20,962
|
Graphormer
|
{
"login": "clefourrier",
"id": 22726840,
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clefourrier",
"html_url": "https://github.com/clefourrier",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Hi, @clefourrier \r\ncan I contribute to this? also open to any other issue \r\nI have previously contributed to PyTorch (350+ lines of C++, Objective C, and Python)",
"Hi @Raman-Kumar, I'd be delighted to have some help! \r\n\r\nI'll try to wrap up what I have and clean up a bit by Friday, and either I'll need your help on this to finish up, or it will be good and we can work together on integrating TokenGT, another graph transformer (for which I have draft code), or if you are feeling very confident, you can integrate another graph transformer model of your choice, wdyt?",
"😊 Replying a little late, I was going through other issues and reading some PRs.\r\n@clefourrier \r\nI would like to work on integrating TokenGT\r\n\r\nand am also ready to offer any other help if you have a delegable workload.\r\nMeanwhile, I am reading more blogs and your work.",
"Solved by the merge of Graphormer"
] | 1,672
| 1,675
| 1,675
|
MEMBER
| null |
### Model description
Graph Transformer model developed by Microsoft.
https://graphormer.readthedocs.io/en/latest/
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Full implementation: https://github.com/microsoft/Graphormer/
Weights:
- pcqm4mv1_graphormer_base: https://ml2md.blob.core.windows.net/graphormer-ckpts/checkpoint_best_pcqm4mv1.pt
- pcqm4mv2_graphormer_base: https://ml2md.blob.core.windows.net/graphormer-ckpts/checkpoint_best_pcqm4mv2.pt
- pcqm4mv1_graphormer_base_for_molhiv: https://ml2md.blob.core.windows.net/graphormer-ckpts/checkpoint_base_preln_pcqm4mv1_for_hiv.pt
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20962/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20961
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20961/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20961/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20961/events
|
https://github.com/huggingface/transformers/issues/20961
| 1,516,130,480
|
I_kwDOCUB6oc5aXlCw
| 20,961
|
Add Transformer-Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T Loss
|
{
"login": "jp1924",
"id": 93233241,
"node_id": "U_kgDOBY6gWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/93233241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jp1924",
"html_url": "https://github.com/jp1924",
"followers_url": "https://api.github.com/users/jp1924/followers",
"following_url": "https://api.github.com/users/jp1924/following{/other_user}",
"gists_url": "https://api.github.com/users/jp1924/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jp1924/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jp1924/subscriptions",
"organizations_url": "https://api.github.com/users/jp1924/orgs",
"repos_url": "https://api.github.com/users/jp1924/repos",
"events_url": "https://api.github.com/users/jp1924/events{/privacy}",
"received_events_url": "https://api.github.com/users/jp1924/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Hey @jp1924! Thanks for opening this new model request. The Transformer-Transducer is indeed an exciting architecture in speech recognition - one that achieves both strong performance and low latency.\r\n\r\nMy hesitations in adding this model stem from the fact that the weights are not open-sourced. It's very rare that we add a model to transformers when the weights are not open-sourced; having no weights means that the model can only be used with randomly initialised parameters, which is not much good for downstream ASR! \r\n\r\nModels with randomly initialised weights require extensive training with generous GPU/TPU compute to produce trained checkpoints. In many cases, it is very difficult to reproduce the exact results from the paper due to differences in data, training set-up and compute resources. \r\n\r\nOn the other hand, pre-trained models (weights) are extremely valuable to the community as they can be used directly without training (or with a small amount of fine-tuning) for downstream inference tasks. Consequently, we tend to focus our efforts in transformers on adding models where pre-trained weights are available.\r\n\r\nThis is not as to discourage you from contributing a Transformer-Transducer model to transformers. Such a contribution would be most welcome! However, taking into account the above points, I would advise that you focus the implementation on a Transformer-Transducer codebase where strong pre-trained weights are available and open-sourced. I'm more than happy to help find a suitable codebase + weights to port! This would be a valuable addition to the transformers library.",
"Thanks to project like hivemind and other community members like @fxtentacle it is possible to organize the computing capacity.\r\n\r\nIf Hajo is still interested in that use case we could try to pretrain an German model, what do you think?",
"Thank you advise! @sanchit-gandhi!\r\nIn addition to implementing the code, I will find a way to upload weight!\r\n\r\n*The contents below have nothing to do with the exact implementation of the model!*\r\nActually, as you said, it takes a lot of resources to train Transformer-Transducers. it's need to run 730 epoch when i set it to the hyper-parameter described in the paper. \r\n\r\nThe problem with this is that, in a way, we have to train Encoders from scratch So what I'm thinking experimentally is that I'm thinking about changing Audio and Lable Encoder to a PreTraining model(like Wav2Vec2 or BERT).\r\n\r\n---\r\n\r\nYou can always do that if model can help with the project you! @flozi00!\r\n\r\nBut this model hasn't been validated yet. I don't know when you start Pretrain, but I need to stabilize the algorithm of generate or tokenizer, so can you wait a little bit? I'm making this at the same time as the company's work, so I think it'll take some time!\r\n\r\nAnd I have a question about using German data to proceed with training.\r\n1. What dataset are you going to use?\r\n2. Is it possible to verify the model when training the model using the data?\r\n3. Do you have a comparator to use for verification?\r\n\r\nThere's this much question. I'd appreciate an answer!\r\n\r\n*From here on, it's about verification! You don't have to read it if you don't need it.*\r\n\r\nThis is an empirical story. When I measured the performance of my native language data, KsponSpeech, using Test-Clean, the performance of Wav2Vec2 was around 20%(WER), and RNN-T was around 30%. I think the range of performance will be 5-10% if German is also taught.",
"> So what I'm thinking experimentally is that I'm thinking about changing Audio and Lable Encoder to a PreTraining model(like Wav2Vec2 or BERT)\r\n\r\nThis is a good idea! The only constraint then is that our encoder network must take the same architecture as Wav2Vec2 in order for all of the pre-trained weights to be compatible with the network.\r\n\r\nSince Wav2Vec2 is a different architecture to the Transformer network used in the Transformer-Transducer model, we'll likely only be able to load a _subset_ of the pre-trained weights into the T-T model this way.",
"Thank you for answering even though it's an experimental idea! @sanchit-gandhi\r\n\r\nThere's something I didn't understand while reading. I didn't understand the \"*subset*\" well, but the structures of Wav2Vec2 and T-T models are different, so do you want to bring only the encoder part of the pre-trained Wav2Vec2?",
"Hey @jp1924 - we'll only be able to load the weights of the Wav2Vec2 model into the Transformer-Transducer **if** the T-T has the same encoder architecture as Wav2Vec2. If it doesn't then this won't be possible (or we'll only be able to load the weights that do match, which still leaves some randomly initialised)",
"I'd like to reiterate that adding a T-T model to Transformers would be amazing and think it's great you're excited by this too! \r\n\r\nWe should be selective though in adding a model where weights are already available, preferably 'official' ones as it's very hard to emulate these strong pre-trained checkpoints without the data/compute.\r\n\r\nIf this isn't the case, it's very difficult to justify adding the model to transformers (the torch model is not much use without the trained params to go with it!)",
"Sorry for the late reply! @sanchit-gandhi \r\nI'll experiment right away, I think it'll be possible if I just modify the Encoder part of the Transformer Transducer Model!\r\n\r\nBut there's one thing I'm worried about. It's the CNN layer of wav2vec2, and the Streaming Model will have at least 25ms of voice. But I don't know how the CNN class will react to this. Maybe we need more experiments on this.\r\n\r\n---\r\n*This is the next best way to try if the above method doesn't work. You don't have to read it.*\r\nThe second best way is to pretrain AudioEncoder using gumble-softmax.\r\n\r\nIn fact, the difference between Wav2Vec2 and T-T's Audio Encoder is the difference in how raw-audio is put into the Transformer Encoder. Wav2Vec2 compresses the voice using CNN, and T-T converts audio to Mel, compresses the voice through windowing, and puts it in the encoder layer.\r\n\r\nThen my idea is to convert audio into a windowed mel to pretrain the AudioEncoder of T-T.\r\n\r\nIt's just that I don't like it either. Because the way to pretrain the T-T model is obviously going to take a lot of resources and time... if possible, this is the last way I want to use it.....",
"Hey @jp1924, my feelings are that pre-training the model are going to be difficult from two perspectives:\r\n1. Hacky architecture: we won't be able to pre-train the correct T-T architecture, but some modified Wav2Vec2-T version\r\n2. Pre-training is expensive, both in terms of time and compute\r\n\r\nAlso, I'd like to re-iterate that it's very unlikely that we'd add such a model to Transformers - we can only really add 'official' implementations (i.e. the 'official' code and the 'official' weights), see https://github.com/huggingface/transformers/issues/20961#issuecomment-1382245091.\r\n\r\nMy recommendation would be to find an official T-T implementation where both the code and weights are available and see whether we can add this to Transformers!\r\n\r\nFeel free to post any findings here - we can discuss them and pick the right one for the T-T integration!\r\n\r\nRe-iterating my excitement for the T-T integration! We need to find 'official' code + checkpoints before we commit to integrating",
"Thank you for leaving a comment @sanchit-gandhi\r\n\r\nThe code and weight of the T-T model have not been officially released... and even the code of the T-T model that the users made personally has no weight. The code is not an formula, but is it possible to use that code to learn the model and upload it to the hub? Of course, when I heard that the model I learned had similar performance as the paper.",
"@flozi00 I personally probably have to pass on releasing an open-source T-T model due to a non-compete covering a closed-source T-T which I built. That said, the last time I talked to them, the University of Lübeck still had a few NVIDIA DGX available for research projects. The main requirement for such research GPU use is to write a 2-3 page paper about what worked and what didn't afterwards, so it's not a very big hurdle.\r\n\r\n@sanchit-gandhi In my experience, a T-T can be trained quite cheaply with transfer learning. For the label encoder, you force it to produce the same logits as a pre-trained T5 (out of which there are plenty on HF). For the acoustic encoder, you force it to imitate the logits from a pre-trained wav2vec2. You can even pre-compute the label and acoustic logit I/O pairs as a temporary dataset. Because you're now training the T-T components against fixed I/O pairs, as opposed to doing alignment while training, they will converge really quickly, like a few days on a A100 each. For the join/merge network, you can pre-generate forced alignment data (e.g. from wav2vec2) and then train against those. ",
"@fxtentacle \r\nHi!, In my experience, pre-trained wav2vec is full-attention model. so, I think that is not useful on T-T\r\n\r\nWhen I printed output 1) 10sec and 2) 1sec in 10sec,\r\nI compared 1)'s vector for 1sec and 2)'s vector, that is diffrent value each other.\r\n\r\nso, I think, i talk 'i'm so hun' and 'i'm so hungry'\r\n'hun' sound's acoustic vector is not verfied!\r\n\r\nmaybe, did you want to say about pre-trained wav2vec2 model on trained streaming-like dataset?",
"@YooSungHyun when you have a dataset of audio, you can use a pre-trained wav2vec2 to generate logits for every timestep. Normally, you would then resolve those logits into text using the language model, but instead you can also just save them as a new dataset. So then you have the raw audio one the one side and the time-aligned logits from wav2vec2 on the other side. And that data can be used to train the acoustic encoder of a T-T. You feed a chunk of the raw audio into the encoder and then use the difference to your \"known good\" logits from wav2vec2 as the loss signal. Doing so removes the uncertainty w.r.t. the time alignment, because you already know where in time each logit was emitted by wav2vec2. And that greatly speeds up training the acoustic encoder, because you can use an absolute error loss instead of using a CTC loss. And that produces a much cleaner gradient to learn from.",
"Thank you for your idea! @fxtentacle I tested it based on your idea!\r\n\r\nI understand what @YooSungHyun said\r\n```\r\nwhen the full voice \"i'm so hungry\" was input. \r\nin streaming case, the corresponding voice of \"l'm\" -> \"so\" -> \"hugry\" is come in order, \r\nbut Wav2Vec2 has a difference in the value of the vector \r\nwhen a full voice like \"i'm so hungry\" is received and when \"l'm\" -> \"so\" -> \"hugry\" is partially received.\r\n```\r\n \r\nSo the solution to this problem is\r\n```\r\nIf there is a difference, let's make the split_vector similar or equal to each part of the full_vector through \r\nthe loss calculation between the full_vector from \"i'm so hungry\" and the split_vector from \r\nsplit_audio (e.g., when separated per second).\r\n```\r\n \r\nSo based on the above understanding, I made the code below, but there was a problem.\r\n```\r\nfrom transformers import Wav2Vec2Model, Wav2Vec2Config\r\nimport torch\r\n\r\n\r\ndef main() -> None:\r\n model_name = r\"patrickvonplaten/wav2vec2-librispeech-clean-100h-demo-dist\"\r\n cache_dir = r\"\"\r\n\r\n config = Wav2Vec2Config.from_pretrained(\r\n model_name,\r\n cache_dir=cache_dir,\r\n apply_spec_augment=False,\r\n )\r\n model = Wav2Vec2Model.from_pretrained(model_name, cache_dir=cache_dir, config=config)\r\n\r\n sampling_rate = 16000\r\n batch_size = 2\r\n audio_size = [254080, 101600, 293600, 82880]\r\n # sec = 15.88, 6.35, 18.35, 5.18\r\n\r\n dummy_datas = [torch.rand((batch_size, audio_len)) for audio_len in audio_size]\r\n\r\n for full_audio in dummy_datas:\r\n outputs = model(full_audio)\r\n labels = outputs[0]\r\n\r\n input_values = torch.zeros(labels.size())\r\n full_size = full_audio.size(1)\r\n stack_size = 0\r\n check_list = list() # it's for test\r\n\r\n # [NOTE]: Cut the voice in 1 seconds.\r\n # If a 15.88 second voice is cut per second, 16 split_audios are generated.\r\n for idx, split_idx in enumerate(range(0, full_size, sampling_rate), start=1):\r\n split_audio = full_audio[:, split_idx : (split_idx + sampling_rate)]\r\n\r\n outputs = model(split_audio)\r\n hidden_states = outputs[0]\r\n check_list.append(hidden_states)\r\n hidden_size = hidden_states.shape[1]\r\n\r\n input_values[:, stack_size : stack_size + hidden_size] = hidden_states\r\n stack_size += hidden_size\r\n\r\n state_size = sum([state.shape[1] for state in check_list])\r\n print(\"\\n---------- result ----------\")\r\n print(f\"audio_length: {full_audio.shape[1] / sampling_rate}\")\r\n print(f\"labels_length: {labels.shape[1]}\")\r\n print(f\"actual_length: {state_size}\")\r\n print(f\"differece: {labels.shape[1] - state_size}\")\r\n print(f\"repeat_num: {idx}\")\r\n\r\n\r\nif \"__main__\" in __name__:\r\n main()\r\n```\r\nFor example, if you put full_audio with size n into Wav2Vec2, you get labels with length 7 of vector.\r\n \r\nThen, cut the full_audio with the size of n, get four split_audio per second, put it in wav2vec2 to get split_vector, and add the values to get input_values. \r\n\r\nMy opinion is that the length between labels and input_values should be the same when audio is processed in the above way. However, there is a difference in length when I turn the code above. \r\n\r\nThe picture below is a brief description of the problem. \r\n\r\n\r\nWhen you actually rewind the code above, a difference of 15 occurs when you extract labels and input_values from a voice of 15.88 seconds. 
The reason why the difference is 15 instead of 16 is because if length - 1 is applied to all the audio input, even the actual label would have been length -1, so the difference would be 15 instead of 16. \r\n\r\nThe serious point of this problem is that the difference in length between input_values and labels increases in proportion to the length of the voice. \r\n\r\nWhen I looked up the cause, I think the length-1 problem occurs while going through the Wav2Vec2FeatureEncoder (CNN). \r\n\r\nThe solution I think is to add 0 pad to split_vector and make 0 pad xavier, kaming initialize, etc., but I'm worried because it's not a fundamental solution. \r\n\r\nIs there any way to fundamentally solve the problem other than attaching a pad? \r\n\r\n"
] | 1,672
| 1,676
| 1,676
|
NONE
| null |
### Model description
paper: [Transformer Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T Loss](https://arxiv.org/abs/2002.02562)
- Transformer-Transducer is an end-to-end streaming ASR model that converts speech into text in real time.
- It implements RNN-T with Transformer encoders and is trained with the RNN-T loss.
- It consists of a Label Encoder for text, an Audio Encoder for speech, and a Joint Network that combines the outputs of the two encoders (see the sketch below).
- To stay within the Transformer's max_length, the audio is converted into a log-Mel spectrogram, and consecutive Mel frames are stacked so the utterance fits within max_length.
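For context, a minimal sketch of the Joint Network described above (shapes and layer sizes are illustrative, not the paper's exact configuration):
```python
import torch
import torch.nn as nn


class JointNetwork(nn.Module):
    """Combine audio-encoder frames with label-encoder states, then project
    to vocabulary logits suitable for the RNN-T loss."""

    def __init__(self, audio_dim, label_dim, joint_dim, vocab_size):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, joint_dim)
        self.label_proj = nn.Linear(label_dim, joint_dim)
        self.out = nn.Linear(joint_dim, vocab_size)

    def forward(self, audio_states, label_states):
        # audio_states: (batch, T, audio_dim); label_states: (batch, U, label_dim)
        joint = self.audio_proj(audio_states).unsqueeze(2) + self.label_proj(
            label_states
        ).unsqueeze(1)
        return self.out(torch.tanh(joint))  # (batch, T, U, vocab_size)
```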
### Open source status
- [X] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
jp1924: [Transformer-Transducer](https://github.com/jp1924/transformer-transducer)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20961/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20960
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20960/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20960/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20960/events
|
https://github.com/huggingface/transformers/issues/20960
| 1,516,067,170
|
I_kwDOCUB6oc5aXVli
| 20,960
|
There should partial forward in pretrained BERT and RoBERTa models
|
{
"login": "shashwat1002",
"id": 33834636,
"node_id": "MDQ6VXNlcjMzODM0NjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/33834636?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shashwat1002",
"html_url": "https://github.com/shashwat1002",
"followers_url": "https://api.github.com/users/shashwat1002/followers",
"following_url": "https://api.github.com/users/shashwat1002/following{/other_user}",
"gists_url": "https://api.github.com/users/shashwat1002/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shashwat1002/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shashwat1002/subscriptions",
"organizations_url": "https://api.github.com/users/shashwat1002/orgs",
"repos_url": "https://api.github.com/users/shashwat1002/repos",
"events_url": "https://api.github.com/users/shashwat1002/events{/privacy}",
"received_events_url": "https://api.github.com/users/shashwat1002/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Each of those models is defined in its own modeling file that you can modify to suit your needs. This is why we have a one file per model policy in Transformers :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,676
| 1,676
|
NONE
| null |
### Feature request
There should be a way to send inputs to a specific encoder layer and run only part of the forward pass.
For example, one should be able to send input hidden states into the fourth encoder layer and get all subsequent hidden states from the remaining forward computation (see the sketch below).
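A workaround today is to call the encoder layers directly; a minimal sketch for BERT (attention mask handling omitted for brevity):
```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("a causal intervention probe", return_tensors="pt")
hidden = model.embeddings(inputs["input_ids"])

# Resume the forward pass at the fourth encoder layer and collect every
# hidden state produced afterwards.
all_hidden = []
with torch.no_grad():
    for layer in model.encoder.layer[4:]:
        hidden = layer(hidden)[0]
        all_hidden.append(hidden)
```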
### Motivation
This kind of partial forward is needed for causal intervention experiments.
### Your contribution
I can submit a PR to this effect.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20960/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20959
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20959/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20959/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20959/events
|
https://github.com/huggingface/transformers/pull/20959
| 1,515,517,453
|
PR_kwDOCUB6oc5GcQ3l
| 20,959
|
auxiliary_loss works for Deformable Detr
|
{
"login": "long8v",
"id": 46675408,
"node_id": "MDQ6VXNlcjQ2Njc1NDA4",
"avatar_url": "https://avatars.githubusercontent.com/u/46675408?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/long8v",
"html_url": "https://github.com/long8v",
"followers_url": "https://api.github.com/users/long8v/followers",
"following_url": "https://api.github.com/users/long8v/following{/other_user}",
"gists_url": "https://api.github.com/users/long8v/gists{/gist_id}",
"starred_url": "https://api.github.com/users/long8v/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/long8v/subscriptions",
"organizations_url": "https://api.github.com/users/long8v/orgs",
"repos_url": "https://api.github.com/users/long8v/repos",
"events_url": "https://api.github.com/users/long8v/events{/privacy}",
"received_events_url": "https://api.github.com/users/long8v/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
DeformableDetr does not work when `auxiliary_loss=True`.
Since Deformable DETR stores `class_embed` and `bbox_embed` as lists (one per decoder layer), the following code raises a `NotImplementedError`:
```python
intermediate = outputs.intermediate_hidden_states if return_dict else outputs[4]
outputs_class = self.class_embed(intermediate)
outputs_coord = self.bbox_embed(intermediate).sigmoid()
```
```python
outputs_class = self.class_embed(intermediate)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 201, in _forward_unimplemented
raise NotImplementedError
NotImplementedError
```
To fix this, we can simply reuse the `outputs_class` and `outputs_coord` already computed in this [line](https://github.com/huggingface/transformers/blob/main/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L1942-L1943).
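In essence (a simplified sketch of the idea, not necessarily the exact diff; the variables come from `DeformableDetrForObjectDetection.forward`):
```python
# outputs_class / outputs_coord are the stacked per-decoder-layer predictions
# already computed a few lines above, so the shared heads never need to be called
if self.config.auxiliary_loss:
    auxiliary_outputs = self._set_aux_loss(outputs_class, outputs_coord)
    outputs_loss["auxiliary_outputs"] = auxiliary_outputs
```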
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20959/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20959",
"html_url": "https://github.com/huggingface/transformers/pull/20959",
"diff_url": "https://github.com/huggingface/transformers/pull/20959.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20959.patch",
"merged_at": 1672840868000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20958
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20958/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20958/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20958/events
|
https://github.com/huggingface/transformers/pull/20958
| 1,515,508,895
|
PR_kwDOCUB6oc5GcO6L
| 20,958
|
Fix valid ratio for Deformable Detr
|
{
"login": "long8v",
"id": 46675408,
"node_id": "MDQ6VXNlcjQ2Njc1NDA4",
"avatar_url": "https://avatars.githubusercontent.com/u/46675408?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/long8v",
"html_url": "https://github.com/long8v",
"followers_url": "https://api.github.com/users/long8v/followers",
"following_url": "https://api.github.com/users/long8v/following{/other_user}",
"gists_url": "https://api.github.com/users/long8v/gists{/gist_id}",
"starred_url": "https://api.github.com/users/long8v/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/long8v/subscriptions",
"organizations_url": "https://api.github.com/users/long8v/orgs",
"repos_url": "https://api.github.com/users/long8v/repos",
"events_url": "https://api.github.com/users/long8v/events{/privacy}",
"received_events_url": "https://api.github.com/users/long8v/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @NielsRogge and @amyeroberts "
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
I encountered unexpected behavior: the same image produces significantly different outputs depending on whether it is processed alone or in a multi-image batch.
The cause is the function `get_valid_ratio`, which returns, for each example, the ratio of the real image size to the padded size (images are padded to the longest width and height in the batch). Since the mask here has the opposite convention (True for real pixels, False for padding) from the original repo, the mask must be inverted relative to the [original repo](https://github.com/fundamentalvision/Deformable-DETR/blob/main/models/deformable_transformer.py#L117-L124) inside `get_valid_ratio`. Otherwise it returns the ratio of the **padded** width and height, which is obviously erroneous.
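A sketch of the corrected computation under the mask convention used here (True for real pixels, False for padding):
```python
import torch

def get_valid_ratio(mask):
    """Ratio of real (unpadded) size to padded size, per example."""
    _, height, width = mask.shape
    valid_height = torch.sum(mask[:, :, 0], 1)  # real rows per example
    valid_width = torch.sum(mask[:, 0, :], 1)   # real columns per example
    valid_ratio_height = valid_height.float() / height
    valid_ratio_width = valid_width.float() / width
    return torch.stack([valid_ratio_width, valid_ratio_height], -1)

mask = torch.zeros(2, 32, 32, dtype=torch.bool)
mask[0, :32, :32] = True  # full image
mask[1, :16, :24] = True  # 16x24 image padded to 32x32
print(get_valid_ratio(mask))  # -> [[1.00, 1.00], [0.75, 0.50]]
```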
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20958/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20958",
"html_url": "https://github.com/huggingface/transformers/pull/20958",
"diff_url": "https://github.com/huggingface/transformers/pull/20958.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20958.patch",
"merged_at": 1672757006000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20957
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20957/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20957/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20957/events
|
https://github.com/huggingface/transformers/pull/20957
| 1,515,381,313
|
PR_kwDOCUB6oc5Gbxr2
| 20,957
|
Fix T5 docstring
|
{
"login": "IvanLauLinTiong",
"id": 23013350,
"node_id": "MDQ6VXNlcjIzMDEzMzUw",
"avatar_url": "https://avatars.githubusercontent.com/u/23013350?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IvanLauLinTiong",
"html_url": "https://github.com/IvanLauLinTiong",
"followers_url": "https://api.github.com/users/IvanLauLinTiong/followers",
"following_url": "https://api.github.com/users/IvanLauLinTiong/following{/other_user}",
"gists_url": "https://api.github.com/users/IvanLauLinTiong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IvanLauLinTiong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IvanLauLinTiong/subscriptions",
"organizations_url": "https://api.github.com/users/IvanLauLinTiong/orgs",
"repos_url": "https://api.github.com/users/IvanLauLinTiong/repos",
"events_url": "https://api.github.com/users/IvanLauLinTiong/events{/privacy}",
"received_events_url": "https://api.github.com/users/IvanLauLinTiong/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
Fix the docstring for `T5Stack`'s `deparallelize` method:
`PARALLELIZE_DOCSTRING` -> `DEPARALLELIZE_DOCSTRING`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20957/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20957",
"html_url": "https://github.com/huggingface/transformers/pull/20957",
"diff_url": "https://github.com/huggingface/transformers/pull/20957.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20957.patch",
"merged_at": 1672743213000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20956
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20956/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20956/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20956/events
|
https://github.com/huggingface/transformers/pull/20956
| 1,515,104,014
|
PR_kwDOCUB6oc5Ga31J
| 20,956
|
[docs] improve issue template for i18n
|
{
"login": "wonhyeongseo",
"id": 29195190,
"node_id": "MDQ6VXNlcjI5MTk1MTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wonhyeongseo",
"html_url": "https://github.com/wonhyeongseo",
"followers_url": "https://api.github.com/users/wonhyeongseo/followers",
"following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}",
"gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions",
"organizations_url": "https://api.github.com/users/wonhyeongseo/orgs",
"repos_url": "https://api.github.com/users/wonhyeongseo/repos",
"events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}",
"received_events_url": "https://api.github.com/users/wonhyeongseo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20956). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
The initial issue template (https://github.com/huggingface/transformers/pull/20199) includes minor typos and a link to a closed PR.
That link makes it seem that the index page of a new language is already translated when it isn't.
A comment explaining what `langCode` and `langName` mean would also be helpful.
- [x] Fixed typos.
- [x] Removed the irrelevant PR link.
- [x] Made `langCode` and `langName` easier to understand (replaced them, rather than explaining).
Fixes https://github.com/huggingface/transformers/issues/20955
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
May you please review this PR, @sgugger?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20956/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20956",
"html_url": "https://github.com/huggingface/transformers/pull/20956",
"diff_url": "https://github.com/huggingface/transformers/pull/20956.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20956.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20955
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20955/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20955/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20955/events
|
https://github.com/huggingface/transformers/issues/20955
| 1,515,103,074
|
I_kwDOCUB6oc5aTqNi
| 20,955
|
Typos and minor changes needed for i18n issue template
|
{
"login": "wonhyeongseo",
"id": 29195190,
"node_id": "MDQ6VXNlcjI5MTk1MTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wonhyeongseo",
"html_url": "https://github.com/wonhyeongseo",
"followers_url": "https://api.github.com/users/wonhyeongseo/followers",
"following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}",
"gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions",
"organizations_url": "https://api.github.com/users/wonhyeongseo/orgs",
"repos_url": "https://api.github.com/users/wonhyeongseo/repos",
"events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}",
"received_events_url": "https://api.github.com/users/wonhyeongseo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
The initial issue template (https://github.com/huggingface/transformers/pull/20199) includes minor typos and a link to a closed PR.
That link makes it seem that the index page of a new language is already translated when it isn't.
A comment explaining what `langCode` and `langName` mean would also be helpful.
- [ ] Fix typos.
- [ ] Remove the irrelevant PR link.
- [ ] Explain what `langCode` and `langName` mean more clearly.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20955/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20954
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20954/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20954/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20954/events
|
https://github.com/huggingface/transformers/issues/20954
| 1,514,949,116
|
I_kwDOCUB6oc5aTEn8
| 20,954
|
Can't Save TFHubertForCTC as Saved_model
|
{
"login": "ahmedlone127",
"id": 66001253,
"node_id": "MDQ6VXNlcjY2MDAxMjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/66001253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahmedlone127",
"html_url": "https://github.com/ahmedlone127",
"followers_url": "https://api.github.com/users/ahmedlone127/followers",
"following_url": "https://api.github.com/users/ahmedlone127/following{/other_user}",
"gists_url": "https://api.github.com/users/ahmedlone127/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahmedlone127/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahmedlone127/subscriptions",
"organizations_url": "https://api.github.com/users/ahmedlone127/orgs",
"repos_url": "https://api.github.com/users/ahmedlone127/repos",
"events_url": "https://api.github.com/users/ahmedlone127/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahmedlone127/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @ahmedlone127 👋 I was able to reproduce locally on the latest version, will look into its causes!",
"@ahmedlone127: #20966 should fix it after it is merged 🤗\r\n\r\nTwo notes:\r\n1 - You would have to install `transformers` from git (`pip install https://github.com/huggingface/transformers`)\r\n2 - Your example script has some warnings after including the fix. There is a chance that the loaded model does not have the functionality you wish, you'd have to try it out :)"
] | 1,672
| 1,673
| 1,673
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.10.133+-x86_64-with-glibc2.27
- Python version: 3.8.16
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu116 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: I am running on colab
### Who can help?
@Rocketknight1 @gante
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import Wav2Vec2Processor, TFHubertForCTC
model = TFHubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")
model.save("test")
```
```
Downloading: 100%
1.38k/1.38k [00:00<00:00, 53.4kB/s]
Downloading: 100%
1.26G/1.26G [00:32<00:00, 72.2MB/s]
TFHubertForCTC has backpropagation operations that are NOT supported on CPU. If you wish to train/fine-tine this model, you need a GPU or a TPU
All model checkpoint layers were used when initializing TFHubertForCTC.
All the layers of TFHubertForCTC were initialized from the model checkpoint at facebook/hubert-large-ls960-ft.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFHubertForCTC for predictions without further training.
---------------------------------------------------------------------------
OperatorNotAllowedInGraphError Traceback (most recent call last)
[<ipython-input-2-d87cdca07c35>](https://localhost:8080/#) in <module>
2
3 model = TFHubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")
----> 4 model.save("test")
4 frames
[/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py](https://localhost:8080/#) in error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
[/usr/lib/python3.8/contextlib.py](https://localhost:8080/#) in __exit__(self, type, value, traceback)
118 if type is None:
119 try:
--> 120 next(self.gen)
121 except StopIteration:
122 return False
[/usr/local/lib/python3.8/dist-packages/transformers/models/hubert/modeling_tf_hubert.py](https://localhost:8080/#) in call(self, input_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict, training, **kwargs)
1260 mask_time_indices = kwargs.get("mask_time_indices", None)
1261 if inputs["training"]:
-> 1262 hidden_states = self._mask_hidden_states(hidden_states, mask_time_indices=mask_time_indices)
1263
1264 encoder_outputs = self.encoder(
[/usr/local/lib/python3.8/dist-packages/transformers/models/hubert/modeling_tf_hubert.py](https://localhost:8080/#) in _mask_hidden_states(self, hidden_states, mask_time_indices)
1191 elif self.config.mask_time_prob > 0:
1192 # generate indices & apply SpecAugment along time axis
-> 1193 mask_time_indices = _compute_mask_indices(
1194 (batch_size, sequence_length),
1195 mask_prob=self.config.mask_time_prob,
[/usr/local/lib/python3.8/dist-packages/transformers/models/hubert/modeling_tf_hubert.py](https://localhost:8080/#) in _compute_mask_indices(shape, mask_prob, mask_length, min_masks)
222 raise ValueError("`mask_length` has to be bigger than 0.")
223
--> 224 if mask_length > sequence_length:
225 raise ValueError(
226 f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and"
OperatorNotAllowedInGraphError: Using a symbolic `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
```
### Expected behavior
This code should save the model as a TensorFlow SavedModel; it works in version 4.22.2. Also, after changing the value of `sequence_length` to some fixed value such as 100 in the source code, it started working.
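For context, a graph-compatible way to express such a guard is a TF-level assertion instead of a Python `if` (a hedged sketch only; the actual fix landed separately and may differ):
```python
import tensorflow as tf

mask_length = tf.constant(10)       # stand-ins for the symbolic values in _compute_mask_indices
sequence_length = tf.constant(100)

# traces inside tf.function, unlike `if mask_length > sequence_length:`
tf.debugging.assert_less_equal(
    mask_length,
    sequence_length,
    message="`mask_length` has to be smaller than `sequence_length`",
)
```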
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20954/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20953
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20953/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20953/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20953/events
|
https://github.com/huggingface/transformers/issues/20953
| 1,514,858,201
|
I_kwDOCUB6oc5aSubZ
| 20,953
|
Add whisper converter for hf -> openai
|
{
"login": "faroit",
"id": 72940,
"node_id": "MDQ6VXNlcjcyOTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/72940?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faroit",
"html_url": "https://github.com/faroit",
"followers_url": "https://api.github.com/users/faroit/followers",
"following_url": "https://api.github.com/users/faroit/following{/other_user}",
"gists_url": "https://api.github.com/users/faroit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faroit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faroit/subscriptions",
"organizations_url": "https://api.github.com/users/faroit/orgs",
"repos_url": "https://api.github.com/users/faroit/repos",
"events_url": "https://api.github.com/users/faroit/events{/privacy}",
"received_events_url": "https://api.github.com/users/faroit/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"👋 @ArthurZucker, who created the conversion script\r\n ",
"Hey! I am not entirely sure if we should have this in our repo or just link it in the readme, but I think a contributor already wrote a script so pinging @bayartsogt-ya here.",
"For visibility, whisper.cpp provides conversion from HF models into their format.\r\nIt's much high performing that OpenAI implementation. See: https://github.com/ggerganov/whisper.cpp/tree/master/models",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I've seen a lot of converter codes flying around. Also given that tranformers is catching up with respect to timestamp outputs there is not much need to use whisper itself for inference anymore",
"@faroit could you add a link to a conversion script? Just so if people end up here, they find what they are looking for."
] | 1,672
| 1,675
| 1,675
|
NONE
| null |
### Feature request
The inference pipeline in openai whisper has a couple of heuristics that aren't all covered in https://github.com/huggingface/transformers. Therefore, some users would like to fine-tune in huggingface and convert the model back to its original configuration.
https://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/convert_openai_to_hf.py provides a script to convert from openai to hf, so we should also have a script to go the other way.
### Motivation
It is difficult to get the same transcription quality in the current transformers library compared to the openai transcribe function https://github.com/openai/whisper/blob/main/whisper/transcribe.py.
### Your contribution
there is an existing approach in https://github.com/luigisaetta/whisper-app/blob/main/match_layers.py by @luigisaetta
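For reference, a hedged sketch of the reverse direction: invert the key renames that `convert_openai_to_hf.py` applies and re-key the HF state dict. The mapping below is deliberately incomplete and illustrative only; a real script needs the full rule set (see the `match_layers.py` linked above):
```python
import torch
from transformers import WhisperForConditionalGeneration

hf_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
hf_state = hf_model.model.state_dict()

# illustrative subset of HF -> OpenAI key renames (NOT exhaustive)
rename = {
    "encoder.layer_norm": "encoder.ln_post",
    "encoder.embed_positions.weight": "encoder.positional_embedding",
}

openai_state = {}
for key, value in hf_state.items():
    for hf_prefix, oa_prefix in rename.items():
        if key.startswith(hf_prefix):
            key = oa_prefix + key[len(hf_prefix):]
            break
    openai_state[key] = value

torch.save({"model_state_dict": openai_state}, "whisper-tiny-openai.pt")
```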
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20953/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/20953/timeline
|
not_planned
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20952
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20952/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20952/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20952/events
|
https://github.com/huggingface/transformers/pull/20952
| 1,514,760,002
|
PR_kwDOCUB6oc5GZuR8
| 20,952
|
Add generate kwargs to `AutomaticSpeechRecognitionPipeline`
|
{
"login": "bofenghuang",
"id": 38185248,
"node_id": "MDQ6VXNlcjM4MTg1MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/38185248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bofenghuang",
"html_url": "https://github.com/bofenghuang",
"followers_url": "https://api.github.com/users/bofenghuang/followers",
"following_url": "https://api.github.com/users/bofenghuang/following{/other_user}",
"gists_url": "https://api.github.com/users/bofenghuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bofenghuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bofenghuang/subscriptions",
"organizations_url": "https://api.github.com/users/bofenghuang/orgs",
"repos_url": "https://api.github.com/users/bofenghuang/repos",
"events_url": "https://api.github.com/users/bofenghuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/bofenghuang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for this modificaiton. @sgugger for final review"
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
Hi @Narsil 👋,
This is the new PR of https://github.com/huggingface/transformers/pull/20935, as the commit history in the old one is messed up
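For reference, a usage sketch of what this PR enables (the argument name comes from the PR title; exact behavior is defined by the final diff):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")
# generation-time options are forwarded to model.generate(); "sample.flac" is a placeholder path
result = asr("sample.flac", generate_kwargs={"max_new_tokens": 100})
print(result["text"])
```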
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20952/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20952",
"html_url": "https://github.com/huggingface/transformers/pull/20952",
"diff_url": "https://github.com/huggingface/transformers/pull/20952.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20952.patch",
"merged_at": 1672467219000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20951
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20951/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20951/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20951/events
|
https://github.com/huggingface/transformers/pull/20951
| 1,514,731,540
|
PR_kwDOCUB6oc5GZn0v
| 20,951
|
[trainer: `distributed_concat`] ensure `all_gather`'s inputs are contiguous
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
This PR fixes https://github.com/huggingface/transformers/issues/20942 where a user's code results in a non-contiguous tensor being passed to `all_gather` which fails with:
```
Traceback (most recent call last):
File "contiguous.py", line 83, in <module>
preds = torch.tensor(trainer.predict(eval_dataset)[0])
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py", line 2894, in predict
output = eval_loop(
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py", line 3024, in evaluation_loop
logits = self._nested_gather(logits)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py", line 3140, in _nested_gather
tensors = distributed_concat(tensors)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py", line 191, in distributed_concat
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py", line 191, in <genexpr>
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py", line 191, in distributed_concat
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py", line 191, in <genexpr>
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py", line 191, in distributed_concat
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py", line 191, in <genexpr>
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py", line 194, in distributed_concat
dist.all_gather(output_tensors, tensor)
File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2275, in all_gather
work = default_pg.allgather([tensor_list], [tensor])
RuntimeError: Tensors must be contiguous
```
The fix adds `.contiguous()`, which is a no-op if the tensor is already contiguous and otherwise makes a contiguous copy.
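In essence, the change is (a simplified sketch of `distributed_concat`, reduced to the plain-tensor case; the real function also recurses into tuples and dicts):
```python
import torch
import torch.distributed as dist

def distributed_concat(tensor, num_total_examples=None):
    output_tensors = [tensor.clone() for _ in range(dist.get_world_size())]
    dist.all_gather(output_tensors, tensor.contiguous())  # was: dist.all_gather(output_tensors, tensor)
    concat = torch.cat(output_tensors, dim=0)
    # truncate the dummy elements added by the distributed sampler, if any
    return concat[:num_total_examples] if num_total_examples is not None else concat
```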
Fixes: https://github.com/huggingface/transformers/issues/20942
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20951/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20951",
"html_url": "https://github.com/huggingface/transformers/pull/20951",
"diff_url": "https://github.com/huggingface/transformers/pull/20951.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20951.patch",
"merged_at": 1672466112000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20950
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20950/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20950/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20950/events
|
https://github.com/huggingface/transformers/issues/20950
| 1,514,713,135
|
I_kwDOCUB6oc5aSLAv
| 20,950
|
Mask dimension expansion might be wrong
|
{
"login": "pphuangyi",
"id": 22546248,
"node_id": "MDQ6VXNlcjIyNTQ2MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/22546248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pphuangyi",
"html_url": "https://github.com/pphuangyi",
"followers_url": "https://api.github.com/users/pphuangyi/followers",
"following_url": "https://api.github.com/users/pphuangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/pphuangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pphuangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pphuangyi/subscriptions",
"organizations_url": "https://api.github.com/users/pphuangyi/orgs",
"repos_url": "https://api.github.com/users/pphuangyi/repos",
"events_url": "https://api.github.com/users/pphuangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/pphuangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ArthurZucker ",
"Hey! Do you happen to have found an error or any discrepancy ? I am not very familiar with this model but if you have a reproduction script or something to help me work on this, would be appreciated! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,676
| 1,676
|
NONE
| null |
https://github.com/huggingface/transformers/blob/17292440c069118fbdb992b9a17da2098fab5b87/src/transformers/models/reformer/modeling_reformer.py#L845
I feel that the way the mask is expanded here might be wrong. More specifically, I think the mask needs to be repeated by the number of hashes and split into chunks before running `gather`.
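A purely illustrative sketch of what I have in mind (all names and sizes below are hypothetical, not the actual Reformer internals):
```python
import torch

batch_size, sequence_length, num_hashes, chunk_length = 2, 64, 4, 8  # hypothetical
attention_mask = torch.ones(batch_size, sequence_length, dtype=torch.bool)

# one mask copy per hash round, then split into chunks to match the bucketed key layout
mask = attention_mask.repeat(1, num_hashes)        # (batch, num_hashes * seq_len)
mask = mask.reshape(batch_size, -1, chunk_length)  # (batch, n_chunks, chunk_length)
```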
Please look into it and let me know if I misunderstood. Thank you so much!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20950/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20949
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20949/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20949/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20949/events
|
https://github.com/huggingface/transformers/pull/20949
| 1,514,708,206
|
PR_kwDOCUB6oc5GZikm
| 20,949
|
Remove T5 dependency from mT5 model
|
{
"login": "SD-13",
"id": 89520981,
"node_id": "MDQ6VXNlcjg5NTIwOTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/89520981?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SD-13",
"html_url": "https://github.com/SD-13",
"followers_url": "https://api.github.com/users/SD-13/followers",
"following_url": "https://api.github.com/users/SD-13/following{/other_user}",
"gists_url": "https://api.github.com/users/SD-13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SD-13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SD-13/subscriptions",
"organizations_url": "https://api.github.com/users/SD-13/orgs",
"repos_url": "https://api.github.com/users/SD-13/repos",
"events_url": "https://api.github.com/users/SD-13/events{/privacy}",
"received_events_url": "https://api.github.com/users/SD-13/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger, I have made your suggested changes, please take a look. Thanks,",
"Thanks! You'll just need to add `MT5Stack` in [this list](https://github.com/huggingface/transformers/blob/56397471b454e8707b7865cfba0130f04a889592/utils/check_repo.py#L37) (along with T5Stack) to make all checks pass.",
"Failure is flaky so merging. Thanks for the work on this!"
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes part of #19303
This PR removes T5 dependency from the mT5 model.
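For context, a minimal sketch of the pattern these refactors follow (assumed here; the PR diff is authoritative): the T5 classes are duplicated into `modeling_mt5.py` with a `# Copied from` marker so `make repo-consistency` keeps the copies in sync.
```python
import torch
from torch import nn

# Copied from transformers.models.t5.modeling_t5.T5LayerNorm with T5->MT5
class MT5LayerNorm(nn.Module):
    def __init__(self, hidden_size, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = eps

    def forward(self, hidden_states):
        # T5 uses RMS norm: no mean subtraction, no bias (slightly simplified here)
        variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
        return self.weight * hidden_states
```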
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20949/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20949",
"html_url": "https://github.com/huggingface/transformers/pull/20949",
"diff_url": "https://github.com/huggingface/transformers/pull/20949.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20949.patch",
"merged_at": 1672858314000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20948
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20948/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20948/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20948/events
|
https://github.com/huggingface/transformers/pull/20948
| 1,514,596,454
|
PR_kwDOCUB6oc5GZKN8
| 20,948
|
🌐 [i18n-KO] Translated `installation.mdx` to Korean
|
{
"login": "wonhyeongseo",
"id": 29195190,
"node_id": "MDQ6VXNlcjI5MTk1MTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wonhyeongseo",
"html_url": "https://github.com/wonhyeongseo",
"followers_url": "https://api.github.com/users/wonhyeongseo/followers",
"following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}",
"gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions",
"organizations_url": "https://api.github.com/users/wonhyeongseo/orgs",
"repos_url": "https://api.github.com/users/wonhyeongseo/repos",
"events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}",
"received_events_url": "https://api.github.com/users/wonhyeongseo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,681
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Translated the `installation.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger, @ArthurZucker, @eunseojo may you please review this PR?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20948/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20948",
"html_url": "https://github.com/huggingface/transformers/pull/20948",
"diff_url": "https://github.com/huggingface/transformers/pull/20948.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20948.patch",
"merged_at": 1674032724000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20947
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20947/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20947/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20947/events
|
https://github.com/huggingface/transformers/issues/20947
| 1,514,497,757
|
I_kwDOCUB6oc5aRWbd
| 20,947
|
`tf2` BERT checkpoint to `pytorch_model.bin` (with MLM head)
|
{
"login": "Iulian277",
"id": 31247431,
"node_id": "MDQ6VXNlcjMxMjQ3NDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/31247431?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Iulian277",
"html_url": "https://github.com/Iulian277",
"followers_url": "https://api.github.com/users/Iulian277/followers",
"following_url": "https://api.github.com/users/Iulian277/following{/other_user}",
"gists_url": "https://api.github.com/users/Iulian277/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Iulian277/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Iulian277/subscriptions",
"organizations_url": "https://api.github.com/users/Iulian277/orgs",
"repos_url": "https://api.github.com/users/Iulian277/repos",
"events_url": "https://api.github.com/users/Iulian277/events{/privacy}",
"received_events_url": "https://api.github.com/users/Iulian277/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only.\r\n\r\nNote that we do not provide conversion scripts to convert checkpoints obtained with other libraries, the ones exposed are to convert the checkpoints from their original implementation to the Hugging Face format. If you train a Hugging Face Tensorflow model, you'll then seamlessly be able to convert it to PyTorch/Flax."
] | 1,672
| 1,676
| 1,672
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.9.5
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cpu (False)
- Tensorflow version (GPU?): 2.10.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help?
@ArthurZucker @younesbelkada @gante
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to convert a `tf2` BERT checkpoint to the `pytorch_model.bin` format in order to upload it on the Huggingface hub. I know that there are 2 scripts for doing this type of conversion (one for [tf1](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py) checkpoints and one for [tf2](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/convert_bert_original_tf2_checkpoint_to_pytorch.py) checkpoints).
The problem is that my BERT checkpoint is in `tf2` format and I want to convert it to a PyTorch model (encoder + heads).
Note that I followed this [guide](https://github.com/tensorflow/models/blob/master/official/nlp/docs/train.md#pre-train-a-bert-from-scratch) and used the [official](https://github.com/tensorflow/models/tree/master/official/nlp) TensorFlow scripts to obtain the checkpoints.
### Expected behavior
I would like to export a `tf2` BERT checkpoint to a `pytorch` model, exporting the MLM head alongside the encoder.
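For completeness, if the checkpoint had been produced by the Hugging Face TF model itself (it was not here; `tensorflow/models` uses its own variable layout, which is why the dedicated conversion scripts exist), the cross-framework export would be a short sketch (paths are placeholders):
```python
from transformers import BertForMaskedLM

# `from_tf=True` loads TF weights into the PyTorch class, MLM head included
model = BertForMaskedLM.from_pretrained("path/to/tf_checkpoint_dir", from_tf=True)
model.save_pretrained("path/to/pytorch_model")  # writes pytorch_model.bin
```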
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20947/timeline
|
not_planned
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20946
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20946/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20946/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20946/events
|
https://github.com/huggingface/transformers/pull/20946
| 1,514,482,423
|
PR_kwDOCUB6oc5GYx4O
| 20,946
|
[i18n-KO] Translated quicktour page to Korean
|
{
"login": "wonhyeongseo",
"id": 29195190,
"node_id": "MDQ6VXNlcjI5MTk1MTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wonhyeongseo",
"html_url": "https://github.com/wonhyeongseo",
"followers_url": "https://api.github.com/users/wonhyeongseo/followers",
"following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}",
"gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions",
"organizations_url": "https://api.github.com/users/wonhyeongseo/orgs",
"repos_url": "https://api.github.com/users/wonhyeongseo/repos",
"events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}",
"received_events_url": "https://api.github.com/users/wonhyeongseo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you so much for your review @sgugger and especially @ArthurZucker for your awesome proof-read! I have commited your suggestion. Although it is a bit late, happy lunar new year everyone! 🐇 🌕 ",
"Happy new year! 🐰 "
] | 1,672
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Translated the `Quicktour.mdx` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger, @ArthurZucker, @eunseojo may you please review this PR?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20946/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20946",
"html_url": "https://github.com/huggingface/transformers/pull/20946",
"diff_url": "https://github.com/huggingface/transformers/pull/20946.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20946.patch",
"merged_at": 1674738603000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20945
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20945/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20945/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20945/events
|
https://github.com/huggingface/transformers/pull/20945
| 1,514,317,248
|
PR_kwDOCUB6oc5GYOg9
| 20,945
|
Fixing DistilBert error message
|
{
"login": "samuelzxu",
"id": 14795989,
"node_id": "MDQ6VXNlcjE0Nzk1OTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/14795989?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samuelzxu",
"html_url": "https://github.com/samuelzxu",
"followers_url": "https://api.github.com/users/samuelzxu/followers",
"following_url": "https://api.github.com/users/samuelzxu/following{/other_user}",
"gists_url": "https://api.github.com/users/samuelzxu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samuelzxu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samuelzxu/subscriptions",
"organizations_url": "https://api.github.com/users/samuelzxu/orgs",
"repos_url": "https://api.github.com/users/samuelzxu/repos",
"events_url": "https://api.github.com/users/samuelzxu/events{/privacy}",
"received_events_url": "https://api.github.com/users/samuelzxu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
Distilbert fix from [this thread](https://github.com/huggingface/transformers/pull/20933/commits/6f0282dd13d646b0f58d99a0c19377646efc2d55#r1059242097)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20945/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20945",
"html_url": "https://github.com/huggingface/transformers/pull/20945",
"diff_url": "https://github.com/huggingface/transformers/pull/20945.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20945.patch",
"merged_at": 1672389850000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20944
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20944/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20944/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20944/events
|
https://github.com/huggingface/transformers/pull/20944
| 1,514,307,418
|
PR_kwDOCUB6oc5GYMab
| 20,944
|
Replace `past` with `past_key_values`
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Ok the failing tests were because I did not pull from main, were the `tf_utils` now uses the `generate_config`. LGTM the failing test seems to be unrelated",
"Yes, good to merge for me!"
] | 1,672
| 1,673
| 1,673
|
COLLABORATOR
| null |
# What does this PR do?
The argument `past` was completely replaced with `past_key_values`; this PR should therefore fix any problem with `kwargs` being swallowed for older models during generation.
Related to #20347
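Illustratively, the rename follows this pattern in `prepare_inputs_for_generation` (a hedged sketch; each affected model has its own variant):
```python
def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs):
    # `past_key_values` replaces the legacy `past` argument, so generate()
    # kwargs are no longer silently swallowed under the old name
    if past_key_values is not None:
        # with a cache, only the last generated token needs to be fed
        input_ids = input_ids[:, -1:]
    return {"input_ids": input_ids, "past_key_values": past_key_values}
```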
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20944/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20944",
"html_url": "https://github.com/huggingface/transformers/pull/20944",
"diff_url": "https://github.com/huggingface/transformers/pull/20944.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20944.patch",
"merged_at": 1673169701000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20943
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20943/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20943/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20943/events
|
https://github.com/huggingface/transformers/issues/20943
| 1,514,211,644
|
I_kwDOCUB6oc5aQQk8
| 20,943
|
Incorrect type for TrainerArgs#report_to
|
{
"login": "davidgilbertson",
"id": 4443482,
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidgilbertson",
"html_url": "https://github.com/davidgilbertson",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I would like to politely question the logic of auto-closing issues. Do you not want GitHub issues to be a register of things that should be fixed? Auto-closing just means this bug will live on to annoy another user at some point in the future, and maybe they'll report it too."
] | 1,672
| 1,675
| 1,675
|
NONE
| null |
### System Info
A minor thing...
The type of the `TrainingArguments` `report_to` argument is defined here: https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py#L901 as `Optional[List[str]]`, but the [docstring](https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py#L430) describes `Optional[str | List[str]]`. The documented default is wrong too.
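For illustration, a minimal sketch of an annotation that would match the docstring (an assumption, not the actual fix):
```python
from dataclasses import dataclass, field
from typing import List, Optional, Union

@dataclass
class TrainingArgumentsSketch:
    # Union type matches the docstring; the real class in training_args.py
    # currently annotates this field as Optional[List[str]].
    report_to: Optional[Union[str, List[str]]] = field(
        default=None,
        metadata={"help": "The list of integrations to report the results and logs to."},
    )
```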
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
NA
### Expected behavior
NA
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20943/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20942
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20942/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20942/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20942/events
|
https://github.com/huggingface/transformers/issues/20942
| 1,514,100,078
|
I_kwDOCUB6oc5aP1Vu
| 20,942
|
`RuntimeError: Tensors must be contiguous` when calling `predict` on a GPTJ-for-classification trainer
|
{
"login": "Dahoas",
"id": 36314634,
"node_id": "MDQ6VXNlcjM2MzE0NjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/36314634?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dahoas",
"html_url": "https://github.com/Dahoas",
"followers_url": "https://api.github.com/users/Dahoas/followers",
"following_url": "https://api.github.com/users/Dahoas/following{/other_user}",
"gists_url": "https://api.github.com/users/Dahoas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dahoas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dahoas/subscriptions",
"organizations_url": "https://api.github.com/users/Dahoas/orgs",
"repos_url": "https://api.github.com/users/Dahoas/repos",
"events_url": "https://api.github.com/users/Dahoas/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dahoas/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Trying to run your example, @Dahoas, I get:\r\n```\r\nTraceback (most recent call last):\r\n File \"contiguous.py\", line 8, in <module>\r\n from rm_datasets import PairwiseDataset, PairwiseEvalDataset, pairwise_data_collator\r\nModuleNotFoundError: No module named 'rm_datasets'\r\n```\r\n\r\nfixed that by removing:\r\n\r\n```\r\nfrom rm_datasets import PairwiseDataset, PairwiseEvalDataset, pairwise_data_collator\r\n```\r\nand now it fails on:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"contiguous.py\", line 10, in <module>\r\n from utils import freeze_bottom_causal_layers, load_yaml, make_rm\r\nModuleNotFoundError: No module named 'utils'\r\n```",
"Sorry about that, I forgot to remove some extraneous imports. I've done so now. ",
"Thank you for the corrections, @Dahoas - I'm able to reproduce the issue. Thank you for that.\r\n\r\nWill follow up again once I get a chance to investigate the issue.",
"The missing from report full traceback is:\r\n\r\n```\r\n***** Running Prediction *****\r\n Num examples = 10206\r\n Batch size = 1\r\nTraceback (most recent call last):\r\n File \"contiguous.py\", line 83, in <module>\r\n preds = torch.tensor(trainer.predict(eval_dataset)[0])\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py\", line 2894, in predict\r\n output = eval_loop(\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py\", line 3024, in evaluation_loop\r\n logits = self._nested_gather(logits)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer.py\", line 3140, in _nested_gather\r\n tensors = distributed_concat(tensors)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py\", line 191, in distributed_concat\r\n return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py\", line 191, in <genexpr>\r\n return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py\", line 191, in distributed_concat\r\n return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py\", line 191, in <genexpr>\r\n return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py\", line 191, in distributed_concat\r\n return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py\", line 191, in <genexpr>\r\n return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master-2/src/transformers/trainer_pt_utils.py\", line 194, in distributed_concat\r\n dist.all_gather(output_tensors, tensor)\r\n File \"/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 2275, in all_gather\r\n work = default_pg.allgather([tensor_list], [tensor])\r\nRuntimeError: Tensors must be contiguous\r\n```",
"@Dahoas, this should fix the problem: https://github.com/huggingface/transformers/pull/20951\r\n\r\nThank you for making it super easy for us to identify the problem!",
"Excellent thank you very much!"
] | 1,672
| 1,672
| 1,672
|
NONE
| null |
### System Info
- `transformers` version: 4.21.2
- Platform: Linux-5.15.0-1023-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: 1
- Using distributed or parallel set-up in script?: huggingface transformers deepspeed
### Who can help?
@sgugger @stas00
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import os
import torch
from torch.utils.data import Dataset, random_split
from transformers import AutoTokenizer, TrainingArguments, Trainer, AutoModelForCausalLM, IntervalStrategy, AutoModel, AutoConfig, PreTrainedModel, AutoModelForSequenceClassification
import json
import deepspeed
import argparse
from datasets import load_dataset
import wandb
from tqdm import tqdm


class PairwiseEvalDataset(Dataset):
    def __init__(self, pairs, tokenizer, max_length):
        self.input_ids = []
        self.attn_masks = []
        for pair in tqdm(pairs):
            prompt = pair["prompt"]
            chosen, rejected = pair["chosen"], pair["rejected"]
            tok_chosen = tokenizer(prompt + chosen + "<|endoftext|>", return_tensors="pt")["input_ids"]
            tok_rejected = tokenizer(prompt + rejected + "<|endoftext|>", return_tensors="pt")["input_ids"]
            # Reject data with num tokens > max_length
            if tok_chosen.shape[-1] <= max_length and tok_rejected.shape[-1] <= max_length:
                chosen_encodings_dict = tokenizer(prompt + chosen + '<|endoftext|>', truncation=True,
                                                  max_length=max_length, padding="max_length", return_tensors="pt")
                rejected_encodings_dict = tokenizer(prompt + rejected + '<|endoftext|>', truncation=True,
                                                    max_length=max_length, padding="max_length", return_tensors="pt")
                # First append chosen then rejected
                self.input_ids.append(chosen_encodings_dict['input_ids'])
                self.attn_masks.append(chosen_encodings_dict['attention_mask'])
                self.input_ids.append(rejected_encodings_dict['input_ids'])
                self.attn_masks.append(rejected_encodings_dict['attention_mask'])

    def __len__(self):
        return len(self.input_ids)

    def __getitem__(self, idx):
        return self.input_ids[idx], self.attn_masks[idx]


def pairwise_data_collator(data):
    if len(data[0]) == 4:
        return {'input_ids': torch.cat([f[0] for f in data] + [f[2] for f in data]),
                'attention_mask': torch.cat([f[1] for f in data] + [f[3] for f in data])}
    elif len(data[0]) == 2:
        return {'input_ids': torch.cat([f[0] for f in data]),
                'attention_mask': torch.cat([f[1] for f in data])}
    else:
        raise ValueError("Invalid data format")


class PairwiseTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        # forward pass; the batch stacks chosen examples first, then rejected
        PAD_ID = model.PAD_ID
        assert len(inputs["input_ids"].shape) == 2
        bs = inputs["input_ids"].shape[0] // 2
        chosen = inputs["input_ids"][:bs]
        rejected = inputs["input_ids"][bs:]
        outputs = model(**inputs)  # keep the full output so return_outputs works
        rewards = outputs.logits
        chosen_rewards = rewards[:bs]
        rejected_rewards = rewards[bs:]
        loss = -torch.log(torch.sigmoid(chosen_rewards - rejected_rewards)).mean()
        return (loss, outputs) if return_outputs else loss


def make_rm(model_name):
    config = AutoConfig.from_pretrained(model_name)
    config.num_labels = 1
    reward_model = AutoModelForSequenceClassification.from_config(config)
    return reward_model


tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
tokenizer.pad_token = tokenizer.eos_token
model = make_rm("Dahoas/gptj-sft-static")
data = load_dataset("Dahoas/rm-static")
max_length = 1024
eval_dataset = PairwiseEvalDataset(data["test"], tokenizer, max_length=max_length)
train_args = TrainingArguments(output_dir=".", per_device_eval_batch_size=1)
trainer = PairwiseTrainer(model=model, args=train_args, train_dataset=eval_dataset, data_collator=pairwise_data_collator)
# TODO(dahoas): Unsure how to compute metrics in trainer for non-classification task
preds = torch.tensor(trainer.predict(eval_dataset)[0])
```
with ds_config
```json
{
"train_batch_size": "auto",
"fp16": {
"enabled": "auto",
"min_loss_scale": 1,
"loss_scale_window": 1000,
"hysteresis": 2,
"initial_scale_power": 32
},
"bf16": {
"enabled": "auto"
},
"zero_optimization": {
"stage": 3,
"offload_param": {
"device": "none"
},
"offload_optimizer": {
"device": "none"
},
"allgather_partitions": true,
"allgather_bucket_size": 5e8,
"contiguous_gradients": true
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": [
0.9,
0.999
],
"eps": 1e-08
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": "auto",
"warmup_num_steps": 100
}
}
}
```
Launch with `deepspeed --num_gpus 1 test.py --deepspeed ../configs/ds_configs/ds_config_gpt_j_z3.json`
I get the error `RuntimeError: Tensors must be contiguous`. The script runs as expected when replacing `gptj` with `gpt2`. I am using one A100 40 GB GPU. Thank you for any insight.
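For context, a minimal sketch of the kind of change that resolves this error (the actual fix landed in #20951 and may differ; names below are illustrative):
```python
import torch
import torch.distributed as dist

def distributed_concat(tensor: torch.Tensor) -> torch.Tensor:
    # requires an initialized process group; names here are illustrative
    output_tensors = [torch.empty_like(tensor) for _ in range(dist.get_world_size())]
    # .contiguous() is the key change: all_gather rejects non-contiguous tensors
    dist.all_gather(output_tensors, tensor.contiguous())
    return torch.cat(output_tensors, dim=0)
```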
### Expected behavior
`trainer.predict` should run inference without error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20942/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20941
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20941/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20941/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20941/events
|
https://github.com/huggingface/transformers/pull/20941
| 1,514,000,374
|
PR_kwDOCUB6oc5GXL99
| 20,941
|
Add document token classification pipeline
|
{
"login": "vaishak2future",
"id": 2349706,
"node_id": "MDQ6VXNlcjIzNDk3MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2349706?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vaishak2future",
"html_url": "https://github.com/vaishak2future",
"followers_url": "https://api.github.com/users/vaishak2future/followers",
"following_url": "https://api.github.com/users/vaishak2future/following{/other_user}",
"gists_url": "https://api.github.com/users/vaishak2future/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vaishak2future/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vaishak2future/subscriptions",
"organizations_url": "https://api.github.com/users/vaishak2future/orgs",
"repos_url": "https://api.github.com/users/vaishak2future/repos",
"events_url": "https://api.github.com/users/vaishak2future/events{/privacy}",
"received_events_url": "https://api.github.com/users/vaishak2future/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20941). All of your documentation changes will be reflected on that endpoint."
] | 1,672
| 1,672
| 1,672
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20941/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20941",
"html_url": "https://github.com/huggingface/transformers/pull/20941",
"diff_url": "https://github.com/huggingface/transformers/pull/20941.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20941.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20940
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20940/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20940/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20940/events
|
https://github.com/huggingface/transformers/pull/20940
| 1,513,909,311
|
PR_kwDOCUB6oc5GW4oS
| 20,940
|
Update run_wav2vec2_pretraining_no_trainer.py
|
{
"login": "Snimm",
"id": 53926889,
"node_id": "MDQ6VXNlcjUzOTI2ODg5",
"avatar_url": "https://avatars.githubusercontent.com/u/53926889?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Snimm",
"html_url": "https://github.com/Snimm",
"followers_url": "https://api.github.com/users/Snimm/followers",
"following_url": "https://api.github.com/users/Snimm/following{/other_user}",
"gists_url": "https://api.github.com/users/Snimm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Snimm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Snimm/subscriptions",
"organizations_url": "https://api.github.com/users/Snimm/orgs",
"repos_url": "https://api.github.com/users/Snimm/repos",
"events_url": "https://api.github.com/users/Snimm/events{/privacy}",
"received_events_url": "https://api.github.com/users/Snimm/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20940). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,675
| 1,675
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #18436
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a [link](https://github.com/huggingface/transformers/issues/18436) if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@muellerzr
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20940/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20940",
"html_url": "https://github.com/huggingface/transformers/pull/20940",
"diff_url": "https://github.com/huggingface/transformers/pull/20940.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20940.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20939
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20939/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20939/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20939/events
|
https://github.com/huggingface/transformers/pull/20939
| 1,513,879,371
|
PR_kwDOCUB6oc5GWyOZ
| 20,939
|
Add X-MOD
|
{
"login": "jvamvas",
"id": 5830820,
"node_id": "MDQ6VXNlcjU4MzA4MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5830820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jvamvas",
"html_url": "https://github.com/jvamvas",
"followers_url": "https://api.github.com/users/jvamvas/followers",
"following_url": "https://api.github.com/users/jvamvas/following{/other_user}",
"gists_url": "https://api.github.com/users/jvamvas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jvamvas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jvamvas/subscriptions",
"organizations_url": "https://api.github.com/users/jvamvas/orgs",
"repos_url": "https://api.github.com/users/jvamvas/repos",
"events_url": "https://api.github.com/users/jvamvas/events{/privacy}",
"received_events_url": "https://api.github.com/users/jvamvas/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This PR is now ready for review.\r\n\r\nUploaded models:\r\n- https://huggingface.co/jvamvas/xmod-base\r\n- https://huggingface.co/jvamvas/xmod-large-prenorm\r\n- https://huggingface.co/jvamvas/xmod-base-13-125k\r\n- https://huggingface.co/jvamvas/xmod-base-30-125k\r\n- https://huggingface.co/jvamvas/xmod-base-30-195k\r\n- https://huggingface.co/jvamvas/xmod-base-60-125k\r\n- https://huggingface.co/jvamvas/xmod-base-60-265k\r\n- https://huggingface.co/jvamvas/xmod-base-75-125k\r\n- https://huggingface.co/jvamvas/xmod-base-75-269k",
"@younesbelkada Thank you for the swift code review, much appreciated!\r\nI have now implemented your comments.",
"@sgugger Thanks for the review. Your suggestions have now been implemented",
"Can you also add the model to the `documentation_tests.txt` file to and run the doctests to be sure that they are valid?",
"@ArthurZucker Thanks for the code review. I have now implemented the changes you requested.\r\n\r\nI agree that the models should be moved to the [facebook](https://huggingface.co/facebook) organization but do not have the permissions to do so.\r\n",
"About moving the weights, I think I am in the org, and can help with that / ask to add you to transfer them 😉 \r\nLooks very good, almost there! 🚀 ",
"Hi @ArthurZucker, thanks for pointing out that there are missing tests in this PR.\r\nUnfortunately, I have not been able to figure out which tests are missing, exactly.\r\n\r\nAs of now, there are the following tests:\r\n- `tests.models.xmod.test_modeling_xmod.XmodModelTest` – checks that there are no errors when calling the methods of `XmodFor...`, including `model.generate()`\r\n- `tests.models.xmod.test_modeling_xmod.XmodModelIntegrationTest` – checks that the output of the pre-trained models [jvamvas/xmod-base](https://huggingface.co/jvamvas/xmod-base) and [jvamvas/xmod-large-prenorm](https://huggingface.co/jvamvas/xmod-large-prenorm) is identical to the corresponding Fairseq models.\r\n\r\nCould you please clarify which tests need to be added still?",
"Hey! Thanks for bearing with me. \r\n- What is there but should not: a pipeline test inside the `test_modeling` file\r\n- The missing tests : \r\nSomething like what we have in opt , which will be part of the tests.models.xmod.test_modeling_xmod.XmodModelIntegrationTest. You can also have a `class XmodGenerationTest(unittest.TestCase):`\r\nA sample test is the following.\r\n```python \r\n def test_batch_generation(self):\r\n model_id = \"facebook/opt-350m\"\r\n\r\n tokenizer = GPT2Tokenizer.from_pretrained(model_id)\r\n model = OPTForCausalLM.from_pretrained(model_id)\r\n model.to(torch_device)\r\n\r\n tokenizer.padding_side = \"left\"\r\n\r\n # use different length sentences to test batching\r\n sentences = [\r\n \"Hello, my dog is a little\",\r\n \"Today, I\",\r\n ]\r\n\r\n inputs = tokenizer(sentences, return_tensors=\"pt\", padding=True)\r\n input_ids = inputs[\"input_ids\"].to(torch_device)\r\n\r\n outputs = model.generate(\r\n input_ids=input_ids,\r\n attention_mask=inputs[\"attention_mask\"].to(torch_device),\r\n )\r\n\r\n inputs_non_padded = tokenizer(sentences[0], return_tensors=\"pt\").input_ids.to(torch_device)\r\n output_non_padded = model.generate(input_ids=inputs_non_padded)\r\n\r\n num_paddings = inputs_non_padded.shape[-1] - inputs[\"attention_mask\"][-1].long().sum().cpu().item()\r\n inputs_padded = tokenizer(sentences[1], return_tensors=\"pt\").input_ids.to(torch_device)\r\n output_padded = model.generate(input_ids=inputs_padded, max_length=model.config.max_length - num_paddings)\r\n\r\n batch_out_sentence = tokenizer.batch_decode(outputs, skip_special_tokens=True)\r\n non_padded_sentence = tokenizer.decode(output_non_padded[0], skip_special_tokens=True)\r\n padded_sentence = tokenizer.decode(output_padded[0], skip_special_tokens=True)\r\n\r\n expected_output_sentence = [\r\n \"Hello, my dog is a little bit of a dork.\\nI'm a little bit\",\r\n \"Today, I was in the middle of a conversation with a friend about the\",\r\n ]\r\n self.assertListEqual(expected_output_sentence, batch_out_sentence)\r\n self.assertListEqual(batch_out_sentence, [non_padded_sentence, padded_sentence])\r\n```\r\nDoes that make sense? 😉 \r\n",
"The CI tests are broken but it is not your fault ! We are going to have to wait until the basic docker properly runs, but the added test looks good 😉 ",
"hi @jvamvas !\r\nFor the code quality tests just need to rebase with `main` and run:\r\n```\r\npip install --upgrade -e .[\"quality\"]\r\n```\r\nThen run the usual `make style` or `make fixup`",
"@younesbelkada Sorry about the bad rebase. On the plus side, the tests are now passing again :tada: ",
"Yeah hahah. Do you think you can reset, then rebase instead of merge? 😉 \r\n",
"@ArthurZucker Done. The failing test is not related to this PR",
"Great work! Thanks for working on this model! 🥳 "
] | 1,672
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
Add the X-MOD models released with the paper [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255).
## Implementation notes
- There are nine pre-trained models released in the fairseq repo: https://github.com/facebookresearch/fairseq/tree/main/examples/xmod. I will upload them under my own name and they can be moved to the [facebook](https://huggingface.co/facebook) organization after merging.
- The model code can be adapted from XLM-RoBERTa. Separate code is required due to the language adapters and the pre-norm (a minimal sketch of the adapter idea follows below).
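Illustrative only: a hedged sketch of the per-language bottleneck adapter idea described in the paper; names and the activation choice are assumptions and need not match the merged `XmodModel` code.
```python
import torch.nn as nn

class LanguageAdapter(nn.Module):
    """Residual bottleneck adapter, one instance per language (illustrative)."""

    def __init__(self, hidden_size: int, bottleneck_size: int):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.act = nn.GELU()  # activation choice is an assumption
        self.up = nn.Linear(bottleneck_size, hidden_size)

    def forward(self, hidden_states):
        # each language routes through its own adapter; the shared
        # transformer body stays language-agnostic
        return hidden_states + self.up(self.act(self.down(hidden_states)))
```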
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- text models: @ArthurZucker and @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20939/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20939/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20939",
"html_url": "https://github.com/huggingface/transformers/pull/20939",
"diff_url": "https://github.com/huggingface/transformers/pull/20939.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20939.patch",
"merged_at": 1676039527000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20938
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20938/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20938/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20938/events
|
https://github.com/huggingface/transformers/pull/20938
| 1,513,686,989
|
PR_kwDOCUB6oc5GWJJj
| 20,938
|
fix levit timm conversion file
|
{
"login": "Bearnardd",
"id": 43574448,
"node_id": "MDQ6VXNlcjQzNTc0NDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/43574448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bearnardd",
"html_url": "https://github.com/Bearnardd",
"followers_url": "https://api.github.com/users/Bearnardd/followers",
"following_url": "https://api.github.com/users/Bearnardd/following{/other_user}",
"gists_url": "https://api.github.com/users/Bearnardd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bearnardd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bearnardd/subscriptions",
"organizations_url": "https://api.github.com/users/Bearnardd/orgs",
"repos_url": "https://api.github.com/users/Bearnardd/repos",
"events_url": "https://api.github.com/users/Bearnardd/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bearnardd/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks again for your contribution!"
] | 1,672
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the conversion file `convert_levit_timm_to_pytorch.py` for LeViT.
Fixes https://github.com/huggingface/transformers/issues/20937
## Who can review?
@NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20938/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20938",
"html_url": "https://github.com/huggingface/transformers/pull/20938",
"diff_url": "https://github.com/huggingface/transformers/pull/20938.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20938.patch",
"merged_at": 1673008050000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20937
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20937/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20937/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20937/events
|
https://github.com/huggingface/transformers/issues/20937
| 1,513,686,798
|
I_kwDOCUB6oc5aOQcO
| 20,937
|
Levit conversion file argparser bug
|
{
"login": "Bearnardd",
"id": 43574448,
"node_id": "MDQ6VXNlcjQzNTc0NDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/43574448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bearnardd",
"html_url": "https://github.com/Bearnardd",
"followers_url": "https://api.github.com/users/Bearnardd/followers",
"following_url": "https://api.github.com/users/Bearnardd/following{/other_user}",
"gists_url": "https://api.github.com/users/Bearnardd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bearnardd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bearnardd/subscriptions",
"organizations_url": "https://api.github.com/users/Bearnardd/orgs",
"repos_url": "https://api.github.com/users/Bearnardd/repos",
"events_url": "https://api.github.com/users/Bearnardd/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bearnardd/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Closing since fix was merged."
] | 1,672
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
### Who can help?
@NielsRogge
### Reproduction
In the conversion file for the `levit` model (`convert_levit_timm_to_pytorch.py`) there is an incorrect usage of `argparse.ArgumentParser()` for the `--push_to_hub` argument. Currently it is handled with type `bool`, but in argparse the `bool` type does not behave as expected: the current code always produces `push_to_hub=True`, no matter whether `--push_to_hub=False` or `--push_to_hub=True` is passed on the CLI. A more canonical convention for handling boolean values with argparse is to use two flags, e.g. `--no-push_to_hub` to store `False` and `--push_to_hub` to store `True`.
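For illustration, a minimal sketch of the paired-flag pattern (an assumption about the shape of the fix, not the merged code):
```python
import argparse

parser = argparse.ArgumentParser()
# Paired flags: `--push_to_hub` stores True, `--no-push_to_hub` stores False.
parser.add_argument("--push_to_hub", dest="push_to_hub", action="store_true")
parser.add_argument("--no-push_to_hub", dest="push_to_hub", action="store_false")
parser.set_defaults(push_to_hub=True)

print(parser.parse_args([]))                    # Namespace(push_to_hub=True)
print(parser.parse_args(["--no-push_to_hub"]))  # Namespace(push_to_hub=False)
```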
### Expected behavior
Correct usage of `argparse`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20937/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20936
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20936/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20936/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20936/events
|
https://github.com/huggingface/transformers/pull/20936
| 1,513,686,052
|
PR_kwDOCUB6oc5GWI8M
| 20,936
|
Fix error message in `WhisperFeatureExtractor`
|
{
"login": "bofenghuang",
"id": 38185248,
"node_id": "MDQ6VXNlcjM4MTg1MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/38185248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bofenghuang",
"html_url": "https://github.com/bofenghuang",
"followers_url": "https://api.github.com/users/bofenghuang/followers",
"following_url": "https://api.github.com/users/bofenghuang/following{/other_user}",
"gists_url": "https://api.github.com/users/bofenghuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bofenghuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bofenghuang/subscriptions",
"organizations_url": "https://api.github.com/users/bofenghuang/orgs",
"repos_url": "https://api.github.com/users/bofenghuang/repos",
"events_url": "https://api.github.com/users/bofenghuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/bofenghuang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
cc @ArthurZucker :)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20936/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20936",
"html_url": "https://github.com/huggingface/transformers/pull/20936",
"diff_url": "https://github.com/huggingface/transformers/pull/20936.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20936.patch",
"merged_at": 1672385857000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20935
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20935/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20935/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20935/events
|
https://github.com/huggingface/transformers/pull/20935
| 1,513,625,365
|
PR_kwDOCUB6oc5GV7qg
| 20,935
|
Add generate kwargs to `AutomaticSpeechRecognitionPipeline`
|
{
"login": "bofenghuang",
"id": 38185248,
"node_id": "MDQ6VXNlcjM4MTg1MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/38185248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bofenghuang",
"html_url": "https://github.com/bofenghuang",
"followers_url": "https://api.github.com/users/bofenghuang/followers",
"following_url": "https://api.github.com/users/bofenghuang/following{/other_user}",
"gists_url": "https://api.github.com/users/bofenghuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bofenghuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bofenghuang/subscriptions",
"organizations_url": "https://api.github.com/users/bofenghuang/orgs",
"repos_url": "https://api.github.com/users/bofenghuang/repos",
"events_url": "https://api.github.com/users/bofenghuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/bofenghuang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@Narsil it's indeed better this way. Thanks for the explanation!",
"Hi @Narsil,\r\n\r\nSome tests of `ctc_with_lm` models failed. I think we could\r\n\r\n1. Lift `decoder` in `__init__` as an individual argument\r\n2. Add `**kwargs` into `_sanitize_parameters`\r\n\r\nPersonally I prefer the 1st one since the other one may introduce some silent errors. What's your opinion? ",
"> Personally I prefer the 1st one since the other one may introduce some silent errors. What's your opinion?\r\n\r\nIn general I would agree with you. Pipelines accepting so many parameters I would tend to keep it simple, and maybe just change line 183 \r\n\r\n```diff\r\n- self.decoder = kwargs[\"decoder\"]\r\n+ self.decoder = kwargs.pop(\"decoder\")\r\n```\r\n\r\nThis would be just so the signature is kept at a minimum (the docstring should be good) and avoiding accepting `decoder` as a positioned arguments instead of a keyword one. (I know we can do that within the signature, but it does complexify the docs, notably this part: https://huggingface.co/docs/transformers/v4.25.1/en/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline\r\n",
"This is the sort of function complexity that I think is more detrimental than helping unfortunately: https://huggingface.co/docs/transformers/v4.25.1/en/main_classes/text_generation#transformers.GenerationMixin.generate\r\n",
"> In general I would agree with you. Pipelines accepting so many parameters I would tend to keep it simple, and maybe just change line 183\r\n> \r\n> ```diff\r\n> - self.decoder = kwargs[\"decoder\"]\r\n> + self.decoder = kwargs.pop(\"decoder\")\r\n> ```\r\n\r\nThe error occurs in the line 173 where `_sanitize_parameters` is called in parent :(",
"> The error occurs in the line 173 where `_sanitize_parameters` is called in parent :(\r\n\r\nAh so it happens before then, let's do it you way then\r\n\r\ndoes\r\n```\r\n__init__(self, ....,, *args, *, decoder, **kwargs)\r\n```\r\nwork ?\r\n(Try and force to disable positional argument for `decoder` ?\r\n",
"> does\r\n> \r\n> ```\r\n> __init__(self, ....,, *args, *, decoder, **kwargs)\r\n> ```\r\n> \r\n> work ? (Try and force to disable positional argument for `decoder` ?\r\n\r\nNo it's a syntax error :(\r\n\r\nCan we do this ?\r\n\r\n```diff\r\n- def __init__(self, feature_extractor: Union[\"SequenceFeatureExtractor\", str], *args, **kwargs):\r\n+ def __init__(self, feature_extractor: Union[\"SequenceFeatureExtractor\", str], decoder: Optional[Union[\"BeamSearchDecoderCTC\", str]] = None, *args, **kwargs):\r\n```",
"This will interpret `AutomaticSpeecRecognitionPipeline(feature_extractor, model)` and interpret `model` as `decoder` which will lead to confusing errors.\r\n\r\nCan you try :\r\n\r\n```python\r\n+ def __init__(self, feature_extractor: Union[\"SequenceFeatureExtractor\", str], *, decoder: Optional[Union[\"BeamSearchDecoderCTC\", str]] = None, **kwargs):\r\n```\r\nMaybe ?",
"> Can you try :\r\n> \r\n> ```python\r\n> + def __init__(self, feature_extractor: Union[\"SequenceFeatureExtractor\", str], *, decoder: Optional[Union[\"BeamSearchDecoderCTC\", str]] = None, **kwargs):\r\n> ```\r\n> \r\n> Maybe ?\r\n\r\nNo we need `*args` for the line 173",
"> > Can you try :\r\n> > ```python\r\n> > + def __init__(self, feature_extractor: Union[\"SequenceFeatureExtractor\", str], *, decoder: Optional[Union[\"BeamSearchDecoderCTC\", str]] = None, **kwargs):\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > Maybe ?\r\n> \r\n> No we need `*args` for the line 173\r\n\r\nRemove it there too. ",
"@Narsil Oups, the commit history seems to be messed up. Let me create a new one!",
"Closed as the other one is cleaner https://github.com/huggingface/transformers/pull/20952"
] | 1,672
| 1,673
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Hi @Narsil 👋,
In this PR, I tried to add generate arguments to `AutomaticSpeechRecognitionPipeline` in order to run the pipeline with seq2seq models using beam search, contrastive search, etc. I followed the style in [`TextGenerationPipeline`](https://github.com/huggingface/transformers/blob/8637316e5e94ba0a2493e5df7846f2f23f46eaef/src/transformers/pipelines/text2text_generation.py#L73).
```python
import torch
from transformers import pipeline
pipe = pipeline(model="openai/whisper-base", device=0, torch_dtype=torch.float16)
pipe("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac", max_new_tokens=5)
# {'text': ' He hoped'}
```
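With the kwargs forwarded to `generate`, other decoding strategies should be reachable through the same call (a sketch assuming this PR's interface; the flag values are illustrative):
```python
pipe(
    "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac",
    num_beams=4,  # beam search instead of greedy decoding
    max_new_tokens=20,
)
```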
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20935/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20935",
"html_url": "https://github.com/huggingface/transformers/pull/20935",
"diff_url": "https://github.com/huggingface/transformers/pull/20935.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20935.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20934
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20934/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20934/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20934/events
|
https://github.com/huggingface/transformers/issues/20934
| 1,513,508,258
|
I_kwDOCUB6oc5aNk2i
| 20,934
|
Mismatched outputs from encoders of `transformers` and `whisper`
|
{
"login": "JinchaoLove",
"id": 34153355,
"node_id": "MDQ6VXNlcjM0MTUzMzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/34153355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JinchaoLove",
"html_url": "https://github.com/JinchaoLove",
"followers_url": "https://api.github.com/users/JinchaoLove/followers",
"following_url": "https://api.github.com/users/JinchaoLove/following{/other_user}",
"gists_url": "https://api.github.com/users/JinchaoLove/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JinchaoLove/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JinchaoLove/subscriptions",
"organizations_url": "https://api.github.com/users/JinchaoLove/orgs",
"repos_url": "https://api.github.com/users/JinchaoLove/repos",
"events_url": "https://api.github.com/users/JinchaoLove/events{/privacy}",
"received_events_url": "https://api.github.com/users/JinchaoLove/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I think it's mismatched because I used `WhisperEncoder`, but the checkpoints' keys are with prefix of `encoder.`. When I revised the above code as \r\n```python\r\nenc1 = ppb.WhisperModel.from_pretrained('openai/whisper-small').encoder\r\n```\r\nthe error is dropped to 1e3, but the difference between each element is still about `1e-4`. May I ask why?",
"I've checked the weights from `huggingface` and `openai` and they're same. I think maybe there're some difference between the architectures, could anyone help check?",
"Hey @JinchaoLove! Thanks for opening this issue! I've answered on the HF Hub where the same question was posted: https://huggingface.co/openai/whisper-small/discussions/9#63b6ed4438471ff4c0818f19\r\n\r\nCopying the response here for reference:\r\n\r\nThe proposed method of loading the WhisperEncoder `from_pretrained` is resulting in none of the pre-trained weights being loaded:\r\n```python\r\nimport transformers as ppb\r\n\r\nenc1 = ppb.models.whisper.modeling_whisper.WhisperEncoder.from_pretrained('openai/whisper-small')\r\n```\r\n<details>\r\n\r\n<summary> Warning message: </summary>\r\n\r\n```\r\nSome weights of WhisperEncoder were not initialized from the model checkpoint at openai/whisper-small and are newly initialized: ['model.layers.3.self_attn.v_proj.weight', 'model.layers.6.self_attn_layer_norm.weight', 'model.layers.0.self_attn_layer_norm.bias', 'model.layers.3.final_layer_norm.bias', 'model.layers.2.fc2.weight', 'model.layers.9.fc2.bias', 'model.layers.6.self_attn_layer_norm.bias', 'model.layers.6.self_attn.v_proj.bias', 'model.layers.10.self_attn.q_proj.bias', 'model.layers.5.self_attn.k_proj.weight', 'model.layers.5.self_attn.q_proj.weight', 'model.layers.9.fc1.weight', 'model.layers.1.final_layer_norm.weight', 'model.layers.1.self_attn.q_proj.bias', 'model.layers.9.fc1.bias', 'model.layers.1.self_attn.q_proj.weight', 'model.conv2.weight', 'model.layers.3.self_attn.q_proj.weight', 'model.layers.11.self_attn.v_proj.bias', 'model.layers.3.final_layer_norm.weight', 'model.layers.2.self_attn.q_proj.weight', 'model.layers.3.self_attn.k_proj.weight', 'model.layers.4.self_attn.out_proj.weight', 'model.layers.11.final_layer_norm.bias', 'model.layers.8.self_attn.k_proj.weight', 'model.layers.8.final_layer_norm.bias', 'model.layers.4.self_attn.k_proj.weight', 'model.layers.1.fc1.weight', 'model.layers.5.fc2.bias', 'model.layers.5.self_attn.v_proj.weight', 'model.layers.8.self_attn.out_proj.bias', 'model.layers.8.self_attn.q_proj.weight', 'model.layers.6.final_layer_norm.bias', 'model.layers.10.fc1.weight', 'model.layers.11.self_attn_layer_norm.bias', 'model.layers.6.fc1.weight', 'model.layers.11.self_attn.v_proj.weight', 'model.layers.10.final_layer_norm.weight', 'model.layers.7.self_attn.v_proj.bias', 'model.layers.1.self_attn_layer_norm.weight', 'model.layers.3.fc2.weight', 'model.layers.2.self_attn.k_proj.weight', 'model.conv2.bias', 'model.layers.11.self_attn.out_proj.bias', 'model.layers.11.fc2.weight', 'model.layers.0.fc1.bias', 'model.layer_norm.bias', 'model.layers.10.self_attn_layer_norm.weight', 'model.layers.5.fc1.weight', 'model.layers.10.self_attn.k_proj.weight', 'model.layers.1.self_attn.v_proj.weight', 'model.layers.5.self_attn.out_proj.weight', 'model.layers.3.self_attn_layer_norm.bias', 'model.layers.3.fc1.weight', 'model.layers.1.self_attn.out_proj.weight', 'model.layers.4.final_layer_norm.bias', 'model.conv1.bias', 'model.layers.5.self_attn.out_proj.bias', 'model.layers.4.self_attn.out_proj.bias', 'model.layers.5.fc2.weight', 'model.layers.6.self_attn.out_proj.bias', 'model.layers.4.final_layer_norm.weight', 'model.layers.10.fc2.weight', 'model.layers.4.self_attn.q_proj.weight', 'model.layers.4.fc2.weight', 'model.layers.2.self_attn.q_proj.bias', 'model.layers.4.fc1.weight', 'model.layers.6.self_attn.q_proj.weight', 'model.layers.6.final_layer_norm.weight', 'model.layers.9.self_attn.q_proj.bias', 'model.layers.8.self_attn.v_proj.weight', 'model.layers.0.fc1.weight', 'model.layers.2.self_attn.v_proj.weight', 'model.layers.7.self_attn.k_proj.weight', 
'model.layers.9.self_attn.q_proj.weight', 'model.layers.4.fc1.bias', 'model.layers.7.self_attn.out_proj.weight', 'model.layers.11.fc2.bias', 'model.layers.2.self_attn_layer_norm.bias', 'model.layers.5.fc1.bias', 'model.layers.9.self_attn_layer_norm.bias', 'model.layers.6.fc1.bias', 'model.layers.9.self_attn.v_proj.bias', 'model.layers.6.fc2.weight', 'model.layers.11.final_layer_norm.weight', 'model.layers.0.self_attn.k_proj.weight', 'model.layers.0.fc2.weight', 'model.layers.7.final_layer_norm.weight', 'model.layers.10.self_attn.out_proj.weight', 'model.layers.5.self_attn.q_proj.bias', 'model.layers.10.self_attn.out_proj.bias', 'model.layers.11.fc1.bias', 'model.layers.2.fc1.weight', 'model.layers.2.final_layer_norm.weight', 'model.layers.7.final_layer_norm.bias', 'model.layers.3.self_attn.v_proj.bias', 'model.layers.4.self_attn.q_proj.bias', 'model.layers.1.self_attn.k_proj.weight', 'model.layers.8.fc2.weight', 'model.layers.11.self_attn.k_proj.weight', 'model.layers.1.final_layer_norm.bias', 'model.layers.2.self_attn_layer_norm.weight', 'model.layers.5.final_layer_norm.weight', 'model.layers.8.self_attn_layer_norm.bias', 'model.layers.7.self_attn.q_proj.bias', 'model.layers.10.self_attn_layer_norm.bias', 'model.layers.5.self_attn.v_proj.bias', 'model.layers.10.self_attn.v_proj.weight', 'model.layers.3.self_attn.out_proj.bias', 'model.layers.9.final_layer_norm.bias', 'model.conv1.weight', 'model.layers.10.fc1.bias', 'model.layers.9.self_attn.k_proj.weight', 'model.layers.1.fc2.weight', 'model.layers.6.self_attn.k_proj.weight', 'model.layers.3.self_attn.out_proj.weight', 'model.layers.8.self_attn.out_proj.weight', 'model.layers.3.fc2.bias', 'model.layers.6.self_attn.q_proj.bias', 'model.layers.7.self_attn.out_proj.bias', 'model.layers.3.fc1.bias', 'model.layers.10.final_layer_norm.bias', 'model.layers.9.final_layer_norm.weight', 'model.layers.1.fc2.bias', 'model.layers.1.fc1.bias', 'model.layers.9.fc2.weight', 'model.layers.7.fc2.bias', 'model.layers.6.self_attn.v_proj.weight', 'model.layer_norm.weight', 'model.layers.8.fc1.bias', 'model.layers.8.self_attn_layer_norm.weight', 'model.layers.7.fc1.weight', 'model.layers.2.self_attn.out_proj.bias', 'model.layers.8.self_attn.v_proj.bias', 'model.layers.6.fc2.bias', 'model.layers.0.fc2.bias', 'model.layers.9.self_attn.v_proj.weight', 'model.layers.8.final_layer_norm.weight', 'model.layers.11.self_attn.q_proj.bias', 'model.layers.11.self_attn.q_proj.weight', 'model.layers.0.self_attn.q_proj.bias', 'model.layers.0.final_layer_norm.weight', 'model.layers.0.self_attn.v_proj.weight', 'model.layers.8.fc2.bias', 'model.layers.0.self_attn_layer_norm.weight', 'model.layers.10.self_attn.q_proj.weight', 'model.layers.7.fc2.weight', 'model.layers.4.self_attn_layer_norm.weight', 'model.layers.6.self_attn.out_proj.weight', 'model.layers.11.self_attn_layer_norm.weight', 'model.layers.5.self_attn_layer_norm.weight', 'model.layers.4.self_attn.v_proj.bias', 'model.layers.5.final_layer_norm.bias', 'model.layers.4.fc2.bias', 'model.layers.9.self_attn.out_proj.weight', 'model.layers.0.self_attn.q_proj.weight', 'model.layers.4.self_attn_layer_norm.bias', 'model.layers.10.fc2.bias', 'model.layers.7.self_attn.q_proj.weight', 'model.layers.0.self_attn.out_proj.bias', 'model.layers.2.self_attn.out_proj.weight', 'model.layers.1.self_attn.out_proj.bias', 'model.layers.7.fc1.bias', 'model.layers.2.fc1.bias', 'model.layers.8.self_attn.q_proj.bias', 'model.layers.10.self_attn.v_proj.bias', 'model.layers.2.fc2.bias', 'model.layers.7.self_attn_layer_norm.bias', 
'model.layers.11.self_attn.out_proj.weight', 'model.layers.4.self_attn.v_proj.weight', 'model.layers.2.final_layer_norm.bias', 'model.layers.11.fc1.weight', 'model.layers.3.self_attn_layer_norm.weight', 'model.layers.0.self_attn.out_proj.weight', 'model.layers.7.self_attn.v_proj.weight', 'model.layers.9.self_attn.out_proj.bias', 'model.layers.2.self_attn.v_proj.bias', 'model.layers.9.self_attn_layer_norm.weight', 'model.layers.3.self_attn.q_proj.bias', 'model.layers.1.self_attn_layer_norm.bias', 'model.layers.0.final_layer_norm.bias', 'model.layers.1.self_attn.v_proj.bias', 'model.layers.0.self_attn.v_proj.bias', 'model.layers.7.self_attn_layer_norm.weight', 'model.embed_positions.weight', 'model.layers.5.self_attn_layer_norm.bias', 'model.layers.8.fc1.weight']\r\n```\r\n</details>\r\n\r\nInstead, we should load all of the encoder-decoder weights using `WhisperForConditionalGeneration` and then extract the encoder module. This is the same logic we are using for the OpenAI implementation. When we do so, the maximum element-wise difference between the HF implementation and the OpenAI implementation is `8.5e-5` (to within numerical precision):\r\n\r\n```python\r\nimport torch\r\nfrom transformers import WhisperForConditionalGeneration\r\nimport whisper\r\n\r\nx = torch.randn(1, 80, 3000) # random input feature\r\n\r\nenc1 = WhisperForConditionalGeneration.from_pretrained('openai/whisper-small').model.encoder\r\nenc2 = whisper.load_model('small').encoder\r\n\r\nwith torch.no_grad():\r\n y1 = enc1(x)\r\n y2 = enc2(x)\r\n\r\nprint(torch.max(abs(y1.last_hidden_state - y2)))\r\n```\r\n**Print Output:**\r\n```\r\ntensor(8.5831e-05)\r\n```",
"@sanchit-gandhi That works for me, many thanks!"
] | 1,672
| 1,673
| 1,673
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.4.0-126-generic-x86_64-with-glibc2.27
- Python version: 3.9.15
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
import whisper
import transformers as ppb
x = torch.randn(1, 80, 3000) # random input feature
enc1 = ppb.models.whisper.modeling_whisper.WhisperEncoder.from_pretrained('openai/whisper-small')
enc2 = whisper.load_model('small').encoder
y1 = enc1(x)
y2 = enc2(x)
print(torch.sum(abs(y1.last_hidden_state - y2))) # expected 0, but got > 1e6
```
### Expected behavior
It's expected the outputs from encoders of `transformers` and `whisper` are same, but they're different. It seems that there're some weights in `transformers` `from_pretrained` are randomized, may I ask how to solve this problem?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20934/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20933
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20933/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20933/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20933/events
|
https://github.com/huggingface/transformers/pull/20933
| 1,513,456,746
|
PR_kwDOCUB6oc5GVW_J
| 20,933
|
Remove Bert tokenizer dependency from DistilBert (slow/fast) tokenizers
|
{
"login": "IvanLauLinTiong",
"id": 23013350,
"node_id": "MDQ6VXNlcjIzMDEzMzUw",
"avatar_url": "https://avatars.githubusercontent.com/u/23013350?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IvanLauLinTiong",
"html_url": "https://github.com/IvanLauLinTiong",
"followers_url": "https://api.github.com/users/IvanLauLinTiong/followers",
"following_url": "https://api.github.com/users/IvanLauLinTiong/following{/other_user}",
"gists_url": "https://api.github.com/users/IvanLauLinTiong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IvanLauLinTiong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IvanLauLinTiong/subscriptions",
"organizations_url": "https://api.github.com/users/IvanLauLinTiong/orgs",
"repos_url": "https://api.github.com/users/IvanLauLinTiong/repos",
"events_url": "https://api.github.com/users/IvanLauLinTiong/events{/privacy}",
"received_events_url": "https://api.github.com/users/IvanLauLinTiong/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
Hi @sgugger,
Fixes https://github.com/huggingface/transformers/issues/19303
- The `BertTokenizer` dependency has been removed from `DistilBertTokenizer`
- The `BertTokenizerFast` dependency has been removed from `DistilBertTokenizerFast`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20933/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20933",
"html_url": "https://github.com/huggingface/transformers/pull/20933",
"diff_url": "https://github.com/huggingface/transformers/pull/20933.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20933.patch",
"merged_at": 1672299388000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20932
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20932/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20932/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20932/events
|
https://github.com/huggingface/transformers/issues/20932
| 1,513,430,409
|
I_kwDOCUB6oc5aNR2J
| 20,932
|
OpenAI/Whisper-large-v2 - Transcription & ONNX inference
|
{
"login": "Kirankumar2609",
"id": 85477926,
"node_id": "MDQ6VXNlcjg1NDc3OTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/85477926?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kirankumar2609",
"html_url": "https://github.com/Kirankumar2609",
"followers_url": "https://api.github.com/users/Kirankumar2609/followers",
"following_url": "https://api.github.com/users/Kirankumar2609/following{/other_user}",
"gists_url": "https://api.github.com/users/Kirankumar2609/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kirankumar2609/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kirankumar2609/subscriptions",
"organizations_url": "https://api.github.com/users/Kirankumar2609/orgs",
"repos_url": "https://api.github.com/users/Kirankumar2609/repos",
"events_url": "https://api.github.com/users/Kirankumar2609/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kirankumar2609/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @Kirankumar2609! Thanks for linking the reproducible code snippet!\r\n\r\nThe fact that audios are not transcribed beyond the 30s mark is not a bug with the Whisper model. Rather, it's a pre-defined characteristic of the system. OpenAI designed Whisper such that all input audio sequences are padded / truncated to 30s prior to being passed to the model. This way, the model is only ever required to deal with inputs of fixed length (30s).\r\n\r\n<details>\r\n\r\n<summary> \r\n\r\nExcerpt from [blog post](https://huggingface.co/blog/fine-tune-whisper#load-whisperfeatureextractor):\r\n\r\n</summary>\r\n\r\n> Samples shorter than 30s are padded to 30s by appending zeros to the end of the sequence (zeros in an audio signal corresponding to no signal or silence). Samples longer than 30s are truncated to 30s. Since all elements in the batch are padded/truncated to a maximum length in the input space, we don't require an attention mask when forwarding the audio inputs to the Whisper model. Whisper is unique in this regard - with most audio models, you can expect to provide an attention mask that details where sequences have been padded, and thus where they should be ignored in the self-attention mechanism. Whisper is trained to operate without an attention mask and infer directly from the speech signals where to ignore the inputs.\r\n\r\n</details>\r\n\r\nSo, when audio samples longer than 30s are not transcribed it's due to the fact that the audio inputs are being truncated to 30s. This is somewhat suboptimal for a generalisable ASR system: ideally, we want a system that can handle audio inputs of arbitrary length! This is where `pipeline` comes it. `pipeline` chunks the audio samples into 30s blocks, generates the transcriptions for each chunk, and uses a novel 'stitching' algorithm to piece the transcriptions together. This way, we can transcribe audios of arbitrary length!\r\n\r\nThe code snippet you've provided performs **one forward pass** of the ONNX Whisper model. That is why you're required to pass the `decoder_input_ids` as well as the `input_features`. For **auto-regressive generation**, we only require the `input_features`. We perform one forward pass of the encoder and auto-regressively generate using the decoder. Could you ask in the [optimum](https://github.com/huggingface/optimum) repository if you require help getting this to work with the exported ONNX model please? The corresponding transformers code can be found here: https://github.com/openai/whisper/discussions/654",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,675
| 1,675
|
NONE
| null |
### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.9.12
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): 2.10.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
[OpenAI-Whisper_ONNX_Implementation.zip](https://github.com/huggingface/transformers/files/10317927/OpenAI-Whisper_ONNX_Implementation.zip)
### Expected behavior
Audio is not transcribed/translated past the 30-second mark. If the method mentioned in https://huggingface.co/openai/whisper-large-v2/discussions/7#6398809b11095028d87b16a2 is followed, there is no issue, but if the method mentioned in the model card is followed, the above error arises. I need this to be solved, as the ONNX version requires input ('input_features', 'decoder_input_ids') in the form of arrays.
Also, if I use the model.onnx (as shown in the attached zip file) for prediction, it returns an array of float values. Can you help with decoding those values into the transcribed/translated text?
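For reference, a minimal sketch of the chunked long-form transcription mentioned in the first comment, using the PyTorch checkpoint rather than the ONNX export (the file path is illustrative):
```python
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v2",
    chunk_length_s=30,  # split long audio into 30s windows and stitch the chunk transcriptions
)
print(pipe("long_audio.wav")["text"])  # transcribes past the 30s mark
```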
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20932/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20931
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20931/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20931/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20931/events
|
https://github.com/huggingface/transformers/issues/20931
| 1,513,404,844
|
I_kwDOCUB6oc5aNLms
| 20,931
|
Problem with MBartForConditionalGeneration and MBart50TokenizerFast
|
{
"login": "sunny3",
"id": 25152817,
"node_id": "MDQ6VXNlcjI1MTUyODE3",
"avatar_url": "https://avatars.githubusercontent.com/u/25152817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunny3",
"html_url": "https://github.com/sunny3",
"followers_url": "https://api.github.com/users/sunny3/followers",
"following_url": "https://api.github.com/users/sunny3/following{/other_user}",
"gists_url": "https://api.github.com/users/sunny3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sunny3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sunny3/subscriptions",
"organizations_url": "https://api.github.com/users/sunny3/orgs",
"repos_url": "https://api.github.com/users/sunny3/repos",
"events_url": "https://api.github.com/users/sunny3/events{/privacy}",
"received_events_url": "https://api.github.com/users/sunny3/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @ArthurZucker ",
"Maybe due to this ? https://github.com/huggingface/transformers/issues/20610#issuecomment-1407704129",
"The model should be predicting in Russian, and the only reason it is not is probably because `kk` language has a very small dataset size and was thus not trained a lot. The model and the generation process work as they function as expected for lanugages that have a bigger dataset. \r\n\r\nFor example if you use `ja_XX` you will get `['世界の友達']` which means friend of the world (Sekai no tomodachi). \r\nIt comes down to the fine-tuning and which pair was trained with which other. "
] | 1,672
| 1,675
| 1,675
|
NONE
| null |
### System Info
sys info:
- `transformers` version: 4.25.1
- Platform: Linux-5.4.17-2136.308.9.el8uek.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.7
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
sentence_kz = "сәлем досым"
tokenizer.src_lang = "kk_KZ"
encoded_kk = tokenizer(sentence_kz , return_tensors="pt")
generated_tokens = model.generate(
**encoded_kk,
forced_bos_token_id=tokenizer.lang_code_to_id["ru_RU"]
)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
```
### Expected behavior
I use the official example of code from https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt
I just changed the abbreviations for the languages (hi_IN -> kk_KZ, fr_XX -> ru_RU), wanting to get Russian out, but I get English instead of Russian.

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20931/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20930
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20930/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20930/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20930/events
|
https://github.com/huggingface/transformers/issues/20930
| 1,513,370,526
|
I_kwDOCUB6oc5aNDOe
| 20,930
|
opt-13b checkpoint missing final_layer_norm weights in pretrained checkpoint
|
{
"login": "zw123han",
"id": 67124639,
"node_id": "MDQ6VXNlcjY3MTI0NjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/67124639?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zw123han",
"html_url": "https://github.com/zw123han",
"followers_url": "https://api.github.com/users/zw123han/followers",
"following_url": "https://api.github.com/users/zw123han/following{/other_user}",
"gists_url": "https://api.github.com/users/zw123han/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zw123han/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zw123han/subscriptions",
"organizations_url": "https://api.github.com/users/zw123han/orgs",
"repos_url": "https://api.github.com/users/zw123han/repos",
"events_url": "https://api.github.com/users/zw123han/events{/privacy}",
"received_events_url": "https://api.github.com/users/zw123han/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hey, there is indeed an issue with the `FlaxWeights`, if you try using `from_pt = True` you should have the correct layers loaded. There are actually 2 different checkpoints online, one is sharded and the other is not. This is pretty strange 😓 ",
"Hi Arthur, thanks for the help! Managed to convert PyTorch weights and all of them are there!\r\n\r\nAlthough, for anyone else using `_do_init = False` in `from_pretrained`, I had to convert and save to Flax weights first to a local dir as the `from_pt = True` flag will attempt to internally call `model.params`. "
] | 1,672
| 1,673
| 1,673
|
NONE
| null |
### System Info
Hello. Apologies if this has been brought up before, but it seems that at least the Flax version of opt-13b, when using `from_pretrained`, is missing weights for:
- model/decoder/final_layer_norm/bias
- model/decoder/final_layer_norm/scale
Every other version of the model I've tested didn't have this issue. You can see the message below.

### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm using a modified version of `run_clm_flax.py` under `examples/flax/language-modelling`, although my modification below should not make an impact and the default script should have the same issue.
```python
config = AutoConfig.from_pretrained('facebook/opt-13b')

model, params = FlaxAutoModelForCausalLM.from_pretrained(
    'facebook/opt-13b',
    config=config,
    seed=42,
    _do_init=False,
)

params = model.init_weights(model.key, model.input_shape, params).unfreeze()  # inside a jit function
```
### Expected behavior
You will notice in the message above that weights are missing only for the final layer norm of this specific model.
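A minimal sketch of the workaround described in the comments (convert from the PyTorch checkpoint, then save the Flax weights locally so they can be reloaded with `_do_init=False`; the output path is illustrative):
```python
from transformers import FlaxAutoModelForCausalLM

# from_pt=True converts the PyTorch checkpoint, which contains all layers
# (including model/decoder/final_layer_norm)
model = FlaxAutoModelForCausalLM.from_pretrained("facebook/opt-13b", from_pt=True)
model.save_pretrained("./opt-13b-flax")  # reload from this directory with _do_init=False
```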
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20930/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/20930/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20929
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20929/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20929/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20929/events
|
https://github.com/huggingface/transformers/pull/20929
| 1,513,112,580
|
PR_kwDOCUB6oc5GUNFa
| 20,929
|
Remove non-breaking spaces
|
{
"login": "aphedges",
"id": 14283972,
"node_id": "MDQ6VXNlcjE0MjgzOTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/14283972?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aphedges",
"html_url": "https://github.com/aphedges",
"followers_url": "https://api.github.com/users/aphedges/followers",
"following_url": "https://api.github.com/users/aphedges/following{/other_user}",
"gists_url": "https://api.github.com/users/aphedges/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aphedges/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aphedges/subscriptions",
"organizations_url": "https://api.github.com/users/aphedges/orgs",
"repos_url": "https://api.github.com/users/aphedges/repos",
"events_url": "https://api.github.com/users/aphedges/events{/privacy}",
"received_events_url": "https://api.github.com/users/aphedges/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR removes non-breaking spaces in various places in the codebase. The first commit was from when I first found the problem over a year ago, and the second commit fixes all other non-breaking spaces in the repository as of now.
I'm not sure of a good check to prevent this going forward, but this PR at least fixes the problem as it exists now.
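One possible check, as a sketch (not something this PR adds): a small script run in CI that fails when U+00A0 appears in Python sources:
```python
import pathlib
import sys

# collect Python files that contain a U+00A0 non-breaking space
offenders = [
    str(path)
    for path in pathlib.Path(".").rglob("*.py")
    if "\u00a0" in path.read_text(encoding="utf-8", errors="ignore")
]
if offenders:
    print("Non-breaking spaces found in:", *offenders, sep="\n  ")
    sys.exit(1)
```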
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Library:
- trainer: @sgugger
Documentation: @sgugger and @stevhliu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20929/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20929",
"html_url": "https://github.com/huggingface/transformers/pull/20929",
"diff_url": "https://github.com/huggingface/transformers/pull/20929.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20929.patch",
"merged_at": 1672297960000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20928
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20928/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20928/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20928/events
|
https://github.com/huggingface/transformers/pull/20928
| 1,513,109,166
|
PR_kwDOCUB6oc5GUMVR
| 20,928
|
Convert assertions to exceptions in some examples
|
{
"login": "aphedges",
"id": 14283972,
"node_id": "MDQ6VXNlcjE0MjgzOTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/14283972?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aphedges",
"html_url": "https://github.com/aphedges",
"followers_url": "https://api.github.com/users/aphedges/followers",
"following_url": "https://api.github.com/users/aphedges/following{/other_user}",
"gists_url": "https://api.github.com/users/aphedges/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aphedges/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aphedges/subscriptions",
"organizations_url": "https://api.github.com/users/aphedges/orgs",
"repos_url": "https://api.github.com/users/aphedges/repos",
"events_url": "https://api.github.com/users/aphedges/events{/privacy}",
"received_events_url": "https://api.github.com/users/aphedges/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The issue was aimed at the modules in the library, not the examples. I'd keep the examples as they are for now.",
"@sgugger, I was not aware of that! Can you mention that prominently in the issue (#12789) so others know to avoid them?\r\n\r\nAlso, I modified the initial comment because I had forgotten to link the issue in the first place, even though I had made a note to before I created this PR.",
"Edited my comments on the issue to reflect this.",
"Thanks! I'm closing PR now, then."
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
For #12789.
This PR converts assertions to exceptions in some example files in `/examples/pytorch/language-modeling/`. I found this commit locally from over a year ago, so new scripts have been added since it was created.
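For reference, the pattern of the conversion looks like this (condition and message illustrative):
```python
# before
assert extension in ["csv", "json", "txt"], "`train_file` should be a csv, json or txt file."

# after
if extension not in ["csv", "json", "txt"]:
    raise ValueError("`train_file` should be a csv, json or txt file.")
```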
## Who can review?
Maintained examples (not research project or legacy):
- PyTorch: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20928/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20928",
"html_url": "https://github.com/huggingface/transformers/pull/20928",
"diff_url": "https://github.com/huggingface/transformers/pull/20928.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20928.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20927
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20927/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20927/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20927/events
|
https://github.com/huggingface/transformers/pull/20927
| 1,513,054,701
|
PR_kwDOCUB6oc5GUAjm
| 20,927
|
Generate: TF XLA beam sample
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
MEMBER
| null |
# What does this PR do?
## Context
This is a 2-in-1 PR. While working on adding `generation_config` to TF's `generate`, I noticed that I would have twice the work. This is because on `main` we have `generate()` and `_generate()`, where the former is the legacy version that calls the latter except for beam sample (which is not XLA compatible before this PR). As such, this PR completes the transition to XLA and removes most legacy code, simplifying the transition to the generation config.
## Changes
1. Replaces `generate()` by `_generate()`, which was the original goal of the XLA refactor (and will make my life easier);
2. Deletes many private functions that are no longer reached;
3. Updates RAG accordingly (from the old beam search to the XLA-compatible beam search), slow tests are passing;
4. ⚠️ Adds beam sample to the existing `beam_search` function. Unlike the PT implementation, this is NOT a stand-alone function. This was a deliberate decision to decrease maintenance costs, as I don't think it would be wise to add ~500 lines of code for functionality that is infrequently used and can be handled with a few extra lines. A sketch of the resulting usage is shown below.
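A sketch of beam sample from the user side (flag values illustrative): combining `num_beams` with `do_sample=True` routes through the shared `beam_search` function:
```python
# beam sample = beam search with sampling at each step
outputs = model.generate(input_ids, num_beams=4, do_sample=True, max_new_tokens=20)
```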
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20927/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20927",
"html_url": "https://github.com/huggingface/transformers/pull/20927",
"diff_url": "https://github.com/huggingface/transformers/pull/20927.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20927.patch",
"merged_at": 1672655144000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20926
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20926/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20926/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20926/events
|
https://github.com/huggingface/transformers/pull/20926
| 1,512,952,967
|
PR_kwDOCUB6oc5GTqUw
| 20,926
|
Adds type checking to PreTrainedConfig.
|
{
"login": "mmcdermott",
"id": 470751,
"node_id": "MDQ6VXNlcjQ3MDc1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/470751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmcdermott",
"html_url": "https://github.com/mmcdermott",
"followers_url": "https://api.github.com/users/mmcdermott/followers",
"following_url": "https://api.github.com/users/mmcdermott/following{/other_user}",
"gists_url": "https://api.github.com/users/mmcdermott/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmcdermott/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmcdermott/subscriptions",
"organizations_url": "https://api.github.com/users/mmcdermott/orgs",
"repos_url": "https://api.github.com/users/mmcdermott/repos",
"events_url": "https://api.github.com/users/mmcdermott/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmcdermott/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Mmm, the tests are all failing for weird reasons. It seems the branch you are using for the PR is pretty outdated compared to main. Could you do a quick rebase?",
"Huh... sorry, I thought I corrected that before pushing. I'll do a rebase and get things squared away. ",
"Thanks a lot!"
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes [#20915](https://github.com/huggingface/transformers/issues/20915)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20926/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20926",
"html_url": "https://github.com/huggingface/transformers/pull/20926",
"diff_url": "https://github.com/huggingface/transformers/pull/20926.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20926.patch",
"merged_at": 1672385701000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20925
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20925/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20925/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20925/events
|
https://github.com/huggingface/transformers/pull/20925
| 1,512,926,064
|
PR_kwDOCUB6oc5GTkp0
| 20,925
|
Add: doc page for the object detection task
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This PR replaces the https://github.com/huggingface/transformers/pull/20874 ",
"To preserve the discussion, here's @sayakpaul 's comment relevant to the CI issue: https://github.com/huggingface/transformers/pull/20874#issuecomment-1366717321",
"_The documentation is not available anymore as the PR was closed or merged._",
"> * Make sure to split long code samples in several smaller ones with text introducing each step. The evaluation in particular is too long and may be too technical for this guide.\r\n\r\nI agree that the evaluation part is a bit too technical, but unfortunately at the moment, there's no simpler way (hopefully soon there will be an easier way to have coco evaluation metrics). But I can certainly split it up somewhat. \r\n",
"Thank you for the feedback @sayakpaul !\r\n > Do you have a Colab Notebook where this code has been tested (preferably with outputs)?\r\n\r\nYes, I do. Here's my playground notebook with outputs. All the code examples are working. The only issue is that I didn't really pay too much attention to the hyperparameters, so the resulting model isn't very good. It would probably improve with more epochs and better learning rate decay. But I ran out of free GPU in Colab today :D\r\nhttps://colab.research.google.com/drive/1wPTZJajGRhhh00Lnz7-8E5qE1x_qL1Of#scrollTo=5w2lsRRYPXDN\r\n\r\n\r\n",
"@NielsRogge note that finetune/fine-tune has no decided standard in the doc/transformers and both are used equally."
] | 1,672
| 1,673
| 1,672
|
CONTRIBUTOR
| null |
This is a PR for the [#20805](https://github.com/huggingface/transformers/issues/20805) issue.
The guide has content and working code examples for:
* Introduction
* Loading CPPE-5 dataset from Hub
* Preprocessing both images and annotations. Images are augmented, and annotations are reformatted to be in the format DETR expects
* Training with Trainer
* Evaluation
* Inference
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20925/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20925",
"html_url": "https://github.com/huggingface/transformers/pull/20925",
"diff_url": "https://github.com/huggingface/transformers/pull/20925.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20925.patch",
"merged_at": 1672839398000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20924
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20924/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20924/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20924/events
|
https://github.com/huggingface/transformers/issues/20924
| 1,512,765,499
|
I_kwDOCUB6oc5aKvg7
| 20,924
|
Getting different result with different batch size and sequence length
|
{
"login": "JaheimLee",
"id": 18062264,
"node_id": "MDQ6VXNlcjE4MDYyMjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/18062264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JaheimLee",
"html_url": "https://github.com/JaheimLee",
"followers_url": "https://api.github.com/users/JaheimLee/followers",
"following_url": "https://api.github.com/users/JaheimLee/following{/other_user}",
"gists_url": "https://api.github.com/users/JaheimLee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JaheimLee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JaheimLee/subscriptions",
"organizations_url": "https://api.github.com/users/JaheimLee/orgs",
"repos_url": "https://api.github.com/users/JaheimLee/repos",
"events_url": "https://api.github.com/users/JaheimLee/events{/privacy}",
"received_events_url": "https://api.github.com/users/JaheimLee/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @JaheimLee 👋 \r\n\r\nYes, minor fluctuations are to be expected. Their causes include (but are not limited to) the numerical masking from the attention mask and the order of operations in fp32 computations.\r\n\r\nBecause of these fluctuations, we typically consider results correct if their are within 1e-5 of each other, in examples like yours :)",
"Ok,thanks!"
] | 1,672
| 1,672
| 1,672
|
NONE
| null |
Here is the code:
```
import os
os.environ["CUDA_VISIBLE_DEVICES"] = '0'
from transformers import BertTokenizer, BertModel
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
model = model.to(device)
model.eval()
text1 = [
"Replace me by any text you'd like.",
# "The weather is great!"
]
encoded_input1 = tokenizer(text1, return_tensors='pt', padding=True)
encoded_input1 = encoded_input1.to(device)
with torch.no_grad():
output1 = model(**encoded_input1).last_hidden_state
cls1 = output1[:, 0, :]
text2 = [
"Replace me by any text you'd like.",
"The weather is great!"
# "The result changed with different batch size and sequence length"
]
encoded_input2 = tokenizer(text2, return_tensors='pt', padding=True)
encoded_input2 = encoded_input2.to(device)
with torch.no_grad():
output2 = model(**encoded_input2).last_hidden_state
cls2 = output2[:, 0, :]
text3 = [
"Replace me by any text you'd like.",
# "The weather is great!"
"The result is changed with different batch size or sequence length."
]
encoded_input3 = tokenizer(text3, return_tensors='pt', padding=True)
encoded_input3 = encoded_input3.to(device)
with torch.no_grad():
output3 = model(**encoded_input3).last_hidden_state
cls3 = output3[:, 0, :]
print(torch.equal(cls1[0], cls2[0]))
print(torch.equal(cls1[0], cls3[0]))
print(torch.equal(cls2[0], cls3[0]))
```
All of these results are False. Is this expected?
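For reference, a tolerance-based check along the lines the maintainers suggest (values within 1e-5 are treated as equal) could look like this minimal sketch:
```python
# Compare with a small absolute tolerance instead of exact equality
print(torch.allclose(cls1[0], cls2[0], atol=1e-5))
print(torch.allclose(cls1[0], cls3[0], atol=1e-5))
print(torch.allclose(cls2[0], cls3[0], atol=1e-5))
```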
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20924/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20923
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20923/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20923/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20923/events
|
https://github.com/huggingface/transformers/issues/20923
| 1,512,744,889
|
I_kwDOCUB6oc5aKqe5
| 20,923
|
Encoding parameter for AutoModel.from_pretrained() module.
|
{
"login": "Prasath2001",
"id": 49594621,
"node_id": "MDQ6VXNlcjQ5NTk0NjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/49594621?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Prasath2001",
"html_url": "https://github.com/Prasath2001",
"followers_url": "https://api.github.com/users/Prasath2001/followers",
"following_url": "https://api.github.com/users/Prasath2001/following{/other_user}",
"gists_url": "https://api.github.com/users/Prasath2001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Prasath2001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Prasath2001/subscriptions",
"organizations_url": "https://api.github.com/users/Prasath2001/orgs",
"repos_url": "https://api.github.com/users/Prasath2001/repos",
"events_url": "https://api.github.com/users/Prasath2001/events{/privacy}",
"received_events_url": "https://api.github.com/users/Prasath2001/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for the report.\r\nPlease provide us with a reproducing example showing how a model saved with `save_pretrained` can't be reloaded with `from_pretrained`.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,675
| 1,675
|
NONE
| null |
### Feature request
The `transformers.AutoModel.from_pretrained()` method also allows loading pretrained models from local directories. The path to the local pickle files is an argument of this function. These files can have different encoding types.
A parameter called 'encoding' could be added to the parameter list, similar to
`pandas.read_csv('path/to/csv/file', encoding='utf-8')`,
which takes the encoding type as a parameter. However, this proposed encoding feature should be enabled only when loading local files and should have no effect when loading models from other sources (like the HF Hub).
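Sketched as hypothetical usage (the `encoding` parameter does not exist in the library today; it is exactly what is being requested here):
```python
from transformers import AutoModel

# Hypothetical: `encoding` is the proposed parameter, honored only for local paths
model = AutoModel.from_pretrained("path/to/local/model", encoding="utf-8")
```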
### Motivation
I face a `UnicodeDecodeError` while loading pickled model files from a local directory. The feature above would help avoid this error.
### Your contribution
Unfortunately, I can't contribute now.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20923/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20922
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20922/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20922/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20922/events
|
https://github.com/huggingface/transformers/issues/20922
| 1,512,743,059
|
I_kwDOCUB6oc5aKqCT
| 20,922
|
Encoding parameter
|
{
"login": "PrasathMuru",
"id": 121091467,
"node_id": "U_kgDOBze1iw",
"avatar_url": "https://avatars.githubusercontent.com/u/121091467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PrasathMuru",
"html_url": "https://github.com/PrasathMuru",
"followers_url": "https://api.github.com/users/PrasathMuru/followers",
"following_url": "https://api.github.com/users/PrasathMuru/following{/other_user}",
"gists_url": "https://api.github.com/users/PrasathMuru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PrasathMuru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PrasathMuru/subscriptions",
"organizations_url": "https://api.github.com/users/PrasathMuru/orgs",
"repos_url": "https://api.github.com/users/PrasathMuru/repos",
"events_url": "https://api.github.com/users/PrasathMuru/events{/privacy}",
"received_events_url": "https://api.github.com/users/PrasathMuru/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Duplicate of #20923"
] | 1,672
| 1,672
| 1,672
|
NONE
| null |
### Feature request
The `transformers.AutoModel.from_pretrained()` method also allows loading pretrained models from local directories. The path to the local pickle files is an argument of this function. These files can have different encoding types.
A parameter called 'encoding' could be added to the parameter list, similar to
`pandas.read_csv('path/to/csv/file', encoding='utf-8')`,
which takes the encoding type as a parameter. However, this proposed encoding feature should be enabled only when loading local files and should have no effect when loading models from other sources (like the HF Hub).
### Motivation
I face a `UnicodeDecodeError` while loading pickled model files from a local directory. The feature above would help avoid this error.
### Your contribution
Unfortunately, I can't contribute now.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20922/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20921
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20921/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20921/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20921/events
|
https://github.com/huggingface/transformers/pull/20921
| 1,512,597,155
|
PR_kwDOCUB6oc5GSdMl
| 20,921
|
add AltCLIP
|
{
"login": "shunxing1234",
"id": 33774367,
"node_id": "MDQ6VXNlcjMzNzc0MzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/33774367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shunxing1234",
"html_url": "https://github.com/shunxing1234",
"followers_url": "https://api.github.com/users/shunxing1234/followers",
"following_url": "https://api.github.com/users/shunxing1234/following{/other_user}",
"gists_url": "https://api.github.com/users/shunxing1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shunxing1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shunxing1234/subscriptions",
"organizations_url": "https://api.github.com/users/shunxing1234/orgs",
"repos_url": "https://api.github.com/users/shunxing1234/repos",
"events_url": "https://api.github.com/users/shunxing1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/shunxing1234/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20921). All of your documentation changes will be reflected on that endpoint."
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20921/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20921",
"html_url": "https://github.com/huggingface/transformers/pull/20921",
"diff_url": "https://github.com/huggingface/transformers/pull/20921.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20921.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20920
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20920/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20920/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20920/events
|
https://github.com/huggingface/transformers/pull/20920
| 1,512,564,768
|
PR_kwDOCUB6oc5GSWIl
| 20,920
|
Load the state dict on CPU to prevent unnecessary GPU memory surge
|
{
"login": "HarshTrivedi",
"id": 3285313,
"node_id": "MDQ6VXNlcjMyODUzMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3285313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HarshTrivedi",
"html_url": "https://github.com/HarshTrivedi",
"followers_url": "https://api.github.com/users/HarshTrivedi/followers",
"following_url": "https://api.github.com/users/HarshTrivedi/following{/other_user}",
"gists_url": "https://api.github.com/users/HarshTrivedi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HarshTrivedi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HarshTrivedi/subscriptions",
"organizations_url": "https://api.github.com/users/HarshTrivedi/orgs",
"repos_url": "https://api.github.com/users/HarshTrivedi/repos",
"events_url": "https://api.github.com/users/HarshTrivedi/events{/privacy}",
"received_events_url": "https://api.github.com/users/HarshTrivedi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
When loading the best checkpoint after training is finished, the code loads the weights in a `state_dict` on a GPU _before_ applying them on the model. This means that the weights use 2X GPU memory actually required, 1X for the model object, and 1X for the state_dict. This PR fixes it by using `map_location="cpu"` for loading the weights in `state_dict`.
Without this fix, one can get OOM even after the full training is done as I did. I've encountered [this issue](https://github.com/allenai/allennlp/pull/5518) before on Allennlp as well. That was also fixed in the same fashion. It's also mentioned in the pytorch docs [here](https://pytorch.org/docs/stable/generated/torch.load.html) (see the note about GPU RAM surge).
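A minimal sketch of the pattern (the model name and checkpoint path are illustrative):
```python
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased").cuda()

# Load the weights onto CPU first; load_state_dict then copies them into the
# existing GPU parameters without a second full copy ever living on the GPU.
state_dict = torch.load("checkpoint/pytorch_model.bin", map_location="cpu")
model.load_state_dict(state_dict)
```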
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20920/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20920",
"html_url": "https://github.com/huggingface/transformers/pull/20920",
"diff_url": "https://github.com/huggingface/transformers/pull/20920.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20920.patch",
"merged_at": 1672298284000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20919
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20919/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20919/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20919/events
|
https://github.com/huggingface/transformers/issues/20919
| 1,512,319,096
|
I_kwDOCUB6oc5aJCh4
| 20,919
|
ModuleNotFoundError: No module named 'evaluate'
|
{
"login": "ucas010",
"id": 50656998,
"node_id": "MDQ6VXNlcjUwNjU2OTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/50656998?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ucas010",
"html_url": "https://github.com/ucas010",
"followers_url": "https://api.github.com/users/ucas010/followers",
"following_url": "https://api.github.com/users/ucas010/following{/other_user}",
"gists_url": "https://api.github.com/users/ucas010/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ucas010/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ucas010/subscriptions",
"organizations_url": "https://api.github.com/users/ucas010/orgs",
"repos_url": "https://api.github.com/users/ucas010/repos",
"events_url": "https://api.github.com/users/ucas010/events{/privacy}",
"received_events_url": "https://api.github.com/users/ucas010/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You need to `pip install evaluate`, as the error message tells you. This is also in the [requirements](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/requirements.txt) for this example.",
"First run the pip install-r requirements.txt command then still if you won't find out the module then install individual module pip install evaluate @ucas010 ",
"Closing this issue as it seems resolved. Feel free to reopen if needed."
] | 1,672
| 1,673
| 1,673
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-3.10.0-1160.81.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker, @younesbelkada, @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling
```
python run_mlm.py \
--model_name_or_path roberta-base \
--train_file path_to_train_file \
--validation_file path_to_validation_file \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 8 \
--do_train \
--do_eval \
--output_dir /tmp/test-mlm
line 35, in <module>
import evaluate
ModuleNotFoundError: No module named 'evaluate'
myhugBert.sh: line 4: --train_file: command not found
myhugBert.sh: line 11: --output_dir: command not found
```
### Expected behavior
The script should run without the `ModuleNotFoundError`.
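For reference, the resolution pointed out in the comments is simply installing the example's dependencies:
```bash
pip install -r examples/pytorch/language-modeling/requirements.txt
# or just the missing module:
pip install evaluate
```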
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20919/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20918
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20918/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20918/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20918/events
|
https://github.com/huggingface/transformers/issues/20918
| 1,512,205,780
|
I_kwDOCUB6oc5aIm3U
| 20,918
|
Unable to save t5 model locally after training t5 using run_t5_mlm_flax.py
|
{
"login": "patelvishwa112",
"id": 31246787,
"node_id": "MDQ6VXNlcjMxMjQ2Nzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/31246787?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patelvishwa112",
"html_url": "https://github.com/patelvishwa112",
"followers_url": "https://api.github.com/users/patelvishwa112/followers",
"following_url": "https://api.github.com/users/patelvishwa112/following{/other_user}",
"gists_url": "https://api.github.com/users/patelvishwa112/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patelvishwa112/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patelvishwa112/subscriptions",
"organizations_url": "https://api.github.com/users/patelvishwa112/orgs",
"repos_url": "https://api.github.com/users/patelvishwa112/repos",
"events_url": "https://api.github.com/users/patelvishwa112/events{/privacy}",
"received_events_url": "https://api.github.com/users/patelvishwa112/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sanchit-gandhi ",
"Hey @patelvishwa112! Sorry for the late reply here! The fine-tuned checkpoint is saved periodically every `save_steps` training steps:\r\nhttps://github.com/huggingface/transformers/blob/12313838d33373d06d35b48c3c501fa832f16443/examples/flax/language-modeling/run_t5_mlm_flax.py#L950\r\n\r\nIt looks as though you're setting `save_steps=10000`, but you're only training for 156 train steps. Since your maximum number of train steps is less than your `save_steps`, we never hit the minimum number of steps required to save the model!\r\n\r\nIf you set:\r\n```\r\n--save_steps=\"50\"\r\n```\r\nYou should see that the model is saved every 50 steps. Since this is less than our total number of train steps, we should see that the model is saved during training for a total of 3 times: at 50, 100 and 150 train steps respectively.\r\n\r\nIt would indeed be nice to update the examples to save the model at the end of training (irrespective of the value for `save_steps`). Feel free to open a PR for this change if you're interested! I'd be more than happy to help guide you through the process and help with the integration!",
"Thank you @sanchit-gandhi for the help. I was able to run the code successfully and it generated flax_model.msgpack (~800MB) file amoung others. \r\n\r\nCan you tell me how can I import this model using either transformers or tensorflow to either get embedding from encoder or use it for text generation? \r\n\r\nAnd for creating PR request, I will create one just to update the example so that everyone can follow it without error and I will reach out to you if I need any assistance :). ",
"Hey @patelvishwa112!\r\n\r\nYou should be able to load the Flax model using:\r\n\r\n```python\r\nfrom transformers import FlaxT5ForConditionalGeneration\r\n\r\nmodel = FlaxT5ForConditionalGeneration.from_pretrained(<path to your checkpoint>)\r\n```\r\n\r\nLooking at your training args, the model weights should be saved under `\"./t5-trained\"`, so this is the path to your checkpoint.\r\n\r\nHere's an example of how you can get the encoder embeddings: https://huggingface.co/docs/transformers/model_doc/t5#transformers.FlaxT5ForConditionalGeneration.encode.example\r\n\r\nAnd an example of how you can generate a sequence of text outputs using the Flax T5 model: https://huggingface.co/docs/transformers/model_doc/t5#transformers.FlaxT5ForConditionalGeneration.__call__.example\r\n\r\nHope that helps! Let me know if you have any other questions regarding how to use the trained Flax T5 model for inference 🤗\r\n\r\nThat sounds good regarding the PR - feel free to open one with the changes required to save the model at the end of training. You can tag me in the PR for a review! Feel free to reach out if you have any questions on the PR - I'm more than happy to help if you have any questions regarding the changes!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,676
| 1,676
|
NONE
| null |
### System Info
I want to develop a POC to train a t5 model on a domain dataset (txt file where each line is a sentence). I came across the run_t5_mlm_flax.py file and followed the steps mentioned in the README file.
After a lot of trial and error, I finally got it running in Colab on a GPU (with a batch size of 8). After it ran successfully, I am unable to find the saved model anywhere locally within Colab (I checked the provided output directory). Can anyone help me resolve this issue?
This is the command I am using to run the file (`t5-trained` is a folder I created during runtime):
```bash
python run_t5_mlm_flax.py --output_dir="./t5-trained" --model_type="t5-small" --config_name="./t5-trained" --tokenizer_name="./t5-trained" --train_file="Input_Sent.txt" --max_seq_length="512" --per_device_train_batch_size="8" --per_device_eval_batch_size="8" --adafactor --learning_rate="0.005" --weight_decay="0.001" --warmup_steps="2000" --overwrite_output_dir --logging_steps="500" --save_steps="10000" --eval_steps="2500"
```
<img width="513" alt="Screen Shot 2022-12-27 at 4 15 47 PM" src="https://user-images.githubusercontent.com/31246787/209727198-44a9b280-3a2b-43e7-893f-48e672095a90.png">
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Create a small dataset of sentences and save it in txt file. Each line in txt file is a sentence.
2. Create a folder named "t5-trained"
3. Run code to generate tokenizer file using t5_tokenizer_model.py and code mentioned in README.
4. Generate config file using code mentioned in README file.
5. Run this command to train the t5 model -> python run_t5_mlm_flax.py --output_dir="./t5-trained" --model_type="t5-small" --config_name="./t5-trained" --tokenizer_name="./t5-trained" --train_file="Input_Sent.txt" --max_seq_length="512" --per_device_train_batch_size="8" --per_device_eval_batch_size="8" --adafactor --learning_rate="0.005" --weight_decay="0.001" --warmup_steps="2000" --overwrite_output_dir --logging_steps="500" --save_steps="10000" --eval_steps="2500"
### Expected behavior
Once these steps are completed, I am expecting a saved model somewhere locally which I can import and utilize for text generation or embedding generation from the encoder.
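For reference, once a checkpoint has actually been saved (per the discussion, `save_steps` must be smaller than the total number of train steps for a save to trigger), it can be loaded with:
```python
from transformers import FlaxT5ForConditionalGeneration

model = FlaxT5ForConditionalGeneration.from_pretrained("./t5-trained")
```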
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20918/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20917
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20917/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20917/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20917/events
|
https://github.com/huggingface/transformers/issues/20917
| 1,512,156,474
|
I_kwDOCUB6oc5aIa06
| 20,917
|
[i18n-<ao>] Translating docs to <am>
|
{
"login": "arabaman",
"id": 110045234,
"node_id": "U_kgDOBo8oMg",
"avatar_url": "https://avatars.githubusercontent.com/u/110045234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arabaman",
"html_url": "https://github.com/arabaman",
"followers_url": "https://api.github.com/users/arabaman/followers",
"following_url": "https://api.github.com/users/arabaman/following{/other_user}",
"gists_url": "https://api.github.com/users/arabaman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arabaman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arabaman/subscriptions",
"organizations_url": "https://api.github.com/users/arabaman/orgs",
"repos_url": "https://api.github.com/users/arabaman/repos",
"events_url": "https://api.github.com/users/arabaman/events{/privacy}",
"received_events_url": "https://api.github.com/users/arabaman/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
| null |
[] |
[
"Hi @arabaman. Could you please fill the template with your language?"
] | 1,672
| 1,672
| null |
NONE
| null |
<!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through)
- [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx).
## Tutorial section
- [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx)
- [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx)
- [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx)
- [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx)
- [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx)
- [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx)
- [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx)
<!--
Keep on adding more as you go 🔥
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20917/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/20916
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20916/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20916/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20916/events
|
https://github.com/huggingface/transformers/issues/20916
| 1,512,145,870
|
I_kwDOCUB6oc5aIYPO
| 20,916
|
Learning rate is set to zero for the entirety of the first epoch
|
{
"login": "mmcdermott",
"id": 470751,
"node_id": "MDQ6VXNlcjQ3MDc1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/470751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmcdermott",
"html_url": "https://github.com/mmcdermott",
"followers_url": "https://api.github.com/users/mmcdermott/followers",
"following_url": "https://api.github.com/users/mmcdermott/following{/other_user}",
"gists_url": "https://api.github.com/users/mmcdermott/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmcdermott/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmcdermott/subscriptions",
"organizations_url": "https://api.github.com/users/mmcdermott/orgs",
"repos_url": "https://api.github.com/users/mmcdermott/repos",
"events_url": "https://api.github.com/users/mmcdermott/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmcdermott/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Diving deeper into this, I think this is due to a mismatch between pytorch lightning's design and huggingface's. Huggingface trainer. [Huggingface's trainer calls the lr scheduler every step](https://github.com/huggingface/transformers/blob/31d452c68b34c2567b62924ee0df40a83cbc52d5/src/transformers/trainer.py#L1845), and [pytorch lightning can be configured to either call it once per step or once per epoch](https://github.com/Lightning-AI/lightning/blob/612d43e5bf38ba73b4f372d64594c2f9a32e6d6a/src/pytorch_lightning/loops/epoch/training_epoch_loop.py#L407), so I likely have something configured wrong."
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
https://github.com/huggingface/transformers/blob/31d452c68b34c2567b62924ee0df40a83cbc52d5/src/transformers/optimization.py#L210
Both in practice and based on my reading of the code, this produces a `LambdaLR` that returns a multiplicative factor of 0 for the entirety of the first epoch (since `current_step` is 0), which means the model does not train at all during the first epoch.
The variable names in the code imply this is meant to be stepped per training step, not per epoch; is that the intent? If not, should this be modified to account for the first epoch having `current_step` equal to 0? Or is something wrong in my specific use-case?
My use-case is using `get_polynomial_decay_schedule_with_warmup` as a scheduler in a PyTorch Lightning module. Note I also mentioned this on the forum, here: https://discuss.huggingface.co/t/huggingface-lr-decay-schedulers-spend-the-first-epoch-w-an-lr-of-0/28195
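A minimal sketch demonstrating the step-0 behavior (toy model and hyperparameters are illustrative; the scheduler is meant to be stepped once per batch, not once per epoch):
```python
import torch
from transformers import get_polynomial_decay_schedule_with_warmup

model = torch.nn.Linear(2, 2)  # toy model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer, num_warmup_steps=10, num_training_steps=100
)
print(scheduler.get_last_lr())  # [0.0] -- the warmup factor is 0 at step 0
scheduler.step()                # stepping per *batch* ramps the LR up right away
print(scheduler.get_last_lr())  # [0.0001] -- 1/10 of the base LR after one step
```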
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20916/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20915
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20915/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20915/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20915/events
|
https://github.com/huggingface/transformers/issues/20915
| 1,512,100,489
|
I_kwDOCUB6oc5aINKJ
| 20,915
|
Comparing a huggingface config with a dictionary raises an error as the `__eq__` method relies on `other` having a `__dict__` method
|
{
"login": "mmcdermott",
"id": 470751,
"node_id": "MDQ6VXNlcjQ3MDc1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/470751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmcdermott",
"html_url": "https://github.com/mmcdermott",
"followers_url": "https://api.github.com/users/mmcdermott/followers",
"following_url": "https://api.github.com/users/mmcdermott/following{/other_user}",
"gists_url": "https://api.github.com/users/mmcdermott/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmcdermott/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmcdermott/subscriptions",
"organizations_url": "https://api.github.com/users/mmcdermott/orgs",
"repos_url": "https://api.github.com/users/mmcdermott/repos",
"events_url": "https://api.github.com/users/mmcdermott/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmcdermott/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Would you like to make a PR with such a change?"
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
`PreTrainedConfig`s inherit their `__eq__` methods from this: https://github.com/huggingface/transformers/blob/3f936df66287f557c6528912a9a68d7850913b9b/src/transformers/configuration_utils.py#L736
This works fine for comparing two configs, but if you compare a config to something of a different type (in particular a type without a `__dict__` attribute, like a plain `dict`), it throws an error. It would be easy to add a type check to the equality comparison so that off-type comparisons return `False` instead.
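A minimal sketch of the suggested guard (illustrative only, not the actual implementation in `configuration_utils.py`):
```python
class PretrainedConfig:  # simplified stand-in for the real class
    def __eq__(self, other):
        # Off-type comparisons (e.g. with a plain dict, which has no
        # __dict__ attribute) return False instead of raising.
        if not isinstance(other, PretrainedConfig):
            return False
        return self.__dict__ == other.__dict__
```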
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20915/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20914
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20914/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20914/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20914/events
|
https://github.com/huggingface/transformers/issues/20914
| 1,512,091,381
|
I_kwDOCUB6oc5aIK71
| 20,914
|
AttributeError: 'DummyVecEnv' object has no attribute 'render_mode'
|
{
"login": "felipeoliverai",
"id": 58439493,
"node_id": "MDQ6VXNlcjU4NDM5NDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/58439493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/felipeoliverai",
"html_url": "https://github.com/felipeoliverai",
"followers_url": "https://api.github.com/users/felipeoliverai/followers",
"following_url": "https://api.github.com/users/felipeoliverai/following{/other_user}",
"gists_url": "https://api.github.com/users/felipeoliverai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/felipeoliverai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/felipeoliverai/subscriptions",
"organizations_url": "https://api.github.com/users/felipeoliverai/orgs",
"repos_url": "https://api.github.com/users/felipeoliverai/repos",
"events_url": "https://api.github.com/users/felipeoliverai/events{/privacy}",
"received_events_url": "https://api.github.com/users/felipeoliverai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,672
| 1,672
| 1,672
|
NONE
| null |
### System Info
I'm doing the Deep RL course and I don't know what is happening; something is wrong in the unit 1 notebook: https://colab.research.google.com/github/huggingface/deep-rl-class/blob/master/notebooks/unit1/unit1.ipynb#scrollTo=xMkkkukIBQJM
When I try to push my agent to the HF Hub, I receive this error message: `AttributeError: 'DummyVecEnv' object has no attribute 'render_mode'`
My code is very similar to the notebook example.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code example:
```python
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv
from stable_baselines3.common.env_util import make_vec_env
from huggingface_sb3 import package_to_hub

env_id = "LunarLander-v2"
model_architecture = "PPO"
repo_id = "Felipe474/ppo-LunarLander-v2"  # Change with your repo id, you can't push with mine 😄
commit_message = "Upload PPO LunarLander-v2 trained agent"

eval_env = DummyVecEnv([lambda: gym.make(env_id)])

# `model` and `model_name` are defined earlier in the notebook
package_to_hub(model=model,  # Our trained model
               model_name=model_name,  # The name of our trained model
               model_architecture=model_architecture,  # The model architecture we used: in our case PPO
               env_id=env_id,  # Name of the environment
               eval_env=eval_env,  # Evaluation Environment
               repo_id=repo_id,  # id of the model repository on the Hugging Face Hub (repo_id = {organization}/{repo_name}, for instance ThomasSimonini/ppo-LunarLander-v2)
               commit_message=commit_message)
```
### Expected behavior
Successfully push the agent to the HF Hub.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20914/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20913
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20913/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20913/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20913/events
|
https://github.com/huggingface/transformers/pull/20913
| 1,512,052,138
|
PR_kwDOCUB6oc5GQpxM
| 20,913
|
Fix FP16 inference in TextGenerationPipeline
|
{
"login": "bofenghuang",
"id": 38185248,
"node_id": "MDQ6VXNlcjM4MTg1MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/38185248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bofenghuang",
"html_url": "https://github.com/bofenghuang",
"followers_url": "https://api.github.com/users/bofenghuang/followers",
"following_url": "https://api.github.com/users/bofenghuang/following{/other_user}",
"gists_url": "https://api.github.com/users/bofenghuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bofenghuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bofenghuang/subscriptions",
"organizations_url": "https://api.github.com/users/bofenghuang/orgs",
"repos_url": "https://api.github.com/users/bofenghuang/repos",
"events_url": "https://api.github.com/users/bofenghuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/bofenghuang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"In general its better for parameters to be in `_sanitize_parameters` but this doesn't apply here, since the models already uses `torch_dtype` and so using `pipe(..., torch_dtype=torch.float16)` cannot work anyway.\r\n\r\n I think the proposed fix is elegant.\r\n\r\nDo you mind adding a test for `text-generation` and `float16` too ? \r\n\r\nFor the quality you should be able to do\r\n\r\n```\r\npip install -e .[dev] # To get dependencies\r\nmake fixup\r\n```",
"Thanks for the review @Narsil !\r\n\r\nDo you think it should be better to change the name `torch_dtype` to `dtype`, since `Pipeline` can also be used for Tensorflow? I don't really use Tensorflow so I'm not sure about it.\r\n\r\nJust add a test and fix the quality. Thanks for the tips!\r\n",
 Do you think it should be better">
"> Do you think it would be better to change the name torch_dtype to dtype, since Pipeline can also be used for TensorFlow? I don't really use TensorFlow so I'm not sure about it.\r\n\r\nLater, if we do it at all. Better to stick to the name used elsewhere in the lib, which is indeed `torch_dtype`. I'm also unfamiliar with fp16 computation in TensorFlow, but I'm guessing it could work differently.\r\n\r\nThe good thing is that we could always alias later if needed."
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Hi @Narsil,
I tried to fix https://github.com/huggingface/transformers/issues/20912 here by setting `torch_dtype` as a regular attribute to help with the `preprocess` function in `AutomaticSpeechRecognitionPipeline`. This way we keep it out of `kwargs`, so we don't need to modify the `_sanitize_parameters` function in the other pipelines. Looking forward to hearing your opinion :)
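Roughly, the idea looks like this (a simplified sketch, not the actual `Pipeline` class):
```python
class Pipeline:  # simplified sketch
    def __init__(self, model, torch_dtype=None, **kwargs):
        self.model = model
        # Stored as a plain attribute rather than routed through **kwargs, so
        # `_sanitize_parameters` in the other pipelines stays untouched and
        # `torch_dtype` never leaks into `forward_params`.
        self.torch_dtype = torch_dtype
```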
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20913/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20913",
"html_url": "https://github.com/huggingface/transformers/pull/20913",
"diff_url": "https://github.com/huggingface/transformers/pull/20913.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20913.patch",
"merged_at": 1672298365000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20912
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20912/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20912/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20912/events
|
https://github.com/huggingface/transformers/issues/20912
| 1,512,047,795
|
I_kwDOCUB6oc5aIASz
| 20,912
|
Run TextGenerationPipeline in FP16
|
{
"login": "bofenghuang",
"id": 38185248,
"node_id": "MDQ6VXNlcjM4MTg1MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/38185248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bofenghuang",
"html_url": "https://github.com/bofenghuang",
"followers_url": "https://api.github.com/users/bofenghuang/followers",
"following_url": "https://api.github.com/users/bofenghuang/following{/other_user}",
"gists_url": "https://api.github.com/users/bofenghuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bofenghuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bofenghuang/subscriptions",
"organizations_url": "https://api.github.com/users/bofenghuang/orgs",
"repos_url": "https://api.github.com/users/bofenghuang/repos",
"events_url": "https://api.github.com/users/bofenghuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/bofenghuang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.17
- Python version: 3.8.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
Hi @Narsil,
I just found that some other pipelines (e.g., `TextGenerationPipeline`, `Text2TextGenerationPipeline`) can no longer run FP16 inference due to the change in this PR: https://github.com/huggingface/transformers/pull/20864
In fact, the added `torch_dtype` attribute is unexpectedly passed into `forward_params` by `_sanitize_parameters()`, which then raises an error in the `generate()` function.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Below is a code snippet to reproduce the behavior.
```python
import torch
from transformers import pipeline
generator = pipeline(model="gpt2", device=0, torch_dtype=torch.float16)
generator("I can't believe you did such a ")
```
When running this we see the following stack trace:
```
╭──────────────────────────── Traceback (most recent call last) ────────────────────────────╮
│ <ipython-input-1-f80bcf5e17e1>:6 in <module> │
│ /home/bhuang/transformers/src/transformers/pipelines/text_generation.py:210 in __call__ │
│ │
│ 207 │ │ │ - **generated_token_ids** (`torch.Tensor` or `tf.Tensor`, present when │
│ 208 │ │ │ ids of the generated text. │
│ 209 │ │ """ │
│ ❱ 210 │ │ return super().__call__(text_inputs, **kwargs) │
│ 211 │ │
│ 212 │ def preprocess(self, prompt_text, prefix="", handle_long_generation=None, **gen │
│ 213 │ │ inputs = self.tokenizer( │
│ │
│ /home/bhuang/transformers/src/transformers/pipelines/base.py:1074 in __call__ │
│ │
│ 1071 │ │ elif is_iterable: │
│ 1072 │ │ │ return self.iterate(inputs, preprocess_params, forward_params, postpro │
│ 1073 │ │ else: │
│ ❱ 1074 │ │ │ return self.run_single(inputs, preprocess_params, forward_params, post │
│ 1075 │ │
│ 1076 │ def run_multi(self, inputs, preprocess_params, forward_params, postprocess_par │
│ 1077 │ │ return [self.run_single(item, preprocess_params, forward_params, postproce │
│ │
│ /home/bhuang/transformers/src/transformers/pipelines/base.py:1081 in run_single │
│ │
│ 1078 │ │
│ 1079 │ def run_single(self, inputs, preprocess_params, forward_params, postprocess_pa │
│ 1080 │ │ model_inputs = self.preprocess(inputs, **preprocess_params) │
│ ❱ 1081 │ │ model_outputs = self.forward(model_inputs, **forward_params) │
│ 1082 │ │ outputs = self.postprocess(model_outputs, **postprocess_params) │
│ 1083 │ │ return outputs │
│ 1084 │
│ │
│ /home/bhuang/transformers/src/transformers/pipelines/base.py:990 in forward │
│ │
│ 987 │ │ │ │ inference_context = self.get_inference_context() │
│ 988 │ │ │ │ with inference_context(): │
│ 989 │ │ │ │ │ model_inputs = self._ensure_tensor_on_device(model_inputs, dev │
│ ❱ 990 │ │ │ │ │ model_outputs = self._forward(model_inputs, **forward_params) │
│ 991 │ │ │ │ │ model_outputs = self._ensure_tensor_on_device(model_outputs, d │
│ 992 │ │ │ else: │
│ 993 │ │ │ │ raise ValueError(f"Framework {self.framework} is not supported") │
│ │
│ /home/bhuang/transformers/src/transformers/pipelines/text_generation.py:252 in _forward │
│ │
│ 249 │ │ │ in_b = input_ids.shape[0] │
│ 250 │ │ prompt_text = model_inputs.pop("prompt_text") │
│ 251 │ │ # BS x SL │
│ ❱ 252 │ │ generated_sequence = self.model.generate(input_ids=input_ids, attention_mas │
│ 253 │ │ out_b = generated_sequence.shape[0] │
│ 254 │ │ if self.framework == "pt": │
│ 255 │ │ │ generated_sequence = generated_sequence.reshape(in_b, out_b // in_b, *g │
│ │
│ /home/bhuang/anaconda3/envs/asr/lib/python3.8/site-packages/torch/autograd/grad_mode.py:2 │
│ 7 in decorate_context │
│ │
│ 24 │ │ @functools.wraps(func) │
│ 25 │ │ def decorate_context(*args, **kwargs): │
│ 26 │ │ │ with self.clone(): │
│ ❱ 27 │ │ │ │ return func(*args, **kwargs) │
│ 28 │ │ return cast(F, decorate_context) │
│ 29 │ │
│ 30 │ def _wrap_generator(self, func): │
│ │
│ /home/bhuang/transformers/src/transformers/generation/utils.py:1145 in generate │
│ │
│ 1142 │ │ │
│ 1143 │ │ generation_config = copy.deepcopy(generation_config) │
│ 1144 │ │ model_kwargs = generation_config.update(**kwargs) # All unused kwargs mus │
│ ❱ 1145 │ │ self._validate_model_kwargs(model_kwargs.copy()) │
│ 1146 │ │ │
│ 1147 │ │ # 2. Set generation parameters if not already defined │
│ 1148 │ │ logits_processor = logits_processor if logits_processor is not None else L │
│ │
│ /home/bhuang/transformers/src/transformers/generation/utils.py:973 in │
│ _validate_model_kwargs │
│ │
│ 970 │ │ │ │ unused_model_args.append(key) │
│ 971 │ │ │
│ 972 │ │ if unused_model_args: │
│ ❱ 973 │ │ │ raise ValueError( │
│ 974 │ │ │ │ f"The following `model_kwargs` are not used by the model: {unused_ │
│ 975 │ │ │ │ " generate arguments will also show up in this list)" │
│ 976 │ │ │ ) │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: The following `model_kwargs` are not used by the model: ['torch_dtype'] (note:
typos in the generate arguments will also show up in this list)
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20912/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20911
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20911/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20911/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20911/events
|
https://github.com/huggingface/transformers/pull/20911
| 1,511,857,959
|
PR_kwDOCUB6oc5GP_Wf
| 20,911
|
Generate: correctly detect default max length
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,684
| 1,672
|
MEMBER
| null |
# What does this PR do?
Fixes #20894.
Now that we are using the generation config, we can detect when `max_length` is still at its default value (`20`) and flag a potential clash with `max_new_tokens` :)
After this change, the example in the issue linked above works correctly.
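For instance, a minimal check (the checkpoint and prompt are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The quick brown fox", return_tensors="pt")
# With the fix, setting only max_new_tokens no longer clashes with the default max_length=20.
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```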
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20911/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20911/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20911",
"html_url": "https://github.com/huggingface/transformers/pull/20911",
"diff_url": "https://github.com/huggingface/transformers/pull/20911.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20911.patch",
"merged_at": 1672221926000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20910
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20910/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20910/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20910/events
|
https://github.com/huggingface/transformers/issues/20910
| 1,511,825,042
|
I_kwDOCUB6oc5aHJ6S
| 20,910
|
Request for scripts/helper function to create custom jsonl files for translation
|
{
"login": "SupreethRao99",
"id": 55043035,
"node_id": "MDQ6VXNlcjU1MDQzMDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/55043035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SupreethRao99",
"html_url": "https://github.com/SupreethRao99",
"followers_url": "https://api.github.com/users/SupreethRao99/followers",
"following_url": "https://api.github.com/users/SupreethRao99/following{/other_user}",
"gists_url": "https://api.github.com/users/SupreethRao99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SupreethRao99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SupreethRao99/subscriptions",
"organizations_url": "https://api.github.com/users/SupreethRao99/orgs",
"repos_url": "https://api.github.com/users/SupreethRao99/repos",
"events_url": "https://api.github.com/users/SupreethRao99/events{/privacy}",
"received_events_url": "https://api.github.com/users/SupreethRao99/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Examples are just that examples. You should adapt the data processing part to your specific data format."
] | 1,672
| 1,672
| 1,672
|
NONE
| null |
### Feature request
In the machine translation examples, there is a section stating that the current scripts can only consume data in a custom jsonl format, as follows:
```json
{ "translation": { "en": "Others have dismissed him as a joke.", "ro": "Alții l-au numit o glumă." } }
{ "translation": { "en": "And some are holding out for an implosion.", "ro": "Iar alții așteaptă implozia." } }
```
It would be great if there were helper scripts that could convert pandas data frames into this particular format.
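A minimal sketch of such a helper (the function name and column choices are illustrative assumptions, not an existing API):
```python
import json

import pandas as pd


def to_translation_jsonl(df: pd.DataFrame, src_lang: str, tgt_lang: str, path: str) -> None:
    """Write a two-column DataFrame as jsonl records in the expected translation format."""
    with open(path, "w", encoding="utf-8") as f:
        for _, row in df.iterrows():
            record = {"translation": {src_lang: row[src_lang], tgt_lang: row[tgt_lang]}}
            f.write(json.dumps(record, ensure_ascii=False) + "\n")


df = pd.DataFrame(
    {
        "en": ["Others have dismissed him as a joke."],
        "ro": ["Alții l-au numit o glumă."],
    }
)
to_translation_jsonl(df, "en", "ro", "train.json")
```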
### Motivation
It's frustrating that data has to be converted into this particular format before the training scripts can consume it; it would be nice if the scripts accepted CSVs with one column for language1 and another for language2.
### Your contribution
I could help test and validate the code.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20910/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20909
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20909/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20909/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20909/events
|
https://github.com/huggingface/transformers/pull/20909
| 1,511,520,670
|
PR_kwDOCUB6oc5GO2ZP
| 20,909
|
Add distributed training example with Accelerate for run_clm_no_trainer.py
|
{
"login": "SupreethRao99",
"id": 55043035,
"node_id": "MDQ6VXNlcjU1MDQzMDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/55043035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SupreethRao99",
"html_url": "https://github.com/SupreethRao99",
"followers_url": "https://api.github.com/users/SupreethRao99/followers",
"following_url": "https://api.github.com/users/SupreethRao99/following{/other_user}",
"gists_url": "https://api.github.com/users/SupreethRao99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SupreethRao99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SupreethRao99/subscriptions",
"organizations_url": "https://api.github.com/users/SupreethRao99/orgs",
"repos_url": "https://api.github.com/users/SupreethRao99/repos",
"events_url": "https://api.github.com/users/SupreethRao99/events{/privacy}",
"received_events_url": "https://api.github.com/users/SupreethRao99/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20909). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,675
| 1,675
|
NONE
| null |
# What does this PR do?
Adds documentation for running distributed training with Accelerate when training causal language models without the HuggingFace Trainer. The example that uses the Trainer API defaults to multi-GPU training, while the no-trainer example defaults to single-GPU training. The added documentation clears up this confusion and provides an example of distributed training without the Trainer API.
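For context, a typical multi-GPU run of the no-trainer script with Accelerate looks like this (the dataset and output path are illustrative):
```bash
accelerate config  # answer the prompts once to describe the multi-GPU setup
accelerate launch run_clm_no_trainer.py \
    --model_name_or_path gpt2 \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --output_dir /tmp/test-clm
```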
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20909/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20909",
"html_url": "https://github.com/huggingface/transformers/pull/20909",
"diff_url": "https://github.com/huggingface/transformers/pull/20909.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20909.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20908
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20908/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20908/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20908/events
|
https://github.com/huggingface/transformers/issues/20908
| 1,511,515,348
|
I_kwDOCUB6oc5aF-TU
| 20,908
|
OSError: Token is required (`token=True`), but no token found. You need to provide a token or be logged in to Hugging Face with `huggingface-cli login` or `huggingface_hub.login`. See https://huggingface.co/settings/tokens.
|
{
"login": "ucas010",
"id": 50656998,
"node_id": "MDQ6VXNlcjUwNjU2OTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/50656998?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ucas010",
"html_url": "https://github.com/ucas010",
"followers_url": "https://api.github.com/users/ucas010/followers",
"following_url": "https://api.github.com/users/ucas010/following{/other_user}",
"gists_url": "https://api.github.com/users/ucas010/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ucas010/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ucas010/subscriptions",
"organizations_url": "https://api.github.com/users/ucas010/orgs",
"repos_url": "https://api.github.com/users/ucas010/repos",
"events_url": "https://api.github.com/users/ucas010/events{/privacy}",
"received_events_url": "https://api.github.com/users/ucas010/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You need to make sure to execute the cell `notebook_login()` at the beginning and pass it your token (it provides a direct link to your token pages on hf.co)",
"pass it your token ??? I have token but how to use it ? @sgugger ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,675
| 1,675
|
NONE
| null |
### System Info
I copied the [linked notebook](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) and got the error in the title.
Could you please help me?
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Same as the official script.
### Expected behavior
The script runs without the token error.
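For reference, a minimal sketch of the usual fix (assuming the error comes from the push-to-hub cells) is to authenticate first:
```python
from huggingface_hub import notebook_login

notebook_login()  # paste a token from https://huggingface.co/settings/tokens
```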
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20908/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20907
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20907/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20907/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20907/events
|
https://github.com/huggingface/transformers/pull/20907
| 1,511,485,573
|
PR_kwDOCUB6oc5GOu6g
| 20,907
|
Extend Script to enable conversion of Encoder Only T5x Models to Pytorch
|
{
"login": "ToluClassics",
"id": 38908008,
"node_id": "MDQ6VXNlcjM4OTA4MDA4",
"avatar_url": "https://avatars.githubusercontent.com/u/38908008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ToluClassics",
"html_url": "https://github.com/ToluClassics",
"followers_url": "https://api.github.com/users/ToluClassics/followers",
"following_url": "https://api.github.com/users/ToluClassics/following{/other_user}",
"gists_url": "https://api.github.com/users/ToluClassics/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ToluClassics/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ToluClassics/subscriptions",
"organizations_url": "https://api.github.com/users/ToluClassics/orgs",
"repos_url": "https://api.github.com/users/ToluClassics/repos",
"events_url": "https://api.github.com/users/ToluClassics/events{/privacy}",
"received_events_url": "https://api.github.com/users/ToluClassics/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @ArthurZucker ",
"Once the tests are good we can merge!",
"Hi @ArthurZucker , `make style` seems to be changing a lot of files, is there any fix for this?",
"Yes, you probably have the wrong version of `black`. Something like `pip install --upgrade black` should fix this. ",
"Fixed🤓"
] | 1,672
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR extends the [script that converts T5x models to PyTorch](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py). This is particularly useful for converting [T5x Retrieval dual-encoder](https://github.com/google-research/t5x_retrieval) models to PyTorch.
To use:
- In case you don't have gsutil, install it according to https://cloud.google.com/storage/docs/gsutil_install
- Pretrained T5x Retrieval checkpoints are at https://console.cloud.google.com/storage/browser/t5-data/pretrained_models/t5x/retrieval/. Example:
  `gsutil -m cp -r gs://t5-data/pretrained_models/t5x/retrieval/gtr_base $HOME/`
- Create a corresponding config.json for the downloaded checkpoint. Often one already exists, e.g. here we can use https://huggingface.co/google/t5-v1_1-base/blob/main/config.json
I tested this on the released GTR-base checkpoint and compared the Jax and PyTorch outputs; they are similar.
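A hedged example invocation (the first three flags follow the existing converter script, `--is_encoder_only` is the flag added by this PR, and the paths are illustrative):
```bash
python src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py \
    --t5x_checkpoint_path $HOME/gtr_base \
    --config_file config.json \
    --pytorch_dump_path ./gtr_base_pt \
    --is_encoder_only
```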
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten
@patil-suraj
@bastings
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20907/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20907/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20907",
"html_url": "https://github.com/huggingface/transformers/pull/20907",
"diff_url": "https://github.com/huggingface/transformers/pull/20907.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20907.patch",
"merged_at": 1674481303000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20906
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20906/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20906/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20906/events
|
https://github.com/huggingface/transformers/pull/20906
| 1,511,381,110
|
PR_kwDOCUB6oc5GOZCC
| 20,906
|
add model resources for CPMAnt (new)
|
{
"login": "pioliverse",
"id": 119836898,
"node_id": "U_kgDOBySQ4g",
"avatar_url": "https://avatars.githubusercontent.com/u/119836898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pioliverse",
"html_url": "https://github.com/pioliverse",
"followers_url": "https://api.github.com/users/pioliverse/followers",
"following_url": "https://api.github.com/users/pioliverse/following{/other_user}",
"gists_url": "https://api.github.com/users/pioliverse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pioliverse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pioliverse/subscriptions",
"organizations_url": "https://api.github.com/users/pioliverse/orgs",
"repos_url": "https://api.github.com/users/pioliverse/repos",
"events_url": "https://api.github.com/users/pioliverse/events{/privacy}",
"received_events_url": "https://api.github.com/users/pioliverse/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"> Thanks very much @pioliverse for iterating! I left a couple of comments, I think that some refactoring needs to be considered, after that we should be close to merge this! My main comments are:\r\n> \r\n> * I think that you can wrap `CPMAntEmbedding` around a `nn.Embedding` layer even though scaling is needed. You can scale down after each call to the embedding module and make sure the input is scaled down before the `projection` call.\r\n> * Make sure to inherit `CPMAntForCausalLM` from `CPMAntPreTrainedModel`, also make sure to follow the convention / good practices by checking what is done in OPT for instance: https://github.com/huggingface/transformers/blob/1543cee7c8c95ef47f832b1f37625ba2923c4994/src/transformers/models/opt/modeling_opt.py#L808\r\n> - this includes defining correctly a `lm_head` module, functions such as `get_input_embeddings`, `set_input_embeddings`, etc.\r\n> * A lot of arguments from module's init seems to be unused, e.g. `init_std`. Try also to take the `config` object as a single argument from the init whenever possible (e.g. `CPMAntEncoder`)\r\n> * Please make sure to follow the correct styling for docstrings (check my comments about that below)\r\n> * If you have to initialize some weights with a specific distribution, try to initialize all the submodules weights inside `_init_weights` function from `CPMAntPreTrainedModel`\r\n> * It's unclear to me why `forward` function is not defined in `CPMAntForCausalLM`\r\n> * The code can be optimized here and there, I left some comments below on how you can achieve that\r\n> * Please do not raise `RuntimeErrors` outside `if torch_is_available()`, otherwise `flax` & `tf` tests will fail\r\n> Again thanks a lot for your efforts!\r\n\r\n@younesbelkada Thanks for your patience in reviewing, I followed OPT convention and made the following changes:\r\n> * `CPMAntEmbedding` and `CPMAntLinear` has been replaced by `nn.Embedding` and `nn.Linear` respectively.\r\n> * `CPMAntForCausalLM` has been inherited from `CPMAntPreTrainedModel`, and `lm_head` and some functions have been added.\r\n> * Useless initial arguments have been removed.\r\n> * `forward` has been defined in `CPMAntForCausalLM`",
"@younesbelkada Thanks again for your patience in reviewing.",
"_The documentation is not available anymore as the PR was closed or merged._",
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks so much for your patience ! Looks pretty clean thank you! We should be close merging this once most of the comments are addressed. My comments being:\r\n> \r\n> ### Docstring and comments:\r\n> * please harmonize the function docstrings to match the convention `transformers` model follow\r\n> * please make sure to clean up some comments\r\n> * Also would be nice to add a small explanation on the code on why `generate` needs to be overriden\r\n> \r\n> ### `dtype`:\r\n> * I don't think the argument `dtype` is needed. The dtype of the whole model is managed by the kwarg `torch_dtype` so you can load your model using `model = xxxForCausalLM.from_pretrained(xxx, torch_dtype=torch.float16)` or `torch_dtype=\"auto\"` (if the weights are pushed in fp16) and the model will be loaded in the desired precision.\r\n> \r\n> ### tests\r\n> * I think that a test is failing, please double check that\r\n> \r\n> ### general comments\r\n> * For classes that are public (i.e. that are ported in `__init__.py`, basically`CPMAntModel` &. `CPMAntForCausalLM` it is preferable to adopt this logic: https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt_neox_japanese/modeling_gpt_neox_japanese.py#L693-L703 --> outputa tuple if not `return_dict` otherwise return a dataclass. Please check other modeling files as reference\r\n> * you can wrap `attention_mask` creation process inside class methods, e.g. `_prepare_attention_mask(`\r\n> \r\n> Thanks!\r\n\r\n\r\n\r\nHi @younesbelkada, we have made some changes as follows:\r\n1. add some docstrings.\r\n2. modified `forward` following the style of transformers.\r\n3. rewrote some functions to adapt the `generate` function\r\n>* in `modeling_cpmant.py`, we rewrote some functions like `prepare_inputs_for_generation`, `_expand_inputs_for_generation`\r\n>* in `tokenization_cpmant.py`, rewrote some functions like `prepare_for_model`, `_pad`, `_encode_plus`, `_batch_encode_plus`\r\n4. cleaned some comments.\r\n",
"I am a bit surprised that when I use `make style`, some other files are also reformatted, which causes `check_code_quality` to fail.",
"Hi @pioliverse \r\nYou need to rebase with `main` branch as the styling has been updated for most of the files in `transformers` , and update your black version as follows:\r\n```\r\npip install --upgrade -e .[\"quality\"]\r\n```\r\nThen `make style` or `make fixup`",
"> Hi @pioliverse You need to rebase with `main` branch as the styling has been updated for most of the files in `transformers` , and update your black version as follows:\r\n> \r\n> ```\r\n> pip install --upgrade -e .[\"quality\"]\r\n> ```\r\n> \r\n> Then `make style` or `make fixup`\r\n\r\nThanks @younesbelkada , this has been solved.",
"> Thanks a lot for addressing most of the comments of the previous review! And thank you for your huge work on refactoring the modeling script I left some comments, mostly nits that can be solved easily. Note that for arguments such as `use_cache` etc, we prefer to pass them through the forward pass rather than setting them as a class attribute. Also, please consider passing a `CPMAntConfig` for the classes that have several attributes such as `CPMAntEncoder` Make sure also to correctly pass the required keyword arguments such as `past_key_values`, `output_attentions` etc, that are crucial for caching mechanism. You can check how this is done in OPT for example Finally, the naming convention in `transformers` has changed a bit, we prefer to name models with a single capital letter (i.e. here `CPMAnt -> Cpmant`) Again thanks for your efforts on this! Once the comments being solved, we should be very close merging this!\r\n\r\n\r\n\r\nThanks for your review @younesbelkada , we have modified some code.\r\n>* We pass the `use_cache` in `forward` function from a class attribute.\r\n>* We simplify the code for the class attribute assignment and replace it with `CPMAntconfig`.\r\n>* We added `past_key_values` and `output_attentions` in `forward` of CPMAntModel.\r\n>* I kind of wonder if all files that contain the name `CPMAnt` should be changed to `Cpmant`?",
"Hi @younesbelkada , I am a member of OpenBMB, and I will help @pioliverse finish this PR.\r\n\r\nAll the issues mentioned above have been resolved. Please kindly have a look.\r\n\r\nFor the unit tests, I rebase `pioliverse:cpmantmodel` with `huggingface:main`, but it cannot pass the test. It seems some other models cause the failure?\r\n\r\nfor instance, in tests_onnx I met the error:\r\n`\r\nERROR tests/models/altclip/test_modeling_altclip.py\r\n============ 72 passed, 551 skipped, 29 warnings, 1 error in 28.26s ============\r\n`\r\nHow can I avoid such error?",
"Hi @gongbaitao \r\nThanks for jumping in! And sorry for the delay\r\nRebasing with `main` should be probably solve this issue, will look into the PR asap, let me know once you think this is ready for review!",
"Hi @younesbelkada, thanks for your reply and advice!\r\nAll the problems in unit test have beed solved. In the latest commit, we largely refactor the code to make it clear and simple. Hope you can give a review soon! ",
"Thanks a lot for the heads-up @pioliverse @gongbaitao !\r\nQuickly looking at the README_ja file it seems that a some unnecessarly changes were made, I suggest you merge this branch with the upstream `transformers` `main` branch:\r\n```\r\ngit remote add upstream https://github.com/huggingface/transformers.git\r\ngit fetch upstream\r\ngit merge upstream/main\r\ngit push\r\n```\r\n\r\nI'll have a closer look on the other files asap! ",
"> Thanks a lot for the heads-up @pioliverse @gongbaitao ! Quickly looking at the README_ja file it seems that a some unnecessarly changes were made, I suggest you merge this branch with the upstream `transformers` `main` branch:\r\n> \r\n> ```\r\n> git remote add upstream https://github.com/huggingface/transformers.git\r\n> git fetch upstream\r\n> git merge upstream/main\r\n> git push\r\n> ```\r\n> \r\n> I'll have a closer look on the other files asap!\r\n\r\nThanks for the tips @younesbelkada!\r\n I have merged with `huggingface main` branch and update the README_ja, and it's up-to-date with the `main` now. We hope to merge this PR in this week, please kindly have a look. Thanks!",
"> Regarding slow integration tests, we might need to totally skip them as the weights are 40GB in fp32 (and 20GB in fp16), I think our daily CI runners have 16GB GPU VRAM, so friendly pinging here @ydshieh to see what could be the alternative.\r\n\r\nYeah, just skip it/them.",
"Hi @gongbaitao \r\nGreat work on refactoring the code and making the CI tests pass! 🎉 \r\nLet me know once this is ready for review!",
"Hi @younesbelkada, thanks for your comment yesterday! This helps me a lot to find hidden errors in unit test part. I have fixed these bugs and make the new commit.\r\nBut I don't know why ci/cicleci: test_tf always failed with `tests/models/opt/test_modeling_tf_opt.py::TFOPTModelTest::test_pipeline_text_generation` time out, even I have tried 3 times.\r\nIs there anything I missed?",
"Hello @gongbaitao \r\nDon't worry I think this is fine. If you give me the green light, I can review the PR now",
"> Hello @gongbaitao Don't worry I think this is fine. If you give me the green light, I can review the PR now\r\n\r\nYeah it's solved by luck i guess.\r\nI think It's ready for review, thanks for your help again! @younesbelkada ",
"Hi @younesbelkada, thanks for your meaningful comments!\r\n\r\n1. The link for checkpoint, and some other comments problems, have been corrected.\r\n\r\n2. As for the name, because till now CPMAnt has just one Chinese case, I guess it's no need to call it like CPMAntChinese. \r\n\r\n3. Besides, I have made some local tests on tokenizers, for example, the `CPMAntTokenizationTest.test_pre_tokenization()`. But some methods in `TokenizerTesterMixin` use different logic to load vocab as a `dict`, while CPMAnt has it's own `load_vocab`. Refactor is not that convenient and necessary i think, so I just set it as `custom`. I can make a change if there's any better solutions.\r\n\r\nIt's ok for the new review : )",
"Hi @younesbelkada . I have fixed the problems mentioned in comments. I think it's ready for new review:)\r\nThanks for your detailed review and comments!",
"@younesbelkada @sgugger Thanks for the valued comments!\r\nAccording to the new comments, I have dropped some redundant codes, and rename the model class in a camel-cased way\r\n: )",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @sgugger @younesbelkada , sorry for the delay!\r\nIn the last few weeks, I have fixed the problems mentioned above and refactored the CPMAnt tokenizer. Please kindly have a look again, thanks for your help!",
"Thanks for your quick review! @sgugger \r\nIt seems this problem https://github.com/huggingface/transformers/pull/20906#discussion_r1161687724 is because the changed file didn't show all commits. Maybe check this page https://github.com/huggingface/transformers/pull/20906/files will be helpful:)\r\nAs the https://github.com/huggingface/transformers/pull/20906#discussion_r1161688147, it cannot pass the code quality check, so shall I keep it unchanged?",
"@sgugger Thanks for your meaningful comments!\r\nSorry I forget to drop the trailing comma in styling issue. Now I have fixed the trailing comma problem and add `tooslow` decorator. Please kindly have a review:)"
] | 1,672
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Since the previous submission (#20711) had problems here and there, we have resubmitted a new one.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20906/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20906",
"html_url": "https://github.com/huggingface/transformers/pull/20906",
"diff_url": "https://github.com/huggingface/transformers/pull/20906.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20906.patch",
"merged_at": 1681299200000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20905
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20905/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20905/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20905/events
|
https://github.com/huggingface/transformers/issues/20905
| 1,511,267,212
|
I_kwDOCUB6oc5aFBuM
| 20,905
|
Issues attempting to implement P-TuningV2 with huggingface's BART
|
{
"login": "maxrousseau",
"id": 16603191,
"node_id": "MDQ6VXNlcjE2NjAzMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/16603191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxrousseau",
"html_url": "https://github.com/maxrousseau",
"followers_url": "https://api.github.com/users/maxrousseau/followers",
"following_url": "https://api.github.com/users/maxrousseau/following{/other_user}",
"gists_url": "https://api.github.com/users/maxrousseau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxrousseau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxrousseau/subscriptions",
"organizations_url": "https://api.github.com/users/maxrousseau/orgs",
"repos_url": "https://api.github.com/users/maxrousseau/repos",
"events_url": "https://api.github.com/users/maxrousseau/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxrousseau/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ArthurZucker ",
"Hello @patrickvonplaten @ArthurZucker,\r\n\r\nI wrote a simple test case to reproduce the error I am getting for the model I am trying to implement using a few examples from SQuAD.\r\n\r\n### 1. Loading the dataset\r\n\r\n```\r\nfrom datasets import Dataset\r\n\r\ndef formatToMI(dataset):\r\n \"\"\"take a squad-like qa dataset and transform into MLM format\"\"\"\r\n masked_strings = []\r\n full_strings = []\r\n qa_strings = []\r\n answer_strings = []\r\n\r\n for i in range(len(dataset[\"question\"])):\r\n question = dataset[\"question\"][i]\r\n answer = dataset[\"answers\"][i][\"text\"][0]\r\n context = dataset[\"context\"][i]\r\n\r\n masked_strings.append(\r\n \"Question: {} Answer: <mask>. Context: {}\".format(question, context)\r\n )\r\n full_strings.append(\r\n \"Question: {} Answer: {}. Context: {}\".format(question, answer, context)\r\n )\r\n qa_strings.append(\"Question: {} Answer: {}.\".format(question, answer))\r\n answer_strings.append(answer)\r\n\r\n return {\r\n \"masked_strings\": masked_strings,\r\n \"full_strings\": full_strings,\r\n \"qa_strings\": qa_strings,\r\n \"answer_strings\": answer_strings,\r\n \"id\": dataset[\"id\"],\r\n }\r\n\r\n\r\ndef loadSquadMI(n=None):\r\n \"\"\"create a dataloader for SQuAD\"\"\"\r\n from datasets import load_dataset\r\n raw_datasets = load_dataset(\"squad\")\r\n\r\n if n is not None:\r\n squad_subset = formatToMI(raw_datasets[\"train\"][:n])\r\n return squad_subset\r\n else:\r\n return 0\r\n\r\n\r\nsamples = loadSquadMI(n=100)\r\ntiny_squad = Dataset.from_dict(samples)\r\n```\r\n### 2. Creating the dataloader\r\n\r\n```\r\nfrom transformers import AutoTokenizer, BartForConditionalGeneration, DataCollatorForSeq2Seq\r\nimport torch\r\nfrom torch.utils.data import DataLoader\r\n\r\n\r\n# initialize BART and PrefixBART for MI\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/bart-base\")\r\nexamples = tiny_squad\r\nprefixbart_model = PrefixBartForConditionalGeneration.from_pretrained(\"facebook/bart-base\")\r\nbart_model = BartForConditionalGeneration.from_pretrained(\"facebook/bart-base\")\r\n\r\ndata_collator = DataCollatorForSeq2Seq(\r\n tokenizer,\r\n model=prefixbart_model,\r\n label_pad_token_id=-100,\r\n pad_to_multiple_of=8,\r\n)\r\n\r\n# preprocessing\r\ndef training_preprocessing(examples):\r\n \"\"\"examples have all three types of string\"\"\"\r\n padding = \"max_length\"\r\n model_inputs = tokenizer(\r\n examples[\"masked_strings\"],\r\n max_length=384,\r\n padding=padding,\r\n truncation=False,\r\n )\r\n labels = tokenizer(\r\n text_target=examples[\"qa_strings\"],\r\n max_length=128,\r\n padding=padding,\r\n truncation=True,\r\n )\r\n # If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore\r\n # padding in the loss.\r\n if padding == \"max_length\":\r\n labels[\"input_ids\"] = [\r\n [(l if l != tokenizer.pad_token_id else -100) for l in label]\r\n for label in labels[\"input_ids\"]\r\n ]\r\n model_inputs[\"labels\"] = labels[\"input_ids\"]\r\n return model_inputs\r\n\r\nproc_train_dataset = examples.map(\r\n training_preprocessing,\r\n batched=True,\r\n remove_columns=examples.column_names,\r\n)\r\n\r\ntrain_tensor = proc_train_dataset\r\ntrain_tensor.set_format(\"torch\")\r\n\r\ntrain_dataloader = DataLoader(\r\n train_tensor,\r\n shuffle=True,\r\n collate_fn=data_collator,\r\n batch_size=4,\r\n num_workers=0,\r\n)\r\n```\r\n\r\n### 3. 
Test: a single forward pass\r\n\r\n#### With BART : successful \r\n```\r\nbart_model.train()\r\nbatch = next(iter(train_dataloader))\r\noutputs = bart_model(**batch)\r\nloss = outputs.loss\r\nprint(loss)\r\n```\r\n**Output:**\r\n`tensor(0.8271, grad_fn=<NllLossBackward0>)`\r\n\r\n#### With PrefixBART : failure (same error as above)\r\n```\r\nprefixbart_model.train()\r\nbatch = next(iter(train_dataloader))\r\noutputs = prefixbart_model(**batch)\r\nloss = outputs.loss\r\nprint(loss)\r\n```\r\n**Output**\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n[<ipython-input-26-ebc93e8e099a>](https://localhost:8080/#) in <module>\r\n 3 prefixbart_model.train()\r\n 4 batch = next(iter(train_dataloader))\r\n----> 5 outputs = prefixbart_model(**batch)\r\n 6 loss = outputs.loss\r\n 7 print(loss)\r\n\r\n9 frames\r\n[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)\r\n 1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1189 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1190 return forward_call(*input, **kwargs)\r\n 1191 # Do not call functions when jit is used\r\n 1192 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\n[<ipython-input-5-71e56dfc61a6>](https://localhost:8080/#) in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 211 )\r\n 212 \r\n--> 213 outputs = self.model(\r\n 214 input_ids,\r\n 215 attention_mask=attention_mask,\r\n\r\n[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)\r\n 1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1189 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1190 return forward_call(*input, **kwargs)\r\n 1191 # Do not call functions when jit is used\r\n 1192 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\n[/usr/local/lib/python3.8/dist-packages/transformers/models/bart/modeling_bart.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 1231 \r\n 1232 if encoder_outputs is None:\r\n-> 1233 encoder_outputs = self.encoder(\r\n 1234 input_ids=input_ids,\r\n 1235 attention_mask=attention_mask,\r\n\r\n[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)\r\n 1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1189 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1190 return forward_call(*input, **kwargs)\r\n 1191 # Do not call functions when jit is used\r\n 1192 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\n[/usr/local/lib/python3.8/dist-packages/transformers/models/bart/modeling_bart.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, 
output_hidden_states, return_dict)\r\n 848 )\r\n 849 else:\r\n--> 850 layer_outputs = encoder_layer(\r\n 851 hidden_states,\r\n 852 attention_mask,\r\n\r\n[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)\r\n 1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1189 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1190 return forward_call(*input, **kwargs)\r\n 1191 # Do not call functions when jit is used\r\n 1192 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\n[/usr/local/lib/python3.8/dist-packages/transformers/models/bart/modeling_bart.py](https://localhost:8080/#) in forward(self, hidden_states, attention_mask, layer_head_mask, output_attentions)\r\n 323 \"\"\"\r\n 324 residual = hidden_states\r\n--> 325 hidden_states, attn_weights, _ = self.self_attn(\r\n 326 hidden_states=hidden_states,\r\n 327 attention_mask=attention_mask,\r\n\r\n[/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)\r\n 1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1189 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1190 return forward_call(*input, **kwargs)\r\n 1191 # Do not call functions when jit is used\r\n 1192 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\n[/usr/local/lib/python3.8/dist-packages/transformers/models/bart/modeling_bart.py](https://localhost:8080/#) in forward(self, hidden_states, key_value_states, past_key_value, attention_mask, layer_head_mask, output_attentions)\r\n 238 if attention_mask is not None:\r\n 239 if attention_mask.size() != (bsz, 1, tgt_len, src_len):\r\n--> 240 raise ValueError(\r\n 241 f\"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}\"\r\n 242 )\r\n\r\nValueError: Attention mask should be of size (4, 1, 384, 384), but is torch.Size([4, 1, 388, 388])\r\n```",
"Hello again @patrickvonplaten @ArthurZucker,\r\n\r\nI just found out about `adapter-transformers` which implements prefix-tuning for BART on which P-TuningV2 is based. Maybe this issue can be closed?",
"Hey! Cool that you found something that works for you! The issue might just have been from a config parameter defining the `hidden_size`",
"Hello, thank you for replying. I will try out the modified config and see if it resolves the issue.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,676
| 1,676
|
NONE
| null |
@patrickvonplaten
Hello, I am trying to implement P-TuningV2 with BART using huggingface's transformers v4.25.1 ([P-TuningV2 official repo](https://github.com/THUDM/P-tuning-v2)). However, when I try to train the model I get the following error:
```
[/usr/local/lib/python3.8/dist-packages/transformers/models/bart/modeling_bart.py](https://localhost:8080/#) in forward(self, hidden_states, key_value_states, past_key_value, attention_mask, layer_head_mask, output_attentions)
238 if attention_mask is not None:
239 if attention_mask.size() != (bsz, 1, tgt_len, src_len):
--> 240 raise ValueError(
241 f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}"
242 )
ValueError: Attention mask should be of size (4, 1, 648, 648), but is torch.Size([4, 1, 652, 652])
```
Any ideas where the issue is coming from or how to resolve this? (Note that the difference, 652 − 648 = 4, matches `pre_seq_len`, which points at the prefix attention mask.) I am a little unfamiliar with the codebase, so any help will be greatly appreciated.
Thanks,
Here's the code I'm using to run the model:
```
import copy
import math
import random
import warnings

import torch
import torch.utils.checkpoint
from torch import nn
from torch.nn import CrossEntropyLoss

from transformers import BartPretrainedModel, BartConfig, BartModel
from transformers.modeling_outputs import Seq2SeqLMOutput
def shift_tokens_right(
input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int
):
"""
Shift input ids one token to the right.
"""
shifted_input_ids = input_ids.new_zeros(input_ids.shape)
shifted_input_ids[:, 1:] = input_ids[:, :-1].clone()
shifted_input_ids[:, 0] = decoder_start_token_id
if pad_token_id is None:
raise ValueError("self.model.config.pad_token_id has to be defined.")
# replace possible -100 values in labels by `pad_token_id`
shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id)
return shifted_input_ids
class PrefixEncoder(torch.nn.Module):
r"""
The torch.nn model to encode the prefix
Input shape: (batch-size, prefix-length)
Output shape: (batch-size, prefix-length, 2*layers*hidden)
"""
def __init__(self, config):
super().__init__()
self.prefix_projection = config.prefix_projection
if self.prefix_projection:
# Use a two-layer MLP to encode the prefix
self.embedding = torch.nn.Embedding(config.pre_seq_len, config.hidden_size)
self.trans = torch.nn.Sequential(
torch.nn.Linear(config.hidden_size, config.prefix_hidden_size),
torch.nn.Tanh(),
torch.nn.Linear(
config.prefix_hidden_size,
config.num_hidden_layers * 2 * config.hidden_size,
),
)
else:
self.embedding = torch.nn.Embedding(
config.pre_seq_len, config.num_hidden_layers * 2 * config.hidden_size
)
def forward(self, prefix: torch.Tensor):
if self.prefix_projection:
prefix_tokens = self.embedding(prefix)
past_key_values = self.trans(prefix_tokens)
else:
past_key_values = self.embedding(prefix)
return past_key_values
class PrefixBartForConditionalGeneration(BartPretrainedModel):
base_model_prefix = "model"
_keys_to_ignore_on_load_missing = [
r"final_logits_bias",
r"lm_head.weight",
"encoder.embed_tokens.weight",
"decoder.embed_tokens.weight",
]
def __init__(self, config: BartConfig):
# MAX - testing the config default values from (https://github.com/THUDM/P-tuning-v2/blob/main/arguments.py)
config.pre_seq_len = 4
config.hidden_dropout_prob = 0.1
config.prefix_hidden_size = 512
config.prefix_projection = False
super().__init__(config)
# MAX :: get the layer, embedding and heads to generate the prefix
self.pre_seq_len = config.pre_seq_len
self.n_layer = config.num_hidden_layers
self.n_head = config.num_attention_heads
self.n_embd = (
config.hidden_size // config.num_attention_heads
) # MAX - here we change the embed dims..
self.model = BartModel(config)
self.register_buffer(
"final_logits_bias", torch.zeros((1, self.model.shared.num_embeddings))
)
self.lm_head = nn.Linear(
config.d_model, self.model.shared.num_embeddings, bias=False
)
# MAX :: add the prefix encoder/tokens and dropout for the prefixes
self.dropout = torch.nn.Dropout(config.hidden_dropout_prob)
self.prefix_encoder = PrefixEncoder(config)
self.prefix_tokens = torch.arange(self.pre_seq_len).long()
# MAX :: freeze the model parameters
for param in self.model.parameters():
param.requires_grad = False
# Initialize weights and apply final processing
self.post_init()
# MAX :: modify and adapt for bart
def get_prompt(self, batch_size):
prefix_tokens = (
self.prefix_tokens.unsqueeze(0).expand(batch_size, -1).to(self.model.device)
)
past_key_values = self.prefix_encoder(prefix_tokens)
bsz, seqlen, _ = past_key_values.shape
past_key_values = past_key_values.view(
bsz, seqlen, self.n_layer * 2, self.n_head, self.n_embd
)
past_key_values = self.dropout(past_key_values)
past_key_values = past_key_values.permute([2, 0, 3, 1, 4]).split(2)
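        # after the permute/split: a tuple of n_layer tensors, each of shape
        # (2, batch, n_head, pre_seq_len, n_embd), i.e. one stacked (key, value) pair per layer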
return past_key_values
def get_encoder(self):
return self.model.get_encoder()
def get_decoder(self):
return self.model.get_decoder()
def resize_token_embeddings(self, new_num_tokens: int) -> nn.Embedding:
new_embeddings = super().resize_token_embeddings(new_num_tokens)
self._resize_final_logits_bias(new_num_tokens)
return new_embeddings
def _resize_final_logits_bias(self, new_num_tokens: int) -> None:
old_num_tokens = self.final_logits_bias.shape[-1]
if new_num_tokens <= old_num_tokens:
new_bias = self.final_logits_bias[:, :new_num_tokens]
else:
extra_bias = torch.zeros(
(1, new_num_tokens - old_num_tokens),
device=self.final_logits_bias.device,
)
new_bias = torch.cat([self.final_logits_bias, extra_bias], dim=1)
self.register_buffer("final_logits_bias", new_bias)
def get_output_embeddings(self):
return self.lm_head
def set_output_embeddings(self, new_embeddings):
self.lm_head = new_embeddings
def forward(
self,
input_ids=None,
attention_mask=None,
decoder_input_ids=None,
decoder_attention_mask=None,
head_mask=None,
decoder_head_mask=None,
cross_attn_head_mask=None,
encoder_outputs=None,
past_key_values=None,
inputs_embeds=None,
decoder_inputs_embeds=None,
labels=None,
use_cache=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
):
r"""
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
(masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
Returns:
"""
return_dict = (
return_dict if return_dict is not None else self.config.use_return_dict
)
# MAX-NOTE :: run the prefix layer
batch_size = input_ids.shape[0]
past_key_values = self.get_prompt(batch_size=batch_size)
prefix_attention_mask = torch.ones(batch_size, self.pre_seq_len).to(
self.model.device
)
attention_mask = torch.cat((prefix_attention_mask, attention_mask), dim=1)
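        # NOTE: the encoder attention mask now covers seq_len + pre_seq_len positions, while the
        # hidden states (and the self-attention key/value states) still cover only seq_len
        # positions -- consistent with the shapes in the ValueError above (652 = 648 + pre_seq_len)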
print("encoder mask: {}".format(attention_mask.size()))
        # BUG: attention_mask is changed, but not the size of the hidden_states and the key_states (past_key_value[0])?
if labels is not None:
if use_cache:
logger.warning(
"The `use_cache` argument is changed to `False` since `labels` is provided."
)
use_cache = False
if decoder_input_ids is None and decoder_inputs_embeds is None:
decoder_input_ids = shift_tokens_right(
labels, self.config.pad_token_id, self.config.decoder_start_token_id
)
outputs = self.model(
input_ids,
attention_mask=attention_mask,
decoder_input_ids=decoder_input_ids,
encoder_outputs=encoder_outputs,
decoder_attention_mask=decoder_attention_mask,
head_mask=head_mask,
decoder_head_mask=decoder_head_mask,
cross_attn_head_mask=cross_attn_head_mask,
past_key_values=past_key_values, # MAX-NOTE :: unlike bert this did not need to be added here?
inputs_embeds=inputs_embeds,
decoder_inputs_embeds=decoder_inputs_embeds,
use_cache=use_cache,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
lm_logits = self.lm_head(outputs[0])
lm_logits = lm_logits + self.final_logits_bias.to(lm_logits.device)
masked_lm_loss = None
if labels is not None:
loss_fct = CrossEntropyLoss()
masked_lm_loss = loss_fct(
lm_logits.view(-1, self.config.vocab_size), labels.view(-1)
)
if not return_dict:
output = (lm_logits,) + outputs[1:]
return (
((masked_lm_loss,) + output) if masked_lm_loss is not None else output
)
return Seq2SeqLMOutput(
loss=masked_lm_loss,
logits=lm_logits,
past_key_values=outputs.past_key_values,
decoder_hidden_states=outputs.decoder_hidden_states,
decoder_attentions=outputs.decoder_attentions,
cross_attentions=outputs.cross_attentions,
encoder_last_hidden_state=outputs.encoder_last_hidden_state,
encoder_hidden_states=outputs.encoder_hidden_states,
encoder_attentions=outputs.encoder_attentions,
)
def prepare_inputs_for_generation(
self,
decoder_input_ids,
past=None,
attention_mask=None,
head_mask=None,
decoder_head_mask=None,
cross_attn_head_mask=None,
use_cache=None,
encoder_outputs=None,
**kwargs,
):
# cut decoder_input_ids if past is used
if past is not None:
decoder_input_ids = decoder_input_ids[:, -1:]
return {
"input_ids": None, # encoder_outputs is defined. input_ids not needed
"encoder_outputs": encoder_outputs,
"past_key_values": past,
"decoder_input_ids": decoder_input_ids,
"attention_mask": attention_mask,
"head_mask": head_mask,
"decoder_head_mask": decoder_head_mask,
"cross_attn_head_mask": cross_attn_head_mask,
"use_cache": use_cache, # change this to avoid caching (presumably for debugging)
}
def prepare_decoder_input_ids_from_labels(self, labels: torch.Tensor):
return shift_tokens_right(
labels, self.config.pad_token_id, self.config.decoder_start_token_id
)
@staticmethod
def _reorder_cache(past, beam_idx):
reordered_past = ()
for layer_past in past:
# cached cross_attention states don't have to be reordered -> they are always the same
reordered_past += (
tuple(
past_state.index_select(0, beam_idx)
for past_state in layer_past[:2]
)
+ layer_past[2:],
)
return reordered_past
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20905/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20904
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20904/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20904/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20904/events
|
https://github.com/huggingface/transformers/pull/20904
| 1,511,225,731
|
PR_kwDOCUB6oc5GN4DV
| 20,904
|
Don't call deprecated method
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
COLLABORATOR
| null |
# What does this PR do?
Call `pad_image` instead of `pad`, which has been deprecated in order to maintain consistent method naming across image processors.
There's no difference in logic, since `pad` simply calls `pad_image`; the change just reduces the excessive logging raised in [this comment](https://github.com/huggingface/transformers/pull/20425#issuecomment-1364747167).
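For context, here is a minimal sketch of the deprecation pattern described above. This is illustrative only; `ImageProcessorSketch` and its signatures are stand-ins, not the actual `transformers` code:
```python
import warnings


class ImageProcessorSketch:
    """Illustrative stand-in for an image processor with a renamed padding method."""

    def pad_image(self, image, **kwargs):
        # the real padding logic would live here
        return image  # placeholder

    def pad(self, image, **kwargs):
        # deprecated alias kept for backwards compatibility: it only forwards to
        # `pad_image`, but warns on every call -- hence the excessive logging when
        # internal code keeps calling `pad` instead of `pad_image`
        warnings.warn("`pad` is deprecated; use `pad_image` instead.", FutureWarning)
        return self.pad_image(image, **kwargs)
```
Calling `pad_image` directly from the preprocessing code therefore keeps the behavior identical while silencing the repeated deprecation warning.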
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20904/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20904",
"html_url": "https://github.com/huggingface/transformers/pull/20904",
"diff_url": "https://github.com/huggingface/transformers/pull/20904.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20904.patch",
"merged_at": 1672851551000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20903
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20903/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20903/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20903/events
|
https://github.com/huggingface/transformers/issues/20903
| 1,511,081,962
|
I_kwDOCUB6oc5aEUfq
| 20,903
|
Informer - Transformer For Time-Series Forecasting
|
{
"login": "elisim",
"id": 17675462,
"node_id": "MDQ6VXNlcjE3Njc1NDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/17675462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elisim",
"html_url": "https://github.com/elisim",
"followers_url": "https://api.github.com/users/elisim/followers",
"following_url": "https://api.github.com/users/elisim/following{/other_user}",
"gists_url": "https://api.github.com/users/elisim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elisim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elisim/subscriptions",
"organizations_url": "https://api.github.com/users/elisim/orgs",
"repos_url": "https://api.github.com/users/elisim/repos",
"events_url": "https://api.github.com/users/elisim/events{/privacy}",
"received_events_url": "https://api.github.com/users/elisim/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"thanks @elisim for the issue... indeed i have informer in my list of models to port over. I have the initial implementation of informer and other done: https://github.com/kashif/pytorch-transformer-ts and will move them over to the transformers API",
"Wow! saw your repo and it looks great! Maybe I might help? :) I sent you an email. \n\nThanks, \nEli ",
"merged in https://github.com/huggingface/transformers/pull/21099 "
] | 1,672
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
# Model description
Following the new support for Time Series Transformers in the [API](https://huggingface.co/docs/transformers/model_doc/time_series_transformer) (and the great blog by @NielsRogge and @kashif [here](https://huggingface.co/blog/time-series-transformers)), I propose adding "Informer" - AAAI 2021 Best Paper model.
* Paper: [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting
](https://arxiv.org/abs/2012.07436)
* Model implementation: https://github.com/zhouhaoyi/Informer2020
## Why this model?
Compared to other forecasting transformers (see below), Informer seems to be the most "code-stable" one, with the most stars & forks on GitHub.
Popular forecasting Transformers, with links to their repositories:
[LogTrans](https://github.com/mlpotter/Transformer_Time_Series) - NIPS 2019
[Informer](https://github.com/zhouhaoyi/Informer2020) - AAAI 2021 (Best Paper)
[Autoformer](https://github.com/thuml/Autoformer) - NIPS 2021
[Pyraformer](https://github.com/alipay/Pyraformer) - ICLR 2022
[FEDformer](https://github.com/MAZiqing/FEDformer) - ICML 2022
This list is based on the paper: [Are Transformers Effective for Time Series Forecasting?](https://arxiv.org/abs/2205.13504) (AAAI-23)
I would like to implement the model :)
Thank you,
Eli
### Open source status
- [X] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
@zhouhaoyi - repository creator
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20903/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20902
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20902/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20902/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20902/events
|
https://github.com/huggingface/transformers/pull/20902
| 1,511,076,512
|
PR_kwDOCUB6oc5GNXF_
| 20,902
|
Cache size limit for generation
|
{
"login": "Natooz",
"id": 56734983,
"node_id": "MDQ6VXNlcjU2NzM0OTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/56734983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Natooz",
"html_url": "https://github.com/Natooz",
"followers_url": "https://api.github.com/users/Natooz/followers",
"following_url": "https://api.github.com/users/Natooz/following{/other_user}",
"gists_url": "https://api.github.com/users/Natooz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Natooz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Natooz/subscriptions",
"organizations_url": "https://api.github.com/users/Natooz/orgs",
"repos_url": "https://api.github.com/users/Natooz/repos",
"events_url": "https://api.github.com/users/Natooz/events{/privacy}",
"received_events_url": "https://api.github.com/users/Natooz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20902). All of your documentation changes will be reflected on that endpoint.",
"Hey @Natooz 👋 \r\n\r\nThank you for the PR! Looking at the PR, it is not too complex... but given the non-existent demand, it still amounts to a terrible maintenance-per-demand ratio 🙈 Our team is small, so we have to be extremely picky.\r\n\r\nI am afraid that I will have to reject this PR. Nevertheless, I am happy to be proved wrong, and if I see demand for this feature I will come back to this PR as a reference implementation!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
Following #20767, it adds a `cache_limit` argument for `generate` for PyTorch and TensorFlow (except xla), limiting the size of the cache (`past_key_values`).
`position_ids` is stored in `model_kwargs` for concerned models.
This is a bit above 100 lines. No big deal if you decide the maintenance effort is not worth it; this is still a simple feature that users can implement themselves by overriding model methods.
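For reference, a minimal sketch of what such cache trimming might look like for a decoder-only model. This is an illustration of the idea, not the code from this PR; it assumes the legacy tuple cache layout, i.e. one `(key, value)` pair per layer with tensors shaped `[batch, heads, seq, head_dim]`:
```python
def trim_cache(past_key_values, cache_limit):
    """Keep only the last `cache_limit` positions of each layer's (key, value) pair."""
    if cache_limit is None:
        return past_key_values
    return tuple(
        (key[:, :, -cache_limit:, :], value[:, :, -cache_limit:, :])
        for key, value in past_key_values
    )
```
Note that once the cache is trimmed, the positional offset of new tokens no longer equals the cache length, which is why `position_ids` has to be tracked explicitly in `model_kwargs`.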
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? #20767
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@gante & @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20902/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20902",
"html_url": "https://github.com/huggingface/transformers/pull/20902",
"diff_url": "https://github.com/huggingface/transformers/pull/20902.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20902.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20901
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20901/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20901/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20901/events
|
https://github.com/huggingface/transformers/pull/20901
| 1,511,068,496
|
PR_kwDOCUB6oc5GNVb2
| 20,901
|
🚨🚨 Generate: correct beam search best possible score computation and handling
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh regarding the original issue (https://github.com/huggingface/transformers/issues/18149) -- the problem was not TF with too many beam search iterations, but rather PT with not enough 😅 After this fix, in the example you shared (which I paste below, for reference), both PT and TF run >300 steps to conclude that \"bonjour\" is the answer. Please note that TF includes the padding in its output (as opposed to PT, which doesn't) because its output tensors are pre-padded and sliced based on the number of iterations, whereas in PT they are growing tensors that can be stored as candidate outputs without padding.\r\n\r\n`early_stopping=True` can be used with TF for quicker results.\r\n\r\n___________________________________________________\r\npython example:\r\n```python\r\nfrom transformers import MarianMTModel, MarianTokenizer, TFMarianMTModel\r\nimport tensorflow as tf\r\n\r\nmodel_name = \"Helsinki-NLP/opus-mt-en-ROMANCE\"\r\ntokenizer = MarianTokenizer.from_pretrained(model_name)\r\ntext_in = ['>>fr<< hello']\r\n\r\n# PT generates a few tokens then stops early -> very fast\r\nmodel = MarianMTModel.from_pretrained(model_name)\r\nbatch = tokenizer(text_in, return_tensors='pt', padding=True)\r\ntranslated = model.generate(**batch)\r\no = tokenizer.batch_decode(translated, skip_special_tokens=True)\r\n\r\nprint(translated)\r\nprint(o)\r\n\r\n# TF generates 512 tokens, although the decoded version gives the same result as PT -> very slow\r\nmodel = TFMarianMTModel.from_pretrained(model_name, from_pt=False)\r\nbatch = tokenizer(text_in, return_tensors='tf', padding=True)\r\ntranslated = model.generate(**batch)\r\no = tokenizer.batch_decode(translated, skip_special_tokens=True)\r\n\r\nprint(translated)\r\nprint(o)\r\n```",
"That's a great find! Well done, on finding the inconsistency here. \r\n\r\nWhile this change is mathematically completely correct, I'm a bit worried whether it leads to bad/annoying side-effects in practice. I think most people don't think too deeply about `length_pentalty` and just use a parameter that works \"well enough\". \r\n\r\nThere are some problems here I think:\r\n- 1.) As noted the default case is `length_penalty=1.0` and `do_early_stopping=False` which means that this PR changes the default case of all beam search applications. While it will certainly always improve \"mathematically\" the output result there are two problems in practice:\r\n- 1.1) Some people have probably unknowingly found a high `length_penalty` to work reasonably well. A high `length_penalty` combined with a high `max_length` can now lead to the beam search giving some super long results as the best solution (which would be mathematically correct given the high `length_penalty`, but I don't think people understand/understood the length penalty well enough to understand why this is). \r\n- 1.2) Beam search will now always run much much longer if `max_length` is very high (there are lots of models with set `max_length` to something like 128 or even 256 for short sentence tasks like `translation`. \r\n- 2.) (smaller problem) - we were trying to move away from having to require `max_length` overall - ideally the user should be able to use **any kind** of stopping criteria with beam search.\r\n\r\n\r\n2.) is not a big problem, but I'm a bit worried that 1.) is one. What do you think about 1.) @gante - especially when looking at generation configs like the one of BART (the model is downloaded a lot and has many \"derivation\" models):\r\n- https://huggingface.co/facebook/bart-large-cnn/blob/main/config.json#L42\r\n\r\n\r\nThe change here is definitely logically/mathematically correct, but I'm worried that it has too many negative effects. It's also a bit unreasonable when doing the math:\r\n```\r\nbest_running_score = state.running_scores[:, -1:] / (max_length**length_penalty)\r\n```\r\nfor `max_length=256` and `length_penalty=2` will essentially make beam search rarely stop before the end `x/(256*256)` = `x/65536` is very low for log-probs no? Or do log-probs became extremely large as soon as the text becomes bad?\r\n\r\n\r\nOn the other hand, maybe the log probs become very quickly so low for bad results that this change doesn't have that much of an impact. Can we maybe run some tests here @gante ? Maybe with the default setting of https://huggingface.co/facebook/bart-large-cnn/blob/main/config.json#L42 . If there are no major changes in outputs, ok to merge for me! \r\n\r\nAlso should we maybe add a warning \"We detected that you use `length_penalty > 1.0` which strongly encourages long sequences to be generated. Recently there has been a change that might cause your generation to last longer than expected and lead to different results. You might want to consider lowering the `length_penalty`.\" \r\n? ",
"@patrickvonplaten I agree entirely with your points above. Yes, these changes are technically correct, but the cost can be quite high -- here's a rundown of the results in a few models, for the PT changes:\r\n1. Models with `early_stopping=True` in the config, such as `facebook/bart-large-cnn`: no output change, same number of beam search iterations 👍\r\n2. Models with `early_stopping=False` in the config, such as Marian or T5: no output change, one order of magnitude (!) more iterations for short inputs 🙅 This is because of what you wrote above -- the `best_running_score` can stay very high for a large number of iterations, even with `length_penalty=1.0`.\r\n\r\nThis probably means that the output text will only see changes in corner cases, which removes some of our concerns regarding this PR. However, the additional computational cost can be prohibitively high in some typical applications. That will likely create annoyed users, which does not seem wise.\r\n____________________________________________\r\n\r\nSo, what can we do here? \r\na) Don't merge some or all of the changes, especially on the PT side, since they introduce unwanted (although correct) behavior. [probably not great, as we would be intentionally keeping a bug in the code]\r\nb) Add warnings so that users pick the right flags. [users ignore warnings most of the time...]\r\nc) Add some flag and/or `transformers` version gating, to keep the old behavior. [adds complexity, undesirable and, like b), requires users to use flags]\r\nd) Update the default `length_penalty` to `0.0`, which stops biasing beam search toward long searches. In the examples I tried, this keeps the same outputs while not causing the number of beam search iterations to grow with this PR. [changing a default can be tricky, and some models might rely on `length_penalty=1.0` to get the expected output. On the plus side, most users intuitively think that a positive `length_penalty` promotes shorter sentences, which is not true, so we might be killing two birds with one stone]\r\ne) Update the default of `early_stopping` to `True`. [similar to d), but less good imo]\r\n\r\nI struggle to see a good compromise solution 🤔 Given that many research groups use our code to conduct research, I'd like to avoid a) (i.e. keeping the bug). For downstream users, assuming that most wouldn't react to announcements, we will have to pick between keeping a bug or risking changing behavior :(\r\n\r\nPersonally, I'd go with d), but it is extremely debatable (and you folks probably have more experience).\r\n\r\nP.S.: TF XLA benchmarks showed that it was not much faster with beam search, compared to PT. Maybe this problem explains part of it!",
"Hmmm, ok this is a very tricky one then :-/ \r\n\r\n`length_penalty` is a pretty important parameter, and it's somewhat natural IMO to bias the model to slightly prefer longer output lengths (as longer output sequences always have <= log prob than shorter sequences). I think especially summarization models gain performance from using a length penalty.\r\n\r\nJust to better understand, are there a lot of cases where the current implementation (the correct use of length penalty) leads to better results? Could you maybe post some failure cases of the current implementation? ",
"Another option would be to frame everything as setting a \"lower bound\". \r\n\r\nCurrently, we have a \"heustic lower bound\" in PT, another option as done is this PR is a \"absolute lower bound\"",
"@patrickvonplaten some data about a potential `length_penalty` change -- I've tried setting the default to `0.0` (from `1.0`), and run our test suite for potentially impacted tests. More precisely, running `NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 py.test tests/ -k WORD -vv`, with `WORD = {beam_search, summ, translat}`, which catches most (or all) of the hard beam search tests on all 3 frameworks, had the following results:\r\n- 810 tests ran in total, including the challenging generate tests for beam search\r\n- 4 failed due to GPU OOM\r\n- 1 TF test failed (on T5-small, a translation outcome was ruined by the change -- `Ich liebe es so sehr!` to `!` )\r\n- 1 PT test failed (on a pipeline test, a translation had 1 differing character but was equally correct -- `هذا اختبار` to `هذا إختبار`)\r\n\r\nLooking at the catastrophic failure in the TF test, having the right `length_penalty` does make a difference, so a change may result in very annoyed users 👎 \r\n_____________________________________________________\r\n\r\nI like the \"lower bound\" framing, with users being able to pick how precise they want to be in their beam search while keeping the current defaults. However, I'm reluctant to add yet another flag. We *could* change the `early_stopping` flag from a binary one to a ternary one (like the `verbose` flag in many CLIs), as it already controls how long beam search runs. Something like:\r\n1. [no change] `early_stopping = 0` would be equivalent to `early_stopping = false` (on PyTorch, i.e. stops in a few iterations because it does not consider the `max_length` when computing the best score). This would be the default;\r\n2. [no change] `early_stopping = 1` would be equivalent to `early_stopping = true`;\r\n3. [new] `early_stopping = -1` would be the mathematically correct (yet ineffective) best possible score computation.\r\n\r\nThat way:\r\n1. TF/FLAX would start behaving like PT, running fewer beam search iterations by default with minimal impact on the output;\r\n2. PT users would see no changes;\r\n3. Users still have the option of setting the mathematically correct version of beam search.\r\n\r\nWDYT?",
"Nice good idea! I like the idea of using `early_stopping` to decide what do here! Would probably slightly favor:\r\n\r\n`early_stopping: Union[bool, str] = {False, True, \"never\"}`\r\n\r\nGuess we have to leave the reasoning of `False` as is for PyTorch. Using 1,0,-1 is also ok for me, but think it's nicer for the user to make early_stopping accept both str and bool",
"Applied the contents of the discussion in #21368, closing this one."
] | 1,672
| 1,684
| 1,675
|
MEMBER
| null |
# What does this PR do?
As initially uncovered by @ydshieh in #20853, there is a gross TF/PT mismatch on the number of steps beam search takes under some circumstances. In practice, all three frameworks had a different and incomplete implementation (see below why), and this PR fixes it.
Added "🚨🚨" to the title, as this PR may change the output of beam search.
### Rationale:
We know that logprobs is a negative value, and we want to maximize it in beam search (i.e. make it as close to 0 as possible). Since logprobs is always negative, and the final score is the sum of the logprobs, we can anticipate the best possible score a running sequence can ever achieve, and use it to terminate beam search early with no drawback (without this shortcut, beam search will always run `max_length` steps unless `early_stopping=True`). Well, it turns out that the method to compute the best possible score depends on the sign of `length_penalty`, and we are not accounting for that!
- Scenario 1, `length_penalty > 0.0`: In this case, as the sentence grows, the denominator grows as well. This means the score can get closer to 0 (i.e. higher) as the sentence grows, and longer sentences are promoted. In this case, the best possible score can be determined from the maximum sequence length (original TF/FLAX implementation).
- Scenario 2, `length_penalty < 0.0`: In this case, as the sentence grows, the denominator gets smaller. This means the score will get farther away from 0 (i.e. lower) as the sentence grows, and shorter sentences are promoted. In this case, the best possible score can be determined from the current sequence length (original PT implementation).
On top of this, FLAX and TF were incorrectly terminating early when `batch_size > 1`: we were saying that a score improvement was no longer possible as soon as one of the batch members could no longer improve (as opposed to when all batch members can no longer improve).
Finally, there was an issue with TF where early stopping was not correctly triggered (my bad).
In summary, for different reasons, all frameworks were stopping beam search incorrectly under certain circumstances:
1. PT: when `length_penalty > 0.0` (which is the default case!)
2. Flax: with `batch_size > 1` || `length_penalty < 0.0`
3. TF: with `batch_size > 1` || `length_penalty < 0.0` || incorrect (missing) early stopping trigger.
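To make the rationale above concrete, here is a minimal sketch of the corrected best-possible-score computation (illustrative pseudocode, not the exact `transformers` implementation):
```python
def best_possible_score(running_logprobs, cur_len, max_length, length_penalty):
    # running_logprobs <= 0, and the final score is running_logprobs / len**length_penalty,
    # so the length that maximizes the score depends on the sign of length_penalty
    if length_penalty > 0.0:
        # longer sequences are promoted -> the bound is reached at max_length
        return running_logprobs / (max_length**length_penalty)
    else:
        # shorter sequences are promoted -> the bound is reached at the current length
        return running_logprobs / (cur_len**length_penalty)
```
Beam search can then stop early only once this bound, for every batch member, falls below the score of the worst finished hypothesis.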
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20901/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20901",
"html_url": "https://github.com/huggingface/transformers/pull/20901",
"diff_url": "https://github.com/huggingface/transformers/pull/20901.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20901.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20900
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20900/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20900/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20900/events
|
https://github.com/huggingface/transformers/pull/20900
| 1,511,067,827
|
PR_kwDOCUB6oc5GNVSl
| 20,900
|
fix docs typos in "add_new_model"
|
{
"login": "elisim",
"id": 17675462,
"node_id": "MDQ6VXNlcjE3Njc1NDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/17675462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elisim",
"html_url": "https://github.com/elisim",
"followers_url": "https://api.github.com/users/elisim/followers",
"following_url": "https://api.github.com/users/elisim/following{/other_user}",
"gists_url": "https://api.github.com/users/elisim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elisim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elisim/subscriptions",
"organizations_url": "https://api.github.com/users/elisim/orgs",
"repos_url": "https://api.github.com/users/elisim/repos",
"events_url": "https://api.github.com/users/elisim/events{/privacy}",
"received_events_url": "https://api.github.com/users/elisim/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,678
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes a typo or improves the docs:
I fixed typos in the "add_new_model" docs, changing "Jupiter" to "Jupyter".
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20900/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20900",
"html_url": "https://github.com/huggingface/transformers/pull/20900",
"diff_url": "https://github.com/huggingface/transformers/pull/20900.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20900.patch",
"merged_at": 1672127356000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20899
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20899/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20899/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20899/events
|
https://github.com/huggingface/transformers/issues/20899
| 1,510,995,243
|
I_kwDOCUB6oc5aD_Ur
| 20,899
|
TFGPT2ForSequenceClassification.from_pretrained with num_labels parameter creates a model with reversed layer order
|
{
"login": "justnoxx",
"id": 2946069,
"node_id": "MDQ6VXNlcjI5NDYwNjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2946069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justnoxx",
"html_url": "https://github.com/justnoxx",
"followers_url": "https://api.github.com/users/justnoxx/followers",
"following_url": "https://api.github.com/users/justnoxx/following{/other_user}",
"gists_url": "https://api.github.com/users/justnoxx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/justnoxx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justnoxx/subscriptions",
"organizations_url": "https://api.github.com/users/justnoxx/orgs",
"repos_url": "https://api.github.com/users/justnoxx/repos",
"events_url": "https://api.github.com/users/justnoxx/events{/privacy}",
"received_events_url": "https://api.github.com/users/justnoxx/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @justnoxx 👋 \r\n\r\nThe layers are being called in the expected order, as you can see in the [model's forward pass](https://github.com/huggingface/transformers/blob/bbcd961897aa6cc439ef4cca5cef6db4283c5b76/src/transformers/models/gpt2/modeling_tf_gpt2.py#L1139). \r\n\r\n`model.summary()` is not fully compatible with our models because we rely on [Keras model subclassing](https://keras.io/api/models/), as opposed to Keras sequential/functional API (whose `model.summary()` produces the expected output).",
"@gante now I see it, thanks a lot for your help. Closing this issue now."
] | 1,672
| 1,672
| 1,672
|
NONE
| null |
### System Info
platform: macos, m1 max
python version: 3.9.13
transformers version: 4.25.1
### Who can help?
@Rocketknight1
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The following code:
```python
from transformers import GPT2Tokenizer, TFGPT2ForSequenceClassification
model = TFGPT2ForSequenceClassification.from_pretrained('gpt2-medium', num_labels=30)
model.summary()
```
It shows then:
```
Model: "tfgpt2_for_sequence_classification_7"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
score (Dense) multiple 30720
transformer (TFGPT2MainLaye multiple 354823168
r)
=================================================================
Total params: 354,853,888
Trainable params: 354,853,888
Non-trainable params: 0
```
So as you can see, the dense layer goes before the transformer layer; it should be the opposite.
Also there is a Colab link: https://colab.research.google.com/drive/1MeNzHHXnccLAkNlSRpWELQhWF5Y7aIyS#scrollTo=hIbZMVd7xumr
### Expected behavior
I think that the dense layer should go after the transformer layer so that the model can be trained. It looks like something similar has been brought up before:
https://github.com/huggingface/transformers/issues/11515
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20899/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20965
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20965/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20965/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20965/events
|
https://github.com/huggingface/transformers/issues/20965
| 1,516,264,032
|
I_kwDOCUB6oc5aYFpg
| 20,965
|
Improve Mlflow Callbacks documentation.
|
{
"login": "y1450",
"id": 107429941,
"node_id": "U_kgDOBmdANQ",
"avatar_url": "https://avatars.githubusercontent.com/u/107429941?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/y1450",
"html_url": "https://github.com/y1450",
"followers_url": "https://api.github.com/users/y1450/followers",
"following_url": "https://api.github.com/users/y1450/following{/other_user}",
"gists_url": "https://api.github.com/users/y1450/gists{/gist_id}",
"starred_url": "https://api.github.com/users/y1450/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/y1450/subscriptions",
"organizations_url": "https://api.github.com/users/y1450/orgs",
"repos_url": "https://api.github.com/users/y1450/repos",
"events_url": "https://api.github.com/users/y1450/events{/privacy}",
"received_events_url": "https://api.github.com/users/y1450/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,672
| 1,672
| 1,672
|
NONE
| null |
I recently followed https://julsimon.medium.com/using-mlflow-with-hugging-face-transformers-4f69093a6c04; the transformers documentation on the MLflow callback is not formatted properly.

It is better to read from the source code https://github.com/huggingface/transformers/blob/accad48e5b4a98302ea396b9f15c5f1c987b6f7f/src/transformers/integrations.py#L894 than the documentation site.

I think I might have seen similar formatting issues in other classes in the past, so I thought I should report it.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20965/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20898
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20898/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20898/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20898/events
|
https://github.com/huggingface/transformers/issues/20898
| 1,510,901,516
|
I_kwDOCUB6oc5aDocM
| 20,898
|
Keep getting ChildFailedError in distributed Eval/Train
|
{
"login": "IdoAmit198",
"id": 51640016,
"node_id": "MDQ6VXNlcjUxNjQwMDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/51640016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IdoAmit198",
"html_url": "https://github.com/IdoAmit198",
"followers_url": "https://api.github.com/users/IdoAmit198/followers",
"following_url": "https://api.github.com/users/IdoAmit198/following{/other_user}",
"gists_url": "https://api.github.com/users/IdoAmit198/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IdoAmit198/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IdoAmit198/subscriptions",
"organizations_url": "https://api.github.com/users/IdoAmit198/orgs",
"repos_url": "https://api.github.com/users/IdoAmit198/repos",
"events_url": "https://api.github.com/users/IdoAmit198/events{/privacy}",
"received_events_url": "https://api.github.com/users/IdoAmit198/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Update: after reading this [thread](https://github.com/facebookresearch/detectron2/issues/3319), tried to add the following exports:\r\n>export NCCL_IB_DISABLE=1\r\nexport NCCL_P2P_DISABLE=1\r\n\r\nBut no luck.\r\nThe warnings regards `NCCL_P2P` disappeared and the same error remains.",
"Thanks for your report. I have no idea why PyTorch gobbles the error message, but without any clue in the logs there is little we can do to investigate if you don't share the script you are running.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,672
| 1,675
| 1,675
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-4.15.0-200-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes, multiple GPUs on a single node.
- Using distributed or parallel set-up in script?: Yes. Running my script with `torchrun`.
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm running a slightly modified [run_clm.py script](https://github.com/huggingface/transformers/blob/v4.24.0/examples/pytorch/language-modeling/run_clm.py) with a varying number of A100 GPUs (2-8) on a single node, and keep getting the ChildFailedError right after the training/evaluation ends.
I’m running [GPT2 (smallest model)](https://huggingface.co/gpt2) on the [OpenWebText dataset](https://huggingface.co/datasets/openwebtext).
### An example of how I run my code from a shell script is as follows:
> GPU=0,1,2,3,4,5
export TORCH_CPP_LOG_LEVEL=INFO NCCL_DEBUG=INFO
export CUDA_VISIBLE_DEVICES=$GPU
>
> torchrun \
--standalone \
--nnodes=1 \
--nproc_per_node=${NUM_GPU} \
run_clm.py \
--model_name_or_path ${MODEL} \
--dataset_name ${DS_NAME} \
--preprocessing_num_workers 16 \
--logging_steps 5000 \
--save_steps ${SAVE_STEPS} \
--do_eval \
--per_device_eval_batch_size ${EVAL_BATCH} \
--seed ${RANDOM} \
--evaluation_strategy steps \
--logging_dir ${OUTPUT_DIR} \
--output_dir ${OUTPUT_DIR} \
--overwrite_output_dir \
### And getting the following error:
> 100%|██████████| 3155/3155 [43:38<00:00, 1.20it/s]WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 650 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 651 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 652 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 653 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 654 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 655 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local_rank: 0 (pid: 649) of binary: /venv/bin/python3
Traceback (most recent call last):
File "/venv/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/venv/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
return f(*args, **kwargs)
File "/venv/lib/python3.8/site-packages/torch/distributed/run.py", line 761, in main
run(args)
File "/venv/lib/python3.8/site-packages/torch/distributed/run.py", line 752, in run
elastic_launch(
File "/venv/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/venv/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
./code/gpt2/Model-Compression-Research-Package/examples/transformers/language-modeling/run_clm.py FAILED
Failures:
<NO_OTHER_FAILURES>
Root Cause (first observed failure):
[0]:
time : 2022-12-26_10:59:55
host : distributed-05-pt266-zvdww
rank : 0 (local_rank: 0)
exitcode : -9 (pid: 649)
error_file: <N/A>
traceback : Signal 9 (SIGKILL) received by PID 649
============================================================
### Full log is attached:
[eval_log.txt](https://github.com/huggingface/transformers/files/10303349/eval_log.txt)
### Notes:
1. The error occurs in training and in evaluation.
2. I tried to run using torchrun and using torch.distributed.launch and faced the same issue.
3. The number of samples in my training/eval doesn't matter; the issue remains.
4. I track my memory usage and OOM is not the case here (kinda wish it was).
5. The error occurs only in a distributed setup. When not using distributed training, or when using it with a single GPU, the problem doesn't appear.
6. The error doesn't reproduce with a much smaller dataset, such as wikitext-2. In that case both train and eval work in the distributed setup.
### Expected behavior
I expect evaluation/training to finish successfully, log results (samples per second, loss, perplexity, etc.), and save JSON files of the results, as happens in the non-distributed setup.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20898/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20897
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20897/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20897/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20897/events
|
https://github.com/huggingface/transformers/pull/20897
| 1,510,813,399
|
PR_kwDOCUB6oc5GMdyp
| 20,897
|
Update flan-t5 original model link
|
{
"login": "kamalkraj",
"id": 17096858,
"node_id": "MDQ6VXNlcjE3MDk2ODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/17096858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kamalkraj",
"html_url": "https://github.com/kamalkraj",
"followers_url": "https://api.github.com/users/kamalkraj/followers",
"following_url": "https://api.github.com/users/kamalkraj/following{/other_user}",
"gists_url": "https://api.github.com/users/kamalkraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kamalkraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kamalkraj/subscriptions",
"organizations_url": "https://api.github.com/users/kamalkraj/orgs",
"repos_url": "https://api.github.com/users/kamalkraj/repos",
"events_url": "https://api.github.com/users/kamalkraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/kamalkraj/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,672
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
Update flan-t5 original model link
@sgugger
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20897/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20897",
"html_url": "https://github.com/huggingface/transformers/pull/20897",
"diff_url": "https://github.com/huggingface/transformers/pull/20897.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20897.patch",
"merged_at": 1672125974000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20896
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20896/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20896/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20896/events
|
https://github.com/huggingface/transformers/issues/20896
| 1,510,744,789
|
I_kwDOCUB6oc5aDCLV
| 20,896
|
device_map='auto' gives bad results
|
{
"login": "youngwoo-yoon",
"id": 9062897,
"node_id": "MDQ6VXNlcjkwNjI4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9062897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/youngwoo-yoon",
"html_url": "https://github.com/youngwoo-yoon",
"followers_url": "https://api.github.com/users/youngwoo-yoon/followers",
"following_url": "https://api.github.com/users/youngwoo-yoon/following{/other_user}",
"gists_url": "https://api.github.com/users/youngwoo-yoon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/youngwoo-yoon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/youngwoo-yoon/subscriptions",
"organizations_url": "https://api.github.com/users/youngwoo-yoon/orgs",
"repos_url": "https://api.github.com/users/youngwoo-yoon/repos",
"events_url": "https://api.github.com/users/youngwoo-yoon/events{/privacy}",
"received_events_url": "https://api.github.com/users/youngwoo-yoon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @youngwoo-yoon \r\n\r\nThanks for the issue! \r\nWhat is your version of `accelerate` ? With the latest version (`0.15.0`) & same pytorch version I get (on a NVIDIA T4) on the minimal test example shared above that uses `device_map=auto` :\r\n```\r\nHello, nice to meet you. How are you?\r\n\r\nI’m a bit of a newbie to the world of web development, but I\r\n```",
"Hello, @younesbelkada \r\nI'm using the same version `0.15.0` of `accelerate`.\r\nI also got the correct result when I ran with `export CUDA_VISIBLE_DEVICES=0`\r\nStill wrong results with two GPUS `export CUDA_VISIBLE_DEVICES=0,1`",
"Thanks for the details! I still did not managed to reproduce, can you try this snippet instead:\r\n```\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\nmodel_name = 'EleutherAI/gpt-neo-125M'\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name, device_map={\"transformer.wte\":0, \"transformer.wpe\":0, \"transformer.h\":1, \"transformer.ln_f\":1, \"lm_head\":1})\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\n\r\nsentence = 'Hello, nice to meet you. How are'\r\nwith torch.no_grad():\r\n tokenize_input = tokenizer.tokenize(sentence)\r\n tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])\r\n gen_tokens = model.generate(tensor_input, max_length=32)\r\n generated = tokenizer.batch_decode(gen_tokens)[0]\r\n\r\nprint(generated)\r\n```\r\nand let me know if the problem still persists? \r\nWe're using the same Pytorch, `transformers`, `accelerate` version. The only difference is on the hardware (I am using 2xNvidia T4) \r\nCan you also try your script with `export CUDA_VISIBLE_DEVICES=1` instead of `export CUDA_VISIBLE_DEVICES=0`?",
"Thanks for the quick replies.\r\nThis is the result and it still doesn't look good.\r\n```\r\nHello, nice to meet you. How are!!!!!!!!!!!!!!!!!!!!!!!\r\n```\r\nMy original test code with `export CUDA_VISIBLE_DEVICES=1` gives the same correct result with `export CUDA_VISIBLE_DEVICES=0`\r\n```\r\nHello, nice to meet you. How are you?\r\n\r\nI’m a bit of a newbie to the world of web development, but I\r\n```",
"I am slightly unsure here about what could be causing the issue but I suspect it's highly correlated to the fact that you're running your script under two RTX A6000 but not sure\r\n@sgugger do you think that the problem can be related to `accelerate` & the fact that the script is running under two RTX A6000 instead of another hardware (i.e. have you seen similar discrepancy errors in the past)? \r\n@youngwoo-yoon could you ultimately try the script with the latest pytorch version (1.13.1)?",
"@younesbelkada, I got the same wrong result with PyTorch 1.13.1.\r\n```\r\nHello, nice to meet you. How are noise retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy\r\n```",
"Mmmm there is no reason for the script to give different results for different GPUs, especially since removing the device_map=\"auto\" gives the same results.\r\n\r\nI also can't reproduce on my side. Are you absolutely certain your script is launched in the same Python environment you are reporting? E.g. can you print the versions of Accelerate/Transformers/Pytorch in the same script?",
"I put the test scripts using cpu, gpu0, gpu1, and device_map=auto on a single python file to be sure.\r\n```\r\nfrom importlib.metadata import version\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\nprint('torch', version('torch'))\r\nprint('transformers', version('transformers'))\r\nprint('accelerate', version('accelerate'))\r\nprint('# of gpus: ', torch.cuda.device_count())\r\n\r\n# cpu\r\nmodel_name = 'EleutherAI/gpt-neo-125M'\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name)\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\n\r\nsentence = 'Hello, nice to meet you. How are'\r\nwith torch.no_grad():\r\n tokenize_input = tokenizer.tokenize(sentence)\r\n tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])\r\n gen_tokens = model.generate(tensor_input, max_length=32)\r\n generated = tokenizer.batch_decode(gen_tokens)[0]\r\n\r\nprint(generated)\r\nprint('-------------------------------------------')\r\n\r\n# on the gpu 0\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name)\r\nmodel = model.to('cuda:0')\r\n\r\nwith torch.no_grad():\r\n tokenize_input = tokenizer.tokenize(sentence)\r\n tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])\r\n tensor_input = tensor_input.to('cuda:0')\r\n gen_tokens = model.generate(tensor_input, max_length=32)\r\n generated = tokenizer.batch_decode(gen_tokens)[0]\r\n\r\nprint(generated)\r\nprint('-------------------------------------------')\r\n\r\n# on the gpu 1\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name)\r\nmodel = model.to('cuda:1')\r\n\r\nwith torch.no_grad():\r\n tokenize_input = tokenizer.tokenize(sentence)\r\n tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])\r\n tensor_input = tensor_input.to('cuda:1')\r\n gen_tokens = model.generate(tensor_input, max_length=32)\r\n generated = tokenizer.batch_decode(gen_tokens)[0]\r\n\r\nprint(generated)\r\nprint('-------------------------------------------')\r\n\r\n# with device_map=auto\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name, device_map='auto')\r\n\r\nwith torch.no_grad():\r\n tokenize_input = tokenizer.tokenize(sentence)\r\n tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])\r\n gen_tokens = model.generate(tensor_input, max_length=32)\r\n generated = tokenizer.batch_decode(gen_tokens)[0]\r\n\r\nprint(generated)\r\n```\r\n\r\nAnd this the result\r\n\r\n```\r\ntorch 1.13.1\r\ntransformers 4.25.1\r\naccelerate 0.15.0\r\n# of gpus: 2\r\nThe attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.\r\nSetting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\r\nHello, nice to meet you. How are you?\r\n\r\nI’m a bit of a newbie to the world of web development, but I\r\n-------------------------------------------\r\nThe attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.\r\nSetting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\r\nHello, nice to meet you. How are you?\r\n\r\nI’m a bit of a newbie to the world of web development, but I\r\n-------------------------------------------\r\nThe attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. 
Please pass your input's `attention_mask` to obtain reliable results.\r\nSetting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\r\nHello, nice to meet you. How are you?\r\n\r\nI’m a bit of a newbie to the world of web development, but I\r\n-------------------------------------------\r\nThe attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.\r\nSetting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\r\n/home/user/anaconda3/envs/task_temp/lib/python3.10/site-packages/transformers/generation/utils.py:1470: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on cpu, whereas the model is on cuda. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cuda') before running `.generate()`.\r\n warnings.warn(\r\nHello, nice to meet you. How are noise retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy\r\n```\r\n\r\nAnd this is `nvidia-smi` results\r\n\r\n```\r\nTue Dec 27 16:57:48 2022 \r\n+-----------------------------------------------------------------------------+\r\n| NVIDIA-SMI 460.106.00 Driver Version: 460.106.00 CUDA Version: 11.2 |\r\n|-------------------------------+----------------------+----------------------+\r\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n| | | MIG M. |\r\n|===============================+======================+======================|\r\n| 0 A100 80GB PCIe Off | 00000000:4F:00.0 Off | 0 |\r\n| N/A 36C P0 47W / 300W | 9MiB / 81251MiB | 0% Default |\r\n| | | Disabled |\r\n+-------------------------------+----------------------+----------------------+\r\n| 1 A100 80GB PCIe Off | 00000000:52:00.0 Off | 0 |\r\n| N/A 37C P0 45W / 300W | 9MiB / 81251MiB | 0% Default |\r\n| | | Disabled |\r\n+-------------------------------+----------------------+----------------------+\r\n \r\n+-----------------------------------------------------------------------------+\r\n| Processes: |\r\n| GPU GI CI PID Type Process name GPU Memory |\r\n| ID ID Usage |\r\n|=============================================================================|\r\n| 0 N/A N/A 2915 G /usr/lib/xorg/Xorg 4MiB |\r\n| 0 N/A N/A 119486 G /usr/lib/xorg/Xorg 4MiB |\r\n| 1 N/A N/A 2915 G /usr/lib/xorg/Xorg 4MiB |\r\n| 1 N/A N/A 119486 G /usr/lib/xorg/Xorg 4MiB |\r\n+-----------------------------------------------------------------------------+\r\n```",
"There is a warning \r\n\r\n``/home/user/anaconda3/envs/task_temp/lib/python3.10/site-packages/transformers/generation/utils.py:1470: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on cpu, whereas the model is on cuda. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cuda') before running `.generate()`.\r\n ``\r\n\r\nYou did move the inputs when processing on one of the two GPUs, it might be necessary here too. Could you print the `hf_device_map` attribute of the model and try to move the inputs to cuda device 0 and 1?",
"I moved inputs to cuda:0 and cuda:1 but both gave the same wrong result.\r\nBelow is the output when I moved inputs to cuda:0.\r\n```\r\ntorch 1.13.1\r\ntransformers 4.25.1\r\naccelerate 0.15.0\r\n# of gpus: 2\r\nhf_device_map output: {'transformer.wte': 0, 'lm_head': 0, 'transformer.wpe': 0, 'transformer.drop': 0, 'transformer.h.0': 0, 'transformer.h.1': 0, 'transformer.h.2': 0, 'transformer.h.3': 0, 'transformer.h.4': 0, 'transformer.h.5': 0, 'transformer.h.6': 1, 'transformer.h.7': 1, 'transformer.h.8': 1, 'transformer.h.9': 1, 'transformer.h.10': 1, 'transformer.h.11': 1, 'transformer.ln_f': 1}\r\nThe attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.\r\nSetting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\r\nHello, nice to meet you. How are noiseleanor pressuring retaliate incarcer boundousy]= incarcer incarcer high * Karin�� Annotationsousyousyousy pressuring retaliateousyousyousy\r\n```\r\n\r\nI will try to reproduce this issue on another machine having two GPUs.",
"It works well on another machine with two Quadro 6000 GPUs.\r\nI've tried different `device_map` strategies 'sequential' and 'balanced_low_0', but it still fails when two A100 GPUs are used.\r\n\r\nI ran `accelerate test` command which tests accelerate library but it also failed. It seems like a problem of `accelerate` library.\r\nI found some other people also had problems with A100 GPUs.\r\nRelated issue: https://github.com/huggingface/accelerate/issues/934\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @younesbelkada I got the same error with two V100, with `accelerate` version 0.18.0\r\n`prompt = 'Q: What is the largest animal?\\nA:'`\r\noutput:\r\n```<s>Q: What is the largest animal?\r\nA: The blue whale.\r\nQ: What is the largest animal?\r\nA: The blue whale. It is the largest animal on Earth. It is also the largest mammal. It is the largest creature that has ever lived.\r\nQ: What is the largest animal?\r\nA: The blue whale is the largest animal on Earth. It is also the largest mammal. It is the largest creature that has ever lived.\r\nQ: What is the largest animal?\r\nA: The blue whale is the largest animal on Earth. It is also the largest mammal. It is the largest creature that has ever lived.\r\nQ: What is the largest animal?\r\nA: The blue whale is the largest animal on Earth. It is also the largest mammal. It is the largest creature that has ever lived.\r\nQ: What is the largest animal?\r\nA: The blue whale is the largest animal on Earth. It is also the largest mammal. It is the largest creature that has ever lived.\r\nQ: What is the largest animal?\r\nA: The blue whale is the largest animal on Earth. It is also the largest mammal. It is the largest creature that has ever lived.\r\nQ\r\n```\r\n\r\ncode:\r\n```\r\nmodel_path = 'openlm-research/open_llama_3b'\r\n\r\ntokenizer = LlamaTokenizer.from_pretrained(model_path)\r\nmodel = LlamaForCausalLM.from_pretrained(\r\n model_path, torch_dtype=torch.float16, device_map='auto'\r\n)\r\n\r\nprompt = 'Q: What is the largest animal?\\nA:'\r\ninput_ids = tokenizer(prompt, return_tensors=\"pt\").input_ids\r\ninput_ids = input_ids.to('cuda')\r\n\r\ngeneration_output = model.generate(\r\n input_ids=input_ids, max_length=400\r\n)\r\nprint(tokenizer.decode(generation_output[0]))\r\n```\r\n\r\nHave you found a solution?",
"I think you should add the prompt which is the same one in the training. Moreover, please note the special token that you add.\r\nExample: \r\nIn the training, I tokenize: \r\n```\r\n`f\"Below is an instruction that describes a task. Write a response that appropriately completes the request.\\n ### Input: <s>{input}</s>. \\n### Response: <s>{ouput}</s>\"`\r\n```\r\nAfterward, I used the model: \r\n```\r\ntext = f\"Below is an instruction that describes a task. Write a response that appropriately completes the request.\\n ### Input: {input}. \\n### Response: \"\r\nbatch = tokenizer(text, return_tensors='pt', padding=True, return_token_type_ids=False)\r\nwith torch.cuda.amp.autocast():\r\n output_tokens = model.generate(**batch, max_new_tokens=500)\r\ndecode = tokenizer.decode(output_tokens[0], skip_special_tokens=True)\r\ndecode_text = decode[len(text):]\r\nprint(decode_text)\r\n```\r\n\r\nHope to help you!",
"\r\n\r\n\r\n> It works well on another machine with two Quadro 6000 GPUs. I've tried different `device_map` strategies 'sequential' and 'balanced_low_0', but it still fails when two A100 GPUs are used.\r\n> \r\n> I ran `accelerate test` command which tests accelerate library but it also failed. It seems like a problem of `accelerate` library. I found some other people also had problems with A100 GPUs. Related issue: [huggingface/accelerate#934](https://github.com/huggingface/accelerate/issues/934)\r\n\r\n@youngwoo-yoon hi, have you solved this problem? I have the same problem on A100",
"I'm also running into a similar issue, except with A6000s. With 1 A6000 and the rest of the weights on cpu, I get coherent text. With multiple A6000s, I get garbage outputs.",
"I solved this problem by disabling ACS in BIOS.\r\nThis document might be helpful to some of you. \r\nhttps://docs.nvidia.com/deeplearning/nccl/user-guide/docs/troubleshooting.html"
] | 1,672
| 1,693
| 1,675
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.17
- Python version: 3.8.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
- GPUs: two A100
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Minimal test example:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = 'EleutherAI/gpt-neo-125M'
model = AutoModelForCausalLM.from_pretrained(model_name, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(model_name)
sentence = 'Hello, nice to meet you. How are'
with torch.no_grad():
tokenize_input = tokenizer.tokenize(sentence)
tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])
gen_tokens = model.generate(tensor_input, max_length=32)
generated = tokenizer.batch_decode(gen_tokens)[0]
print(generated)
```
Results:
```
Hello, nice to meet you. How are noise retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy retaliateousy
```
The above result is not expected behavior.
Without `device_map='auto'` at line 5, i.e. with line 5 changed to `model = AutoModelForCausalLM.from_pretrained(model_name)`, it works correctly.
Results:
```
Hello, nice to meet you. How are you?
I’m a bit of a newbie to the world of web development, but I
```
My machine has two A100 (80 GB) GPUs, and I confirmed that the model is loaded on two GPUs when I use `device_map='auto'`.
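For debugging, here is a minimal sketch based on the suggestions in the comments above (the exact device indices depend on how `accelerate` shards the model on a given machine):
```
# Inspect how accelerate split the model across devices
print(model.hf_device_map)
# e.g. {'transformer.wte': 0, ..., 'transformer.ln_f': 1}

# Move the inputs to the device holding the first layers before generating
tensor_input = tensor_input.to("cuda:0")
gen_tokens = model.generate(tensor_input, max_length=32)
```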
### Expected behavior
Explained above
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20896/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/20896/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20895
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20895/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20895/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20895/events
|
https://github.com/huggingface/transformers/issues/20895
| 1,510,702,265
|
I_kwDOCUB6oc5aC3y5
| 20,895
|
Can't access ViTImageProcessor on transformers==4.25.1
|
{
"login": "navinelahi",
"id": 74642469,
"node_id": "MDQ6VXNlcjc0NjQyNDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/74642469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/navinelahi",
"html_url": "https://github.com/navinelahi",
"followers_url": "https://api.github.com/users/navinelahi/followers",
"following_url": "https://api.github.com/users/navinelahi/following{/other_user}",
"gists_url": "https://api.github.com/users/navinelahi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/navinelahi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/navinelahi/subscriptions",
"organizations_url": "https://api.github.com/users/navinelahi/orgs",
"repos_url": "https://api.github.com/users/navinelahi/repos",
"events_url": "https://api.github.com/users/navinelahi/events{/privacy}",
"received_events_url": "https://api.github.com/users/navinelahi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nRelease history is here: https://github.com/huggingface/transformers/releases.\r\n\r\nI just tried it out on Google Colab, it works fine for me. This might be an issue with your environment. Could you uninstall and install Transformers again?\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hello, just do: **pip install transformers --upgrade**\r\nthen problem fixed",
"Note that if you are running backlevel Python (e.g. 3.6) `pip install transformers --upgrade` will only get you as far as transformers 4.18"
] | 1,672
| 1,691
| 1,674
|
NONE
| null |
### System Info
ImportError: cannot import name 'ViTImageProcessor' from 'transformers' (/opt/conda/lib/python3.7/site-packages/transformers/__init__.py)
Am I using the wrong version? Should I downgrade? Where can I find the release history in general? I often find discrepancies like this in the usage documentation on the website, and they are often related to versioning.
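As a quick sanity check (a minimal sketch, assuming a standard install; it only verifies which build is actually importable in the active environment):
```
import transformers
print(transformers.__version__)  # should print 4.25.1 if the pin took effect
from transformers import ViTImageProcessor  # importable on 4.25.1, per the thread above
```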
### Who can help?
@sgugger @stevhliu
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
!pip install transformers==4.25.1
from transformers import ViTImageProcessor, BertTokenizer, VisionEncoderDecoderModel
### Expected behavior
It should work.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20895/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20894
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20894/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20894/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20894/events
|
https://github.com/huggingface/transformers/issues/20894
| 1,510,361,311
|
I_kwDOCUB6oc5aBkjf
| 20,894
|
`max_length` and `max_new_tokens` in `.generate()`
|
{
"login": "bofenghuang",
"id": 38185248,
"node_id": "MDQ6VXNlcjM4MTg1MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/38185248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bofenghuang",
"html_url": "https://github.com/bofenghuang",
"followers_url": "https://api.github.com/users/bofenghuang/followers",
"following_url": "https://api.github.com/users/bofenghuang/following{/other_user}",
"gists_url": "https://api.github.com/users/bofenghuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bofenghuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bofenghuang/subscriptions",
"organizations_url": "https://api.github.com/users/bofenghuang/orgs",
"repos_url": "https://api.github.com/users/bofenghuang/repos",
"events_url": "https://api.github.com/users/bofenghuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/bofenghuang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I kind of agree with you, but I think the goal is to get rid of `max_length` and rather use the `max_new_token` in the configuration of the model, which is why we should rather deprecate (as we did with previous version) the usage of both of the arguments",
"In this instance, only one was passed though, so this is clearly a bug :-)",
"Hey @bofenghuang 👋 \r\n\r\nDefinitely an unwanted bug that arose from the ongoing transition to generation config files. Having a look!"
] | 1,671
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
Hi @gante,
I got an error related to the change to `max_length` and `max_new_tokens` in this PR https://github.com/huggingface/transformers/pull/20388.
For models like Whisper, `max_length` is already defined by the maximum `PositionalEmbedding` length, which is 448 (https://huggingface.co/openai/whisper-base/blob/main/config.json#L42).
Sometimes I want to run faster inference by setting a smaller `max_new_tokens`, but I can no longer do that with the current change.
### Who can help?
@gante @sanchit-gandhi @ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Below is a code snippet to reproduce the behavior.
```python
import torch
from datasets import load_dataset
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq
model = AutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-base")
processor = AutoProcessor.from_pretrained("openai/whisper-base")
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio = dataset[0]["audio"]["array"]
inputs = processor(audio, return_tensors="pt")
input_features = inputs["input_features"]
generated_ids = model.generate(inputs=input_features, max_new_tokens=225)
```
### Expected behavior
When running this we see the following stack trace:
```
Using the latest cached version of the module from /home/bhuang/.cache/huggingface/modules/datasets_modules/datasets/hf-internal-testing--librispeech_asr_dummy/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b (last modified on Sun Dec 25 15:33:28 2022) since it couldn't be found locally at hf-internal-testing/librispeech_asr_dummy., or remotely on the Hugging Face Hub.
Found cached dataset librispeech_asr_dummy (/home/bhuang/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr_dummy/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b)
It is strongly recommended to pass the `sampling_rate` argument to this function. Failing to do so can result in silent errors that might be hard to debug.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[1], line 15
12 inputs = processor(audio, return_tensors="pt")
13 input_features = inputs["input_features"]
---> 15 generated_ids = model.generate(inputs=input_features, max_new_tokens=225)
File ~/anaconda3/envs/asr/lib/python3.8/site-packages/torch/autograd/grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs)
24 @functools.wraps(func)
25 def decorate_context(*args, **kwargs):
26 with self.clone():
---> 27 return func(*args, **kwargs)
File ~/transformers/src/transformers/generation/utils.py:1230, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, **kwargs)
1228 generation_config.max_length = generation_config.max_new_tokens + input_ids_seq_length
1229 elif not has_default_max_length and generation_config.max_new_tokens is not None:
-> 1230 raise ValueError(
1231 "Both `max_new_tokens` and `max_length` have been set but they serve the same purpose -- setting a"
1232 " limit to the generated output length. Remove one of those arguments. Please refer to the"
1233 " documentation for more information. "
1234 "(https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)"
1235 )
1237 if generation_config.min_length is not None and generation_config.min_length > generation_config.max_length:
1238 raise ValueError(
1239 f"Unfeasible length constraints: the minimum length ({generation_config.min_length}) is larger than"
1240 f" the maximum length ({generation_config.max_length})"
1241 )
ValueError: Both `max_new_tokens` and `max_length` have been set but they serve the same purpose -- setting a limit to the generated output length. Remove one of those arguments. Please refer to the documentation for more information. (https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)
```
I can set `model.config.max_length = 226` after loading the model to generate with the `max_length` I want, but I think it would be better to support this directly in the `.generate()` function.
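For reference, a minimal sketch of that workaround (here 226 assumes one decoder start token plus the 225 new tokens I want):
```python
# Workaround: bake the limit into the config and drop max_new_tokens
model.config.max_length = 226
generated_ids = model.generate(inputs=input_features)
```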
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20894/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20893
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20893/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20893/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20893/events
|
https://github.com/huggingface/transformers/issues/20893
| 1,510,361,119
|
I_kwDOCUB6oc5aBkgf
| 20,893
|
Finetune BLIP on custom dataset
|
{
"login": "dxlong2000",
"id": 54766384,
"node_id": "MDQ6VXNlcjU0NzY2Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/54766384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dxlong2000",
"html_url": "https://github.com/dxlong2000",
"followers_url": "https://api.github.com/users/dxlong2000/followers",
"following_url": "https://api.github.com/users/dxlong2000/following{/other_user}",
"gists_url": "https://api.github.com/users/dxlong2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dxlong2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dxlong2000/subscriptions",
"organizations_url": "https://api.github.com/users/dxlong2000/orgs",
"repos_url": "https://api.github.com/users/dxlong2000/repos",
"events_url": "https://api.github.com/users/dxlong2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/dxlong2000/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) to help debug your code as we keep issues for bugs and feature requests only.",
"hi @dxlong2000 \r\nThanks for the issue! \r\nCan you open an issue on the forums as suggested by @sgugger , and ping me there? (my handle on the forum is @ybelkada) As I am interested in this question and help you\r\nThanks!",
"Did I tag you in the correct way @younesbelkada?\r\nYou can check here: https://discuss.huggingface.co/t/finetune-blip-on-customer-dataset-20893/28446/2\r\n\r\nThanks @sgugger!\r\n\r\n",
"Thanks I can see the issue now! ",
"Ok I close the issue now!"
] | 1,671
| 1,672
| 1,672
|
NONE
| null |
### System Info
Dear team,
I was trying to finetune BLIP and ran into an error that I'm not sure how to solve. Could you give me some advice? Thanks!
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")
processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
class VQADataset(torch.utils.data.Dataset):
"""VQA (v2) dataset."""
def __init__(self, questions, answers, image_paths, processor):
self.questions = questions
self.answers = answers
self.image_paths = image_paths
self.processor = processor
def __len__(self):
return len(self.questions)
def __getitem__(self, idx):
# get image + text
question = self.questions[idx]
answer = self.answers[idx]
image = Image.open(self.image_paths[idx]).convert("RGB")
text = question
encoding = self.processor(image, text, padding="max_length", truncation=True, return_tensors="pt")
labels = self.processor.tokenizer.encode(
answer, max_length= 512, pad_to_max_length=True, return_tensors='pt'
)
encoding["labels"] = labels
# remove batch dimension
# for k,v in encoding.items(): encoding[k] = v.squeeze()
return encoding
from torch.utils.data import DataLoader
from tqdm import tqdm
def collate_fn(batch):
input_ids = [item['input_ids'] for item in batch]
pixel_values = [item['pixel_values'] for item in batch]
attention_mask = [item['attention_mask'] for item in batch]
labels = [item['labels'] for item in batch]
return batch
questions = [...]  # list of question strings, e.g. ["How many cats are there?"]
answers = [...]  # list of corresponding answers, e.g. ["two"]
image_paths = [...]  # list of paths to the corresponding images, e.g. ["./img_125.png"]
train_dataset = VQADataset(questions = questions,
answers = answers,
image_paths = image_paths,
processor=processor)
test_dataset = VQADataset(questions = questions,
answers = answers,
image_paths = image_paths,
processor=processor)
batch_size = 1
train_dataloader = DataLoader(train_dataset, collate_fn=collate_fn, batch_size=batch_size, shuffle=False)
test_dataloader = DataLoader(test_dataset, collate_fn=collate_fn, batch_size=batch_size, shuffle=False)
batch = next(iter(train_dataloader))
print(batch[0].keys()) # dict_keys(['pixel_values', 'input_ids', 'attention_mask', 'labels'])
import copy
test_input = copy.copy(batch[0]).to(device)
outputs = model(**test_input)
```
Example of the input:
```
questions = ["How many cats are there?"]
answers = ["two"]
image_paths = ["./img_125.png"]
```
### Expected behavior
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-27-f4758beea430>](https://localhost:8080/#) in <module>
2
3 test_input = copy.copy(batch[0]).to(device)
----> 4 outputs = model(**test_input)
6 frames
[/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py](https://localhost:8080/#) in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)
3024 if size_average is not None or reduce is not None:
3025 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 3026 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
3027
3028
ValueError: Expected input batch_size (0) to match target batch_size (511).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20893/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20892
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20892/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20892/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20892/events
|
https://github.com/huggingface/transformers/pull/20892
| 1,510,197,986
|
PR_kwDOCUB6oc5GKfV2
| 20,892
|
`MinNewTokensLengthLogitsProcessor` for `.generate` method #20814
|
{
"login": "kotikkonstantin",
"id": 22777646,
"node_id": "MDQ6VXNlcjIyNzc3NjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/22777646?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kotikkonstantin",
"html_url": "https://github.com/kotikkonstantin",
"followers_url": "https://api.github.com/users/kotikkonstantin/followers",
"following_url": "https://api.github.com/users/kotikkonstantin/following{/other_user}",
"gists_url": "https://api.github.com/users/kotikkonstantin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kotikkonstantin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kotikkonstantin/subscriptions",
"organizations_url": "https://api.github.com/users/kotikkonstantin/orgs",
"repos_url": "https://api.github.com/users/kotikkonstantin/repos",
"events_url": "https://api.github.com/users/kotikkonstantin/events{/privacy}",
"received_events_url": "https://api.github.com/users/kotikkonstantin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"There seems to be a problem with CI that we'll have to fix before merging. @kotikkonstantin, what fo you see when you click on \"Details\" next to \"setup_and_quality\" in the checks section below?",
"> There seems to be a problem with CI that we'll have to fix before merging. @kotikkonstantin, what fo you see when you click on \"Details\" next to \"setup_and_quality\" in the checks section below?\r\n\r\nSkipped:\r\n\r\n\r\nI suppose it's skipped because it keeps previous successful parts of the CI pipeline if it's not unchanged in the last commit",
"> > There seems to be a problem with CI that we'll have to fix before merging. @kotikkonstantin, what fo you see when you click on \"Details\" next to \"setup_and_quality\" in the checks section below?\r\n> \r\n> Skipped: \r\n> \r\n> I suppose it's skipped because it keeps previous successful parts of the CI pipeline if it's not unchanged in the last commit\r\n\r\nI'm not right here. After failed CI-pipeline run, in the following successful CI-pipeline run, `setup and quality` just was stopped instead of launching",
"@kotikkonstantin CircleCI is complaining about terms of service -- are you based in one of the countries linked [here](https://support.circleci.com/hc/en-us/articles/360043679453-CircleCI-Terms-of-Service-Violation-Sanctioned-Country)?",
"It seems there is an issue with your CircleCI permissions, the tests won't run.\r\nCould you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)? You might need to push an empty commit afterward.",
"@gante @sgugger \r\nThank you, guys, for your assistance in approaching it!\r\n\r\n I've filled out [Individual Appeal Form](https://docs.google.com/forms/d/e/1FAIpQLSeaVwzPnt2xREoZxe_ysnmNEJQUfBWrTI1TzkE7bq1h06eHqA/viewform). I hope I get access. If not, could you launch the CI pipeline on your own? ",
"@kotikkonstantin I think we can. Let's try it out:\r\n1 - add me as a contributor to your fork of `transformers`\r\n2 - I will push an empty commit there\r\n3 - maybe CI gets triggered",
"> @kotikkonstantin I think we can. Let's try it out: 1 - add me as a contributor to your fork of `transformers` 2 - I will push an empty commit there 3 - maybe CI gets triggered\r\n\r\n@gante done",
"@gante I can see CI-logs:\r\n<img width=\"1663\" alt=\"image\" src=\"https://user-images.githubusercontent.com/22777646/210263144-92cc0aae-5e86-4547-a6b5-012cd2346a06.png\">\r\n",
"@kotikkonstantin yup, I've took the liberty to run the `make fixup` shell command and push :) (which should fix it)"
] | 1,671
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
### **Approved** by [#20814 issue](https://github.com/huggingface/transformers/issues/20814)
### What does this PR do?
It implements the `MinNewTokensLengthLogitsProcessor` class, which enforces a minimum length of **NEW** tokens by setting the EOS (end-of-sequence) token probability to 0.
Framework: `pytorch`
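For illustration, a minimal standalone sketch of the idea (simplified, not the merged implementation; the class and argument names here are placeholders):
```python
import torch


class MinNewTokensSketch:
    """Masks the EOS logit until at least `min_new_tokens` tokens have been generated."""

    def __init__(self, prompt_length_to_skip: int, min_new_tokens: int, eos_token_id: int):
        self.prompt_length_to_skip = prompt_length_to_skip
        self.min_new_tokens = min_new_tokens
        self.eos_token_id = eos_token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        new_tokens_length = input_ids.shape[-1] - self.prompt_length_to_skip
        if new_tokens_length < self.min_new_tokens:
            # a probability of 0 for EOS corresponds to a logit of -inf before the softmax
            scores[:, self.eos_token_id] = -float("inf")
        return scores
```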
### Who can review?
@gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20892/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20892",
"html_url": "https://github.com/huggingface/transformers/pull/20892",
"diff_url": "https://github.com/huggingface/transformers/pull/20892.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20892.patch",
"merged_at": 1672745342000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20891
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20891/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20891/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20891/events
|
https://github.com/huggingface/transformers/pull/20891
| 1,510,122,032
|
PR_kwDOCUB6oc5GKQba
| 20,891
|
typo fix
|
{
"login": "nathan-barry",
"id": 38043930,
"node_id": "MDQ6VXNlcjM4MDQzOTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/38043930?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nathan-barry",
"html_url": "https://github.com/nathan-barry",
"followers_url": "https://api.github.com/users/nathan-barry/followers",
"following_url": "https://api.github.com/users/nathan-barry/following{/other_user}",
"gists_url": "https://api.github.com/users/nathan-barry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nathan-barry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nathan-barry/subscriptions",
"organizations_url": "https://api.github.com/users/nathan-barry/orgs",
"repos_url": "https://api.github.com/users/nathan-barry/repos",
"events_url": "https://api.github.com/users/nathan-barry/events{/privacy}",
"received_events_url": "https://api.github.com/users/nathan-barry/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Let me know what I should put in the original comment for typo fixes, am starting to go through the docs and will submit another if I spot any",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
Hello!
I just fixed this tiny typo. I'm just getting into open source; hopefully one day I can contribute non-trivial PRs.
Happy holidays!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20891/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20891",
"html_url": "https://github.com/huggingface/transformers/pull/20891",
"diff_url": "https://github.com/huggingface/transformers/pull/20891.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20891.patch",
"merged_at": 1672038384000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20890
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20890/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20890/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20890/events
|
https://github.com/huggingface/transformers/pull/20890
| 1,510,091,970
|
PR_kwDOCUB6oc5GKKQ-
| 20,890
|
update pyknp to rhoknp
|
{
"login": "conan1024hao",
"id": 50416856,
"node_id": "MDQ6VXNlcjUwNDE2ODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/50416856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/conan1024hao",
"html_url": "https://github.com/conan1024hao",
"followers_url": "https://api.github.com/users/conan1024hao/followers",
"following_url": "https://api.github.com/users/conan1024hao/following{/other_user}",
"gists_url": "https://api.github.com/users/conan1024hao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/conan1024hao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/conan1024hao/subscriptions",
"organizations_url": "https://api.github.com/users/conan1024hao/orgs",
"repos_url": "https://api.github.com/users/conan1024hao/repos",
"events_url": "https://api.github.com/users/conan1024hao/events{/privacy}",
"received_events_url": "https://api.github.com/users/conan1024hao/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Could you please edit the description of the PR to explain the reason for your change? Note that `rhoknp` does not seem to have available wheels for Python3.7 and Python 3.8, which we do support, so switching to this dependency does not seem possible until there is support for more Python versions.",
"@sgugger Sorry for the lack of description. We are currently waiting `rhoknp` to support Python3.7: https://github.com/ku-nlp/rhoknp/issues/93#issuecomment-1364685954\r\nWe will reopen this PR after supporting.",
"@sgugger Hi, I updated `rhoknp` to the newest version which supports Python3.7.\r\nHowever, I don't know why CI died. It seems that the system ran CI twice because I reopened the PR, one passed but one failed...\r\n\r\npassed: https://github.com/huggingface/transformers/actions/runs/3806897762/jobs/6476101506\r\ndied: https://github.com/huggingface/transformers/actions/runs/3806897943/jobs/6476101712\r\n\r\nHave a Happy New Year~"
] | 1,671
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
- This PR updates [pyknp](https://github.com/ku-nlp/pyknp) to [rhoknp](https://github.com/ku-nlp/rhoknp), a newer Juman++ package for Japanese morphological analysis.
- A bug was found in `pyknp` (see below), which also occurs when using `JumanppTokenizer` in `BertJapaneseTokenizer`. `rhoknp` is more robust and avoids this bug.
Code to reproduce:
```
from pyknp import Juman
text = "ありがとうございますm(_ _)m見つけるのが大変です。"
jumanpp = Juman()
for mrph in jumanpp.analysis(text).mrph_list():
print(mrph)
```
Error message:
```
Traceback (most recent call last):
...
File "/local/11249119.1.gpu/venv/python38-transformers/lib/python3.8/site-packages/pyknp/juman/juman.py", line 98, in analysis
return self.juman(input_str, juman_format)
File "/local/11249119.1.gpu/venv/python38-transformers/lib/python3.8/site-packages/pyknp/juman/juman.py", line 85, in juman
result = MList(self.juman_lines(input_str), juman_format)
File "/local/11249119.1.gpu/venv/python38-transformers/lib/python3.8/site-packages/pyknp/juman/mlist.py", line 29, in __init__
mrph = Morpheme(line, mid, juman_format)
File "/local/11249119.1.gpu/venv/python38-transformers/lib/python3.8/site-packages/pyknp/juman/morpheme.py", line 81, in __init__
self._parse_spec(spec.strip("\n"))
File "/local/11249119.1.gpu/venv/python38-transformers/lib/python3.8/site-packages/pyknp/juman/morpheme.py", line 145, in _parse_spec
self.hinsi_id = int(parts[4])
ValueError: invalid literal for int() with base 10: 'm(_'
```
We believe the reason is that `pyknp` was written with fullwidth characters in mind, so halfwidth sequences such as `m(_ _)` are unexpected. `rhoknp` is more robust and avoids this bug.
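For comparison, a sketch of the same analysis with `rhoknp` (assuming a local Juman++ binary is installed; the method names follow the `rhoknp` documentation and should be double-checked against the installed version):
```
from rhoknp import Jumanpp
jumanpp = Jumanpp()
sentence = jumanpp.apply_to_sentence("ありがとうございますm(_ _)m見つけるのが大変です。")
for morpheme in sentence.morphemes:
    print(morpheme)
```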
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20890/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20890",
"html_url": "https://github.com/huggingface/transformers/pull/20890",
"diff_url": "https://github.com/huggingface/transformers/pull/20890.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20890.patch",
"merged_at": 1672467747000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20889
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20889/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20889/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20889/events
|
https://github.com/huggingface/transformers/issues/20889
| 1,510,025,894
|
I_kwDOCUB6oc5aASqm
| 20,889
|
Disable ClearML automatic model uploading
|
{
"login": "david1542",
"id": 9879252,
"node_id": "MDQ6VXNlcjk4NzkyNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9879252?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/david1542",
"html_url": "https://github.com/david1542",
"followers_url": "https://api.github.com/users/david1542/followers",
"following_url": "https://api.github.com/users/david1542/following{/other_user}",
"gists_url": "https://api.github.com/users/david1542/gists{/gist_id}",
"starred_url": "https://api.github.com/users/david1542/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david1542/subscriptions",
"organizations_url": "https://api.github.com/users/david1542/orgs",
"repos_url": "https://api.github.com/users/david1542/repos",
"events_url": "https://api.github.com/users/david1542/events{/privacy}",
"received_events_url": "https://api.github.com/users/david1542/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Would you like to make a PR using the same kind of environment variable as WandB and CometML to control model logging through clearML?\r\n\r\n(PS: There is no limit to the number of uploaded models on the Hugging Face Hub when you set `push_to_hub=True` ;-) )",
"Yes, I'll try to create a PR :)",
"I'm having the same issue!",
"@sgugger I created a [PR](https://github.com/huggingface/transformers/pull/20969). Can you please review? :)"
] | 1,671
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
### Feature request
Allow users of `transformers` to disable ClearML's automatic model uploading. Perhaps we could also let users write their own integration callbacks in case they want finer-grained control over the integration.
The place where the saving happens is `src/transformers/integrations.py`:
```
def on_save(self, args, state, control, **kwargs):
if self._clearml_task and state.is_world_process_zero:
ckpt_dir = f"checkpoint-{state.global_step}"
artifact_path = os.path.join(args.output_dir, ckpt_dir)
logger.info(f"Logging checkpoint artifacts in {ckpt_dir}. This may take time.")
self._clearml_task.update_output_model(artifact_path, iteration=state.global_step, auto_delete_file=False)
```
We should add a condition to the main `if`, similar to what `NeptuneCallback` and `MLflowCallback` do; a sketch is shown below.
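A minimal sketch of what that condition could look like (the `CLEARML_LOG_MODEL` variable name is an assumption on my side, chosen to mirror the WandB/Comet pattern; the final name would be settled in the PR):
```
import os

# In ClearMLCallback — read an opt-out flag once during setup.
# CLEARML_LOG_MODEL is an assumed name, mirroring e.g. WANDB_LOG_MODEL;
# defaulting to "TRUE" keeps the current upload behavior unless disabled.
self._log_model = os.getenv("CLEARML_LOG_MODEL", "TRUE").upper() in {"TRUE", "1"}

def on_save(self, args, state, control, **kwargs):
    if self._log_model and self._clearml_task and state.is_world_process_zero:
        ckpt_dir = f"checkpoint-{state.global_step}"
        artifact_path = os.path.join(args.output_dir, ckpt_dir)
        logger.info(f"Logging checkpoint artifacts in {ckpt_dir}. This may take time.")
        self._clearml_task.update_output_model(artifact_path, iteration=state.global_step, auto_delete_file=False)
```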
### Motivation
Several experiments that I ran were interrupted because I hit the maximum number of model uploads. However, I was not interested in uploading my models in the first place. Hence, a configuration option would be appropriate in this case.
### Your contribution
I can submit a PR if the contributors would help figure out the correct way of handling it :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20889/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20888
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20888/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20888/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20888/events
|
https://github.com/huggingface/transformers/issues/20888
| 1,509,914,497
|
I_kwDOCUB6oc5Z_3eB
| 20,888
|
Unable to import name 'pad_shard_unpad' from 'flax.jax_utils' (clm language modelling flax example)
|
{
"login": "SupreethRao99",
"id": 55043035,
"node_id": "MDQ6VXNlcjU1MDQzMDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/55043035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SupreethRao99",
"html_url": "https://github.com/SupreethRao99",
"followers_url": "https://api.github.com/users/SupreethRao99/followers",
"following_url": "https://api.github.com/users/SupreethRao99/following{/other_user}",
"gists_url": "https://api.github.com/users/SupreethRao99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SupreethRao99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SupreethRao99/subscriptions",
"organizations_url": "https://api.github.com/users/SupreethRao99/orgs",
"repos_url": "https://api.github.com/users/SupreethRao99/repos",
"events_url": "https://api.github.com/users/SupreethRao99/events{/privacy}",
"received_events_url": "https://api.github.com/users/SupreethRao99/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sanchit-gandhi @ArthurZucker, could you help us with this ^",
"Hi @SupreethRao99 \r\n\r\nIndeed the function `pad_shard_unpad` cannot be imported from flax.jax_utils using flax==0.4.2\r\nCan you try with a latest version of `flax`? For example `from flax.jax_utils import pad_shard_unpad` works fine under `flax==0.5.3` | `pip install --upgrade flax` or `pip install flax==0.5.3`",
"Hi @younesbelkada , Upgrading flax to the latest version caused some issues jax and jaxlib but using `flax==0.5.3` along with the latest kaggle runtime (30-12-2022) fixed the issue. Thanks a lot !"
] | 1,671
| 1,672
| 1,672
|
NONE
| null |
### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-5.4.88+-x86_64-with-glibc2.2.5
- Python version: 3.8.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): 2.10.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.4.2 (tpu)
- Jax version: 0.3.10
- JaxLib version: 0.3.10
- Flax version: 0.4.2
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes (TPUv3-8 1VM Kaggle)
### Who can help?
@sanchit-gandhi @ArthurZucker @younesbelkada @sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I've been trying to run `transformers/examples/flax/language-modeling/run_clm_flax.py` on Kaggle's new TPUv3-8 1VM instance type, a TPU VM with the TPU devices directly attached.
When I ran the causal language modelling example in `transformers/examples/flax/language-modeling` with the following command
```
!python /kaggle/working/transformers/examples/flax/language-modeling/run_clm_flax.py \
--output_dir="<models_direcotry>" \
--model_type="gpt2" \
--config_name="<custom_GPT_Config>" \
--tokenizer_name="<custom_tokenizer>" \
--dataset_name="<path_to_dataset_on_hf_hub>" \
--do_train --do_eval \
--block_size="128" \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="64" \
--learning_rate="5e-3" --warmup_steps="1000" \
--adam_beta1="0.9" --adam_beta2="0.98" --weight_decay="0.01" \
--overwrite_output_dir \
--num_train_epochs="20" \
--logging_steps="500" \
--save_steps="2500" \
--eval_steps="2500"
```
I'm getting the following error
```
WARNING: Logging before InitGoogle() is written to STDERR
I0000 00:00:1671849395.203410 2540 tpu_initializer_helper.cc:116] libtpu.so is already in use by process with pid 12. Not attempting to load libtpu.so in this process.
2022-12-24 02:36:35.892840: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2022-12-24 02:36:35.936831: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-12-24 02:36:36.693369: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2022-12-24 02:36:36.693463: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2022-12-24 02:36:36.693483: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Traceback (most recent call last):
File "/kaggle/working/transformers/examples/flax/language-modeling/run_clm_flax.py", line 46, in <module>
from flax.jax_utils import pad_shard_unpad, unreplicate
ImportError: cannot import name 'pad_shard_unpad' from 'flax.jax_utils' (/usr/local/lib/python3.8/site-packages/flax/jax_utils.py)
```
I can't upgrade jax, jaxlib, or flax, as doing so interferes with the attached TPUs, causing them to become unavailable.
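As a stopgap, a guarded import along the lines of the sketch below would at least fail with an actionable message instead of a bare `ImportError`. This is only a sketch of mine, not part of the official script, and it assumes `pad_shard_unpad` is only available from roughly flax 0.5.x onward, as noted in the comments above.
```
# Minimal guard (a sketch, not the official fix): assume pad_shard_unpad only
# exists in newer flax releases, so fail early with a clear upgrade hint.
try:
    from flax.jax_utils import pad_shard_unpad, unreplicate
except ImportError as err:
    raise ImportError(
        "run_clm_flax.py needs a flax release that provides `pad_shard_unpad` "
        "(e.g. `pip install flax==0.5.3`)."
    ) from err
```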
### Expected behavior
GPT2 training should begin on 8 TPUv3 devices
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20888/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20887
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20887/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20887/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20887/events
|
https://github.com/huggingface/transformers/issues/20887
| 1,509,829,163
|
I_kwDOCUB6oc5Z_ior
| 20,887
|
eval OOM when loading a pretrained model with output_hidden_states set to True for BertForSequenceClassification
|
{
"login": "yunjiangster",
"id": 1061224,
"node_id": "MDQ6VXNlcjEwNjEyMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1061224?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yunjiangster",
"html_url": "https://github.com/yunjiangster",
"followers_url": "https://api.github.com/users/yunjiangster/followers",
"following_url": "https://api.github.com/users/yunjiangster/following{/other_user}",
"gists_url": "https://api.github.com/users/yunjiangster/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yunjiangster/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yunjiangster/subscriptions",
"organizations_url": "https://api.github.com/users/yunjiangster/orgs",
"repos_url": "https://api.github.com/users/yunjiangster/repos",
"events_url": "https://api.github.com/users/yunjiangster/events{/privacy}",
"received_events_url": "https://api.github.com/users/yunjiangster/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You will need to use the [eval_accumulation_steps](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.eval_accumulation_steps) argument in your `TrainingArguments` as it's not possible to accumulate all those tensors coming from the hidden states on the GPU.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,675
| 1,675
|
NONE
| null |
### System Info
- `transformers` version: 4.23.1
- Platform: Linux-4.15.0-166-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.12
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.10.0+cu102 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run the following snippet
```
model = BertForSequenceClassification.from_pretrained('snunlp/KR-BERT-char16424', output_hidden_states=True)
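# output_hidden_states=True makes every forward pass return all layer hidden
# states; during evaluation the Trainer accumulates these on the GPU, which
# is what exhausts memory here.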
# Now create any training_args, tokenized_train, tokenized_valid, and compute_metrics function (such as those in the official tutorial).
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_train,
eval_dataset=tokenized_valid,
compute_metrics=compute_metrics,
)
trainer.train()
```
Eval hits a CUDA OOM on a GPU with 24 GB of memory after about 100 examples.
### Expected behavior
I tried warm-starting a BERT classification model from a pretrained embedding model, which sets output_hidden_states to True in config.json, but eval then runs into an OOM issue.
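For reference, the sketch below shows the workaround suggested in the comments: `eval_accumulation_steps` periodically moves the accumulated prediction tensors (including the per-layer hidden states) from the GPU to the CPU. The value 10 is a hypothetical starting point, not something taken from this issue; tune it for your memory budget.
```
from transformers import TrainingArguments

# Sketch of the suggested workaround: offload accumulated eval tensors to the
# CPU every 10 steps instead of holding them all on the 24 GB GPU.
# eval_accumulation_steps=10 is a hypothetical value, not a recommendation.
training_args = TrainingArguments(
    output_dir="out",
    eval_accumulation_steps=10,
)
```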
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20887/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20886
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20886/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20886/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20886/events
|
https://github.com/huggingface/transformers/pull/20886
| 1,509,571,452
|
PR_kwDOCUB6oc5GIdHo
| 20,886
|
[RobertaPreLayernorm] Fixes the daily CI test
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"BTW, this model does not have tf or flax port yet, no?",
"It does 😉 "
] | 1,671
| 1,671
| 1,671
|
COLLABORATOR
| null |
# What does this PR do?
The checkpoint was not correct. It was simply a typo, as the `flax` and `tf` tests were not affected by this.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20886/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20886",
"html_url": "https://github.com/huggingface/transformers/pull/20886",
"diff_url": "https://github.com/huggingface/transformers/pull/20886.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20886.patch",
"merged_at": 1671821717000
}
|