Columns (name: dtype, observed range; ⌀ = nullable):
url: stringlengths 62-66
repository_url: stringclasses 1 value
labels_url: stringlengths 76-80
comments_url: stringlengths 71-75
events_url: stringlengths 69-73
html_url: stringlengths 50-56
id: int64 377M-2.15B
node_id: stringlengths 18-32
number: int64 1-29.2k
title: stringlengths 1-487
user: dict
labels: list
state: stringclasses 2 values
locked: bool 2 classes
assignee: dict
assignees: list
comments: list
created_at: int64 1.54k-1.71k
updated_at: int64 1.54k-1.71k
closed_at: int64 1.54k-1.71k ⌀
author_association: stringclasses 4 values
active_lock_reason: stringclasses 2 values
body: stringlengths 0-234k ⌀
reactions: dict
timeline_url: stringlengths 71-75
state_reason: stringclasses 3 values
draft: bool 2 classes
pull_request: dict
https://api.github.com/repos/huggingface/transformers/issues/20084
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20084/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20084/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20084/events
|
https://github.com/huggingface/transformers/pull/20084
| 1,437,320,675
|
PR_kwDOCUB6oc5CRh2R
| 20,084
|
[Docs] Add resources of OpenAI GPT
|
{
"login": "shogohida",
"id": 10365357,
"node_id": "MDQ6VXNlcjEwMzY1MzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/10365357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shogohida",
"html_url": "https://github.com/shogohida",
"followers_url": "https://api.github.com/users/shogohida/followers",
"following_url": "https://api.github.com/users/shogohida/following{/other_user}",
"gists_url": "https://api.github.com/users/shogohida/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shogohida/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shogohida/subscriptions",
"organizations_url": "https://api.github.com/users/shogohida/orgs",
"repos_url": "https://api.github.com/users/shogohida/repos",
"events_url": "https://api.github.com/users/shogohida/events{/privacy}",
"received_events_url": "https://api.github.com/users/shogohida/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20084). All of your documentation changes will be reflected on that endpoint.",
"Hi @stevhliu, I added relevant scripts and notebooks so please have a look!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20084). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20084). All of your documentation changes will be reflected on that endpoint.",
"@stevhliu \r\nI changed the doc following your comments! Please have a look",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20084). All of your documentation changes will be reflected on that endpoint.",
"@stevhliu @sgugger \r\nThanks for your review! Hope to contribute more to transformers! "
] | 1,667
| 1,668
| 1,668
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds resources of OpenAI GPT according to [this issue](https://github.com/huggingface/transformers/issues/20055)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #20055 (partially)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20084/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20084",
"html_url": "https://github.com/huggingface/transformers/pull/20084",
"diff_url": "https://github.com/huggingface/transformers/pull/20084.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20084.patch",
"merged_at": 1668615452000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20083
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20083/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20083/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20083/events
|
https://github.com/huggingface/transformers/issues/20083
| 1,437,305,820
|
I_kwDOCUB6oc5Vq4vc
| 20,083
|
Where is the Translation template ?
|
{
"login": "bfss",
"id": 31245245,
"node_id": "MDQ6VXNlcjMxMjQ1MjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/31245245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bfss",
"html_url": "https://github.com/bfss",
"followers_url": "https://api.github.com/users/bfss/followers",
"following_url": "https://api.github.com/users/bfss/following{/other_user}",
"gists_url": "https://api.github.com/users/bfss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bfss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bfss/subscriptions",
"organizations_url": "https://api.github.com/users/bfss/orgs",
"repos_url": "https://api.github.com/users/bfss/repos",
"events_url": "https://api.github.com/users/bfss/events{/privacy}",
"received_events_url": "https://api.github.com/users/bfss/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"There are some related issues where people encountered the same problem:\r\n* #17404\r\n* #17028 \r\n\r\nFor context, @bfss is stating that the [TRANSLATING.md](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md) documentation says:\r\n> To get started, navigate to the [Issues](https://github.com/huggingface/transformers/issues) page of this repo and check if anyone else has opened an issue for your language. If not, open a new issue by selecting the \"Translation template\" from the \"New issue\" button.\r\n\r\nHowever, no such issue template exists.\r\n\r\n---\r\n\r\nThat said, there are also some larger issues tracking the progress of the translation of a certain language. Perhaps these can be used as an informal template or guide if no issue for your language exists.\r\n* https://github.com/huggingface/transformers/issues?q=Tranformers+documentation+translation+to+",
"Thank you for your reply~ @tomaarsen ",
"Trying to add the template right now. https://github.com/huggingface/transformers/pull/20199"
] | 1,667
| 1,668
| 1,667
|
CONTRIBUTOR
| null |
I want to translate the docs in my leisure time. I followed the guide, but I could not find the Translation template...
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20083/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20082
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20082/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20082/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20082/events
|
https://github.com/huggingface/transformers/issues/20082
| 1,437,174,461
|
I_kwDOCUB6oc5VqYq9
| 20,082
|
Models trained using Deepspeed ZeRO stage 3 have corrupted model weight shape
|
{
"login": "JohnnyRacer",
"id": 77214388,
"node_id": "MDQ6VXNlcjc3MjE0Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/77214388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnnyRacer",
"html_url": "https://github.com/JohnnyRacer",
"followers_url": "https://api.github.com/users/JohnnyRacer/followers",
"following_url": "https://api.github.com/users/JohnnyRacer/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnnyRacer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnnyRacer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnnyRacer/subscriptions",
"organizations_url": "https://api.github.com/users/JohnnyRacer/orgs",
"repos_url": "https://api.github.com/users/JohnnyRacer/repos",
"events_url": "https://api.github.com/users/JohnnyRacer/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnnyRacer/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Full error message with all the incorrect layers below:\r\n```\r\nRuntimeError: Error(s) in loading state_dict for OPTForCausalLM:\r\n size mismatch for model.decoder.embed_tokens.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([50272, 2560]).\r\n size mismatch for model.decoder.embed_positions.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2050, 2560]).\r\n size mismatch for model.decoder.layers.0.self_attn.k_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.0.self_attn.v_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.0.self_attn.q_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.0.self_attn.out_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.0.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([10240, 2560]).\r\n size mismatch for model.decoder.layers.0.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 10240]).\r\n [... the same six size-mismatch lines repeat for model.decoder.layers.1 through model.decoder.layers.21, every checkpoint shape reported as torch.Size([0]) ...]
mismatch for model.decoder.layers.21.self_attn.q_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.21.self_attn.out_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.21.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([10240, 2560]).\r\n size mismatch for model.decoder.layers.21.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 10240]).\r\n size mismatch for model.decoder.layers.22.self_attn.k_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.22.self_attn.v_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.22.self_attn.q_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.22.self_attn.out_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.22.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([10240, 2560]).\r\n size mismatch for model.decoder.layers.22.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 10240]).\r\n size mismatch for model.decoder.layers.23.self_attn.k_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 
2560]).\r\n size mismatch for model.decoder.layers.23.self_attn.v_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.23.self_attn.q_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.23.self_attn.out_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.23.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([10240, 2560]).\r\n size mismatch for model.decoder.layers.23.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 10240]).\r\n size mismatch for model.decoder.layers.24.self_attn.k_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.24.self_attn.v_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.24.self_attn.q_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.24.self_attn.out_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.24.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([10240, 2560]).\r\n size mismatch for model.decoder.layers.24.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is 
torch.Size([2560, 10240]).\r\n size mismatch for model.decoder.layers.25.self_attn.k_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.25.self_attn.v_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.25.self_attn.q_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.25.self_attn.out_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.25.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([10240, 2560]).\r\n size mismatch for model.decoder.layers.25.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 10240]).\r\n size mismatch for model.decoder.layers.26.self_attn.k_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.26.self_attn.v_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.26.self_attn.q_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.26.self_attn.out_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.26.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the 
shape in current model is torch.Size([10240, 2560]).\r\n size mismatch for model.decoder.layers.26.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 10240]).\r\n size mismatch for model.decoder.layers.27.self_attn.k_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.27.self_attn.v_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.27.self_attn.q_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.27.self_attn.out_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.27.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([10240, 2560]).\r\n size mismatch for model.decoder.layers.27.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 10240]).\r\n size mismatch for model.decoder.layers.28.self_attn.k_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.28.self_attn.v_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.28.self_attn.q_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.28.self_attn.out_proj.weight: copying a param with shape 
torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.28.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([10240, 2560]).\r\n size mismatch for model.decoder.layers.28.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 10240]).\r\n size mismatch for model.decoder.layers.29.self_attn.k_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.29.self_attn.v_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.29.self_attn.q_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.29.self_attn.out_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.29.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([10240, 2560]).\r\n size mismatch for model.decoder.layers.29.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 10240]).\r\n size mismatch for model.decoder.layers.30.self_attn.k_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.30.self_attn.v_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.30.self_attn.q_proj.weight: copying a param 
with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.30.self_attn.out_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.30.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([10240, 2560]).\r\n size mismatch for model.decoder.layers.30.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 10240]).\r\n size mismatch for model.decoder.layers.31.self_attn.k_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.31.self_attn.v_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.31.self_attn.q_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.31.self_attn.out_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).\r\n size mismatch for model.decoder.layers.31.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([10240, 2560]).\r\n size mismatch for model.decoder.layers.31.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 10240]).\r\n size mismatch for lm_head.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([50272, 2560]).\r\n You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method.\r\n```",
"cc @pacman100 ",
"Hello @JohnnyRacer, can you provide script of how you are saving and loading weights post training? Also, what is the accelerate config? ",
"Hey @pacman100 . I am using the unmodified `run_clm_no_trainer.py` from the latest commit of `transformers`, with the following training commands : \r\n```\r\naccelerate launch run_clm_no_trainer.py \\ \r\n--model_name_or_path facebook/opt-1.3b \\\r\n--dataset_name wikitext \\\r\n--num_train_epochs 6 \\\r\n--block_size 128 \\\r\n--output_dir ./opt-1.3b-wikitext\r\n```\r\nAccelerate config from config file : \r\n```json\r\n{\r\n \"compute_environment\": \"LOCAL_MACHINE\",\r\n \"deepspeed_config\": {\r\n \"gradient_accumulation_steps\": 1,\r\n \"offload_optimizer_device\": \"cpu\",\r\n \"offload_param_device\": \"none\",\r\n \"zero3_init_flag\": false,\r\n \"zero3_save_16bit_model\": false,\r\n \"zero_stage\": 3\r\n },\r\n \"distributed_type\": \"DEEPSPEED\",\r\n \"downcast_bf16\": \"no\",\r\n \"fsdp_config\": {},\r\n \"gpu_ids\": null,\r\n \"machine_rank\": 0,\r\n \"main_process_ip\": null,\r\n \"main_process_port\": null,\r\n \"main_training_function\": \"main\",\r\n \"mixed_precision\": \"fp16\",\r\n \"num_machines\": 1,\r\n \"num_processes\": 2,\r\n \"rdzv_backend\": \"static\",\r\n \"same_network\": true,\r\n \"use_cpu\": false\r\n}\r\n```\r\nThe config doesn't show it but I have it configured for a multi-GPU setup on a single local instance with 2 accelerators.\r\n\r\nAnd I have tried loading the models with the following commands using `transfomers.AutoModelForCausalLM`\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM\r\nmodel = AutoModelForCausalLM.from_pretrained(\"facebook/opt-1.3b\")\r\nmodel.load_state_dict(torch.load(\"./opt-1.3b-wikitext/pytorch_model.bin\"))\r\n```\r\nAs well as loading the model directly from the directory via the model config by using:\r\n```python\r\nfrom transformers import AutoModelForCausalLM\r\nmodel = AutoModelForCausalLM.from_pretrained(\"opt-1.3b-wikitext\")\r\n```\r\nBoth of these methods results in the same error that I have described above.",
"Hello @JohnnyRacer , please refer below code snippet on changes required when saving deepspeed ZeRO-3 model. The example can be found here: [deepspeed_with_config_support.py](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/deepspeed_with_config_support.py)\r\n\r\nhttps://github.com/huggingface/accelerate/blob/cea6aaa1161d45f7f23ef33fcc3b0a5999ebb5a1/examples/by_feature/deepspeed_with_config_support.py#L712-L723",
"Thanks @pacman100 . Just finished training the model and can confirm loading works correctly with the script you have linked. However I still had to modify the script to include this fix for an issue I had earlier to ensure the weights can be correctly loaded, [link to issue](https://github.com/huggingface/transformers/issues/19959). ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,667
| 1,671
| 1,671
|
NONE
| null |
### System Info
transformers version: 4.21.1 | 4.24.0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am currently trying to use DeepSpeed to finetune an AutoModelForCausalLM model (facebook/opt-1.3b) on a multi-GPU instance with ZeRO optimization, using the unmodified `run_clm_no_trainer.py` script from the examples. When I use ZeRO stage 2 to train the model, the model weights can be loaded normally. However, when I try using ZeRO stage 3 with CPU offload for the optimizer states, training proceeds normally with loss values and metrics that make sense, but I get the following error when I try loading the weights.
```
RuntimeError: Error(s) in loading state_dict for OPTForCausalLM:
size mismatch for model.decoder.embed_tokens.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([50272, 2560]).
size mismatch for model.decoder.embed_positions.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2050, 2560]).
size mismatch for model.decoder.layers.0.self_attn.k_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).
size mismatch for model.decoder.layers.0.self_attn.v_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).
size mismatch for model.decoder.layers.0.self_attn.q_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).
size mismatch for model.decoder.layers.0.self_attn.out_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 2560]).
...
size mismatch for model.decoder.layers.31.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([10240, 2560]).
size mismatch for model.decoder.layers.31.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2560, 10240]).
size mismatch for lm_head.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([50272, 2560]).
You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method.
```
This is very strange, as the `torch.Size([0])` error seems to be pervasive across all layers of the model, suggesting the weights are just empty and uninitialized. This is just speculation, as the documentation does not seem to address the specifics of training with different ZeRO stages. I have tried loading the model manually using `AutoModelForCausalLM.from_pretrained('./model_dir')`, where `model_dir` is where the weights were saved after training, yet the same error is still thrown. I am not sure if this is a bug or if using ZeRO stage 3 is currently unsupported. Any help would be much appreciated.
### Expected behavior
Models trained using ZeRO stage 3 should load correctly.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20082/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20081
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20081/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20081/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20081/events
|
https://github.com/huggingface/transformers/issues/20081
| 1,437,063,655
|
I_kwDOCUB6oc5Vp9nn
| 20,081
|
Discrepancy between PegasusTokenizer and PegasusTokenizerFast
|
{
"login": "Hannibal046",
"id": 38466901,
"node_id": "MDQ6VXNlcjM4NDY2OTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hannibal046",
"html_url": "https://github.com/Hannibal046",
"followers_url": "https://api.github.com/users/Hannibal046/followers",
"following_url": "https://api.github.com/users/Hannibal046/following{/other_user}",
"gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions",
"organizations_url": "https://api.github.com/users/Hannibal046/orgs",
"repos_url": "https://api.github.com/users/Hannibal046/repos",
"events_url": "https://api.github.com/users/Hannibal046/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hannibal046/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I think this may be a problem with `PegasusTokenizer.decode()`. It indeed adds `</s>` to the end of the sentence, but the decoder fails to keep it even when `skip_special_tokens=False`\r\n<img width=\"819\" alt=\"draft ipynb — selfmem SSH: wict3090 2022-11-06 10-19-50\" src=\"https://user-images.githubusercontent.com/38466901/200150974-1bde2180-8c6a-4ea0-94f8-6e36a20fceca.png\">\r\n",
"I think this was recently fixed by #15775. At least on `google/pegasus-xsum` and the main branch of Transformers, I don't see any differences in the outputs.\r\nNot sure if this is the model you were using since yours is local. Could you give us a repo ID on the Hub if the issue persists on your side?",
"after updating transformers to the latest version by\r\n```shell\r\npip uninstall transformers\r\npip install transformers\r\n```\r\non `google/pegasus-large` and `google/pegasus-xsum`, the problem still exists:\r\n\r\n\r\n",
"Sorry, I didn't notice this PR has not been merged yet.\r\nThis https://github.com/huggingface/transformers/pull/15775 indeed solves this problem. Thanks !\r\n\r\n"
] | 1,667
| 1,667
| 1,667
|
NONE
| null |
### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-5.8.0-51-generic-x86_64-with-glibc2.10
- Python version: 3.8.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import PegasusTokenizerFast,PegasusTokenizer
toker_fast = PegasusTokenizerFast.from_pretrained("/data/pretrained_model/pegasus_large/")
toker = PegasusTokenizer.from_pretrained("/data/pretrained_model/pegasus_large/")
print(toker_fast.decode(toker_fast.encode("huggingface/transfomer"),skip_special_tokens=False)) ## huggingface/transfomer</s>
print(toker.decode(toker.encode("huggingface/transfomer"),skip_special_tokens=False)) ## huggingface/transfomer
```
### Expected behavior
These two should be the same. I suppose this is a problem with `PegasusTokenizer`, because the EOS token is needed in generation tasks.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20081/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20080
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20080/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20080/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20080/events
|
https://github.com/huggingface/transformers/pull/20080
| 1,437,030,398
|
PR_kwDOCUB6oc5CQpCK
| 20,080
|
[Doctest] Add configuration_dpr.py
|
{
"login": "Saad135",
"id": 22683922,
"node_id": "MDQ6VXNlcjIyNjgzOTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/22683922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saad135",
"html_url": "https://github.com/Saad135",
"followers_url": "https://api.github.com/users/Saad135/followers",
"following_url": "https://api.github.com/users/Saad135/following{/other_user}",
"gists_url": "https://api.github.com/users/Saad135/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saad135/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saad135/subscriptions",
"organizations_url": "https://api.github.com/users/Saad135/orgs",
"repos_url": "https://api.github.com/users/Saad135/repos",
"events_url": "https://api.github.com/users/Saad135/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saad135/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds configuration_dpr.py to utils/documentation_tests.txt
Based on https://github.com/huggingface/transformers/issues/19487
@ydshieh can you please have a look? thanks :D
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20080/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20080",
"html_url": "https://github.com/huggingface/transformers/pull/20080",
"diff_url": "https://github.com/huggingface/transformers/pull/20080.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20080.patch",
"merged_at": 1667829000000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20079
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20079/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20079/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20079/events
|
https://github.com/huggingface/transformers/issues/20079
| 1,436,894,843
|
I_kwDOCUB6oc5VpUZ7
| 20,079
|
Exception on saving results in official glue example scripts
|
{
"login": "li-plus",
"id": 39846316,
"node_id": "MDQ6VXNlcjM5ODQ2MzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/39846316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/li-plus",
"html_url": "https://github.com/li-plus",
"followers_url": "https://api.github.com/users/li-plus/followers",
"following_url": "https://api.github.com/users/li-plus/following{/other_user}",
"gists_url": "https://api.github.com/users/li-plus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/li-plus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/li-plus/subscriptions",
"organizations_url": "https://api.github.com/users/li-plus/orgs",
"repos_url": "https://api.github.com/users/li-plus/repos",
"events_url": "https://api.github.com/users/li-plus/events{/privacy}",
"received_events_url": "https://api.github.com/users/li-plus/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Yes, the whole `eval_metric` dict should probably be dumped without accessing keys. Do you want to open a PR with this change?\r\ncc @muellerzr who wrote this.",
"Yeah, I'd like to help. The `eval_metric` should be dumped with all its keys prefixed by `eval_`, just like what `run_glue.py` does.\r\nhttps://github.com/huggingface/transformers/blob/504db92e7da010070c36e185332420a1d52c12b2/examples/pytorch/text-classification/run_glue.py#L573\r\n\r\nI happen to find an example script that already fixed this issue by prefixing all keys in `eval_metric` before saving it.\r\nhttps://github.com/huggingface/transformers/blob/6cc06d17394f5715cdf2d13a1ef7680bedaee9e2/examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py#L66-L86\r\n\r\n I will create a PR to migrate this solution to all remaining unfixed examples. Is it ok?",
"That would be great, yeah!"
] | 1,667
| 1,668
| 1,668
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.25.0.dev0
- Platform: Linux-4.14.81.bm.22-amd64-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@sgugger, @patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I was running the official glue example script `transformers/examples/pytorch/text-classification/run_glue_no_trainer.py` on STS-B task.
```sh
export TASK_NAME=stsb
python run_glue_no_trainer.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--max_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/
```
The training went well, but on saving the results it raised the error below:
```
Configuration saved in /tmp/stsb/config.json
Model weights saved in /tmp/stsb/pytorch_model.bin
tokenizer config file saved in /tmp/stsb/tokenizer_config.json
Special tokens file saved in /tmp/stsb/special_tokens_map.json
Traceback (most recent call last):
File "run_glue_no_trainer.py", line 633, in <module>
main()
File "run_glue_no_trainer.py", line 629, in main
json.dump({"eval_accuracy": eval_metric["accuracy"]}, f)
KeyError: 'accuracy'
```
### Expected behavior
Some of the GLUE tasks (STS-B, CoLA) don't use "accuracy" as their metric, so the metric keys need to be checked before accessing `eval_metric`.
https://github.com/huggingface/transformers/blob/504db92e7da010070c36e185332420a1d52c12b2/examples/pytorch/text-classification/run_glue_no_trainer.py#L627-L629
BTW, I have noticed that this block of code also appears in lots of other example scripts like multiple-choice, semantic-segmentation, etc. I'm not sure whether those scripts have the same issue.
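The fix discussed in the comments — dumping every metric key with an `eval_` prefix instead of hard-coding `"accuracy"` — can be sketched as follows. This is a minimal illustration (function names are hypothetical), not the exact code from the example scripts:

```python
import json


def prefix_eval_metrics(eval_metric):
    """Prefix every metric key with `eval_`, so tasks whose metric is not
    "accuracy" (e.g. STS-B's pearson/spearmanr) are saved without a KeyError."""
    return {f"eval_{k}": v for k, v in eval_metric.items()}


def save_eval_results(eval_metric, path):
    # Dump the whole metric dict instead of accessing a hard-coded key.
    with open(path, "w") as f:
        json.dump(prefix_eval_metrics(eval_metric), f)
```

This mirrors what `run_qa_beam_search_no_trainer.py` already does and works for any GLUE task regardless of which metrics `evaluate` returns.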
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20079/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20078
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20078/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20078/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20078/events
|
https://github.com/huggingface/transformers/issues/20078
| 1,436,862,441
|
I_kwDOCUB6oc5VpMfp
| 20,078
|
Converting CLIPText Model (transformers.CLIPTextModel) Embeddings Back to Text
|
{
"login": "mbdzi",
"id": 112744187,
"node_id": "U_kgDOBrhW-w",
"avatar_url": "https://avatars.githubusercontent.com/u/112744187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mbdzi",
"html_url": "https://github.com/mbdzi",
"followers_url": "https://api.github.com/users/mbdzi/followers",
"following_url": "https://api.github.com/users/mbdzi/following{/other_user}",
"gists_url": "https://api.github.com/users/mbdzi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mbdzi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mbdzi/subscriptions",
"organizations_url": "https://api.github.com/users/mbdzi/orgs",
"repos_url": "https://api.github.com/users/mbdzi/repos",
"events_url": "https://api.github.com/users/mbdzi/events{/privacy}",
"received_events_url": "https://api.github.com/users/mbdzi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only."
] | 1,667
| 1,668
| 1,668
|
NONE
| null |
### Feature request
Is there a method for converting CLIPText model (`transformers.CLIPTextModel`) embeddings back to text? I looked in the documentation, but could not find anything that addresses this specific query.
I am also interested in finding out the following:
- Are there tools in the Hugging Face ecosystem for calculating the weighted average of embeddings?
- Is there a way to query the CLIP Model using embeddings and not text or image inputs?
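On the weighted-average question: independent of any Hugging Face API, equal-length embedding vectors can be combined with a plain weighted mean. A minimal pure-Python sketch (illustrative only; in practice one would use `numpy` or `torch` on the model's output tensors):

```python
def weighted_average(vectors, weights):
    """Weighted mean of equal-length embedding vectors.

    Weights are normalized internally, so they need not sum to 1.
    """
    total = sum(weights)
    norm = [w / total for w in weights]
    dim = len(vectors[0])
    return [
        sum(norm[i] * vectors[i][d] for i in range(len(vectors)))
        for d in range(dim)
    ]
```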
### Motivation
I would like to edit prompts mathematically. And, the easiest way to do this would be to get the vector embeddings and to effect the desired mathematical transformations on them.
### Your contribution
I have no contribution beyond my question.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20078/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20077
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20077/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20077/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20077/events
|
https://github.com/huggingface/transformers/pull/20077
| 1,436,734,682
|
PR_kwDOCUB6oc5CPqjW
| 20,077
|
Use huggingface_hub.model_info() to get pipeline_tag
|
{
"login": "y-tag",
"id": 387433,
"node_id": "MDQ6VXNlcjM4NzQzMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/387433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/y-tag",
"html_url": "https://github.com/y-tag",
"followers_url": "https://api.github.com/users/y-tag/followers",
"following_url": "https://api.github.com/users/y-tag/following{/other_user}",
"gists_url": "https://api.github.com/users/y-tag/gists{/gist_id}",
"starred_url": "https://api.github.com/users/y-tag/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/y-tag/subscriptions",
"organizations_url": "https://api.github.com/users/y-tag/orgs",
"repos_url": "https://api.github.com/users/y-tag/repos",
"events_url": "https://api.github.com/users/y-tag/events{/privacy}",
"received_events_url": "https://api.github.com/users/y-tag/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for your PR @y-tag!"
] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR replaces a raw HTTP GET with `huggingface_hub.model_info()` to get the `pipeline_tag` of a model.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Narsil @LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20077/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20077",
"html_url": "https://github.com/huggingface/transformers/pull/20077",
"diff_url": "https://github.com/huggingface/transformers/pull/20077.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20077.patch",
"merged_at": 1667833679000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20076
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20076/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20076/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20076/events
|
https://github.com/huggingface/transformers/pull/20076
| 1,436,652,238
|
PR_kwDOCUB6oc5CPZUg
| 20,076
|
[Minor change] Remove mention of paying subscription
|
{
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,692
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
Sorry for the two pull requests, I used the online edit function and wasn't sure how to group the two commits into one PR.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20076/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20076",
"html_url": "https://github.com/huggingface/transformers/pull/20076",
"diff_url": "https://github.com/huggingface/transformers/pull/20076.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20076.patch",
"merged_at": 1667597078000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20075
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20075/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20075/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20075/events
|
https://github.com/huggingface/transformers/pull/20075
| 1,436,651,358
|
PR_kwDOCUB6oc5CPZIW
| 20,075
|
[Minor change] Remove mention of paying subscription
|
{
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,692
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Maybe @sgugger ? I'm not sure
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20075/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20075",
"html_url": "https://github.com/huggingface/transformers/pull/20075",
"diff_url": "https://github.com/huggingface/transformers/pull/20075.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20075.patch",
"merged_at": 1667597103000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20074
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20074/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20074/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20074/events
|
https://github.com/huggingface/transformers/pull/20074
| 1,436,640,937
|
PR_kwDOCUB6oc5CPW53
| 20,074
|
Add SpA-Former
|
{
"login": "shivance",
"id": 51750587,
"node_id": "MDQ6VXNlcjUxNzUwNTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/51750587?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shivance",
"html_url": "https://github.com/shivance",
"followers_url": "https://api.github.com/users/shivance/followers",
"following_url": "https://api.github.com/users/shivance/following{/other_user}",
"gists_url": "https://api.github.com/users/shivance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shivance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shivance/subscriptions",
"organizations_url": "https://api.github.com/users/shivance/orgs",
"repos_url": "https://api.github.com/users/shivance/repos",
"events_url": "https://api.github.com/users/shivance/events{/privacy}",
"received_events_url": "https://api.github.com/users/shivance/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @shivance do you want to proceed with this?"
] | 1,667
| 1,671
| 1,671
|
NONE
| null |
# What does this PR do?
This PR adds the SpA-Former model to the 🤗 repository.
I also opened an Issue for adding the model https://github.com/huggingface/transformers/issues/19971
# Who can review?
@NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20074/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20074",
"html_url": "https://github.com/huggingface/transformers/pull/20074",
"diff_url": "https://github.com/huggingface/transformers/pull/20074.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20074.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20072
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20072/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20072/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20072/events
|
https://github.com/huggingface/transformers/issues/20072
| 1,436,633,539
|
I_kwDOCUB6oc5VoUnD
| 20,072
|
save_pretrained not working correctly when using device_map="auto" for big models in from_pretrained
|
{
"login": "bZehner-git",
"id": 56656856,
"node_id": "MDQ6VXNlcjU2NjU2ODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/56656856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bZehner-git",
"html_url": "https://github.com/bZehner-git",
"followers_url": "https://api.github.com/users/bZehner-git/followers",
"following_url": "https://api.github.com/users/bZehner-git/following{/other_user}",
"gists_url": "https://api.github.com/users/bZehner-git/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bZehner-git/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bZehner-git/subscriptions",
"organizations_url": "https://api.github.com/users/bZehner-git/orgs",
"repos_url": "https://api.github.com/users/bZehner-git/repos",
"events_url": "https://api.github.com/users/bZehner-git/events{/privacy}",
"received_events_url": "https://api.github.com/users/bZehner-git/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for the report! Using `device_map=\"auto\"` is only for inference, and it indeed does not work with `save_pretrained` yet, especially with offloaded weights.\r\n\r\nWe will look at adding support for this in the future!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,667
| 1,671
| 1,671
|
NONE
| null |
### System Info
- `transformers` version: 4.21.3
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.10.5
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.0 (True)
- Tensorflow version (GPU?): 2.9.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <True>
- Using distributed or parallel set-up in script?: <True>
### Who can help?
@patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
ML_MODEL = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl", device_map="auto")
ML_MODEL.save_pretrained("custom_path")
```
1) Load Flan-T5-xxl using from_pretrained with device_map="auto"
In my case the model is loaded to 24 GB GPU RAM (RTX 3090) and the rest of the model is loaded to the CPU RAM
2) Save Model to custom path using save_pretrained
In my case save_pretrained seems to save only the model weights from GPU RAM. Checking the file sizes in the custom directory, only chunks 1 and 2 of the model.bin files have the expected 10 GB. Chunk 3 is only partly saved, and chunks 4 and 5 have only a few kB.
### Expected behavior
When using from_pretrained without device_map="auto", the model is completely loaded into CPU RAM and also completely saved by save_pretrained: chunks 1 to 4 have the expected 10 GB and chunk 5 the expected 6 GB, the same file sizes as in the Transformers cache directory. The same complete save is expected when using from_pretrained with device_map="auto".
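A quick way to spot this kind of truncated checkpoint is to compare the shard file sizes against a minimum threshold. The helper below is hypothetical (not part of transformers); feed it a mapping of shard filename to size, e.g. built with `os.path.getsize` over the save directory:

```python
def find_truncated_shards(shard_sizes, min_bytes=1_000_000_000):
    """Return names of checkpoint shards that look suspiciously small.

    shard_sizes: mapping of shard filename -> size in bytes. A shard of only
    a few kB (like the chunks 4 and 5 described above) is almost certainly
    incomplete; non-final shards of a large model are typically several GB.
    """
    return [
        name
        for name, size in sorted(shard_sizes.items())
        if size < min_bytes
    ]
```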
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20072/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/20072/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20071
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20071/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20071/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20071/events
|
https://github.com/huggingface/transformers/issues/20071
| 1,436,619,026
|
I_kwDOCUB6oc5VoRES
| 20,071
|
Transformer is not compatible with Python 3.11.0
|
{
"login": "donhuvy",
"id": 1328316,
"node_id": "MDQ6VXNlcjEzMjgzMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1328316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donhuvy",
"html_url": "https://github.com/donhuvy",
"followers_url": "https://api.github.com/users/donhuvy/followers",
"following_url": "https://api.github.com/users/donhuvy/following{/other_user}",
"gists_url": "https://api.github.com/users/donhuvy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/donhuvy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donhuvy/subscriptions",
"organizations_url": "https://api.github.com/users/donhuvy/orgs",
"repos_url": "https://api.github.com/users/donhuvy/repos",
"events_url": "https://api.github.com/users/donhuvy/events{/privacy}",
"received_events_url": "https://api.github.com/users/donhuvy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @donhuvy, I don't see anything specific to `transformers`, only to `torch`. How is this related to transformers?",
"It is https://github.com/donhuvy/vy_thesis/blob/main/source/train.py#L22\r\n\r\n```\r\ntokenizer=transformers.AutoTokenizer.from_pretrained(hyps_file[\"encoder\"], use_fast=False),\r\n```\r\nFrom your experience, Are you sure transformer and other of transformer's dependencies work ok with Python 3.11.0?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Is support for Python 3.11 available now?\r\nPlease share details.\r\nThanks",
"As mentioned in the comments above, the problem came from a soft dependency of Transformers (PyTorch). I think they support Python 3.11 now but bet to ask on their repo/forums if you are encountering any issue :-)",
"According to my tests, the latest `huggingface/transformers` installation on Python 3.11 fails because it tries to install `sentencepiece==0.1.97`.\r\n\r\n\r\n\r\n[sentencepiece==0.1.98 is enabled for Python 3.11](https://github.com/google/sentencepiece/issues/810)\r\nCould you please change `huggingface/transformers` requirement to `sentencepiece==0.1.98` ??\r\n",
"We do not pin [`sentencepiece`](https://github.com/huggingface/transformers/blob/ee1eb3b325ce360bbd6c910c1402bca9dfb418f9/setup.py#L165), so the upper bound comes from something in your environment, not Transformers. In fact our CI (which run `pip install transformers[all]`) installs `sentencepiece==0.1.99`."
] | 1,667
| 1,690
| 1,671
|
NONE
| null |
### System Info
```
Microsoft Windows [Version 10.0.22621.674]
(c) Microsoft Corporation. All rights reserved.
C:\Users\donhu>wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
'wget' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\donhu># For security purposes, please check the contents of collect_env.py before running it.
'#' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\donhu>python collect_env.py
python: can't open file 'C:\\Users\\donhu\\collect_env.py': [Errno 2] No such file or directory
C:\Users\donhu>cd d:
D:\
C:\Users\donhu>cd /d D:
D:\>python collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22621-SP0
Is CUDA available: N/A
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1660 SUPER
Nvidia driver version: 512.77
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
Versions of relevant libraries:
[pip3] numpy==1.23.4
[conda] Could not collect
D:\>
```
### Who can help?
@donhuvy
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Use project https://github.com/donhuvy/vy_thesis
Set up the project. Transformers is not compatible with Python 3.11.0
### Expected behavior
Transformers is compatible with Python 3.11.0
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20071/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20070
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20070/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20070/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20070/events
|
https://github.com/huggingface/transformers/issues/20070
| 1,436,441,515
|
I_kwDOCUB6oc5Vnlur
| 20,070
|
IndexError running ESMFold
|
{
"login": "pstjohn",
"id": 2576846,
"node_id": "MDQ6VXNlcjI1NzY4NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2576846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pstjohn",
"html_url": "https://github.com/pstjohn",
"followers_url": "https://api.github.com/users/pstjohn/followers",
"following_url": "https://api.github.com/users/pstjohn/following{/other_user}",
"gists_url": "https://api.github.com/users/pstjohn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pstjohn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pstjohn/subscriptions",
"organizations_url": "https://api.github.com/users/pstjohn/orgs",
"repos_url": "https://api.github.com/users/pstjohn/repos",
"events_url": "https://api.github.com/users/pstjohn/events{/privacy}",
"received_events_url": "https://api.github.com/users/pstjohn/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I guess you need to call\r\n```\r\ninputs = tokenizer([\"MLKNVQVQLV\"], return_tensors=\"pt\", add_special_tokens=False)\r\n```\r\ncode example was already updated:\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/esm/modeling_esmfold.py#L2103\r\n",
"Thanks @maxjeblick! And yes - the `ESMFold` tokenizer doesn't use special tokens, but the other `ESM` tokenizers do. I'll see if I can set this in the config so that users don't have to keep remembering it, because I kept getting errors from forgetting it too!"
] | 1,667
| 1,668
| 1,667
|
NONE
| null |
### System Info
CentOS 7, transformers-4.25.0.dev0, Python 3.10.6
### Who can help?
@Rocketknight1
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running the following example script (added in #20000) is throwing an IndexError for me.
I tried both the `Rocketknight1/esmfold_v1` and `facebook/esmfold_v1` model repositories
```python
from transformers import AutoTokenizer, EsmForProteinFolding
model = EsmForProteinFolding.from_pretrained("facebook/esmfold_v1")
tokenizer = AutoTokenizer.from_pretrained("facebook/esmfold_v1")
inputs = tokenizer(["MLKNVQVQLV"], return_tensors="pt") # A tiny random peptide
outputs = model(**inputs)
folded_positions = outputs.positions
```
```
Traceback (most recent call last):
File "test_esmfold.py", line 7, in <module>
outputs = model(**inputs)
File "torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "transformers/models/esm/modeling_esmfold.py", line 2121, in forward
esmaa = self.af2_idx_to_esm_idx(aa, attention_mask)
File "transformers/models/esm/modeling_esmfold.py", line 2211, in af2_idx_to_esm_idx
return self.af2_to_esm[aa]
IndexError: index 24 is out of bounds for dimension 0 with size 22
```
### Expected behavior
No IndexError
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20070/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20069
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20069/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20069/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20069/events
|
https://github.com/huggingface/transformers/pull/20069
| 1,436,269,547
|
PR_kwDOCUB6oc5COH5y
| 20,069
|
Show installed libraries and their versions
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
COLLABORATOR
| null |
# What does this PR do?
Similar to #20026, to make this information easier to access.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20069/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20069/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20069",
"html_url": "https://github.com/huggingface/transformers/pull/20069",
"diff_url": "https://github.com/huggingface/transformers/pull/20069.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20069.patch",
"merged_at": 1667581398000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20068
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20068/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20068/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20068/events
|
https://github.com/huggingface/transformers/pull/20068
| 1,436,246,558
|
PR_kwDOCUB6oc5CODBR
| 20,068
|
Update documentation on absolute position embed seq2seq models
|
{
"login": "jordiclive",
"id": 44066010,
"node_id": "MDQ6VXNlcjQ0MDY2MDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/44066010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jordiclive",
"html_url": "https://github.com/jordiclive",
"followers_url": "https://api.github.com/users/jordiclive/followers",
"following_url": "https://api.github.com/users/jordiclive/following{/other_user}",
"gists_url": "https://api.github.com/users/jordiclive/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jordiclive/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jordiclive/subscriptions",
"organizations_url": "https://api.github.com/users/jordiclive/orgs",
"repos_url": "https://api.github.com/users/jordiclive/repos",
"events_url": "https://api.github.com/users/jordiclive/events{/privacy}",
"received_events_url": "https://api.github.com/users/jordiclive/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
Update documentation on seq2seq models with absolute positional embeddings to be in line with BERT and GPT2.
Issue #19581. For models with absolute positional embeddings, left-padding is usually not a good idea: if the positional embeddings are not shifted by the right amount for each element in the batch, the results will not be correct.
Further work may be required to incorporate a `position_ids` kwarg (possibly for both encoder and decoder) for these models, similar to BERT and GPT2. At the very least, however, the documentation should be updated to be consistent with BERT/GPT2 and provide a warning.
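To make the left-padding pitfall concrete, here is a minimal pure-Python sketch of the shift the documentation warns about. The helper name is hypothetical; real models would compute this with tensor ops (e.g. `attention_mask.cumsum(-1) - 1`).

```python
# Hypothetical helper: assign absolute positions 0..n-1 to real tokens,
# and a fixed dummy position to left-pad tokens (mask == 0).
def position_ids_for_left_padding(attention_mask, pad_position=0):
    ids = []
    for row in attention_mask:
        running = 0
        row_ids = []
        for m in row:
            if m:
                row_ids.append(running)
                running += 1
            else:
                row_ids.append(pad_position)
        ids.append(row_ids)
    return ids

# Left-padded batch: without the shift, the first real token of row 0
# would wrongly receive absolute position 2 instead of 0.
mask = [[0, 0, 1, 1, 1],
        [1, 1, 1, 1, 1]]
print(position_ids_for_left_padding(mask))
# [[0, 0, 0, 1, 2], [0, 1, 2, 3, 4]]
```

Without such a shift, every left-padded row sees its real tokens at the wrong absolute positions, which is why the docs advise against left-padding for these models.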
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20068/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20068",
"html_url": "https://github.com/huggingface/transformers/pull/20068",
"diff_url": "https://github.com/huggingface/transformers/pull/20068.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20068.patch",
"merged_at": 1667575964000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20067
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20067/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20067/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20067/events
|
https://github.com/huggingface/transformers/pull/20067
| 1,436,162,620
|
PR_kwDOCUB6oc5CNxAN
| 20,067
|
Update READMEs for ESMFold and add notebooks
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Woah, PyCharm murdered the formatting on that table. One sec!",
"Formatting fixed now, sorry about that!",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
MEMBER
| null |
This PR adds ESMFold to the main README and adds links to the protein LM and protein folding notebooks.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20067/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20067",
"html_url": "https://github.com/huggingface/transformers/pull/20067",
"diff_url": "https://github.com/huggingface/transformers/pull/20067.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20067.patch",
"merged_at": 1667574613000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20066
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20066/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20066/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20066/events
|
https://github.com/huggingface/transformers/pull/20066
| 1,436,131,611
|
PR_kwDOCUB6oc5CNqvM
| 20,066
|
Add CLIPSeg
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Is the CLIPSeg yet to be released in the latest version?\r\n"
] | 1,667
| 1,669
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds CLIPSeg, a nice extension of CLIP for zero-shot and one-shot (image-guided) image segmentation.
To do:
- [x] transfer checkpoints and update code
- [x] update base_model_prefix
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20066/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20066",
"html_url": "https://github.com/huggingface/transformers/pull/20066",
"diff_url": "https://github.com/huggingface/transformers/pull/20066.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20066.patch",
"merged_at": 1667901347000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20065
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20065/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20065/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20065/events
|
https://github.com/huggingface/transformers/pull/20065
| 1,436,114,389
|
PR_kwDOCUB6oc5CNnEG
| 20,065
|
Update defaults and logic to match old FE
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
COLLABORATOR
| null |
# What does this PR do?
Updates defaults and logic in image processors to match the previous feature extractors. Fixes some broken inference tests.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20065/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20065",
"html_url": "https://github.com/huggingface/transformers/pull/20065",
"diff_url": "https://github.com/huggingface/transformers/pull/20065.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20065.patch",
"merged_at": 1667589296000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20064
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20064/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20064/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20064/events
|
https://github.com/huggingface/transformers/pull/20064
| 1,435,982,899
|
PR_kwDOCUB6oc5CNK4c
| 20,064
|
[Trainer] Fix model name in push_to_hub
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
Trainer's `push_to_hub` fails if `model_name` is specified in the kwargs.
The variable `model_name` is explicitly defined in the `push_to_hub` method:
https://github.com/huggingface/transformers/blob/d447c460b16626c656e4d7a9425f648fe69517b3/src/transformers/trainer.py#L3449
And then subsequently passed **alongside** the kwargs to the method `create_model_card`:
https://github.com/huggingface/transformers/blob/d447c460b16626c656e4d7a9425f648fe69517b3/src/transformers/trainer.py#L3471
This means if `model_name` is specified in the kwargs, it is passed **twice** to `create_model_card`, once from the variable and once from the kwargs, giving a `TypeError`:
```python
from transformers import Trainer, TrainingArguments, Wav2Vec2ForCTC
model = Wav2Vec2ForCTC.from_pretrained("hf-internal-testing/tiny-random-wav2vec2")
training_args = TrainingArguments(output_dir="dummy_dir_for_issue")
trainer = Trainer(args=training_args, model=model)
trainer.push_to_hub(model_name="pretty-model-name")
```
**Traceback**
```
File ~/transformers/src/transformers/trainer.py:3471, in Trainer.push_to_hub(self, commit_message, blocking, **kwargs)
3469 # push separately the model card to be independant from the rest of the model
3470 if self.args.should_save:
-> 3471 self.create_model_card(model_name=model_name, **kwargs)
3472 try:
3473 self.repo.push_to_hub(
3474 commit_message="update model card README.md", blocking=blocking, auto_lfs_prune=True
3475 )
TypeError: create_model_card() got multiple values for keyword argument 'model_name'
```
Fixes #20058.
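The double-kwarg failure described above can be reproduced in isolation. The sketch below uses a stand-in function, not the real `Trainer.create_model_card`, and the pop-based fix is one possible approach, not necessarily the one the PR implements:

```python
# Hypothetical stand-in for Trainer.create_model_card, to show the
# "got multiple values for keyword argument" failure mode.
def create_model_card(model_name=None, **kwargs):
    return model_name

kwargs = {"model_name": "pretty-model-name"}   # user-supplied kwargs
model_name = "dummy_dir_for_issue"             # value derived inside push_to_hub

# Passing model_name both explicitly and via **kwargs raises TypeError.
try:
    create_model_card(model_name=model_name, **kwargs)
    raised = False
except TypeError:
    raised = True
print(raised)  # True

# One possible fix: pop the user-supplied value out of kwargs first,
# so it takes precedence and is only passed once.
model_name = kwargs.pop("model_name", model_name)
print(create_model_card(model_name=model_name, **kwargs))  # pretty-model-name
```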
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20064/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20064",
"html_url": "https://github.com/huggingface/transformers/pull/20064",
"diff_url": "https://github.com/huggingface/transformers/pull/20064.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20064.patch",
"merged_at": 1667569221000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20063
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20063/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20063/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20063/events
|
https://github.com/huggingface/transformers/pull/20063
| 1,435,895,129
|
PR_kwDOCUB6oc5CM3uj
| 20,063
|
fix(typo): Update README.md
|
{
"login": "bofenghuang",
"id": 38185248,
"node_id": "MDQ6VXNlcjM4MTg1MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/38185248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bofenghuang",
"html_url": "https://github.com/bofenghuang",
"followers_url": "https://api.github.com/users/bofenghuang/followers",
"following_url": "https://api.github.com/users/bofenghuang/following{/other_user}",
"gists_url": "https://api.github.com/users/bofenghuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bofenghuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bofenghuang/subscriptions",
"organizations_url": "https://api.github.com/users/bofenghuang/orgs",
"repos_url": "https://api.github.com/users/bofenghuang/repos",
"events_url": "https://api.github.com/users/bofenghuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/bofenghuang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20063/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20063",
"html_url": "https://github.com/huggingface/transformers/pull/20063",
"diff_url": "https://github.com/huggingface/transformers/pull/20063.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20063.patch",
"merged_at": 1667566615000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20062
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20062/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20062/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20062/events
|
https://github.com/huggingface/transformers/pull/20062
| 1,435,723,834
|
PR_kwDOCUB6oc5CMTKQ
| 20,062
|
fix `tokenizer_type` to avoid error when loading checkpoint back
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
1. fix `tokenizer_type` to avoid error when loading checkpoint back
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20062/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20062",
"html_url": "https://github.com/huggingface/transformers/pull/20062",
"diff_url": "https://github.com/huggingface/transformers/pull/20062.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20062.patch",
"merged_at": 1667568842000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20061
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20061/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20061/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20061/events
|
https://github.com/huggingface/transformers/pull/20061
| 1,435,715,762
|
PR_kwDOCUB6oc5CMRdC
| 20,061
|
Change constant torch.tensor to torch.full
|
{
"login": "MerHS",
"id": 6321657,
"node_id": "MDQ6VXNlcjYzMjE2NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6321657?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MerHS",
"html_url": "https://github.com/MerHS",
"followers_url": "https://api.github.com/users/MerHS/followers",
"following_url": "https://api.github.com/users/MerHS/following{/other_user}",
"gists_url": "https://api.github.com/users/MerHS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MerHS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MerHS/subscriptions",
"organizations_url": "https://api.github.com/users/MerHS/orgs",
"repos_url": "https://api.github.com/users/MerHS/repos",
"events_url": "https://api.github.com/users/MerHS/events{/privacy}",
"received_events_url": "https://api.github.com/users/MerHS/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"For FX, I think this is already tested in the CI so I guess it does not break things. \r\nFor the ONNX export, it's not tested but it should not break things IMO.",
"Following the change, the training with ONNX Runtime breaks as `mask_value` and `attn_weights` don't have the same dtype after being traced. Will open a PR to fix this issue. \r\n\r\n```\r\n======================================================================\r\nERROR: test_ort_trainer (__main__.TestORTTrainer) (model_name='gpt2', dataset_name='sst2', inference_with_ort=False)\r\n----------------------------------------------------------------------\r\nTraceback (most recent call last):\r\n File \"test_onnxruntime_train.py\", line 131, in test_ort_trainer\r\n train_result = trainer.train()\r\n File \"/workspace/optimum/onnxruntime/trainer.py\", line 349, in train\r\n return inner_training_loop(\r\n File \"/workspace/optimum/onnxruntime/trainer.py\", line 615, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/trainer.py\", line 2523, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/trainer.py\", line 2555, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_utils.py\", line 371, in _forward\r\n return ortmodule._torch_module.forward(*inputs, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_utils.py\", line 351, in _forward\r\n return torch_module_ort._execution_manager(torch_module_ort.is_training()).forward(*inputs, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_training_manager.py\", line 273, in forward\r\n self._fallback_manager.handle_exception(\r\n File \"/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_fallback.py\", line 162, in handle_exception\r\n raise 
exception\r\n File \"/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_training_manager.py\", line 210, in forward\r\n self._initialize_graph_builder()\r\n File \"/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_graph_execution_manager.py\", line 478, in _initialize_graph_builder\r\n self._graph_builder.initialize(self._onnx_models.exported_model.SerializeToString(), grad_builder_config)\r\nRuntimeError: /onnxruntime_src/orttraining/orttraining/python/orttraining_pybind_state.cc:731 onnxruntime::python::addObjectMethodsForTraining(pybind11::module&, onnxruntime::python::ExecutionProviderRegistrationFn)::<lambda(onnxruntime::training::OrtModuleGraphBuilder*, const pybind11::bytes&, const onnxruntime::training::OrtModuleGraphBuilderConfiguration&)> [ONNXRuntimeError] : 1 : FAIL : Type Error: Type parameter (T) of Optype (Where) bound to different types (tensor(float) and tensor(float16) in node (Where_223).\r\n```"
] | 1,667
| 1,670
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
Changes `torch.tensor` to `torch.full` in GPT-2 to avoid CPU-GPU synchronization.
## Benchmarks with PyTorch Profiler

Here's a trace of a single GPT-2 training iteration with 12 GPT-2 blocks, 2 GPUs, and DDP.
In the `_attn` function, there are two `torch.tensor` calls. Each triggers a CPU-to-GPU memory copy, which in turn calls `cudaStreamSynchronize`.
## How to fix
From [PyTorch Recipes](https://pytorch.org/tutorials/recipes/recipes/tuning_guide.html#avoid-unnecessary-cpu-gpu-synchronization), we can avoid CPU-GPU synchronization by calling `torch.full` directly instead of `torch.tensor` followed by `torch.Tensor.to`. Since the two `torch.tensor` calls create constant tensors, we can replace them with `torch.full([], ...)`, which behaves the same way.

After the patch, every `cudaStreamSynchronize` is gone, and the duration of a single iteration is reduced by 0.5%.
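The change described above can be sketched as follows. This is an illustrative, minimal example (the `attn_weights` tensor here is a stand-in, not the actual GPT-2 attention weights): building the scalar mask constant with `torch.full` allocates it directly with the target dtype and device, avoiding the CPU-side tensor creation and subsequent device transfer that `torch.tensor(...).to(device)` incurs.

```python
import torch

# Stand-in for the attention weights tensor inside GPT-2's _attn.
attn_weights = torch.randn(2, 4, 4)

# Before: creates a scalar tensor on CPU, then moves it to the target
# device, forcing a host-to-device copy (and a stream synchronization).
mask_value_old = torch.tensor(
    torch.finfo(attn_weights.dtype).min, dtype=attn_weights.dtype
).to(attn_weights.device)

# After: allocates the 0-dim constant directly with the right dtype and
# device, so no CPU-GPU copy is needed.
mask_value_new = torch.full(
    [],
    torch.finfo(attn_weights.dtype).min,
    dtype=attn_weights.dtype,
    device=attn_weights.device,
)

# The two constants are numerically identical; only how they are
# materialized differs.
assert torch.equal(mask_value_old, mask_value_new)
```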
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20061/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20061",
"html_url": "https://github.com/huggingface/transformers/pull/20061",
"diff_url": "https://github.com/huggingface/transformers/pull/20061.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20061.patch",
"merged_at": 1667572916000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20060
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20060/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20060/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20060/events
|
https://github.com/huggingface/transformers/pull/20060
| 1,435,667,890
|
PR_kwDOCUB6oc5CMHa5
| 20,060
|
Add BART Japanese tokenizer
|
{
"login": "p-s-p-s",
"id": 107598725,
"node_id": "U_kgDOBmnThQ",
"avatar_url": "https://avatars.githubusercontent.com/u/107598725?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/p-s-p-s",
"html_url": "https://github.com/p-s-p-s",
"followers_url": "https://api.github.com/users/p-s-p-s/followers",
"following_url": "https://api.github.com/users/p-s-p-s/following{/other_user}",
"gists_url": "https://api.github.com/users/p-s-p-s/gists{/gist_id}",
"starred_url": "https://api.github.com/users/p-s-p-s/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/p-s-p-s/subscriptions",
"organizations_url": "https://api.github.com/users/p-s-p-s/orgs",
"repos_url": "https://api.github.com/users/p-s-p-s/repos",
"events_url": "https://api.github.com/users/p-s-p-s/events{/privacy}",
"received_events_url": "https://api.github.com/users/p-s-p-s/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot for your PR. This custom tokenizer should really go on the Hub in the repos that use it, using our [code on the Hub](https://huggingface.co/docs/transformers/custom_models) feature, instead of adding a new model though.",
"@sgugger Thanks for the advice. I added custom tokenizer code as well as AutoTokenizer support to the model on the Hub.\r\n"
] | 1,667
| 1,667
| 1,667
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds support for pre-trained BART models for Japanese text.
The original pre-trained model was converted from a Fairseq checkpoint that contains an extra layer_norm layer after the encoder and decoder, and is therefore compatible with the MBart model. Details of the model can be found [here](https://huggingface.co/Formzu/bart-large-japanese).
Since Japanese tokenization requires text segmentation, half-width character conversion, and special-token compatibility with the existing checkpoint, a new tokenizer was implemented.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20060/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20060",
"html_url": "https://github.com/huggingface/transformers/pull/20060",
"diff_url": "https://github.com/huggingface/transformers/pull/20060.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20060.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20059
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20059/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20059/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20059/events
|
https://github.com/huggingface/transformers/pull/20059
| 1,435,640,293
|
PR_kwDOCUB6oc5CMBmr
| 20,059
|
Removing RobertaConfig inheritance from CamembertConfig
|
{
"login": "Saad135",
"id": 22683922,
"node_id": "MDQ6VXNlcjIyNjgzOTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/22683922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saad135",
"html_url": "https://github.com/Saad135",
"followers_url": "https://api.github.com/users/Saad135/followers",
"following_url": "https://api.github.com/users/Saad135/following{/other_user}",
"gists_url": "https://api.github.com/users/Saad135/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saad135/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saad135/subscriptions",
"organizations_url": "https://api.github.com/users/Saad135/orgs",
"repos_url": "https://api.github.com/users/Saad135/repos",
"events_url": "https://api.github.com/users/Saad135/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saad135/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Also make sure you run `make style` on your branch to fix the formatting.\r\n\r\n`make style` changed some files in other folders as well related to other models. Since I didn't change them, I didn't add them in this PR, and only added the camembert_configuration.py file with the fixed style.\r\n\r\nI was unsure if I should add style changes in other files in this PR, since this PR is about CamembertConfig."
] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
Removes RobertaConfig dependencies from CamembertConfig
Related to https://github.com/huggingface/transformers/issues/19303
@sgugger could I please get some feedback on this? Thanks 😄
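The decoupling pattern applied here can be illustrated with a minimal, hypothetical sketch (class names and attribute values are illustrative, not the actual transformers code): instead of inheriting its defaults, the child config re-declares them in a standalone class so the two configurations can evolve independently.

```python
# Hypothetical stand-in for RobertaConfig.
class BaseConfig:
    model_type = "base"

    def __init__(self, vocab_size=30522, hidden_size=768):
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size


# Before the change, the child would be written as:
#     class ChildConfig(BaseConfig):
#         model_type = "child"
# After the change, it is a standalone class that declares its own
# defaults and no longer depends on the base class.
class ChildConfig:
    model_type = "child"

    def __init__(self, vocab_size=32005, hidden_size=768):
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size


# The two classes are now fully decoupled.
assert not issubclass(ChildConfig, BaseConfig)
```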
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20059/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20059",
"html_url": "https://github.com/huggingface/transformers/pull/20059",
"diff_url": "https://github.com/huggingface/transformers/pull/20059.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20059.patch",
"merged_at": 1667829010000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20058
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20058/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20058/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20058/events
|
https://github.com/huggingface/transformers/issues/20058
| 1,435,553,100
|
I_kwDOCUB6oc5VkM1M
| 20,058
|
Push to Hub fails with `model_name`
|
{
"login": "BirgerMoell",
"id": 1704131,
"node_id": "MDQ6VXNlcjE3MDQxMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1704131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BirgerMoell",
"html_url": "https://github.com/BirgerMoell",
"followers_url": "https://api.github.com/users/BirgerMoell/followers",
"following_url": "https://api.github.com/users/BirgerMoell/following{/other_user}",
"gists_url": "https://api.github.com/users/BirgerMoell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BirgerMoell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BirgerMoell/subscriptions",
"organizations_url": "https://api.github.com/users/BirgerMoell/orgs",
"repos_url": "https://api.github.com/users/BirgerMoell/repos",
"events_url": "https://api.github.com/users/BirgerMoell/events{/privacy}",
"received_events_url": "https://api.github.com/users/BirgerMoell/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thanks for flagging this @BirgerMoell - should be fixed in the linked PR!",
"Thank you so much for resolving this issue. I managed to push the model to the hub through the script but I still get the original error.\r\n\r\n`Can't load tokenizer using from_pretrained, please update its configuration: Can't load tokenizer for 'birgermoell/whisper-small-sv-test2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'birgermoell/whisper-small-sv-test2' is the correct path to a directory containing all relevant files for a WhisperTokenizer tokenizer.\r\n`\r\n\r\n\r\nHere is the trained test model and you get the error if you try running it either through the pipeline or through the online tool.\r\n\r\nhttps://huggingface.co/birgermoell/whisper-small-sv-test2\r\n\r\nIs there an example fine-tuned whisper model I can look at to check that I have all the right files in my folder?\r\n",
"Okay - great push to Hub works. I wonder why the tokenizer is not saving 🤔 I'll try running your codesnippet! \r\n\r\nHere's an example with all the files: https://huggingface.co/sanchit-gandhi/whisper-small-hi/tree/main",
"This is the code I ran. Identical except that I now use the model name\r\n\r\n```\r\nfrom datasets import load_dataset, DatasetDict\r\n\r\ncommon_voice = DatasetDict()\r\n\r\n#common_voice[\"train\"] = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"sv-SE\", split=\"train+validation\", use_auth_token=True)\r\n#common_voice[\"test\"] = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"sv-SE\", split=\"test\", use_auth_token=True)\r\n\r\ncommon_voice[\"train\"] = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"sv-SE\", split=\"train[:1%]+validation[:1%]\", use_auth_token=True)\r\ncommon_voice[\"test\"] = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"sv-SE\", split=\"test[:1%]\", use_auth_token=True)\r\n\r\nprint(common_voice)\r\n\r\ncommon_voice = common_voice.remove_columns([\"accent\", \"age\", \"client_id\", \"down_votes\", \"gender\", \"locale\", \"path\", \"segment\", \"up_votes\"])\r\n\r\nprint(common_voice)\r\n\r\nfrom transformers import WhisperFeatureExtractor\r\n\r\nfeature_extractor = WhisperFeatureExtractor.from_pretrained(\"openai/whisper-small\")\r\n\r\nfrom transformers import WhisperTokenizer\r\n\r\ntokenizer = WhisperTokenizer.from_pretrained(\"openai/whisper-small\", language=\"swedish\", task=\"transcribe\")\r\n\r\nfrom transformers import WhisperProcessor\r\n\r\nprocessor = WhisperProcessor.from_pretrained(\"openai/whisper-small\", language=\"swedish\", task=\"transcribe\")\r\n\r\nprint(common_voice[\"train\"][0])\r\n\r\nfrom datasets import Audio\r\n\r\ncommon_voice = common_voice.cast_column(\"audio\", Audio(sampling_rate=16000))\r\n\r\n\r\nprint(common_voice[\"train\"][0])\r\n\r\ndef prepare_dataset(batch):\r\n # load and resample audio data from 48 to 16kHz\r\n audio = batch[\"audio\"]\r\n\r\n # compute log-Mel input features from input audio array \r\n batch[\"input_features\"] = feature_extractor(audio[\"array\"], sampling_rate=audio[\"sampling_rate\"]).input_features[0]\r\n\r\n # encode target text to 
label ids \r\n batch[\"labels\"] = tokenizer(batch[\"sentence\"]).input_ids\r\n return batch\r\n\r\ncommon_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names[\"train\"], num_proc=1)\r\n\r\nimport torch\r\n\r\nfrom dataclasses import dataclass\r\nfrom typing import Any, Dict, List, Union\r\n\r\n@dataclass\r\nclass DataCollatorSpeechSeq2SeqWithPadding:\r\n processor: Any\r\n\r\n def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:\r\n # split inputs and labels since they have to be of different lengths and need different padding methods\r\n # first treat the audio inputs by simply returning torch tensors\r\n input_features = [{\"input_features\": feature[\"input_features\"]} for feature in features]\r\n batch = self.processor.feature_extractor.pad(input_features, return_tensors=\"pt\")\r\n\r\n # get the tokenized label sequences\r\n label_features = [{\"input_ids\": feature[\"labels\"]} for feature in features]\r\n # pad the labels to max length\r\n labels_batch = self.processor.tokenizer.pad(label_features, return_tensors=\"pt\")\r\n\r\n # replace padding with -100 to ignore loss correctly\r\n labels = labels_batch[\"input_ids\"].masked_fill(labels_batch.attention_mask.ne(1), -100)\r\n\r\n # if bos token is appended in previous tokenization step,\r\n # cut bos token here as it's append later anyways\r\n if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item():\r\n labels = labels[:, 1:]\r\n\r\n batch[\"labels\"] = labels\r\n\r\n return batch\r\n\r\n\"\"\"Let's initialise the data collator we've just defined:\"\"\"\r\n\r\ndata_collator = DataCollatorSpeechSeq2SeqWithPadding(processor=processor)\r\n\r\nimport evaluate\r\n\r\nmetric = evaluate.load(\"wer\")\r\n\r\ndef compute_metrics(pred):\r\n pred_ids = pred.predictions\r\n label_ids = pred.label_ids\r\n\r\n # replace -100 with the pad_token_id\r\n label_ids[label_ids == -100] = tokenizer.pad_token_id\r\n\r\n # 
we do not want to group tokens when computing the metrics\r\n pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)\r\n label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True)\r\n\r\n wer = 100 * metric.compute(predictions=pred_str, references=label_str)\r\n\r\n return {\"wer\": wer}\r\n\r\nfrom transformers import WhisperForConditionalGeneration\r\n\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-small\")\r\n\r\nmodel.config.forced_decoder_ids = None\r\nmodel.config.suppress_tokens = []\r\n\r\nfrom transformers import Seq2SeqTrainingArguments\r\n\r\ntraining_args = Seq2SeqTrainingArguments(\r\n output_dir=\"./whisper-small-sv-test2\", # change to a repo name of your choice\r\n per_device_train_batch_size=16,\r\n gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size\r\n learning_rate=1e-5,\r\n warmup_steps=500,\r\n max_steps=10,\r\n gradient_checkpointing=True,\r\n fp16=True,\r\n group_by_length=True,\r\n evaluation_strategy=\"steps\",\r\n per_device_eval_batch_size=8,\r\n predict_with_generate=True,\r\n generation_max_length=225,\r\n save_steps=1000,\r\n eval_steps=1000,\r\n logging_steps=25,\r\n report_to=[\"tensorboard\"],\r\n load_best_model_at_end=True,\r\n metric_for_best_model=\"wer\",\r\n greater_is_better=False,\r\n push_to_hub=True,\r\n)\r\n\r\nfrom transformers import Seq2SeqTrainer\r\n\r\ntrainer = Seq2SeqTrainer(\r\n args=training_args,\r\n model=model,\r\n train_dataset=common_voice[\"train\"],\r\n eval_dataset=common_voice[\"test\"],\r\n data_collator=data_collator,\r\n compute_metrics=compute_metrics,\r\n tokenizer=processor.feature_extractor,\r\n)\r\n\r\ntrainer.train()\r\n\r\n\"\"\"Our best WER is 32.0% - not bad for 8h of training data! 
We can submit our checkpoint to the [`hf-speech-bench`](https://huggingface.co/spaces/huggingface/hf-speech-bench) on push by setting the appropriate key-word arguments (kwargs):\"\"\"\r\n\r\nkwargs = {\r\n \"dataset_tags\": \"mozilla-foundation/common_voice_11_0\",\r\n \"dataset\": \"Common Voice 11.0\", # a 'pretty' name for the training dataset\r\n \"language\": \"sv\",\r\n \"model_name\": \"WhisperSmallSwedishBirgerMoell\", # a 'pretty' name for our model\r\n \"finetuned_from\": \"openai/whisper-small\",\r\n \"tasks\": \"automatic-speech-recognition\",\r\n \"tags\": \"hf-asr-leaderboard\",\r\n}\r\n\r\ntrainer.push_to_hub(**kwargs)\r\n\r\nfrom transformers import pipeline\r\nimport gradio as gr\r\n\r\npipe = pipeline(model=\"birgermoell/whisper-small-sv-test2\") # change to \"your-username/the-name-you-picked\"\r\n\r\ndef transcribe(audio):\r\n text = pipe(audio)[\"text\"]\r\n return text\r\n\r\niface = gr.Interface(\r\n fn=transcribe, \r\n inputs=gr.Audio(source=\"microphone\", type=\"filepath\"), \r\n outputs=\"text\",\r\n title=\"Whisper Small SV\",\r\n description=\"Realtime demo for Swedish speech recognition using a fine-tuned Whisper small model.\",\r\n)\r\n\r\niface.launch()\r\n```\r\n",
"Great thanks, running on an instance now to try and repro!",
"The issue is that `save_steps` < `max_steps`, so Trainer never gets to the number of steps required to save the checkpoint 😉 If you try with the following it'll work:\r\n```python\r\ntraining_args = Seq2SeqTrainingArguments(\r\n output_dir=\"./whisper-small-sv-test2\", # change to a repo name of your choice\r\n per_device_train_batch_size=16,\r\n gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size\r\n learning_rate=1e-5,\r\n warmup_steps=1,\r\n max_steps=10,\r\n gradient_checkpointing=True,\r\n fp16=True,\r\n group_by_length=True,\r\n evaluation_strategy=\"steps\",\r\n per_device_eval_batch_size=8,\r\n predict_with_generate=True,\r\n generation_max_length=225,\r\n save_steps=5, # set to < max_steps\r\n eval_steps=5, # set to < max_steps\r\n logging_steps=1, # set to < max_steps\r\n report_to=[\"tensorboard\"],\r\n load_best_model_at_end=True,\r\n metric_for_best_model=\"wer\",\r\n greater_is_better=False,\r\n push_to_hub=True,\r\n)\r\n```\r\nSee https://huggingface.co/sanchit-gandhi/whisper-small-sv-test2/tree/main (I ignored the kwargs so the model card is a bit scratchy, but otherwise the same as your example with the updated training args)",
"The trained model you are linking to https://huggingface.co/sanchit-gandhi/whisper-small-sv-test2/tree/main has the same issue I'm still facing. \r\n\r\nMy guess is that not all the model files are uploaded correctly and when I try running it through the pipeline I get an error.\r\n\r\nIf you compare to the one you trained earlier, they both now get the same error.\r\n\r\nThis is when I tried out the models you trained.\r\nhttps://huggingface.co/sanchit-gandhi/whisper-small-sv-test2\r\n\r\nhttps://huggingface.co/sanchit-gandhi/whisper-small-hi/tree/main\r\n\r\n<img width=\"1406\" alt=\"Screenshot 2022-11-07 at 16 22 13\" src=\"https://user-images.githubusercontent.com/1704131/200347366-36bada17-c19a-4904-9187-0deefeb72899.png\">\r\n",
"Ah I see! Sorry, you're absolutely right! There are files not pushed during training. We need to explicitly save the `processor` as this is not done by Trainer! \r\n\r\nI've updated the notebook: https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/fine_tune_whisper.ipynb\r\n\r\nAll you need to do is add the line:\r\n```python\r\nprocessor.save_pretrained(training_args.output_dir)\r\n```\r\nbefore calling `trainer.train()`.\r\n\r\nSorry about that, my apologies!",
"Note that the `Trainer` will do it if you pass it `tokenizer=processor` instead of `tokenizer=processor.feature_extractor`.",
"Unfortunately it fails with `model_input_name=...`:\r\nhttps://github.com/huggingface/transformers/blob/d44ac47bac4471703651675c8abd9d6e1b6c3db6/src/transformers/trainer.py#L788\r\nas the processor does not have the attribute `model_input_names` that the `feature_extractor` has.\r\nWill add a PR to fix this tomorrow!",
"Ah, indeed would be nice if the processors had that attribute!",
"Absolutely! Expect a PR tomorrow!",
"```\r\nfrom datasets import load_dataset, DatasetDict\r\n\r\ncommon_voice = DatasetDict()\r\n\r\n#common_voice[\"train\"] = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"sv-SE\", split=\"train+validation\", use_auth_token=True)\r\n#common_voice[\"test\"] = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"sv-SE\", split=\"test\", use_auth_token=True)\r\n\r\ncommon_voice[\"train\"] = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"sv-SE\", split=\"train[:1%]+validation[:1%]\", use_auth_token=True)\r\ncommon_voice[\"test\"] = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"sv-SE\", split=\"test[:1%]\", use_auth_token=True)\r\n\r\nprint(common_voice)\r\n\r\ncommon_voice = common_voice.remove_columns([\"accent\", \"age\", \"client_id\", \"down_votes\", \"gender\", \"locale\", \"path\", \"segment\", \"up_votes\"])\r\n\r\nprint(common_voice)\r\n\r\nfrom transformers import WhisperFeatureExtractor\r\n\r\nfeature_extractor = WhisperFeatureExtractor.from_pretrained(\"openai/whisper-small\")\r\n\r\nfrom transformers import WhisperTokenizer\r\n\r\ntokenizer = WhisperTokenizer.from_pretrained(\"openai/whisper-small\", language=\"swedish\", task=\"transcribe\")\r\n\r\nfrom transformers import WhisperProcessor\r\n\r\nprocessor = WhisperProcessor.from_pretrained(\"openai/whisper-small\", language=\"swedish\", task=\"transcribe\")\r\n\r\nprint(common_voice[\"train\"][0])\r\n\r\nfrom datasets import Audio\r\n\r\ncommon_voice = common_voice.cast_column(\"audio\", Audio(sampling_rate=16000))\r\n\r\n\r\nprint(common_voice[\"train\"][0])\r\n\r\ndef prepare_dataset(batch):\r\n # load and resample audio data from 48 to 16kHz\r\n audio = batch[\"audio\"]\r\n\r\n # compute log-Mel input features from input audio array \r\n batch[\"input_features\"] = feature_extractor(audio[\"array\"], sampling_rate=audio[\"sampling_rate\"]).input_features[0]\r\n\r\n # encode target text to label ids \r\n batch[\"labels\"] = tokenizer(batch[\"sentence\"]).input_ids\r\n 
return batch\r\n\r\ncommon_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names[\"train\"], num_proc=1)\r\n\r\nimport torch\r\n\r\nfrom dataclasses import dataclass\r\nfrom typing import Any, Dict, List, Union\r\n\r\n@dataclass\r\nclass DataCollatorSpeechSeq2SeqWithPadding:\r\n processor: Any\r\n\r\n def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:\r\n # split inputs and labels since they have to be of different lengths and need different padding methods\r\n # first treat the audio inputs by simply returning torch tensors\r\n input_features = [{\"input_features\": feature[\"input_features\"]} for feature in features]\r\n batch = self.processor.feature_extractor.pad(input_features, return_tensors=\"pt\")\r\n\r\n # get the tokenized label sequences\r\n label_features = [{\"input_ids\": feature[\"labels\"]} for feature in features]\r\n # pad the labels to max length\r\n labels_batch = self.processor.tokenizer.pad(label_features, return_tensors=\"pt\")\r\n\r\n # replace padding with -100 to ignore loss correctly\r\n labels = labels_batch[\"input_ids\"].masked_fill(labels_batch.attention_mask.ne(1), -100)\r\n\r\n # if bos token is appended in previous tokenization step,\r\n # cut bos token here as it's append later anyways\r\n if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item():\r\n labels = labels[:, 1:]\r\n\r\n batch[\"labels\"] = labels\r\n\r\n return batch\r\n\r\n\"\"\"Let's initialise the data collator we've just defined:\"\"\"\r\n\r\ndata_collator = DataCollatorSpeechSeq2SeqWithPadding(processor=processor)\r\n\r\nimport evaluate\r\n\r\nmetric = evaluate.load(\"wer\")\r\n\r\ndef compute_metrics(pred):\r\n pred_ids = pred.predictions\r\n label_ids = pred.label_ids\r\n\r\n # replace -100 with the pad_token_id\r\n label_ids[label_ids == -100] = tokenizer.pad_token_id\r\n\r\n # we do not want to group tokens when computing the metrics\r\n pred_str = 
tokenizer.batch_decode(pred_ids, skip_special_tokens=True)\r\n label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True)\r\n\r\n wer = 100 * metric.compute(predictions=pred_str, references=label_str)\r\n\r\n return {\"wer\": wer}\r\n\r\nfrom transformers import WhisperForConditionalGeneration\r\n\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-small\")\r\n\r\nmodel.config.forced_decoder_ids = None\r\nmodel.config.suppress_tokens = []\r\n\r\nfrom transformers import Seq2SeqTrainingArguments\r\n\r\ntraining_args = Seq2SeqTrainingArguments(\r\n output_dir=\"./whisper-small-sv-test2\", # change to a repo name of your choice\r\n per_device_train_batch_size=16,\r\n gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size\r\n learning_rate=1e-5,\r\n warmup_steps=1,\r\n max_steps=10,\r\n gradient_checkpointing=True,\r\n fp16=True,\r\n group_by_length=True,\r\n evaluation_strategy=\"steps\",\r\n per_device_eval_batch_size=8,\r\n predict_with_generate=True,\r\n generation_max_length=225,\r\n save_steps=5, # set to < max_steps\r\n eval_steps=5, # set to < max_steps\r\n logging_steps=1, # set to < max_steps\r\n report_to=[\"tensorboard\"],\r\n load_best_model_at_end=True,\r\n metric_for_best_model=\"wer\",\r\n greater_is_better=False,\r\n push_to_hub=True,\r\n)\r\n\r\nfrom transformers import Seq2SeqTrainer\r\n\r\ntrainer = Seq2SeqTrainer(\r\n args=training_args,\r\n model=model,\r\n train_dataset=common_voice[\"train\"],\r\n eval_dataset=common_voice[\"test\"],\r\n data_collator=data_collator,\r\n compute_metrics=compute_metrics,\r\n tokenizer=processor.feature_extractor,\r\n)\r\nprocessor.save_pretrained(training_args.output_dir)\r\n\r\ntrainer.train()\r\n\r\n\"\"\"Our best WER is 32.0% - not bad for 8h of training data! 
We can submit our checkpoint to the [`hf-speech-bench`](https://huggingface.co/spaces/huggingface/hf-speech-bench) on push by setting the appropriate key-word arguments (kwargs):\"\"\"\r\n\r\nkwargs = {\r\n \"dataset_tags\": \"mozilla-foundation/common_voice_11_0\",\r\n \"dataset\": \"Common Voice 11.0\", # a 'pretty' name for the training dataset\r\n \"language\": \"sv\",\r\n \"model_name\": \"whisper-small-sv-test2\", # a 'pretty' name for our model\r\n \"finetuned_from\": \"openai/whisper-small\",\r\n \"tasks\": \"automatic-speech-recognition\",\r\n \"tags\": \"hf-asr-leaderboard\",\r\n}\r\n\r\ntrainer.push_to_hub(**kwargs)\r\n\r\nfrom transformers import pipeline\r\nimport gradio as gr\r\n\r\npipe = pipeline(model=\"birgermoell/whisper-small-sv-test2\") # change to \"your-username/the-name-you-picked\"\r\n\r\ndef transcribe(audio):\r\n text = pipe(audio)[\"text\"]\r\n return text\r\n\r\niface = gr.Interface(\r\n fn=transcribe, \r\n inputs=gr.Audio(source=\"microphone\", type=\"filepath\"), \r\n outputs=\"text\",\r\n title=\"Whisper Small SV\",\r\n description=\"Realtime demo for Swedish speech recognition using a fine-tuned Whisper small model.\",\r\n)\r\n\r\niface.launch()\r\n\r\n```\r\n\r\nIt worked. Here is working code and a working test model here.\r\nhttps://huggingface.co/birgermoell/whisper-small-sv-test2\r\n\r\n",
"That's great @BirgerMoell 🥳 Excited to see what the full training runs bring!",
"The full model training also worked :D\r\nhttps://huggingface.co/birgermoell/whisper-small-sv-bm",
"Awesome! 19.6% is pretty good! You can deffo try training for longer and a bigger model checkpoint. Feel free to post updates on the forum https://discuss.huggingface.co"
] | 1,667
| 1,667
| 1,667
|
NONE
| null |
### System Info
- `transformers` version: 4.25.0.dev0
- Platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.31
- Python version: 3.9.13
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from datasets import load_dataset, DatasetDict
common_voice = DatasetDict()
#common_voice["train"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="train+validation", use_auth_token=True)
#common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="test", use_auth_token=True)
common_voice["train"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="train[:1%]+validation[:1%]", use_auth_token=True)
common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="test[:1%]", use_auth_token=True)
print(common_voice)
common_voice = common_voice.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "path", "segment", "up_votes"])
print(common_voice)
from transformers import WhisperFeatureExtractor
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")
from transformers import WhisperTokenizer
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", language="swedish", task="transcribe")
from transformers import WhisperProcessor
processor = WhisperProcessor.from_pretrained("openai/whisper-small", language="swedish", task="transcribe")
print(common_voice["train"][0])
from datasets import Audio
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16000))
print(common_voice["train"][0])
def prepare_dataset(batch):
# load and resample audio data from 48 to 16kHz
audio = batch["audio"]
# compute log-Mel input features from input audio array
batch["input_features"] = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
# encode target text to label ids
batch["labels"] = tokenizer(batch["sentence"]).input_ids
return batch
common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=1)
import torch
from dataclasses import dataclass
from typing import Any, Dict, List, Union
@dataclass
class DataCollatorSpeechSeq2SeqWithPadding:
processor: Any
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
# split inputs and labels since they have to be of different lengths and need different padding methods
# first treat the audio inputs by simply returning torch tensors
input_features = [{"input_features": feature["input_features"]} for feature in features]
batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")
# get the tokenized label sequences
label_features = [{"input_ids": feature["labels"]} for feature in features]
# pad the labels to max length
labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")
# replace padding with -100 to ignore loss correctly
labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
# if bos token is appended in previous tokenization step,
# cut bos token here as it's append later anyways
if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item():
labels = labels[:, 1:]
batch["labels"] = labels
return batch
"""Let's initialise the data collator we've just defined:"""
data_collator = DataCollatorSpeechSeq2SeqWithPadding(processor=processor)
import evaluate
metric = evaluate.load("wer")
def compute_metrics(pred):
pred_ids = pred.predictions
label_ids = pred.label_ids
# replace -100 with the pad_token_id
label_ids[label_ids == -100] = tokenizer.pad_token_id
# we do not want to group tokens when computing the metrics
pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True)
wer = 100 * metric.compute(predictions=pred_str, references=label_str)
return {"wer": wer}
from transformers import WhisperForConditionalGeneration
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model.config.forced_decoder_ids = None
model.config.suppress_tokens = []
from transformers import Seq2SeqTrainingArguments
training_args = Seq2SeqTrainingArguments(
output_dir="./whisper-small-sv-test2", # change to a repo name of your choice
per_device_train_batch_size=16,
gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size
learning_rate=1e-5,
warmup_steps=500,
max_steps=10,
gradient_checkpointing=True,
fp16=True,
group_by_length=True,
evaluation_strategy="steps",
per_device_eval_batch_size=8,
predict_with_generate=True,
generation_max_length=225,
save_steps=1000,
eval_steps=1000,
logging_steps=25,
report_to=["tensorboard"],
load_best_model_at_end=True,
metric_for_best_model="wer",
greater_is_better=False,
push_to_hub=True,
)
from transformers import Seq2SeqTrainer
trainer = Seq2SeqTrainer(
args=training_args,
model=model,
train_dataset=common_voice["train"],
eval_dataset=common_voice["test"],
data_collator=data_collator,
compute_metrics=compute_metrics,
tokenizer=processor.feature_extractor,
)
trainer.train()
"""Our best WER is 32.0% - not bad for 8h of training data! We can submit our checkpoint to the [`hf-speech-bench`](https://huggingface.co/spaces/huggingface/hf-speech-bench) on push by setting the appropriate key-word arguments (kwargs):"""
kwargs = {
"dataset_tags": "mozilla-foundation/common_voice_11_0",
"dataset": "Common Voice 11.0", # a 'pretty' name for the training dataset
"language": "sv",
#"model_name": "WhisperSmallSwedishBirgerMoell", # a 'pretty' name for our model
"finetuned_from": "openai/whisper-small",
"tasks": "automatic-speech-recognition",
"tags": "hf-asr-leaderboard",
}
trainer.push_to_hub(**kwargs)
from transformers import pipeline
import gradio as gr
pipe = pipeline(model="birgermoell/whisper-small-sv-test2") # change to "your-username/the-name-you-picked"
def transcribe(audio):
text = pipe(audio)["text"]
return text
iface = gr.Interface(
fn=transcribe,
inputs=gr.Audio(source="microphone", type="filepath"),
outputs="text",
title="Whisper Small SV",
description="Realtime demo for Swedish speech recognition using a fine-tuned Whisper small model.",
)
iface.launch()
```
### Expected behavior
The script above is a downloaded version of the Colab notebook that follows the Whisper fine-tuning tutorial.
https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/fine_tune_whisper.ipynb
One edit was that I removed the model name, since it complained about two model names, which made it impossible to upload. The script runs on just 1% of the dataset for 10 steps.
kwargs = {
"dataset_tags": "mozilla-foundation/common_voice_11_0",
"dataset": "Common Voice 11.0", # a 'pretty' name for the training dataset
"language": "sv",
#"model_name": "WhisperSmallSwedishBirgerMoell", # a 'pretty' name for our model
"finetuned_from": "openai/whisper-small",
"tasks": "automatic-speech-recognition",
"tags": "hf-asr-leaderboard",
}
https://huggingface.co/birgermoell/whisper-small-sv-test2
I also ran into similar issues when I trained a model on the whole dataset.
https://huggingface.co/birgermoell/whisper-small-sv
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20058/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20057
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20057/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20057/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20057/events
|
https://github.com/huggingface/transformers/issues/20057
| 1,435,505,970
|
I_kwDOCUB6oc5VkBUy
| 20,057
|
Timestamps in Whisper processor
|
{
"login": "JeffreyWardman",
"id": 23271678,
"node_id": "MDQ6VXNlcjIzMjcxNjc4",
"avatar_url": "https://avatars.githubusercontent.com/u/23271678?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JeffreyWardman",
"html_url": "https://github.com/JeffreyWardman",
"followers_url": "https://api.github.com/users/JeffreyWardman/followers",
"following_url": "https://api.github.com/users/JeffreyWardman/following{/other_user}",
"gists_url": "https://api.github.com/users/JeffreyWardman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JeffreyWardman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JeffreyWardman/subscriptions",
"organizations_url": "https://api.github.com/users/JeffreyWardman/orgs",
"repos_url": "https://api.github.com/users/JeffreyWardman/repos",
"events_url": "https://api.github.com/users/JeffreyWardman/events{/privacy}",
"received_events_url": "https://api.github.com/users/JeffreyWardman/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @sanchit-gandhi or @ArthurZucker ",
"Related to #19887, in which timestamps for Whisper were discussed. Is this on your timeline @ArthurZucker as part of the Whisper integration? Otherwise I'll add it to my TODO's!",
"Hey, really sorry for being so late. Will focus on that next week! I'll ping you once a draft PR is ready! 🤗 ",
"BTW, you can already have the `timestamp` generation using the model : \r\n```python \r\n\r\n\r\n```\r\n```\r\ntensor([[50258, 50265, 50359, 50364, 1456, 1804, 1021, 871, 368, 635,\r\n 32400, 368, 635, 32400, 1030, 4666, 2795, 70, 3201, 339,\r\n 892, 1531, 287, 311, 68, 368, 10384, 2023, 20071, 13,\r\n 50639, 50257]])\r\n``` Where the timestamp tokens are `>50363`. You can also use a custom logit processor to be sure that they are correctly generated. \r\n\r\nMoreover, the original paper used a simple rule that associates 0.02 seconds to each tokens, which means that without removing the special tokens you can already get the per_word timestamps. 😉 ",
"@ArthurZucker can you provide an example of how to get the timestamp tokens like above with `WhisperForConditionalGeneration` by any chance?",
"Of course. BTW it is included in[ this notebook](https://colab.research.google.com/drive/1rS1L4YSJqKUH_3YxIQHBI982zso23wor#scrollTo=Ca4YYdtATxzo)\r\n```python \r\nfrom datasets import load_dataset\r\nfrom transformers import WhisperForConditionalGeneration, WhisperProcessor\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-small\").to(device)\r\nprocessor = WhisperProcessor.from_pretrained(\"openai/whisper-small\")\r\nds = load_dataset(\"hf-internal-testing/librispeech_asr_dummy\", \"clean\", split=\"validation\")\r\naudio_sample = ds[3]\r\nspeech_data = audio_sample[\"audio\"][\"array\"]\r\nspeech_file = audio_sample[\"file\"] # used as an example for the pipeline\r\ninputs = processor.feature_extractor(speech_data, return_tensors=\"pt\", sampling_rate=16_000).input_features\r\ngenerate_ids = model.generate(inputs, return_timestamps=True, task=\"translate\")\r\nprint(generate_ids)\r\n```\r\n```python\r\ntensor([[50258, 50266, 50358, 50364, 634, 575, 12525, 22618, 1968, 6144,\r\n 35617, 20084, 1756, 311, 589, 307, 534, 10281, 934, 439,\r\n 293, 50676, 50676, 393, 4411, 294, 309, 457, 707, 295,\r\n 33301, 286, 392, 6628, 13, 50836, 50257]])\r\n>>> processor.tokenizer.decode(generate_ids[0], decode_with_timestamps=True)\r\n<|startoftranscript|><|ja|><|translate|><|0.00|> He has grave doubts whether Sir Frederick Layton's work is really Greek after all and<|6.24|><|6.24|> can discover in it but little of rocky Ithaca.<|9.44|><|endoftext|>\r\n```",
"Great. Thanks! I had a quick test and for this: `each` (1123) occurs at 7.0 seconds in the audio. However with each token representing 0.02s you can see it's at 0.34s. So it doesn't look like using the tokens like so cannot find breaks at a per word level.\r\n```\r\ntensor([[50257, 50363, 1649, 257, 3440, 1332, 318, 2067, 11, 281,\r\n 4554, 286, 257, 2836, 1398, 481, 307, 2727, 329, 1123,\r\n 50681, 50681,\r\n```",
"That is not exactly the way to compute the time. You should be using the `tokenizer.decode(..., output_offset = True)`.\r\nAlso the 0.02s rule is to convert from a timestamp token to time. Here the end is at `50681`, you substract the timestamp begin so you have `50681 - 50363 = 318` which you then multiply by `0.02`, you get `6.36`s. "
] | 1,667
| 1,676
| 1,667
|
NONE
| null |
### Feature request
An `output_word_offsets` argument in Whisper's `processor.decode()` function.
I want to get the timestamp of the start and end of each word.
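For reference, the 0.02 s-per-step rule discussed in the comments below can be sketched as a small helper. This is a minimal sketch, not the library's API: the timestamp-begin token id (50363) and the 0.02 s resolution are assumptions taken from the Whisper paper's convention for the multilingual tokenizer and may differ per checkpoint.

```python
# Hedged sketch: convert a Whisper timestamp token id to seconds.
# Assumed constants (verify against your checkpoint's tokenizer):
TIMESTAMP_BEGIN = 50363   # assumed id of the <|0.00|> token
SECONDS_PER_STEP = 0.02   # resolution used in the original Whisper paper

def timestamp_token_to_seconds(token_id: int) -> float:
    """Map a timestamp token id (>= TIMESTAMP_BEGIN) to seconds."""
    if token_id < TIMESTAMP_BEGIN:
        raise ValueError("not a timestamp token")
    return (token_id - TIMESTAMP_BEGIN) * SECONDS_PER_STEP

# e.g. token 50681 -> (50681 - 50363) * 0.02 = 6.36 s
print(timestamp_token_to_seconds(50681))
```

This only gives segment-level timestamps at the positions where timestamp tokens are generated, not per-word offsets.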
### Motivation
I cannot use Whisper until it supports word timestamps and long audio.
### Your contribution
Happy to submit it, but I will need guidance. Can do this in a month's time.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20057/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20056
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20056/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20056/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20056/events
|
https://github.com/huggingface/transformers/issues/20056
| 1,435,378,109
|
I_kwDOCUB6oc5VjiG9
| 20,056
|
Unable to load CodeGenTokenizer
|
{
"login": "PtrMan",
"id": 1067920,
"node_id": "MDQ6VXNlcjEwNjc5MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1067920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PtrMan",
"html_url": "https://github.com/PtrMan",
"followers_url": "https://api.github.com/users/PtrMan/followers",
"following_url": "https://api.github.com/users/PtrMan/following{/other_user}",
"gists_url": "https://api.github.com/users/PtrMan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PtrMan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PtrMan/subscriptions",
"organizations_url": "https://api.github.com/users/PtrMan/orgs",
"repos_url": "https://api.github.com/users/PtrMan/repos",
"events_url": "https://api.github.com/users/PtrMan/events{/privacy}",
"received_events_url": "https://api.github.com/users/PtrMan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You should double-check that the version of Transformers seen when executing your code is indeed a source install (you can print `transformers.__version__` after importing transformers), as it looks like an issue within your env.",
"Thx!"
] | 1,667
| 1,667
| 1,667
|
NONE
| null |
### System Info
I did `pip install git+https://github.com/huggingface/transformers.git`
Trying to load the class with `from transformers import CodeGenTokenizer` results in
```
ImportError: cannot import name 'CodeGenTokenizer' from 'transformers' (/usr/local/lib/python3.9/dist-packages/transformers/__init__.py)
```
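As suggested in the resolution, a quick way to rule out a stale install shadowing the source install is to check which package version is actually visible. This is a minimal stdlib-only sketch; the package name is the only assumption tied to this issue.

```python
# Check which version of a package the current environment actually sees.
# A stale version here would explain the ImportError above.
from importlib import metadata
from typing import Optional

def installed_version(package: str) -> Optional[str]:
    """Return the installed version string of a package, or None if absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

print(installed_version("transformers"))
```

Comparing this against `transformers.__version__` after import also reveals whether a different copy on `sys.path` is being picked up.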
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`from transformers import CodeGenTokenizer`
### Expected behavior
load class
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20056/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20055
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20055/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20055/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20055/events
|
https://github.com/huggingface/transformers/issues/20055
| 1,435,378,021
|
I_kwDOCUB6oc5VjiFl
| 20,055
|
Model resources contribution
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
},
{
"id": 3551105283,
"node_id": "LA_kwDOCUB6oc7TqZED",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Documentation%20Issue",
"name": "Good First Documentation Issue",
"color": "AB0BA8",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"Hi @stevhliu, I want to work on OpenAI GPT!",
"Awesome! I'm looking forward to your contribution, and feel free to ping me if you have any questions! 🤗",
"@stevhliu \r\nI have a question. Is there a good way to search GitHub and blog posts? I tried to find related repos and blog posts with the word `OpenAI GPT` but I couldn't find them because search function doesn't seem to work well... Should I search one by one repo or post?\r\n\r\nI made a draft pull request although it doesn't have links of GitHub and blog. You can check it to see if my research has been good or not\r\nhttps://github.com/huggingface/transformers/pull/20084",
"Hey @shogohida, thanks for starting on this! \r\n\r\nThe easiest way I've found for searching the blog posts is to go to the blog [repo](https://github.com/huggingface/blog) and search for mentions of `GPT` inside the repo. Then you can take a look at the results and see what's relevant!\r\n\r\nFor GitHub materials, you only have to look at the example scripts, and notebooks and *see what task* your model can be applied to. For example, `OpenAI GPT` is a casual language model, so you can link to example scripts for [causal language modeling](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling) and also [text generation](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation#language-generation). You can link the equivalent scripts in [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow) and [Flax](https://github.com/huggingface/transformers/tree/main/examples/flax) if they're available.\r\n\r\nAfter the scripts, you can hop over to the [notebooks](https://github.com/huggingface/transformers/tree/main/notebooks) and see what task your model can be applied to (language modeling, generate text) and do the same thing for the [community notebooks](https://huggingface.co/docs/transformers/community)!",
"@stevhliu \r\nThanks for your comment! It will take a lot of time to collect resources from scripts and notebooks because I'm not very familiar with OpenAI GPT but I'll do my best. I'll let you know if I have another question",
"Hi, I would like to take CLIP from the list you have mentioned. :)",
"That's great @ambujpawar! I'm looking forward to your contribution, and feel free to ping me if you have any questions! 🤗\r\n\r\n",
"@stevhliu I would like to work on DeBERTa",
"Great, thanks for taking on DeBERTa @Saad135! 🤗",
"Hello, do you mind if I can tackle on ALBERT model? @stevhliu ",
"For sure, looking forward to your contribution @JuheonChu! 🤗",
"Hi! Could I try ViT? It might take me some time though as have some work projects to complete too.",
"Hi, I would like to work on XLM-RoBERTa! @stevhliu",
"Hey @stanleycai95, that would be great! Feel free to work on it when you have the time :)\r\n\r\nAwesome, XLM-RoBERTa is all yours @hazrulakmal!",
"Hi, I would like to work on GPT-J! @stevhliu ",
"Yay thanks for taking on GPTJ @adit299! Let me know if you have any questions or need any help 🤗 ",
"Hi, could I work on OPT? :) @stevhliu",
"OPT is all yours @alissadb! 🤩 ",
"Let me round out the list @stevhliu . TrOCR",
"Awesome, thanks for finishing this off @Laxmaan! 🎉 ",
"Hello @stevhliu . I'd love to contribute in documentation. I see all models are assigned, is there any other I can help with?\r\nThank you 😊",
"Hi @elabongaatuo, sorry for the late reply and thanks for your enthusiasm! \r\n\r\nI think we are good with the model resource contributions for now. If you're looking for ways to contribute to the docs, feel free to open an issue for improving the docs (content that is unclear, missing, or inaccurate or fixing typos) and we can review it there. For more info about getting started with contributing, take a look at this [guide](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md)! 🤗",
"Hello @stevhliu . Thanks for getting back to me. I'll be on the lookout for docs that need improving. ",
"Hi @JuheonChu and @Laxmaan, I wanted to check and see if you're still interested in making a model contribution. Totally cool if you aren't available anymore, I'll unassign the models you claimed and let others take a shot at it. Thanks! ",
"Hi @stevhliu, I'd like to take a shot at one of the models if one of them becomes unassigned. Please let me know!",
"Thanks for the interest; TrOCR, LayoutLMV2, and ALBERT are now available!",
"Hello @stevhliu. I'd like to take up ALBERT.",
"> Thanks for the interest; TrOCR, LayoutLMV2, and ALBERT are now available!\r\n\r\nI’d like to take TrOCR!",
"All yours! Happy contributing and feel free to let me know if you have any questions! 🤗",
"> Thanks for the interest; TrOCR, LayoutLMV2, and ALBERT are now available!\r\n\r\nHello!! @stevhliu I don't have any option I guess 😅. LayoutLMV2 for me then 🌏."
] | 1,667
| 1,703
| 1,700
|
MEMBER
| null |
Hi friends! 👋
There are a lot of cool existing resources for how to do *x* with *x* model, and we’d like to showcase and aggregate these resources on a model’s documentation. This’ll help users see how they can get started with a model for their own tasks since we know a lot of users check out the model documentation first. Take a look at a completed [resource section](https://huggingface.co/docs/transformers/main/en/model_doc/distilbert#resources) for DistilBERT as an example.
I’ve identified the top 20 models by pageviews, and now I’d like to open it up to the community if anyone is interested in helping!
Anyone can contribute; you just need to comment and claim one of the models on this [list](https://github.com/huggingface/transformers/issues/19848). Contributing is super easy:
1. Once you've claimed a model from the list, collect the existing resources from:
- the Hugging Face [blog](https://huggingface.co/blog)
- relevant materials from the 🤗 Hugging Face [Course](https://huggingface.co/course/chapter1/1)
- the Hugging Face [example scripts](https://github.com/huggingface/transformers/tree/main/examples) and [notebooks](https://github.com/huggingface/transformers/tree/main/notebooks)
- @NielsRogge's Transformers Tutorials [repository](https://github.com/NielsRogge/Transformers-Tutorials)
- @philschmid's [blog](https://www.philschmid.de/)
- [notebooks](https://huggingface.co/docs/transformers/community) from the community ❤️
2. Organize the resources by model tasks or applications (like inference or deployment):
- Use the corresponding icons for each task (you can find the names for each icon [here](https://github.com/huggingface/doc-builder/blob/19ba9da2556294f1777c865793d13e9ea47f8716/kit/src/lib/PipelineIcon.svelte#L42-L71)):
```
<PipelineTag pipeline="name-of-task"/>
```
- For certain categories, you can just do: 🚀 Deploy, ⚡️ Inference, or ⚗️ Optimization, etc.
- For community resources, add the 🌎 emoji at the end to indicate it’s not an official Hugging Face resource.
- Use this DistilBERT [file](https://github.com/huggingface/transformers/pull/19930/files) as a template. You can copy and paste the intro text and just replace DistilBERT with the name of the model you're working on.
3. Open a Pull Request with the new resources for your chosen model and ping me for a review (if you’re just getting started with contributing to an open-source project, check out @merveenoyan's awesome [GitHub Contribution Guide](https://www.notion.so/19411c29298644df8e9656af45a7686d)).
4. Congratulations, you just merged a PR into 🤗 Transformers, and your contribution will now help anyone who is looking at the model docs! 🎉
If you have any questions or need any help, don’t hesitate to ping me! 🤗❤️
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20055/reactions",
"total_count": 12,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 12,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20055/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20054
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20054/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20054/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20054/events
|
https://github.com/huggingface/transformers/pull/20054
| 1,435,356,666
|
PR_kwDOCUB6oc5CLF9x
| 20,054
|
Adapt PerceiverIO Multimodal class to work with arbitrary modalities
|
{
"login": "stevenmanton",
"id": 3666725,
"node_id": "MDQ6VXNlcjM2NjY3MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3666725?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevenmanton",
"html_url": "https://github.com/stevenmanton",
"followers_url": "https://api.github.com/users/stevenmanton/followers",
"following_url": "https://api.github.com/users/stevenmanton/following{/other_user}",
"gists_url": "https://api.github.com/users/stevenmanton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevenmanton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevenmanton/subscriptions",
"organizations_url": "https://api.github.com/users/stevenmanton/orgs",
"repos_url": "https://api.github.com/users/stevenmanton/repos",
"events_url": "https://api.github.com/users/stevenmanton/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevenmanton/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger can you take a look at this PR? I'm happy to make any modifications to get this through. These changes are required to use the multimodal classes on modalities other than those that are currently hard-coded.",
"Thanks for your PR! Could you clarify which errors you run into with the current implementation that would be solved with this PR?",
"The biggest issue I had was that the signature of `forward` for the various Preprocessors isn't consistent. This mean I couldn't use the `TextPreprocessor` within the `PerceiverMultimodalPreprocessor` class. There were a couple other issues as well (e.g. `dict` instead of `ModuleDict`.) I'm happy to take feedback to improve this PR. Thanks for all your help!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@NielsRogge any thoughts on these improvements?",
"Hi @NielsRogge, just checking in on this PR. I know you're probably super busy, so is there something I can do to make the review easier for you? I'm very happy to do what I can to incorporate feedback. I have additional changes I'd like to make (mostly around type hints), but I'm hoping to make these initial fixes first, which are more critical. Please let me know how I can help! Thanks again for all your work.",
"Hi, sorry for the late reply here. I'll take a look tomorrow ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@amyeroberts Could you have a look?",
"@stevenmanton I can see that some of the tests failing are unrelated to this PR. Can you rebase from main to make sure all upstream changes are included? ",
"Thanks for your contribution!"
] | 1,667
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
The current codebase is excellent, but the multimodal classes are tightly coupled to the example in the paper where the modalities are video, audio, and binary class labels. This PR makes a few small changes to support arbitrary modalities, such as text and image.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
I'm not an experienced committer to this repo, so I'm very happy to take direction. My hope is to share the improvements I made to a wider audience and extend an already awesome package.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20054/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20054",
"html_url": "https://github.com/huggingface/transformers/pull/20054",
"diff_url": "https://github.com/huggingface/transformers/pull/20054.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20054.patch",
"merged_at": 1676584260000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20053
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20053/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20053/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20053/events
|
https://github.com/huggingface/transformers/issues/20053
| 1,435,308,088
|
I_kwDOCUB6oc5VjRA4
| 20,053
|
Is there a no_trainer version for image pretraining
|
{
"login": "XZhang97666",
"id": 91291808,
"node_id": "MDQ6VXNlcjkxMjkxODA4",
"avatar_url": "https://avatars.githubusercontent.com/u/91291808?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XZhang97666",
"html_url": "https://github.com/XZhang97666",
"followers_url": "https://api.github.com/users/XZhang97666/followers",
"following_url": "https://api.github.com/users/XZhang97666/following{/other_user}",
"gists_url": "https://api.github.com/users/XZhang97666/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XZhang97666/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XZhang97666/subscriptions",
"organizations_url": "https://api.github.com/users/XZhang97666/orgs",
"repos_url": "https://api.github.com/users/XZhang97666/repos",
"events_url": "https://api.github.com/users/XZhang97666/events{/privacy}",
"received_events_url": "https://api.github.com/users/XZhang97666/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"@NielsRogge",
"Not yet, but it would be straightforward to add.\r\n\r\nMarking this as a good first issue.",
"Hi @NielsRogge, I would like to try to add it.",
"Hi @atturaioe, awesome.\r\n\r\nSo in [this folder](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining), one could add a `run_mim_no_trainer.py` script, similar to the other `no_trainer.py` scripts in the examples folder.",
"Hi @atturaioe, are you still working on this? I would like to attempt this if you are not longer working on it.",
"Hi @Saad135, you can take this issues if you want, let me know.\r\n",
"@atturaioe Sure, I will give it a go.",
"Is anyone still working on this? If not I'd quite like to pick it up. \r\n\r\nI see from the PR that most of the work has been done but there's not been any acitivity on it recently 🤔 ",
"@madt2709 I am working on it. Most of the work has been completed by @Saad135. His PR #20053 has been closed due to inactivity. I have taken it over in PR #23156 to complete it.\r\n\r\n"
] | 1,667
| 1,683
| 1,683
|
NONE
| null |
### Feature request
I wonder if there is a no_trainer script for image pretraining? @NielsRogge
### Motivation
I usually use the no_trainer versions in my code because the Trainer does not expose much detail.
### Your contribution
NA
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20053/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20052
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20052/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20052/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20052/events
|
https://github.com/huggingface/transformers/pull/20052
| 1,435,166,278
|
PR_kwDOCUB6oc5CKdhp
| 20,052
|
Implement tf big bird port
|
{
"login": "E-Aho",
"id": 46936677,
"node_id": "MDQ6VXNlcjQ2OTM2Njc3",
"avatar_url": "https://avatars.githubusercontent.com/u/46936677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/E-Aho",
"html_url": "https://github.com/E-Aho",
"followers_url": "https://api.github.com/users/E-Aho/followers",
"following_url": "https://api.github.com/users/E-Aho/following{/other_user}",
"gists_url": "https://api.github.com/users/E-Aho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/E-Aho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/E-Aho/subscriptions",
"organizations_url": "https://api.github.com/users/E-Aho/orgs",
"repos_url": "https://api.github.com/users/E-Aho/repos",
"events_url": "https://api.github.com/users/E-Aho/events{/privacy}",
"received_events_url": "https://api.github.com/users/E-Aho/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20052). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,667
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Solves #19430 by implementing BigBird in Tensorflow
I had an earlier demo PR up, but this one will be the main PR I use to hopefully get this merged in once it's all working :)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Tagging @Rocketknight1 as they were kind enough to help me with this already with an issue I was running into with outputting attention weights using strided slices, but would appreciate any inputs from anyone!
Most of the tests are working as expected now. I am still running into a couple of issues with two or three tests that I need to look into:
1) Issue with TFAutoModel in test not raising a `ValueError` in one of the tests for mismatching sizes (but the `TFAutoModelForSequenceClassification` is working in this test 🤔).
2) Issue persisting and loading (`save_load` and `keras_load` tests aren't working fully yet)
I'm going to try and work on this some more this weekend, if anyone has any insights I'd love to know :D
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20052/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20052",
"html_url": "https://github.com/huggingface/transformers/pull/20052",
"diff_url": "https://github.com/huggingface/transformers/pull/20052.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20052.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20051
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20051/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20051/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20051/events
|
https://github.com/huggingface/transformers/pull/20051
| 1,435,094,048
|
PR_kwDOCUB6oc5CKODG
| 20,051
|
Add new terms to the glossary
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
MEMBER
| null |
This PR adds some new terms related to computer vision and speech to the glossary, feel free to let me know if I'm missing any you think are important that would help users better understand the docs!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20051/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20051",
"html_url": "https://github.com/huggingface/transformers/pull/20051",
"diff_url": "https://github.com/huggingface/transformers/pull/20051.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20051.patch",
"merged_at": 1667846728000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20050
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20050/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20050/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20050/events
|
https://github.com/huggingface/transformers/pull/20050
| 1,435,064,797
|
PR_kwDOCUB6oc5CKHxO
| 20,050
|
Generate: TF contrastive search with XLA support
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"(@Rocketknight1 is off, so I'm merging this to not slow down the corresponding blog post, which will contain TF examples thanks to this PR :D In any case, have a quick look when you're back, to ensure we kill any bad pattern before v4.25 gets released!)"
] | 1,667
| 1,667
| 1,667
|
MEMBER
| null |
# What does this PR do?
Adds contrastive search to TF, with XLA support.
In essence, TF's contrastive search is very similar to PT's, adapted to the structure that is present in other TF XLA generation functions (i.e. has a dedicated function to loop over, a separate function to update `model_kwargs` when in XLA mode, ...). The most notable difference is how the best candidate token (and associated model variables) are gathered -- PT relies on slicing, which TF doesn't support, so a `tf.gather` alternative is used.
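The slicing-vs-gather difference described above can be illustrated with a framework-free sketch. Note this is a hypothetical toy illustration, not the actual TF implementation: the function name, the flat layout, and the example values are all made up here. The idea is that each batch item has `top_k` candidate continuations stacked along a flattened `(batch * top_k)` axis, and instead of per-item slicing, the best candidate is picked via computed flat indices, which is the pattern `tf.gather` supports under XLA.

```python
def gather_best_candidates(flat_values, best_idx, top_k):
    """flat_values has batch_size * top_k rows; pick one row per batch item."""
    batch_size = len(flat_values) // top_k
    # Compute the flat index of the chosen candidate for each batch item,
    # mimicking a gather with offsets rather than dynamic slicing.
    flat_indices = [i * top_k + best_idx[i] for i in range(batch_size)]
    return [flat_values[j] for j in flat_indices]

# Two batch items with top_k=3 candidate hidden states each (toy 2-dim vectors).
candidates = [
    [0.1, 0.2], [0.3, 0.4], [0.5, 0.6],  # batch item 0
    [1.1, 1.2], [1.3, 1.4], [1.5, 1.6],  # batch item 1
]
best = [2, 0]  # per-item argmax of the contrastive score (values invented here)
print(gather_best_candidates(candidates, best, top_k=3))
# -> [[0.5, 0.6], [1.1, 1.2]]
```

In the real TF code the same idea applies to the candidate tokens, hidden states, and cached `model_kwargs` simultaneously, keeping every shape static for XLA.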
The exact same integration tests (with the same input, model, and outputs) were added whenever possible. Three integration tests were not added, which will be addressed in a follow-up PR:
1. [GPT-J](https://github.com/huggingface/transformers/blob/d447c460b16626c656e4d7a9425f648fe69517b3/tests/models/gptj/test_modeling_gptj.py#L577) -- PT's test runs at half precision, for which we don't have the same TF facilities
2. [OPT](https://github.com/huggingface/transformers/blob/d447c460b16626c656e4d7a9425f648fe69517b3/tests/models/opt/test_modeling_opt.py#L495) -- OPT is not XLA compatible atm (it runs, but the position embeddings are wrong with padded structures, so we get different outputs)
3. [T5](https://github.com/huggingface/transformers/blob/d447c460b16626c656e4d7a9425f648fe69517b3/tests/models/t5/test_modeling_t5.py#L1227) -- the model used for this test does not have TF weights
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20050/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20050/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20050",
"html_url": "https://github.com/huggingface/transformers/pull/20050",
"diff_url": "https://github.com/huggingface/transformers/pull/20050.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20050.patch",
"merged_at": 1667818470000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20049
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20049/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20049/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20049/events
|
https://github.com/huggingface/transformers/pull/20049
| 1,435,052,403
|
PR_kwDOCUB6oc5CKFF9
| 20,049
|
updating the warmup_ratio from 0.1 to 0.2
|
{
"login": "Vaibhavs10",
"id": 18682411,
"node_id": "MDQ6VXNlcjE4NjgyNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/18682411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vaibhavs10",
"html_url": "https://github.com/Vaibhavs10",
"followers_url": "https://api.github.com/users/Vaibhavs10/followers",
"following_url": "https://api.github.com/users/Vaibhavs10/following{/other_user}",
"gists_url": "https://api.github.com/users/Vaibhavs10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vaibhavs10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vaibhavs10/subscriptions",
"organizations_url": "https://api.github.com/users/Vaibhavs10/orgs",
"repos_url": "https://api.github.com/users/Vaibhavs10/repos",
"events_url": "https://api.github.com/users/Vaibhavs10/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vaibhavs10/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@lvwerra - what should I do next?",
"Closing this - just used this to show how to make a PR for the transformers reading group! Sorry for the unnecessary pings :)",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20049). All of your documentation changes will be reflected on that endpoint.",
"Please use personal forks of the repo when doing demos in the future. In this PR you pinged directly 15 people and also added an unnecessary notification for all of those who watched the repo."
] | 1,667
| 1,667
| 1,667
|
MEMBER
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@lvwerra
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20049/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20049",
"html_url": "https://github.com/huggingface/transformers/pull/20049",
"diff_url": "https://github.com/huggingface/transformers/pull/20049.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20049.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20048
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20048/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20048/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20048/events
|
https://github.com/huggingface/transformers/pull/20048
| 1,435,050,738
|
PR_kwDOCUB6oc5CKEu5
| 20,048
|
PoolformerImageProcessor defaults to match previous FE
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
COLLABORATOR
| null |
# What does this PR do?
Output of PoolformerImageProcessor didn't exactly match previous feature extractor. Updates defaults and size calculation logic to match outputs.
Running:
```python
import torch
from transformers import PoolFormerFeatureExtractor, PoolFormerModel
from datasets import load_dataset

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

feature_extractor = PoolFormerFeatureExtractor.from_pretrained("sail/poolformer_s12")
model = PoolFormerModel.from_pretrained("sail/poolformer_s12")

inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

last_hidden_states = outputs.last_hidden_state
r = list(last_hidden_states.shape)
print(r)
```
Now outputs hidden states of shape `[1, 512, 7, 7]` - matching the output before #19796 was merged in.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20048/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20048",
"html_url": "https://github.com/huggingface/transformers/pull/20048",
"diff_url": "https://github.com/huggingface/transformers/pull/20048.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20048.patch",
"merged_at": 1667569978000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20047
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20047/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20047/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20047/events
|
https://github.com/huggingface/transformers/issues/20047
| 1,434,979,562
|
I_kwDOCUB6oc5ViAzq
| 20,047
|
pipeline("summarization") is extractive vs abstractive?
|
{
"login": "km5ar",
"id": 54015474,
"node_id": "MDQ6VXNlcjU0MDE1NDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/54015474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/km5ar",
"html_url": "https://github.com/km5ar",
"followers_url": "https://api.github.com/users/km5ar/followers",
"following_url": "https://api.github.com/users/km5ar/following{/other_user}",
"gists_url": "https://api.github.com/users/km5ar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/km5ar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/km5ar/subscriptions",
"organizations_url": "https://api.github.com/users/km5ar/orgs",
"repos_url": "https://api.github.com/users/km5ar/repos",
"events_url": "https://api.github.com/users/km5ar/events{/privacy}",
"received_events_url": "https://api.github.com/users/km5ar/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,667
| 1,670
| 1,670
|
NONE
| null |
### System Info
Is `pipeline("summarization")` extractive or abstractive?
```python
# use bart in pytorch
summarizer = pipeline("summarization")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)

# use t5 in tf
summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base", framework="tf")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
# use bart in pytorch
summarizer = pipeline("summarization")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)

# use t5 in tf
summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base", framework="tf")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)
```
### Expected behavior
Is `pipeline("summarization")` extractive or abstractive? There is no mention of this in the official documentation.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20047/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20046
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20046/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20046/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20046/events
|
https://github.com/huggingface/transformers/issues/20046
| 1,434,955,075
|
I_kwDOCUB6oc5Vh61D
| 20,046
|
Multilabel, multiclass models with >2 classes per label using BCELoss instead of CategoricalCrossentropy
|
{
"login": "imatiach-msft",
"id": 24683184,
"node_id": "MDQ6VXNlcjI0NjgzMTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/24683184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imatiach-msft",
"html_url": "https://github.com/imatiach-msft",
"followers_url": "https://api.github.com/users/imatiach-msft/followers",
"following_url": "https://api.github.com/users/imatiach-msft/following{/other_user}",
"gists_url": "https://api.github.com/users/imatiach-msft/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imatiach-msft/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imatiach-msft/subscriptions",
"organizations_url": "https://api.github.com/users/imatiach-msft/orgs",
"repos_url": "https://api.github.com/users/imatiach-msft/repos",
"events_url": "https://api.github.com/users/imatiach-msft/events{/privacy}",
"received_events_url": "https://api.github.com/users/imatiach-msft/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Also, just thinking about correctness, the model could possibly give both Dog and Cat as an answer when using BCEWithLogitsLoss since it treats them as independent labels during fine-tuning after one hot encoding. This seems wrong to me. Perhaps I am missing something here? We should just use multiple CategoricalCrossentropy losses per label here both for correctness and to make it easier for the user to specify the labels to the model.",
"You are mistaking what we call a \"multi-label\" problem. A multi-label problem means one input can have zero, one or multiple labels. For instance the model could give both Dog and Cat as an answer because there might be both in the input.\r\n\r\nFor a model that predicts several categories of outputs, you will probably need to write your own head.",
"I guess I'm perhaps getting confused by the graph from scikit-learn here:\r\n\r\nhttps://scikit-learn.org/stable/modules/multiclass.html?highlight=multilabel\r\n\r\nmultilabel classification is under sklearn.multioutput. Perhaps I've heard the terms multilabel and multioutput used interchangeably too many times and I'm getting confused by that. Also perhaps in tabular vs text contexts those terms may be used in slightly different ways.\r\nI see now that the intention isn't to support this scenario in huggingface, so perhaps I can close this issue. This would be a more advanced scenario/feature where there can be multiple outputs but some are grouped together. There are also hierarchical models that output some top-level label as one class and then under that more specific labels as another class, thinking of the https://huggingface.co/datasets/DeveloperOats/DBPedia_Classes dataset here -- which in the description says \"excellent benchmark for hierarchical multiclass/multilabel text classification\", but perhaps it should call it multioutput since you would want the model to output all l1/l2/l3 labels with one of the multiple classes for each. That is also similar to the hierarchical multilabel + multioutput + multiclass models some customers I have worked with recently used, although maybe if the model is supposed to output multiple labels like this where the labels are not independent \"multilabel\" is not the correct term then and only \"multioutput\" should be used.",
"I guess this issue is more of a feature request than a bug report to support \"multioutput\" instead of the current \"multilabel\" text classification scenario. But this feature might be so niche and specific that it's not really worth it to implement in huggingface as another text classification parameter, so I'll just close it here."
] | 1,667
| 1,667
| 1,667
|
NONE
| null |
### System Info
latest transformers
### Who can help?
@sgugger
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The PR here by @sgugger adds multilabel classification support with the BCEWithLogitsLoss:
https://github.com/huggingface/transformers/pull/14180/files
However, in the case where we have multiple labels, and the labels have multiple classes (or a mix of binary with multiple classes), it seems a bit strange to me to use BCEWithLogitsLoss and to force the user to one-hot-encode the labels with multiple classes during fine-tuning, which is the only way to make this work. Instead, we should in that case just use CategoricalCrossentropy like in this tutorial for each of the labels:
https://towardsdatascience.com/multi-label-multi-class-text-classification-with-bert-transformer-and-keras-c6355eccb63a
Perhaps I missed something, but it seems better to me to allow the user to specify the labels without reformatting them, and to specify multiple CategoricalCrossentropy losses, than to force them to reformat the labels into a binary one-hot encoding for all classes across multiple labels.
For example, if the user had a dataset like:
| Text | Animal | Sound |
|---|---|---|
| Likes to fetch | Dog | Woof |
| Likes to sleep | Cat | Meow |
| Likes to fly | Bird | Chirp |
My understanding is that currently we have to one-hot-encode the labels into the following to make it work with BCEWithLogitsLoss which is used in the implementation:
| Dog | Cat | Bird | Woof | Meow | Chirp |
|---|---|---|---|---|---|
| 1 | 0 | 0 | 1 | 0 | 0 |
| 0 | 1 | 0 | 0 | 1 | 0 |
| 0 | 0 | 1 | 0 | 0 | 1 |
I think it might be useful instead to just allow the user to pass in the labels directly and use CategoricalCrossentropy. Why is that currently not possible?
### Expected behavior
Allow the user to pass in the labels without creating a one-hot-encoded matrix for multiclass, multilabel scenario where there are >2 classes per label
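The one-hot flattening described above can be sketched as follows (hypothetical data and helper names, mirroring the Animal/Sound example; in the real workflow this matrix would be the target for `BCEWithLogitsLoss`):

```python
# Hypothetical label columns, one list per label, mirroring the table above.
ANIMALS = ["Dog", "Cat", "Bird"]
SOUNDS = ["Woof", "Meow", "Chirp"]

def one_hot_multi(animal: str, sound: str) -> list:
    """Concatenate a one-hot vector per label column into one target row."""
    row = [1.0 if a == animal else 0.0 for a in ANIMALS]
    row += [1.0 if s == sound else 0.0 for s in SOUNDS]
    return row

rows = [("Dog", "Woof"), ("Cat", "Meow"), ("Bird", "Chirp")]
targets = [one_hot_multi(a, s) for a, s in rows]
# targets[0] == [1.0, 0.0, 0.0, 1.0, 0.0, 0.0]
```

This makes explicit the reformatting burden the issue objects to: every row must be expanded across all classes of all labels before fine-tuning.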
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20046/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20045
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20045/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20045/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20045/events
|
https://github.com/huggingface/transformers/pull/20045
| 1,434,916,863
|
PR_kwDOCUB6oc5CJnzb
| 20,045
|
Fix ESM LM head test
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh Unfortunately I deleted and remade the repos, so there's no commit to point at! I'll try to do that in future though."
] | 1,667
| 1,667
| 1,667
|
MEMBER
| null |
The original ESM-2 checkpoints had a bug that meant the LM head bias was not saved correctly. Now that this has been fixed, we need to update our LM test as well.
cc @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20045/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20045",
"html_url": "https://github.com/huggingface/transformers/pull/20045",
"diff_url": "https://github.com/huggingface/transformers/pull/20045.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20045.patch",
"merged_at": 1667565934000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20044
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20044/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20044/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20044/events
|
https://github.com/huggingface/transformers/pull/20044
| 1,434,859,714
|
PR_kwDOCUB6oc5CJbmX
| 20,044
|
Allow passing arguments to model testers for CLIP-like models
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"`GroupViT`, `OwlViT`, `XCLIP`.\r\n\r\n`Flava` too, but with name `image_model_tester` instead of `vision_model_tester`.\r\n\r\n\r\n\r\n"
] | 1,667
| 1,667
| 1,667
|
COLLABORATOR
| null |
# What does this PR do?
This is a continuation of PR #19954, but for models like `CLIP`. Currently for such models, we have
```python
class CLIPModelTester:
def __init__(self, parent, is_training=True):
self.parent = parent
self.text_model_tester = CLIPTextModelTester(parent)
self.vision_model_tester = CLIPVisionModelTester(parent)
```
and there is no way to pass any argument to the 2 component testers.
If this POC is approved, I will work on other models like `GroupViT`, `OwlViT`, `XCLIP` etc.
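A minimal sketch of the proposed pattern (simplified stand-in classes, not the actual transformers testers): the composite tester accepts per-component keyword dicts and forwards them.

```python
# Stand-in component testers (hypothetical names and defaults).
class TextModelTester:
    def __init__(self, parent, vocab_size=99):
        self.parent = parent
        self.vocab_size = vocab_size

class VisionModelTester:
    def __init__(self, parent, image_size=30):
        self.parent = parent
        self.image_size = image_size

class CompositeModelTester:
    def __init__(self, parent, text_kwargs=None, vision_kwargs=None, is_training=True):
        # Forward per-component kwargs, keeping the old defaults when omitted.
        self.parent = parent
        self.text_model_tester = TextModelTester(parent, **(text_kwargs or {}))
        self.vision_model_tester = VisionModelTester(parent, **(vision_kwargs or {}))
        self.is_training = is_training

tester = CompositeModelTester(None, text_kwargs={"vocab_size": 32}, vision_kwargs={"image_size": 8})
# tester.text_model_tester.vocab_size == 32
```

Calling `CompositeModelTester(parent)` with no kwargs keeps the current behavior, so existing tests stay unchanged.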
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20044/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20044",
"html_url": "https://github.com/huggingface/transformers/pull/20044",
"diff_url": "https://github.com/huggingface/transformers/pull/20044.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20044.patch",
"merged_at": 1667581302000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20043
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20043/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20043/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20043/events
|
https://github.com/huggingface/transformers/pull/20043
| 1,434,858,577
|
PR_kwDOCUB6oc5CJbXA
| 20,043
|
Only resize embeddings when necessary
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
COLLABORATOR
| null |
# What does this PR do?
As seen in #19959, when using our examples with models whose embedding size is larger than the tokenizer's vocabulary for padding reasons (to make the embedding dim a multiple of a given number like 8 or 128), the fine-tuned models become incompatible with the original model.
This has confused a lot of users, all to support a full-pretraining example. This is why this PR proposes to only resize the embeddings when the tokenizer has more tokens than the model. This might in turn confuse users who do a full pretraining on a small vocab and expect the model to have a smaller embedding size, which is why a comment is added.
Fixes #19959
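The proposed behavior can be sketched as follows (hypothetical helper name, not the actual example-script code): embeddings only grow, never shrink.

```python
def maybe_resize_to(num_embedding_rows: int, tokenizer_vocab_size: int) -> int:
    """Return the embedding size to use: only grow, never shrink.

    Embedding matrices may be padded beyond the vocab (e.g. to a multiple
    of 8 or 128); shrinking them would make the fine-tuned checkpoint
    incompatible with the original model.
    """
    if tokenizer_vocab_size > num_embedding_rows:
        return tokenizer_vocab_size
    return num_embedding_rows

# Padded model (e.g. 32128 embedding rows vs 32100 tokens): keep the padding.
assert maybe_resize_to(32128, 32100) == 32128
# Tokenizer with added tokens: grow the embeddings.
assert maybe_resize_to(32128, 32200) == 32200
```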
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20043/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20043",
"html_url": "https://github.com/huggingface/transformers/pull/20043",
"diff_url": "https://github.com/huggingface/transformers/pull/20043.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20043.patch",
"merged_at": 1667491505000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20042
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20042/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20042/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20042/events
|
https://github.com/huggingface/transformers/pull/20042
| 1,434,816,110
|
PR_kwDOCUB6oc5CJSJD
| 20,042
|
Attempting to test automatically the `_keys_to_ignore`.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh the `splinter` test failing is normal ?\r\n```\r\nFAILED tests/models/splinter/test_modeling_splinter.py::SplinterModelTest::test_save_load_fast_init_from_base - AssertionError: 3069.73388671875 not less than or equal to 0.001 : splinter_qass.query_start_transform.dense.weight not identical\r\n```",
"@Narsil \r\n\r\nI am not able to reproduce the `splinter` test failure you mentioned above with current `main` on a GCP GPU VM. Could you provide more information about your environment and how you launched the test?",
"IT's this failure : https://app.circleci.com/jobs/github/huggingface/transformers/608396",
"@ydshieh This tests now fails in the CI tests/models/wav2vec2_conformer/test_modeling_wav2vec2_conformer.py::Wav2Vec2ConformerModelTest::test_save_load_fast_init_from_base\r\n\r\nHowever I can't seem to be able to reproduce locally ? Do you mind trying if it's my setup failing or the CI ?",
"Merging.\r\n\r\n**fingers crossed** :)",
"Hey! \r\nI am not sure if it is because of this PR but loading NLLB (that is affected by this PR) now gives:\r\n```\r\n│ /home/younes_huggingface_co/debug_issues/code/transformers/src/transformers/modeling_utils.py:24 │\r\n│ 59 in _load_pretrained_model │\r\n│ │\r\n│ 2456 │ │ │ for key in missing_keys: │\r\n│ 2457 │ │ │ │ if key.startswith(prefix): │\r\n│ 2458 │ │ │ │ │ key = \".\".join(key.split(\".\")[1:]) │\r\n│ ❱ 2459 │ │ │ │ param = model_state_dict[key] │\r\n│ 2460 │ │ │ │ if param.device == torch.device(\"meta\"): │\r\n│ 2461 │ │ │ │ │ if not load_in_8bit: │\r\n│ 2462 │ │ │ │ │ │ set_module_tensor_to_device(model, key, \"cpu\", torch.empty(*para │\r\n╰──────────────────────────────────────────────────────────────────────────────────────────────────╯\r\nKeyError: 'encoder.embed_positions.weights'\r\n```\r\nHere is the snippet to reproduce the error:\r\n```\r\nfrom transformers import AutoModelForSeq2SeqLM, AutoTokenizer\r\n\r\nsrc_lang = \"eng_Latn\"\r\ntgt_lang = \"spa_Latn\"\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/nllb-200-distilled-600M\", src_lang=src_lang)\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"facebook/nllb-200-distilled-600M\",\r\n device_map= \"auto\")\r\n```\r\nI did not followed entirely this PR but I will dig into that now and see what exactly caused the issue 💪 \r\n\r\ncc @Narsil @sgugger "
] | 1,667
| 1,669
| 1,668
|
CONTRIBUTOR
| null |
# What does this PR do?
This adds a new part of the `tied_weights` test that aims at automatically detecting
when `_keys_to_ignore` is incorrectly set.
`_keys_to_ignore` lists weights that are supposed to be tied in the
final model, meaning it's OK if a parameter is missing from the on-disk weights:
the weights are empty during the load, but they end up being tied afterwards,
so missing ones can safely be ignored during the load.
The test also aims to detect `_keys_to_ignore` that might have been set but
could be misleading because the parameters are actually NOT tied anymore.
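The idea can be sketched in pure Python (a simplified stand-in for the real torch-based test: weights count as tied when two state-dict entries reference the same underlying storage):

```python
# Two state-dict entries sharing one object stand in for tied tensors.
shared = [0.1, 0.2, 0.3]
state_dict = {"embed.weight": shared, "lm_head.weight": shared, "bias": [0.0]}

def is_actually_tied(state: dict, key: str) -> bool:
    """A key is tied if another entry references the same storage."""
    target = state[key]
    return any(k != key and v is target for k, v in state.items())

keys_to_ignore = ["lm_head.weight"]
# Flag _keys_to_ignore entries that are not really tied (misleading ones).
misleading = [k for k in keys_to_ignore if not is_actually_tied(state_dict, k)]
# misleading == []  -> lm_head.weight really is tied to embed.weight
```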
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20042/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20042",
"html_url": "https://github.com/huggingface/transformers/pull/20042",
"diff_url": "https://github.com/huggingface/transformers/pull/20042.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20042.patch",
"merged_at": 1668006216000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20041
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20041/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20041/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20041/events
|
https://github.com/huggingface/transformers/issues/20041
| 1,434,765,176
|
I_kwDOCUB6oc5VhMd4
| 20,041
|
woctezuma / stable-diffusion-colab
|
{
"login": "stromal",
"id": 19979901,
"node_id": "MDQ6VXNlcjE5OTc5OTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/19979901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stromal",
"html_url": "https://github.com/stromal",
"followers_url": "https://api.github.com/users/stromal/followers",
"following_url": "https://api.github.com/users/stromal/following{/other_user}",
"gists_url": "https://api.github.com/users/stromal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stromal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stromal/subscriptions",
"organizations_url": "https://api.github.com/users/stromal/orgs",
"repos_url": "https://api.github.com/users/stromal/repos",
"events_url": "https://api.github.com/users/stromal/events{/privacy}",
"received_events_url": "https://api.github.com/users/stromal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The error is that your token is not properly registered or does not grant access to the model. If you have accepted the terms of the license online, then it's probably a bug in `huggingface_hub` not setting your token properly, so you should report an issue in that repo :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,667
| 1,670
| 1,670
|
NONE
| null |
### System Info
Google Colab, Free version, GPU
### Who can help?
@NielsRogge, @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. https://github.com/woctezuma/stable-diffusion-colab
2. https://colab.research.google.com/github/woctezuma/stable-diffusion-colab/blob/main/stable_diffusion.ipynb#scrollTo=GR4vF2bw-sHR
3. copy create to drive
4. run 1st cell
5. run 2nd cell
6. copy my token from https://huggingface.co/settings/tokens
7. paste it into the field
8. press enter
9. #1st error - https://discuss.huggingface.co/t/invalid-token-passed/22711
10. at https://huggingface.co/settings/tokens, manage: invalidate and refresh
11. run 2nd cell again
12. copy and paste in new token
```
_| _| _| _| _|_|_| _|_|_| _|_|_| _| _| _|_|_| _|_|_|_| _|_| _|_|_| _|_|_|_|
_| _| _| _| _| _| _| _|_| _| _| _| _| _| _| _|
_|_|_|_| _| _| _| _|_| _| _|_| _| _| _| _| _| _|_| _|_|_| _|_|_|_| _| _|_|_|
_| _| _| _| _| _| _| _| _| _| _|_| _| _| _| _| _| _| _|
_| _| _|_| _|_|_| _|_|_| _|_|_| _| _| _|_|_| _| _| _| _|_|_| _|_|_|_|
To login, `huggingface_hub` now requires a token generated from https://huggingface.co/settings/tokens .
Token:
Login successful
Your token has been saved to /root/.huggingface/token
Authenticated through git-credential store but this isn't the helper defined on your machine.
You might have to re-authenticate when pushing to the Hugging Face Hub. Run the following command in your terminal in case you want to set this credential helper as the default
git config --global credential.helper store
```
13. I ran ```git config --global credential.helper store```, then I could rerun everything and move forward two cells
14. Cell CODE
```
import mediapy as media
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline
model_id = "CompVis/stable-diffusion-v1-4"
device = "cuda"
remove_safety = False
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16, revision="fp16", use_auth_token=True)
if remove_safety:
pipe.safety_checker = lambda images, clip_input: (images, False)
pipe = pipe.to(device)
```
15. ERROR
```
[/usr/local/lib/python3.7/dist-packages/requests/models.py](https://localhost:8080/#) in raise_for_status(self)
940 if http_error_msg:
--> 941 raise HTTPError(http_error_msg, response=self)
942
HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/fp16/model_index.json
The above exception was the direct cause of the following exception:
HfHubHTTPError Traceback (most recent call last)
[/usr/local/lib/python3.7/dist-packages/diffusers/configuration_utils.py](https://localhost:8080/#) in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
233 subfolder=subfolder,
--> 234 revision=revision,
235 )
[/usr/local/lib/python3.7/dist-packages/huggingface_hub/file_download.py](https://localhost:8080/#) in hf_hub_download(repo_id, filename, subfolder, repo_type, revision, library_name, library_version, cache_dir, user_agent, force_download, force_filename, proxies, etag_timeout, resume_download, use_auth_token, local_files_only, legacy_cache_layout)
1056 proxies=proxies,
-> 1057 timeout=etag_timeout,
1058 )
[/usr/local/lib/python3.7/dist-packages/huggingface_hub/file_download.py](https://localhost:8080/#) in get_hf_file_metadata(url, use_auth_token, proxies, timeout)
1358 )
-> 1359 hf_raise_for_status(r)
1360
[/usr/local/lib/python3.7/dist-packages/huggingface_hub/utils/_errors.py](https://localhost:8080/#) in hf_raise_for_status(response, endpoint_name)
253 # as well (request id and/or server error message)
--> 254 raise HfHubHTTPError(str(HTTPError), response=response) from e
255
HfHubHTTPError: <class 'requests.exceptions.HTTPError'> (Request ID: esduBFUm9KJXSxYhFffq4)
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
[<ipython-input-6-9b05f13f8bf3>](https://localhost:8080/#) in <module>
9
10
---> 11 pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16, revision="fp16", use_auth_token=True)
12 if remove_safety:
13 pipe.safety_checker = lambda images, clip_input: (images, False)
[/usr/local/lib/python3.7/dist-packages/diffusers/pipeline_utils.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
371 local_files_only=local_files_only,
372 use_auth_token=use_auth_token,
--> 373 revision=revision,
374 )
375 # make sure we only download sub-folders and `diffusers` filenames
[/usr/local/lib/python3.7/dist-packages/diffusers/configuration_utils.py](https://localhost:8080/#) in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
254 except HTTPError as err:
255 raise EnvironmentError(
--> 256 "There was a specific connection error when trying to load"
257 f" {pretrained_model_name_or_path}:\n{err}"
258 )
OSError: There was a specific connection error when trying to load CompVis/stable-diffusion-v1-4:
<class 'requests.exceptions.HTTPError'> (Request ID: esduBFUm9KJXSxYhFffq4)
```
### Expected behavior
All the cells run and generate photos, as shown in the GitHub project:
https://github.com/woctezuma/stable-diffusion-colab
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20041/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20040
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20040/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20040/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20040/events
|
https://github.com/huggingface/transformers/pull/20040
| 1,434,676,888
|
PR_kwDOCUB6oc5CI0Kx
| 20,040
|
`torch.finfo` issue with torch.fx
|
{
"login": "michaelbenayoun",
"id": 25418079,
"node_id": "MDQ6VXNlcjI1NDE4MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25418079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelbenayoun",
"html_url": "https://github.com/michaelbenayoun",
"followers_url": "https://api.github.com/users/michaelbenayoun/followers",
"following_url": "https://api.github.com/users/michaelbenayoun/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelbenayoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelbenayoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelbenayoun/subscriptions",
"organizations_url": "https://api.github.com/users/michaelbenayoun/orgs",
"repos_url": "https://api.github.com/users/michaelbenayoun/repos",
"events_url": "https://api.github.com/users/michaelbenayoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelbenayoun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
MEMBER
| null |
# What does this PR do?
This PR allows the tracing of `torch.finfo` which were added massively to model implementations recently.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20040/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20040",
"html_url": "https://github.com/huggingface/transformers/pull/20040",
"diff_url": "https://github.com/huggingface/transformers/pull/20040.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20040.patch",
"merged_at": 1667488484000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20039
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20039/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20039/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20039/events
|
https://github.com/huggingface/transformers/pull/20039
| 1,434,655,344
|
PR_kwDOCUB6oc5CIvex
| 20,039
|
[Doctest] Add configuration_camembert.py
|
{
"login": "Saad135",
"id": 22683922,
"node_id": "MDQ6VXNlcjIyNjgzOTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/22683922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saad135",
"html_url": "https://github.com/Saad135",
"followers_url": "https://api.github.com/users/Saad135/followers",
"following_url": "https://api.github.com/users/Saad135/following{/other_user}",
"gists_url": "https://api.github.com/users/Saad135/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saad135/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saad135/subscriptions",
"organizations_url": "https://api.github.com/users/Saad135/orgs",
"repos_url": "https://api.github.com/users/Saad135/repos",
"events_url": "https://api.github.com/users/Saad135/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saad135/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds configuration_camembert.py to utils/documentation_tests.txt
Based on https://github.com/huggingface/transformers/issues/19487
@ydshieh can you please have a look? thanks :D
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20039/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20039",
"html_url": "https://github.com/huggingface/transformers/pull/20039",
"diff_url": "https://github.com/huggingface/transformers/pull/20039.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20039.patch",
"merged_at": 1667483442000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20038
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20038/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20038/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20038/events
|
https://github.com/huggingface/transformers/issues/20038
| 1,434,506,296
|
I_kwDOCUB6oc5VgNQ4
| 20,038
|
Amazon Sagemaker deployment issue for FLAN-T5 model family
|
{
"login": "BalazsFeherUK",
"id": 116826921,
"node_id": "U_kgDOBvajKQ",
"avatar_url": "https://avatars.githubusercontent.com/u/116826921?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BalazsFeherUK",
"html_url": "https://github.com/BalazsFeherUK",
"followers_url": "https://api.github.com/users/BalazsFeherUK/followers",
"following_url": "https://api.github.com/users/BalazsFeherUK/following{/other_user}",
"gists_url": "https://api.github.com/users/BalazsFeherUK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BalazsFeherUK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BalazsFeherUK/subscriptions",
"organizations_url": "https://api.github.com/users/BalazsFeherUK/orgs",
"repos_url": "https://api.github.com/users/BalazsFeherUK/repos",
"events_url": "https://api.github.com/users/BalazsFeherUK/events{/privacy}",
"received_events_url": "https://api.github.com/users/BalazsFeherUK/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @philschmid ",
"Hello @BalazsFeherUK, \r\n\r\nIt seems that `T5-FLAN`/ `T5LayerFF` is not yet supported in `transformers==4.17.0`. You would need to update the transformers version to be able to use the model. You can check the forum on how you would do this: https://discuss.huggingface.co/t/deploying-open-ais-whisper-on-sagemaker/24761/9"
] | 1,667
| 1,667
| 1,667
|
NONE
| null |
### System Info
transformers_version='4.17.0',
pytorch_version='1.10.2',
py_version='py38',
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Using the deployment script for Amazon Sagemaker as described on the FLAN-T5 model cards (e.g. google/flan-t5-small):
```
from sagemaker.huggingface import HuggingFaceModel
import sagemaker
role = sagemaker.get_execution_role()
hub = {
'HF_MODEL_ID':'google/flan-t5-small',
'HF_TASK':'text2text-generation'
}
huggingface_model = HuggingFaceModel(
transformers_version='4.17.0',
pytorch_version='1.10.2',
py_version='py38',
env=hub,
role=role,
)
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.m5.xlarge' # ec2 instance type
)
predictor.predict({
'inputs': "The answer to the universe is"
})
```
I receive the following error:
```
ModelError Traceback (most recent call last)
<ipython-input-10-eb84f66e23d1> in <module>
25
26 predictor.predict({
---> 27 'inputs': "The answer to the universe is"
28 })
/opt/conda/lib/python3.7/site-packages/sagemaker/predictor.py in predict(self, data, initial_args, target_model, target_variant, inference_id)
159 data, initial_args, target_model, target_variant, inference_id
160 )
--> 161 response = self.sagemaker_session.sagemaker_runtime_client.invoke_endpoint(**request_args)
162 return self._handle_response(response)
163
/opt/conda/lib/python3.7/site-packages/botocore/client.py in _api_call(self, *args, **kwargs)
510 )
511 # The "self" in this scope is referring to the BaseClient.
--> 512 return self._make_api_call(operation_name, kwargs)
513
514 _api_call.__name__ = str(py_operation_name)
/opt/conda/lib/python3.7/site-packages/botocore/client.py in _make_api_call(self, operation_name, api_params)
917 error_code = parsed_response.get("Error", {}).get("Code")
918 error_class = self.exceptions.from_code(error_code)
--> 919 raise error_class(parsed_response, operation_name)
920 else:
921 return parsed_response
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{
"code": 400,
"type": "InternalServerException",
"message": "\u0027T5LayerFF\u0027 object has no attribute \u0027config\u0027"
}
```
### Expected behavior
The model should work when deployed from SageMaker Studio.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20038/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20037
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20037/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20037/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20037/events
|
https://github.com/huggingface/transformers/pull/20037
| 1,434,443,800
|
PR_kwDOCUB6oc5CIBxs
| 20,037
|
Add **kwargs to preprocess method
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Agreed - I'll add splitting up the kwargs on the TODO list! "
] | 1,667
| 1,667
| 1,667
|
COLLABORATOR
| null |
# What does this PR do?
Fixes failing doctests with (and real-life usage of) processors that contain two processing objects: an image processor plus a tokenizer/feature extractor.
When the processor is called, all kwargs are passed to both processing objects e.g. in CLIPProcessor: https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/processing_clip.py#L81-L85
Image processors therefore have to be able to accept arguments they will not use when called.
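The pattern above can be sketched with toy classes (hypothetical names, not the real `transformers` API): a combined processor forwards the same kwargs to both sub-objects, so each sub-object's entry point must tolerate arguments meant for the other.

```python
class ToyImageProcessor:
    def preprocess(self, images, do_resize=True, **kwargs):
        # `kwargs` may contain tokenizer-only arguments such as `padding`;
        # they are simply ignored here, which is the point of the fix.
        return [f"resized:{img}" if do_resize else img for img in images]


class ToyTokenizer:
    def __call__(self, text, padding=False, **kwargs):
        return {"input_ids": [len(t) for t in text], "padding": padding}


class ToyProcessor:
    """Forwards the same kwargs to both sub-processors, as CLIPProcessor does."""

    def __init__(self):
        self.image_processor = ToyImageProcessor()
        self.tokenizer = ToyTokenizer()

    def __call__(self, text=None, images=None, **kwargs):
        out = {}
        if text is not None:
            out.update(self.tokenizer(text, **kwargs))
        if images is not None:
            out["pixel_values"] = self.image_processor.preprocess(images, **kwargs)
        return out


proc = ToyProcessor()
# `padding` is a tokenizer argument, but it reaches the image processor too;
# without **kwargs on `preprocess`, this call would raise a TypeError.
result = proc(text=["hello"], images=["img0"], padding=True)
```

Without the `**kwargs` on `preprocess`, the shared-kwargs call style would raise a `TypeError` for any tokenizer-only argument.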
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20037/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20037",
"html_url": "https://github.com/huggingface/transformers/pull/20037",
"diff_url": "https://github.com/huggingface/transformers/pull/20037.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20037.patch",
"merged_at": 1667479910000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20036
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20036/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20036/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20036/events
|
https://github.com/huggingface/transformers/pull/20036
| 1,434,375,255
|
PR_kwDOCUB6oc5CHzKf
| 20,036
|
Fix some doctests after PR 15775
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Sorry, I always missed the last piece. Added the final commit to fix `docs/source/en/model_doc/speech_to_text.mdx`.\r\n\r\n\r\nhttps://github.com/huggingface/transformers/pull/20036/commits/e6c4bc5f45fee4a8d3997bc4c04896fac1c25284"
] | 1,667
| 1,667
| 1,667
|
COLLABORATOR
| null |
# What does this PR do?
After PR #15775, we need to either update some expected values or specify `skip_special_tokens=True`.
I am not very comfortable using `skip_special_tokens=True` for `PT_QUESTION_ANSWERING_SAMPLE` in `doc.py`, as it might break other tests. We will have to run the doctest manually to see if everything is fine.
(A lazy way is not to use this argument, but just to update the expected values)
#### update
I launched doctest [here](https://github.com/huggingface/transformers/actions/runs/3384706275). The tests with `ForQuestionAnswering` all pass, so we are good!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20036/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20036",
"html_url": "https://github.com/huggingface/transformers/pull/20036",
"diff_url": "https://github.com/huggingface/transformers/pull/20036.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20036.patch",
"merged_at": 1667481525000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20035
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20035/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20035/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20035/events
|
https://github.com/huggingface/transformers/issues/20035
| 1,434,348,899
|
I_kwDOCUB6oc5Vfm1j
| 20,035
|
Answer Mismatch: run squad_convert_examples_to_features with xlm-roberta
|
{
"login": "shutttttdown",
"id": 117346792,
"node_id": "U_kgDOBv6R6A",
"avatar_url": "https://avatars.githubusercontent.com/u/117346792?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shutttttdown",
"html_url": "https://github.com/shutttttdown",
"followers_url": "https://api.github.com/users/shutttttdown/followers",
"following_url": "https://api.github.com/users/shutttttdown/following{/other_user}",
"gists_url": "https://api.github.com/users/shutttttdown/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shutttttdown/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shutttttdown/subscriptions",
"organizations_url": "https://api.github.com/users/shutttttdown/orgs",
"repos_url": "https://api.github.com/users/shutttttdown/repos",
"events_url": "https://api.github.com/users/shutttttdown/events{/privacy}",
"received_events_url": "https://api.github.com/users/shutttttdown/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for the report. However, as you have probably seen when executing it, `squad_convert_examples_to_features` is deprecated, so it's not maintained anymore.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,667
| 1,670
| 1,670
|
NONE
| null |
### System Info
- `transformers` version: 4.5.1
- Platform: Linux-4.15.0-142-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
### Who can help?
@sgugger
@mfuntowicz
@aaugustin
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
### Overview
I am working on a QA task with the XQuAD dataset, a multilingual version of SQuAD in SQuAD format.
A problem occurs when I use the xlm-roberta-base tokenizer to preprocess xquad.zh.json (the Chinese version) following the standard process.
Specifically, I convert a list of examples into features with the [`squad_convert_examples_to_features`](https://github.com/huggingface/transformers/blob/main/src/transformers/data/processors/squad.py) function provided by Hugging Face, and I find that the answers of some original examples are inconsistent with their features.
I'll pick one example for demonstration.
### Codes
```python
import transformers
from transformers import (
AutoTokenizer,
squad_convert_examples_to_features,
)
from transformers.data.processors.squad import SquadResult, SquadV1Processor, SquadV2Processor
from torch.utils.data import DataLoader, RandomSampler
model_name_or_path = 'xlm-roberta-base'
tokenizer = AutoTokenizer.from_pretrained(
model_name_or_path,
do_lower_case=False,
cache_dir='./cache/',
use_fast=False,
)
processor = SquadV1Processor()
examples = processor.get_train_examples(None, filename='xquad.zh.json')
...
# I pick just one tmp_example from examples
...
print(tmp_example.question_text)
# '利用计算复杂性理论对计算问题进行分类的主要依据是什么?'
print(tmp_example.context_text)
# '计算复杂性理论是理论计算机科学中计算理论的一个分支,它侧重于根据计算问题的固有难度对其进行分类,并将这些类别相互关联起来。计算问题被理解为原则上可由计算机解决的任务,这相当于说明该问题可通过机械地应用数学步骤(例如算法)来解决。'
print(tmp_example.answer_text) # which should be the ground truth
# '固有难度'
# Initialize the features, dataset, dataloader following the standard process
features, dataset = squad_convert_examples_to_features(
examples=[tmp_example],
tokenizer=tokenizer,
max_seq_length=512,
doc_stride=128,
max_query_length=64,
is_training=True,
return_dataset="pt",
threads=4,
)
train_sampler = RandomSampler(dataset)
train_dataloader = DataLoader(dataset, sampler=train_sampler, batch_size=8)
for n,batch in enumerate(train_dataloader):
start_positions = batch[3]
end_positions = batch[4]
    if type(tokenizer).__name__ in ['XLMRobertaTokenizer']:
start_positions = start_positions + 1
end_positions = end_positions + 1
inputs = {
"input_ids": batch[0],
"attention_mask": batch[1],
"token_type_ids": batch[2],
"start_positions": start_positions,
"end_positions": end_positions,
}
    print(tokenizer.decode(inputs['input_ids'][0, start_positions[0]:end_positions[0]+1]))
# '计算复杂性理论是理论计算机科学中计算理论的一个分支,它侧重于根据计算问题的固有难度对其进行分类,并将这些类别相互关联起来。计算问题被理解为原则上可由计算机解决的任务,这相当于说明该问题可通过机械地应用数学步骤(例如算法)来解决。'
# the ground truth answer span should be '固有难度', however, the actual answer span input to the model is shown as above which is inconsistent with the ground truth.
```
### Expected behavior
The ground truth answer span should be
'固有难度',
however, the actual answer span input to the model is
'计算复杂性理论是理论计算机科学中计算理论的一个分支,它侧重于根据计算问题的固有难度对其进行分类,并将这些类别相互关联起来。计算问题被理解为原则上可由计算机解决的任务,这相当于说明该问题可通过机械地应用数学步骤(例如算法)来解决。'
which is inconsistent with the ground truth.
Similar problems have been found in many examples.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20035/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20034
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20034/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20034/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20034/events
|
https://github.com/huggingface/transformers/pull/20034
| 1,434,340,015
|
PR_kwDOCUB6oc5CHrmP
| 20,034
|
[Swin] Add Swin SimMIM checkpoints
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds 2 checkpoints for Swin Transformer pre-trained using the SimMIM objective (taken from [here](https://github.com/microsoft/Swin-Transformer/blob/main/MODELHUB.md#simmim-pretrained-swin-v1-models)).
They are on the hub: https://huggingface.co/models?other=simmim
It also fixes an important bug in `modeling_swin.py` regarding the window size not being set properly.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20034/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20034",
"html_url": "https://github.com/huggingface/transformers/pull/20034",
"diff_url": "https://github.com/huggingface/transformers/pull/20034.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20034.patch",
"merged_at": 1667572365000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20033
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20033/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20033/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20033/events
|
https://github.com/huggingface/transformers/pull/20033
| 1,434,189,995
|
PR_kwDOCUB6oc5CHLmN
| 20,033
|
Give `modeling_t5.py` a `_prune_heads`
|
{
"login": "CaffreyR",
"id": 84232793,
"node_id": "MDQ6VXNlcjg0MjMyNzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/84232793?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CaffreyR",
"html_url": "https://github.com/CaffreyR",
"followers_url": "https://api.github.com/users/CaffreyR/followers",
"following_url": "https://api.github.com/users/CaffreyR/following{/other_user}",
"gists_url": "https://api.github.com/users/CaffreyR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CaffreyR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CaffreyR/subscriptions",
"organizations_url": "https://api.github.com/users/CaffreyR/orgs",
"repos_url": "https://api.github.com/users/CaffreyR/repos",
"events_url": "https://api.github.com/users/CaffreyR/events{/privacy}",
"received_events_url": "https://api.github.com/users/CaffreyR/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20033). All of your documentation changes will be reflected on that endpoint."
] | 1,667
| 1,667
| 1,667
|
NONE
| null |
# What does this PR do?
Run CircleCI tests for #19975
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20033/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20033",
"html_url": "https://github.com/huggingface/transformers/pull/20033",
"diff_url": "https://github.com/huggingface/transformers/pull/20033.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20033.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20032
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20032/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20032/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20032/events
|
https://github.com/huggingface/transformers/issues/20032
| 1,434,184,506
|
I_kwDOCUB6oc5Ve-s6
| 20,032
|
Type annotation for `pipeline()`s
|
{
"login": "davidgilbertson",
"id": 4443482,
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidgilbertson",
"html_url": "https://github.com/davidgilbertson",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for opening the issue. As of now, we have decided not to add any type annotations that render the code less readable. So whenever they can be added with no cost in readability, we welcome PRs, but for something more complex like you are suggesting, we would probably be less interested.",
"That seems reasonable :)\r\n\r\nFor completeness, I'll note that the `@overload`s can go in a separate `.pyi` file, leaving the main application logic clean. But the complexity would still be there, and you'd have `.pyi` files everywhere, so this is not a great solution."
] | 1,667
| 1,667
| 1,667
|
NONE
| null |
### Feature request
Currently, the return types of many functions are not defined.
```py
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
tokenizer( # no type checking/help here
```
It would be nice if the return type were set automatically, equivalent to doing it manually with:
```py
tokenizer: DistilBertTokenizerFast = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
tokenizer( # now I get checking
```
### Motivation
The main benefits are probably pretty well understood...
* preventing typos in inputs that take strings (tasks, checkpoints, etc)
* autocomplete for methods/parameters in IDEs
* documentation tooltips in IDEs that support them.
### Your contribution
As you may already be aware, the pattern for enabling this is to use `Literal` and `@overload` from the `typing` module, to return a particular class based on the value of a string passed in.
A quick mock up:
```py
from typing import overload, Literal
class Model1:
def model_1_thing(self):
pass
class Model2:
def model_2_thing(self):
pass
@overload
def get_pipe(model: Literal["model_1"]) -> Model1:
pass
@overload
def get_pipe(model: Literal["model_2"]) -> Model2:
pass
def get_pipe(model):
if model == "model_1":
return Model1()
return Model2()
mod = get_pipe("model_1")
# `mod` is correctly identified as an instance of `Model1`
```
Now I not only get auto-complete when typing in the string:

But of course get the usual goodies when accessing attributes of the returned object:

To support older Python, I think [typing-extensions](https://pypi.org/project/typing-extensions/) would work, although I'm not certain.
The big question of course is whether the "developer experience" gains are worth the effort/complexity this would add to the codebase.
Apologies if this has already been discussed and decided on, I couldn't see an existing issue, but did sense a willingness to get types right from other issues.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20032/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20031
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20031/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20031/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20031/events
|
https://github.com/huggingface/transformers/issues/20031
| 1,434,070,429
|
I_kwDOCUB6oc5Vei2d
| 20,031
|
BigBird attention type switching
|
{
"login": "kvarekamp",
"id": 11934412,
"node_id": "MDQ6VXNlcjExOTM0NDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/11934412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kvarekamp",
"html_url": "https://github.com/kvarekamp",
"followers_url": "https://api.github.com/users/kvarekamp/followers",
"following_url": "https://api.github.com/users/kvarekamp/following{/other_user}",
"gists_url": "https://api.github.com/users/kvarekamp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kvarekamp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kvarekamp/subscriptions",
"organizations_url": "https://api.github.com/users/kvarekamp/orgs",
"repos_url": "https://api.github.com/users/kvarekamp/repos",
"events_url": "https://api.github.com/users/kvarekamp/events{/privacy}",
"received_events_url": "https://api.github.com/users/kvarekamp/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,667
| 1,675
| 1,675
|
NONE
| null |
### System Info
transformers 4.21.2 BigBirdModel
### Who can help?
@ydshieh
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Currently BigBird switches attention type from default 'block_sparse' to 'original_full' in the forward call if it encounters a batch that contains only sequences shorter than the minimum sequence length:
https://github.com/huggingface/transformers/blob/a2a3afbc8d26d6170909365ffba6bd75e186255f/src/transformers/models/big_bird/modeling_big_bird.py#L2064
However, it never switches back. This means that the exact same (long) sequence can be encoded differently depending on whether it was preceded by a batch containing only short sequences or not.
### Expected behavior
It should probably switch back to `block_sparse` when possible as well.
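A rough sketch of the symmetric switching logic being requested. The class, method, and threshold here (`SparseAttentionModel`, `forward`, `min_seq_len`) are illustrative only, not the actual BigBird modeling code or its real sequence-length cutoff:

```python
class SparseAttentionModel:
    def __init__(self, block_size=64, num_random_blocks=3,
                 preferred_attention_type="block_sparse"):
        # remember what the user configured, so we can restore it later
        self.preferred_attention_type = preferred_attention_type
        self.attention_type = preferred_attention_type
        # below this many tokens, block-sparse attention degenerates
        self.min_seq_len = block_size * (num_random_blocks + 5)

    def forward(self, seq_len):
        if seq_len <= self.min_seq_len:
            self.attention_type = "original_full"
        elif self.preferred_attention_type == "block_sparse":
            # switch back instead of staying on full attention forever
            self.attention_type = "block_sparse"
        return self.attention_type

model = SparseAttentionModel()
print(model.forward(32))    # short batch  -> "original_full"
print(model.forward(4096))  # long batch   -> back to "block_sparse"
```

With symmetric switching, the same long sequence is encoded identically regardless of whether a short-sequence batch was processed before it.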
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20031/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20030
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20030/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20030/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20030/events
|
https://github.com/huggingface/transformers/pull/20030
| 1,433,814,129
|
PR_kwDOCUB6oc5CF7U3
| 20,030
|
Now supporting pathlike in pipelines too.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/20024
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20030/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20030",
"html_url": "https://github.com/huggingface/transformers/pull/20030",
"diff_url": "https://github.com/huggingface/transformers/pull/20030.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20030.patch",
"merged_at": 1667463285000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20029
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20029/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20029/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20029/events
|
https://github.com/huggingface/transformers/issues/20029
| 1,433,792,486
|
I_kwDOCUB6oc5Vde_m
| 20,029
|
Transformers scheduler doesn't alter LR of added param group after model unfreeze
|
{
"login": "rbracco",
"id": 47190785,
"node_id": "MDQ6VXNlcjQ3MTkwNzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/47190785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rbracco",
"html_url": "https://github.com/rbracco",
"followers_url": "https://api.github.com/users/rbracco/followers",
"following_url": "https://api.github.com/users/rbracco/following{/other_user}",
"gists_url": "https://api.github.com/users/rbracco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rbracco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rbracco/subscriptions",
"organizations_url": "https://api.github.com/users/rbracco/orgs",
"repos_url": "https://api.github.com/users/rbracco/repos",
"events_url": "https://api.github.com/users/rbracco/events{/privacy}",
"received_events_url": "https://api.github.com/users/rbracco/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The Trainer by itself does not support several parameter groups, you will need to subclass and overwrite the methods that create schedulers/optimizers.",
"Thank you! Closing for now, but I'll reopen if there is any further issue. "
] | 1,667
| 1,667
| 1,667
|
NONE
| null |
### System Info
python 3.8.10 on ubuntu 20.04
pytorch 1.12.1
transformers 4.20.1
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Add a transformers scheduler, such as `transformers.get_linear_schedule_with_warmup` to any training run that begins with a frozen model and unfreezes.
2. Monitor the learning rate: LRs for param groups that are unfrozen after the 0th epoch are constant and unaffected by the scheduler
Sorry I don't have a better reproduction section. I mainly use transformers as a dependency of another library, and I tried making a reproducible script [here on Colab](https://colab.research.google.com/drive/1LXkIUP_vcVV3Vmc0sLInpIUEWU2H8K6P?usp=sharing), but Colab is crashing due to one of the imports.
### Expected behavior
I believe this is a bug, but it could be expected behavior. I would expect the unfrozen param groups (param group 2) to also be controlled by the scheduler, as they are when using PyTorch schedulers such as `torch.optim.lr_scheduler.LinearLR`.
### LR of param groups after unfreeze when using `torch.optim.lr_scheduler.LinearLR`.

### LR of param groups after unfreeze when using `transformers.get_linear_schedule_with_warmup`.

### LR of param groups after unfreeze when using `transformers.get_cosine_schedule_with_warmup`

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20029/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20028
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20028/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20028/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20028/events
|
https://github.com/huggingface/transformers/pull/20028
| 1,433,667,554
|
PR_kwDOCUB6oc5CFbxd
| 20,028
|
Update esmfold conversion script
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
MEMBER
| null |
This update fixes the ESM checkpoint conversion script to work for ESMFold and fixes a bug in the example for the `EsmForProteinFolding` class.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20028/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20028",
"html_url": "https://github.com/huggingface/transformers/pull/20028",
"diff_url": "https://github.com/huggingface/transformers/pull/20028.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20028.patch",
"merged_at": 1667487486000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20027
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20027/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20027/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20027/events
|
https://github.com/huggingface/transformers/pull/20027
| 1,433,624,816
|
PR_kwDOCUB6oc5CFSbl
| 20,027
|
Document BLOOM lm_logits original training behavior
|
{
"login": "shijie-wu",
"id": 2987758,
"node_id": "MDQ6VXNlcjI5ODc3NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2987758?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shijie-wu",
"html_url": "https://github.com/shijie-wu",
"followers_url": "https://api.github.com/users/shijie-wu/followers",
"following_url": "https://api.github.com/users/shijie-wu/following{/other_user}",
"gists_url": "https://api.github.com/users/shijie-wu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shijie-wu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shijie-wu/subscriptions",
"organizations_url": "https://api.github.com/users/shijie-wu/orgs",
"repos_url": "https://api.github.com/users/shijie-wu/repos",
"events_url": "https://api.github.com/users/shijie-wu/events{/privacy}",
"received_events_url": "https://api.github.com/users/shijie-wu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20027). All of your documentation changes will be reflected on that endpoint.",
"cc @thomasw21 and @younesbelkada but I don't think this is necessary as bfloat16 is more numerically stable than float16?",
"we noticed considerable performance difference between softmax in bf16 and softmax in fp32 in an internal bloom-based model. since during training the softmax is conducted in fp32, this change better reflects the model behavior in megatron-deepspeed.",
"@shijie-wu can you please share a bit more about how you run the model? ",
"We redid the experiment regarding fp32 in SA. It seems like the gap we observed earlier is caused by some other issues. I have removed that part of the PR. After discussion with @thomasw21, instead of enforcing a type conversion, this PR will instead document the original behavior so that advanced users could recover it.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,667
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
~Compute softmax of Bloom in fp32 during half precision~
Document BLOOM lm_logits original training behavior
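For context on the original motivation (softmax in reduced precision vs fp32), a pure-Python illustration of the rounding involved, using `struct`'s half-precision format. This mimics precision loss only; it is not the Megatron-DeepSpeed or transformers code:

```python
import math
import struct

def as_fp16(x: float) -> float:
    """Round a Python float through IEEE 754 half precision."""
    return struct.unpack('e', struct.pack('e', x))[0]

def softmax(logits, cast=lambda v: v):
    # numerically stable softmax; `cast` rounds every intermediate value
    m = max(logits)
    exps = [cast(math.exp(cast(v - m))) for v in logits]
    total = cast(sum(exps))
    return [cast(e / total) for e in exps]

logits = [10.0, 10.001, 9.999]
half = softmax(logits, cast=as_fp16)  # every intermediate rounded to fp16
full = softmax(logits)                # full-precision reference
print(half)
print(full)
```

The reduced-precision result drifts away from the full-precision reference, which is why the original training pipeline upcasts before the softmax; this PR documents that behavior instead of enforcing it.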
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@thomasw21 @stas00 @TevenLeScao
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20027/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20027",
"html_url": "https://github.com/huggingface/transformers/pull/20027",
"diff_url": "https://github.com/huggingface/transformers/pull/20027.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20027.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20026
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20026/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20026/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20026/events
|
https://github.com/huggingface/transformers/pull/20026
| 1,433,610,429
|
PR_kwDOCUB6oc5CFPQP
| 20,026
|
Show installed libraries and their versions in CI jobs
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
COLLABORATOR
| null |
# What does this PR do?
Whenever there is a need to check
- if the versions of installed libraries change
- and/or find out which ones change
it is not super easy to get this information.
This PR adds `pip freeze | tee installed.txt` to
- show the results
- save to a file
- and upload as an artifact.
It makes access to this information easier, and potentially makes the process of **getting the difference** between previous/current runs easier too.
Example run job (and the artifact): [here](https://app.circleci.com/pipelines/github/huggingface/transformers/50778/workflows/cf542f91-cc42-4942-bac8-100436555dda/jobs/606945)
I plan to do the same for GH actions jobs, but maybe in another PR :-)
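Getting the difference between two runs then becomes a one-liner. A sketch, where the two file names are illustrative stand-ins for `installed.txt` artifacts downloaded from each CI run:

```shell
# stand-in for the installed.txt artifact of the previous run
printf 'torch==1.12.1\ntransformers==4.23.0\n' > prev_installed.txt
# stand-in for the installed.txt artifact of the current run
printf 'torch==1.13.0\ntransformers==4.23.0\n' > curr_installed.txt

# '<' lines were in the previous run, '>' lines are in the current run
diff prev_installed.txt curr_installed.txt || true
```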
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20026/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20026",
"html_url": "https://github.com/huggingface/transformers/pull/20026",
"diff_url": "https://github.com/huggingface/transformers/pull/20026.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20026.patch",
"merged_at": 1667418760000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20025
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20025/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20025/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20025/events
|
https://github.com/huggingface/transformers/issues/20025
| 1,433,609,896
|
I_kwDOCUB6oc5Vcyao
| 20,025
|
T5 should not use teacher-forcing when under evaluation
|
{
"login": "PartiallyTyped",
"id": 52372765,
"node_id": "MDQ6VXNlcjUyMzcyNzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/52372765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PartiallyTyped",
"html_url": "https://github.com/PartiallyTyped",
"followers_url": "https://api.github.com/users/PartiallyTyped/followers",
"following_url": "https://api.github.com/users/PartiallyTyped/following{/other_user}",
"gists_url": "https://api.github.com/users/PartiallyTyped/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PartiallyTyped/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PartiallyTyped/subscriptions",
"organizations_url": "https://api.github.com/users/PartiallyTyped/orgs",
"repos_url": "https://api.github.com/users/PartiallyTyped/repos",
"events_url": "https://api.github.com/users/PartiallyTyped/events{/privacy}",
"received_events_url": "https://api.github.com/users/PartiallyTyped/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for discussions like this one as we keep issues for bugs and feature requests in the library.",
"Then perhaps the ability to turn off teacher-forcing could be listed as a feature for the model?\r\n\r\nI don’t see how using teacher forcing in an auto regressive model during evaluation is not a bug.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,667
| 1,670
| 1,670
|
NONE
| null |
### System Info
- `transformers` version: 4.23.1
- Platform: Linux-6.0.2-76060002-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@patrickvonplaten
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behaviour:
1. Find a task where the same token in the target sequence is repeated multiple times, e.g. IOB tagging.
2. Train a T5 model on the task.
3. Evaluate by predicting.
4. Evaluate using `generate`, giving the same `input_ids`.
### Expected behavior
The prediction task during evaluation should produce results identical to `generate` with `num_beams=1`, i.e. greedy decoding.
Due to the nature of the task above, the target sequence has the form `((a|b|c){n})+`, i.e. runs of repeated target ids such as `000011110000`. Because of teacher forcing, the model can learn to simply repeat the previous token and quickly minimize the loss, except at the `01` and `10` boundaries.
This is not visible when evaluating the model (e.g. via `.eval()`): thanks to teacher forcing, the model appears to give excellent results. However, when using `.generate` with the same inputs, the model performs very poorly.
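To illustrate the gap with a toy sketch (not T5 itself — the "model" here is a hypothetical stand-in that has only learned the repetition shortcut): under teacher forcing it looks accurate, while free-running greedy decoding on its own outputs degrades.

```python
def copy_model(prev_token):
    # Toy "model" that has learned the repetition shortcut:
    # always predict the previous token.
    return prev_token

target = "000011110000"

# Teacher forcing: each step conditions on the *gold* previous token.
tf_preds = [copy_model(target[i - 1]) for i in range(1, len(target))]
tf_acc = sum(p == t for p, t in zip(tf_preds, target[1:])) / len(tf_preds)

# Free running (greedy decoding): each step conditions on the model's
# *own* previous output, so errors compound.
fr_preds = []
prev = target[0]
for _ in range(1, len(target)):
    prev = copy_model(prev)
    fr_preds.append(prev)
fr_acc = sum(p == t for p, t in zip(fr_preds, target[1:])) / len(fr_preds)

print(f"teacher-forced accuracy: {tf_acc:.2f}, free-running accuracy: {fr_acc:.2f}")
```

The teacher-forced score is only wrong at the two run boundaries, while free running collapses to a single repeated token.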
This is similar to
#12488
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20025/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20024
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20024/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20024/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20024/events
|
https://github.com/huggingface/transformers/issues/20024
| 1,433,559,955
|
I_kwDOCUB6oc5VcmOT
| 20,024
|
`Pathlike` objects are treated as `AutoModel` objects in `pipeline` initialization
|
{
"login": "harsh8398",
"id": 20420308,
"node_id": "MDQ6VXNlcjIwNDIwMzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/20420308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harsh8398",
"html_url": "https://github.com/harsh8398",
"followers_url": "https://api.github.com/users/harsh8398/followers",
"following_url": "https://api.github.com/users/harsh8398/following{/other_user}",
"gists_url": "https://api.github.com/users/harsh8398/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harsh8398/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harsh8398/subscriptions",
"organizations_url": "https://api.github.com/users/harsh8398/orgs",
"repos_url": "https://api.github.com/users/harsh8398/repos",
"events_url": "https://api.github.com/users/harsh8398/events{/privacy}",
"received_events_url": "https://api.github.com/users/harsh8398/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Excellent suggestion! \r\n\r\nI opened a PR https://github.com/huggingface/transformers/pull/20030"
] | 1,667
| 1,667
| 1,667
|
NONE
| null |
### System Info
- `transformers` version: 4.22.2
- Platform: macOS-13.0-arm64-arm-64bit
- Python version: 3.9.13
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behaviour:
1. Run the following code snippet
```python
from transformers import pipeline
from pathlib import Path
# store pipeline locally
gen = pipeline("image-classification", "google/vit-base-patch16-224")
gen.save_pretrained(Path("./models") / "google/vit-base-patch16-224")
# load pipeline locally
new_gen = pipeline(
"image-classification", Path("./models") / "google/vit-base-patch16-224"
)
```
Loading pipeline fails with following error:
<img width="1010" alt="image" src="https://user-images.githubusercontent.com/20420308/199565245-190f8291-d691-48f4-bd43-3a4500bc225d.png">
This works fine if I pass a string path:
```python
new_gen = pipeline(
"image-classification", str(Path("./models") / "google/vit-base-patch16-224")
)
```
### Expected behavior
The pipeline model argument should check for Pathlike objects and not treat them the same as AutoModel instances.
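A minimal sketch of the requested check (the helper name is hypothetical, not the actual `transformers` implementation): normalize any `os.PathLike` to a plain string before deciding whether `model` is a checkpoint path or an already-instantiated model object.

```python
import os
from pathlib import Path

def normalize_model_arg(model):
    # Hypothetical fix sketch: convert PathLike objects to plain strings
    # *before* the isinstance(model, str) branch in pipeline().
    if isinstance(model, os.PathLike):
        return os.fspath(model)
    return model

local_path = Path("./models") / "google/vit-base-patch16-224"
print(normalize_model_arg(local_path))  # a plain str, usable as a checkpoint path
```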
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20024/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20023
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20023/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20023/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20023/events
|
https://github.com/huggingface/transformers/pull/20023
| 1,433,520,618
|
PR_kwDOCUB6oc5CE71e
| 20,023
|
Fix doctest
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for fixing!",
"Thanks @ydshieh! Sorry for breaking our streak of 6 days of no failures! ",
"Oh, I haven't request yet and you already approved! Thanks a lot, so impressive your speed of response!",
"\r\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"I am so bad in doctest ... What we need here is actually `# doctest: +IGNORE_RESULT`. Sorry."
] | 1,667
| 1,667
| 1,667
|
COLLABORATOR
| null |
# What does this PR do?
We just need `# doctest: +IGNORE_RESULT` after `>>> dataset = load_dataset` as usual
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20023/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20023",
"html_url": "https://github.com/huggingface/transformers/pull/20023",
"diff_url": "https://github.com/huggingface/transformers/pull/20023.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20023.patch",
"merged_at": 1667414245000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20022
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20022/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20022/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20022/events
|
https://github.com/huggingface/transformers/pull/20022
| 1,433,496,612
|
PR_kwDOCUB6oc5CE2qd
| 20,022
|
[Audio Processor] Only pass sr to feat extractor
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Currently, I've only applied the change to the Wav2Vec2 Processor - once we're happy with the change I'll copy it to all audio processor classes. I've only done it to this one first to make the review easier!",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
The audio processor is composed of two components:
1. Feature extractor (input audio -> normalised audio)
2. Tokenizer (target text-> label ids)
Of these two components, the `audio` inputs and `sampling_rate` are arguments that are applicable to the feature extractor only. The `text` is applicable to the tokenizer only.
Currently, we only isolate the `audio` for the feature extractor and `text` for the tokenizer. However, the `sampling_rate` is passed to **both** the feature extractor and tokenizer. Thus, we get a warning for an unrecognized keyword argument in the tokenizer:
```python
from transformers import Wav2Vec2Processor
import numpy as np
audio = np.ones((2, 1000))
text = ['the cat', 'sat on']
processor = Wav2Vec2Processor.from_pretrained('facebook/wav2vec2-base')
out = processor(audio, sampling_rate=16000, text=text)
```
```
Keyword arguments {'sampling_rate': 16000} not recognized.
```
This PR splits the `sampling_rate` from the kwargs and passes it only to the feature extractor.
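The split can be sketched in isolation (plain Python with hypothetical names — not the actual processor code):

```python
def split_processor_kwargs(**kwargs):
    # Sketch of the PR's idea: pop sampling_rate out of the shared kwargs
    # so that only the feature extractor receives it.
    sampling_rate = kwargs.pop("sampling_rate", None)
    feature_extractor_kwargs = dict(kwargs)
    if sampling_rate is not None:
        feature_extractor_kwargs["sampling_rate"] = sampling_rate
    tokenizer_kwargs = dict(kwargs)  # the tokenizer never sees sampling_rate
    return feature_extractor_kwargs, tokenizer_kwargs

fe_kwargs, tok_kwargs = split_processor_kwargs(sampling_rate=16000, padding=True)
print(fe_kwargs, tok_kwargs)
```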
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20022/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20022/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20022",
"html_url": "https://github.com/huggingface/transformers/pull/20022",
"diff_url": "https://github.com/huggingface/transformers/pull/20022.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20022.patch",
"merged_at": 1667897943000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20021
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20021/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20021/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20021/events
|
https://github.com/huggingface/transformers/pull/20021
| 1,433,303,438
|
PR_kwDOCUB6oc5CEMpL
| 20,021
|
Update auto processor to check image processor created
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
COLLABORATOR
| null |
# What does this PR do?
Fixes a failing test which checked that a feature extractor was loaded. It is now updated to reflect that an image processor is loaded instead.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20021/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20021",
"html_url": "https://github.com/huggingface/transformers/pull/20021",
"diff_url": "https://github.com/huggingface/transformers/pull/20021.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20021.patch",
"merged_at": 1667402373000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20020
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20020/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20020/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20020/events
|
https://github.com/huggingface/transformers/issues/20020
| 1,433,302,225
|
I_kwDOCUB6oc5VbnTR
| 20,020
|
When using GPT2,CPU usage is high
|
{
"login": "mazzzystar",
"id": 6824141,
"node_id": "MDQ6VXNlcjY4MjQxNDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6824141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mazzzystar",
"html_url": "https://github.com/mazzzystar",
"followers_url": "https://api.github.com/users/mazzzystar/followers",
"following_url": "https://api.github.com/users/mazzzystar/following{/other_user}",
"gists_url": "https://api.github.com/users/mazzzystar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mazzzystar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mazzzystar/subscriptions",
"organizations_url": "https://api.github.com/users/mazzzystar/orgs",
"repos_url": "https://api.github.com/users/mazzzystar/repos",
"events_url": "https://api.github.com/users/mazzzystar/events{/privacy}",
"received_events_url": "https://api.github.com/users/mazzzystar/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep the issues for bugs and feature requests only.",
"Try\r\n\r\n```python\r\ngpt2_pipe = pipeline('text-generation', model='XXX', tokenizer='gpt2', device=0) # or `cuda:0`\r\n```"
] | 1,667
| 1,667
| 1,667
|
NONE
| null |
### System Info
transformers: 4.23.1
### Who can help?
Models:
- GPT-2 @patil-suraj, @patrickvonplaten, @LysandreJik
Library
- Pipelines @Narsil
### Reproduction
Question: when using GPT-2 as follows:
```python
from transformers import pipeline
gpt2_pipe = pipeline('text-generation', model='XXX', tokenizer='gpt2')
starting_text = "a young boy"
response = gpt2_pipe(starting_text, max_length=60, num_return_sequences=1)
```
The CPU usage stays above 90% for a few seconds, and generation is also slow.
However, when I manually change the `transformers` package in:
https://github.com/huggingface/transformers/blob/49b77b89ea1e89a9940f2b84da1bcc0696ecb07a/src/transformers/pipelines/text_generation.py#L229
to
```python
self.model = self.model.to('cuda')
input_ids = input_ids.to('cuda')
attention_mask = attention_mask.to('cuda')
generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
```
i.e. moving the model and input tensors to CUDA, the CPU usage is much lower and generation is faster.
However, I could not find a clean way to use CUDA with the GPT-2 pipeline, the way `sd_pipeline` only needs a `.to('cuda')`.
### Expected behavior
Can anyone give some advice? Editing the `transformers` package source code is not an elegant way to fix the problem.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20020/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20019
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20019/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20019/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20019/events
|
https://github.com/huggingface/transformers/issues/20019
| 1,433,207,190
|
I_kwDOCUB6oc5VbQGW
| 20,019
|
ConnectionError when downloading weights
|
{
"login": "LivC93",
"id": 97181619,
"node_id": "U_kgDOBcrfsw",
"avatar_url": "https://avatars.githubusercontent.com/u/97181619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LivC93",
"html_url": "https://github.com/LivC93",
"followers_url": "https://api.github.com/users/LivC93/followers",
"following_url": "https://api.github.com/users/LivC93/following{/other_user}",
"gists_url": "https://api.github.com/users/LivC93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LivC93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LivC93/subscriptions",
"organizations_url": "https://api.github.com/users/LivC93/orgs",
"repos_url": "https://api.github.com/users/LivC93/repos",
"events_url": "https://api.github.com/users/LivC93/events{/privacy}",
"received_events_url": "https://api.github.com/users/LivC93/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Looks like it's a connection issue on your end. There is no special credentials needed to load the model :-)",
"> Looks like it's a connection issue on your end. There is no special credentials needed to load the model :-)\r\n\r\n@sgugger thanks for the fast answer. if I write like this:\r\n\r\n`model = EsmForProteinFolding.from_pretrained(\"https://dl.fbaipublicfiles.com/fair-esm/models/esmfold_3B_v1.pt\")`\r\n\r\nIt works just fine I get no errors, but I get the following warning\r\n\r\n```\r\nUserWarning: Using `from_pretrained` with the url of a file (here https://dl.fbaipublicfiles.com/fair-esm/models/esmfold_3B_v1.pt) is deprecated and won't be possible anymore in v5 of Transformers. You should host your file on the Hub (hf.co) instead and use the repository ID. Note that this is not compatible with the caching system (your file will be downloaded at each execution) or multiple processes (each process will download the file in a different temporary file).\r\n f\"Using `from_pretrained` with the url of a file (here {url}) is deprecated and won't be possible anymore in\"\r\n```",
"Yes, you should use the repo ID from the Hub.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"adding git pull to the user.bat file fixed the issue for me, turns out automatic 1111 was out of date ( incase anyone else having same issue and sees this ) \r\n"
] | 1,667
| 1,673
| 1,670
|
NONE
| null |
### System Info
transformers.__version__: 4.24.0
python: 3.7.13
OS: Ubuntu 22.04.1 LTS
conda 4.12.0
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer, EsmForProteinFolding
model = EsmForProteinFolding.from_pretrained("facebook/esmfold_v1")
```
Error:
```
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/urllib3/response.py", line 443, in _error_catcher
yield
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/urllib3/response.py", line 566, in read
data = self._fp_read(amt) if not fp_closed else b""
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/urllib3/response.py", line 524, in _fp_read
data = self._fp.read(chunk_amt)
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/http/client.py", line 465, in read
n = self.readinto(b)
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/http/client.py", line 509, in readinto
n = self.fp.readinto(b)
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/socket.py", line 589, in readinto
return self._sock.recv_into(b)
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/ssl.py", line 1071, in recv_into
return self.read(nbytes, buffer)
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/ssl.py", line 929, in read
return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/requests/models.py", line 816, in generate
yield from self.raw.stream(chunk_size, decode_content=True)
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/urllib3/response.py", line 627, in stream
data = self.read(amt=amt, decode_content=decode_content)
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/urllib3/response.py", line 592, in read
raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/urllib3/response.py", line 448, in _error_catcher
raise ReadTimeoutError(self._pool, None, "Read timed out.")
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Read timed out.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "get_esm.py", line 5, in <module>
model = EsmForProteinFolding.from_pretrained("facebook/esmfold_v1")
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/transformers/modeling_utils.py", line 2091, in from_pretrained
resolved_archive_file = cached_file(pretrained_model_name_or_path, filename, **cached_file_kwargs)
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/transformers/utils/hub.py", line 420, in cached_file
local_files_only=local_files_only,
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/huggingface_hub/file_download.py", line 1231, in hf_hub_download
headers=headers,
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/huggingface_hub/file_download.py", line 490, in http_get
for chunk in r.iter_content(chunk_size=1024):
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/requests/models.py", line 822, in generate
raise ConnectionError(e)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Read timed out.
```
### Expected behavior
The weights start downloading, and at around 8-10% I get the above error. The problem goes away if I use my university VPN.
Do I need special credentials to use ESMFold? Why would it work over VPN but not directly?
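For flaky connections, a generic retry wrapper (a sketch, not part of `transformers`) can work around transient read timeouts; since the Hub cache keeps partially downloaded files, retries may make progress rather than restart from zero:

```python
import time

def download_with_retries(fetch, max_retries=5, backoff=2.0):
    # Generic helper: re-invoke a flaky download callable with
    # exponential backoff whenever it raises ConnectionError.
    for attempt in range(max_retries):
        try:
            return fetch()
        except ConnectionError:
            if attempt == max_retries - 1:
                raise
            time.sleep(backoff * 2 ** attempt)
```

For example, `download_with_retries(lambda: EsmForProteinFolding.from_pretrained("facebook/esmfold_v1"))` would retry the whole load on a dropped connection.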
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20019/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20019/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20018
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20018/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20018/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20018/events
|
https://github.com/huggingface/transformers/issues/20018
| 1,433,177,766
|
I_kwDOCUB6oc5VbI6m
| 20,018
|
Does `prune_heads` really speed up during inference?
|
{
"login": "CaffreyR",
"id": 84232793,
"node_id": "MDQ6VXNlcjg0MjMyNzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/84232793?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CaffreyR",
"html_url": "https://github.com/CaffreyR",
"followers_url": "https://api.github.com/users/CaffreyR/followers",
"following_url": "https://api.github.com/users/CaffreyR/following{/other_user}",
"gists_url": "https://api.github.com/users/CaffreyR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CaffreyR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CaffreyR/subscriptions",
"organizations_url": "https://api.github.com/users/CaffreyR/orgs",
"repos_url": "https://api.github.com/users/CaffreyR/repos",
"events_url": "https://api.github.com/users/CaffreyR/events{/privacy}",
"received_events_url": "https://api.github.com/users/CaffreyR/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only. I am not aware of any place in the doc where we advertise head-pruning as a mean to speed up inference. I think you will need to look at converting your model to ONNX or quantize it for that.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,667
| 1,670
| 1,670
|
NONE
| null |
### System Info
- `transformers` version: 4.22.1
- Platform: Linux-5.13.0-48-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi, I have some code here, based on your examples, that uses `bert` to test inference speed.
The problem is that whether or not I prune the model, the time seems to remain the same.
The code is below:
```python
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
device = 'cuda'

# Prune heads 0-9 in each of the 12 layers.
prune_heads = {layer: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] for layer in range(12)}

'''Whether to prune'''
# model.prune_heads(prune_heads)

inputs = tokenizer("Hello world!", return_tensors="pt").to(device)
model = model.to(device)
model.eval()

import time
cnt = 0
# Warm-up runs.
for i in range(3):
    outputs = model(**inputs)
# Timed runs.
for i in range(10):
    torch.cuda.synchronize()
    start = time.perf_counter()
    outputs = model(**inputs)
    torch.cuda.synchronize()
    end = time.perf_counter()
    print(i, ":", end - start)
    cnt += (end - start)
# print(outputs)
print(cnt)
```
### Expected behavior
Whether or not I prune the model, the measured inference time stays roughly the same.
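For what it's worth, `prune_heads` does shrink the model: it slices the pruned heads' rows and columns out of the attention projection weights, so the parameter count drops even when wall-clock time for a single short sentence stays overhead-bound. A minimal numpy sketch of the mechanism (not the `transformers` implementation; the toy weight and shapes are assumptions for illustration):

```python
import numpy as np

hidden_size, num_heads = 768, 12
head_dim = hidden_size // num_heads  # 64

# Toy query-projection weight of a single attention layer.
rng = np.random.default_rng(0)
w_query = rng.standard_normal((hidden_size, hidden_size))

def prune_head_rows(weight, heads_to_prune, head_dim):
    """Keep only the output rows that belong to the surviving heads."""
    n_heads = weight.shape[0] // head_dim
    keep = [h for h in range(n_heads) if h not in heads_to_prune]
    rows = np.concatenate([np.arange(h * head_dim, (h + 1) * head_dim) for h in keep])
    return weight[rows, :]

# Prune heads 0-9, keeping heads 10 and 11, as in the script above.
pruned = prune_head_rows(w_query, set(range(10)), head_dim)
print(w_query.shape, "->", pruned.shape)  # (768, 768) -> (128, 768)
```

With heads 0-9 pruned, the projection shrinks from 768×768 to 128×768 parameters per matrix. For a one-sentence batch on GPU, latency is usually dominated by kernel-launch and framework overhead, which is why the timings barely move; throughput gains tend to show up with larger batches and sequence lengths.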
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20018/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20017
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20017/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20017/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20017/events
|
https://github.com/huggingface/transformers/pull/20017
| 1,432,950,362
|
PR_kwDOCUB6oc5CDAKp
| 20,017
|
fix gradient checkpoint tests in encoder-decoder
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
COLLABORATOR
| null |
# What does this PR do?
Fix the test `test_training_gradient_checkpointing` in `test_modeling_encoder_decoder.py`.
The current error is
```python
RuntimeError: Expected all tensors to be on the same device, but found at least two device
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20017/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20017",
"html_url": "https://github.com/huggingface/transformers/pull/20017",
"diff_url": "https://github.com/huggingface/transformers/pull/20017.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20017.patch",
"merged_at": 1667394909000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20016
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20016/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20016/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20016/events
|
https://github.com/huggingface/transformers/pull/20016
| 1,432,811,865
|
PR_kwDOCUB6oc5CChuD
| 20,016
|
Add model parallelism to CodeGen
|
{
"login": "LostBenjamin",
"id": 6451553,
"node_id": "MDQ6VXNlcjY0NTE1NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6451553?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LostBenjamin",
"html_url": "https://github.com/LostBenjamin",
"followers_url": "https://api.github.com/users/LostBenjamin/followers",
"following_url": "https://api.github.com/users/LostBenjamin/following{/other_user}",
"gists_url": "https://api.github.com/users/LostBenjamin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LostBenjamin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LostBenjamin/subscriptions",
"organizations_url": "https://api.github.com/users/LostBenjamin/orgs",
"repos_url": "https://api.github.com/users/LostBenjamin/repos",
"events_url": "https://api.github.com/users/LostBenjamin/events{/privacy}",
"received_events_url": "https://api.github.com/users/LostBenjamin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20016). All of your documentation changes will be reflected on that endpoint.",
"ok, I did not know that. Thanks for the info!\r\n\r\nDoes `device_map=\"auto\"` also work for CodeGen?",
"Yes, it's supported (at least on the main branch)!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> Thanks for your PR but this code for model parallelism is deprecated and will be removed from other code files. To use parallelism, please load the model with `device_map=\"auto\"`.\r\n\r\nI'm a bit confused about model parallelism in Huggingface. I'm trying to fine-tune a CodeGen model using Huggingface Trainer. Is loading the model with `device_map=\"auto\"` the right way to enable model parallelism?\r\n\r\nFrom what I read [here](https://huggingface.co/docs/transformers/v4.28.1/perf_train_gpu_many#naive-model-parallelism-vertical-and-pipeline-parallelism), model parallelism is only supported by GPT2 and T5."
] | 1,667
| 1,682
| 1,670
|
NONE
| null |
This PR adds model parallelism to the CodeGen model. I have been using this since August.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20016/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20016",
"html_url": "https://github.com/huggingface/transformers/pull/20016",
"diff_url": "https://github.com/huggingface/transformers/pull/20016.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20016.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20014
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20014/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20014/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20014/events
|
https://github.com/huggingface/transformers/pull/20014
| 1,432,732,063
|
PR_kwDOCUB6oc5CCQmd
| 20,014
|
chore: remove inference code, add pt framework.
|
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Sorry, I missed that PR. I thought this was outside the scope of task guides. Sounds good to not go ahead with this change. Let's still add the pt framework block",
"Agreed. "
] | 1,667
| 1,667
| 1,667
|
MEMBER
| null |
To keep it consistent with the other docs ([image classification](https://huggingface.co/docs/transformers/tasks/image_classification), [audio classification](https://huggingface.co/docs/transformers/tasks/audio_classification), etc.), this PR:
* removes inference code
* adds separation for PT as a framework
in the [semantic segmentation](https://huggingface.co/docs/transformers/tasks/semantic_segmentation) guide.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20014/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20014",
"html_url": "https://github.com/huggingface/transformers/pull/20014",
"diff_url": "https://github.com/huggingface/transformers/pull/20014.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20014.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20013
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20013/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20013/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20013/events
|
https://github.com/huggingface/transformers/pull/20013
| 1,432,606,445
|
PR_kwDOCUB6oc5CB1yM
| 20,013
|
Add RocBert
|
{
"login": "sww9370",
"id": 8551423,
"node_id": "MDQ6VXNlcjg1NTE0MjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8551423?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sww9370",
"html_url": "https://github.com/sww9370",
"followers_url": "https://api.github.com/users/sww9370/followers",
"following_url": "https://api.github.com/users/sww9370/following{/other_user}",
"gists_url": "https://api.github.com/users/sww9370/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sww9370/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sww9370/subscriptions",
"organizations_url": "https://api.github.com/users/sww9370/orgs",
"repos_url": "https://api.github.com/users/sww9370/repos",
"events_url": "https://api.github.com/users/sww9370/events{/privacy}",
"received_events_url": "https://api.github.com/users/sww9370/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger Thanks for your suggestion, I already fixed it ~",
"@ArthurZucker hi, I already fixed the code according to sgugger's advice. Could you please review it? Thanks!",
"Yes! Doing this asap 🤗 sorry for the delay ",
"Last comment, it seems that the issue with naming still persists, we should make sure to either write `RoC` or `Roc` everywhere. ",
"@ArthurZucker I didn't make [weiweishi/roc-bert-base-zh](https://huggingface.co/weiweishi/roc-bert-base-zh) public before; it's available now, and other issues are resolved~"
] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
This PR adds the [RocBert model](https://aclanthology.org/2022.acl-long.65.pdf).
RocBert is a pre-trained Chinese language model that is designed from the ground up to be robust against maliciously crafted adversarial texts such as misspellings, homograph attacks, and other forms of deception.

This property is crucial in downstream applications like content moderation.
RocBert differs from the classic Bert architecture in the following ways:
- besides token ids, the model also takes phonetic features and glyph features as input
- the model is also pre-trained with a contrastive learning objective that stabilizes the feature space against synthetic attacks
Since the model structure and tokenizer are quite different from existing implementations, we would like to submit this PR to add a new model class.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20013/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20013/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20013",
"html_url": "https://github.com/huggingface/transformers/pull/20013",
"diff_url": "https://github.com/huggingface/transformers/pull/20013.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20013.patch",
"merged_at": 1667919824000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20015
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20015/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20015/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20015/events
|
https://github.com/huggingface/transformers/issues/20015
| 1,432,802,109
|
I_kwDOCUB6oc5VZtM9
| 20,015
|
Request for Examples on Correct Use the CLIPText Model (transformers.CLIPTextModel)
|
{
"login": "mbdzi",
"id": 112744187,
"node_id": "U_kgDOBrhW-w",
"avatar_url": "https://avatars.githubusercontent.com/u/112744187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mbdzi",
"html_url": "https://github.com/mbdzi",
"followers_url": "https://api.github.com/users/mbdzi/followers",
"following_url": "https://api.github.com/users/mbdzi/following{/other_user}",
"gists_url": "https://api.github.com/users/mbdzi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mbdzi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mbdzi/subscriptions",
"organizations_url": "https://api.github.com/users/mbdzi/orgs",
"repos_url": "https://api.github.com/users/mbdzi/repos",
"events_url": "https://api.github.com/users/mbdzi/events{/privacy}",
"received_events_url": "https://api.github.com/users/mbdzi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I'm transferring this to transformers repo as this is more related to the docs there\r\n\r\ncc @stevhliu ",
"Hi,\r\n\r\nThe docs is actually correct, CLIPTextModel can be used to encode text into a vector representation (embedding).\r\n\r\n```\r\nfrom transformers import CLIPTokenizer, CLIPTextModel\r\n\r\nmodel = CLIPTextModel.from_pretrained(\"openai/clip-vit-base-patch32\")\r\ntokenizer = CLIPTokenizer.from_pretrained(\"openai/clip-vit-base-patch32\")\r\n\r\ninputs = tokenizer([\"a photo of a cat\", \"a photo of a dog\"], padding=True, return_tensors=\"pt\")\r\n\r\noutputs = model(**inputs)\r\nlast_hidden_state = outputs.last_hidden_state\r\npooled_output = outputs.pooler_output # pooled (EOS token) states\r\n````\r\n\r\nSo you have a set of texts (here \"a photo of a cat\" and \"a photo of a dog\") which you first prepare for the model using the tokenizer. Next, you forward them through CLIP's text encoder to get an embedding (here called \"pooled output\") out, which is of shape (batch_size, hidden_size), which in this case will be (2, 512).\r\n\r\nThat's all the CLIP text encoder does! Turn text into embedding vectors.\r\n\r\nSo no you can't use only the CLIP text encoder to perform zero-shot classification of images, for that you need both the image and text encoders (which is what `CLIPModel` is; it consists of both `CLIPTextModel` and `CLIPVisionModel`)."
] | 1,667
| 1,667
| 1,667
|
NONE
| null |
From the documentation on the [Hugging Face Hub](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), it is not clear:
1. How the CLIPTextModel can be used.
2. Whether it can be used to conduct zero-shot classification of textual inputs using a predefined list of tokens as classes. To be precise, say I have a dataset x with two strings, x = ["father holding a baby", "man at war"], and a list of classes or image generation tokens, y = ["family time", "next generation weapons"]: can I use the CLIPTextModel to classify x using y?
In short, I am asking for an example on:
1. The correct usage of the CLIPTextModel.
2. The use-cases for the model.
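On question 2: once the CLIP text tower has embedded both x and y (its pooled output is one fixed-size vector per string), classifying x against y reduces to nearest-neighbor search under cosine similarity. A sketch with random stand-in vectors in place of real pooled embeddings (the 512-dim shape is an assumption matching `clip-vit-base-patch32`):

```python
import numpy as np

def cosine_similarity(a, b):
    # Normalize each row, then a plain matrix product gives all pairwise cosines.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

rng = np.random.default_rng(0)
x_emb = rng.standard_normal((2, 512))  # stand-ins for pooled embeddings of x
y_emb = rng.standard_normal((2, 512))  # stand-ins for class-prompt embeddings of y

scores = cosine_similarity(x_emb, y_emb)  # (2, 2): every text vs every class
pred = scores.argmax(axis=1)              # best-matching class index per text
```

With real embeddings, `x_emb` and `y_emb` would come from the text model's pooled output for the respective string lists.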
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20015/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20012
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20012/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20012/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20012/events
|
https://github.com/huggingface/transformers/pull/20012
| 1,432,333,022
|
PR_kwDOCUB6oc5CA8E4
| 20,012
|
Make sentencepiece import conditional in BertJapaneseTokenizer
|
{
"login": "ripose-jp",
"id": 72582120,
"node_id": "MDQ6VXNlcjcyNTgyMTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/72582120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ripose-jp",
"html_url": "https://github.com/ripose-jp",
"followers_url": "https://api.github.com/users/ripose-jp/followers",
"following_url": "https://api.github.com/users/ripose-jp/following{/other_user}",
"gists_url": "https://api.github.com/users/ripose-jp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ripose-jp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ripose-jp/subscriptions",
"organizations_url": "https://api.github.com/users/ripose-jp/orgs",
"repos_url": "https://api.github.com/users/ripose-jp/repos",
"events_url": "https://api.github.com/users/ripose-jp/events{/privacy}",
"received_events_url": "https://api.github.com/users/ripose-jp/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
Sentencepiece is an optional dependency, but #19769 has unconditionally imported it into `tokenization_bert_japanese.py`.
I've written a wrapper library for another library that depends directly on transformers, and today I found most of my tests failing with the error `ModuleNotFoundError: No module named 'sentencepiece'`.
This fixes the issue by calling `is_sentencepiece_available()` before the import.
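The guarded-import pattern looks roughly like this (a sketch: `transformers` ships its own `is_sentencepiece_available` in `transformers.utils`; the `find_spec` probe and `load_sp_model` helper below are illustrative stand-ins):

```python
import importlib.util

def is_sentencepiece_available():
    # Probe for the module without importing it.
    return importlib.util.find_spec("sentencepiece") is not None

if is_sentencepiece_available():
    import sentencepiece as spm
else:
    spm = None

def load_sp_model(path):
    """Fail with a clear message instead of an import-time crash."""
    if spm is None:
        raise ImportError(
            "This tokenizer requires the optional `sentencepiece` package: "
            "pip install sentencepiece"
        )
    processor = spm.SentencePieceProcessor()
    processor.Load(path)
    return processor
```

The point of the pattern is that users who never touch sentencepiece-backed tokenizers can import the package without having the optional dependency installed.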
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Models: @LysandreJik
Library: @n1t0
Documentation: @sgugger
@r-terada @hiroshi-matsuda-rit
I'm not too familiar with this project, so I'm copying the tags from #19769
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20012/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20012/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20012",
"html_url": "https://github.com/huggingface/transformers/pull/20012",
"diff_url": "https://github.com/huggingface/transformers/pull/20012.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20012.patch",
"merged_at": 1667389478000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20011
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20011/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20011/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20011/events
|
https://github.com/huggingface/transformers/issues/20011
| 1,432,187,732
|
I_kwDOCUB6oc5VXXNU
| 20,011
|
sentencepiece\sentencepiece\src\sentencepiece_processor.cc(1102) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
|
{
"login": "showpiecep",
"id": 81421173,
"node_id": "MDQ6VXNlcjgxNDIxMTcz",
"avatar_url": "https://avatars.githubusercontent.com/u/81421173?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/showpiecep",
"html_url": "https://github.com/showpiecep",
"followers_url": "https://api.github.com/users/showpiecep/followers",
"following_url": "https://api.github.com/users/showpiecep/following{/other_user}",
"gists_url": "https://api.github.com/users/showpiecep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/showpiecep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/showpiecep/subscriptions",
"organizations_url": "https://api.github.com/users/showpiecep/orgs",
"repos_url": "https://api.github.com/users/showpiecep/repos",
"events_url": "https://api.github.com/users/showpiecep/events{/privacy}",
"received_events_url": "https://api.github.com/users/showpiecep/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"It looks like you are using the tokenizer with a broken sentencepiece vocab. In any case, we would need a reproducer with a file we have access to to be able to investigate.",
"Ran into the same issue. How did you solve it?",
"> Ran into the same issue. How did you solve it?\r\n\r\nThe whole problem was the vocab. I just took a different one.",
"Whats wrong with vocab? how to change it correct?",
"> Whats wrong with vocab? how to change it correct?\r\n\r\nMake sure your vocab files (*.bin files) have been downloaded fully. In my case, I didn't install git-lfs, so cloning the repo from huggingface failed for these files. Download the files manually or use git-lfs."
] | 1,667
| 1,691
| 1,667
|
NONE
| null |
### System Info
- `transformers` version: 4.24.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.2
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@patrickvonplaten
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
from transformers import T5Tokenizer
tokenizer = T5Tokenizer(vocab_file='vocab_ruturk.spm')
Traceback (most recent call last):
File "app.py", line 3, in <module>
tokenizer = T5Tokenizer(vocab_file='vocab.ruturk.spm')
File "env\lib\site-packages\transformers\models\t5\tokenization_t5.py", line 157, in __init__
self.sp_model.Load(vocab_file)
File "env\lib\site-packages\sentencepiece\__init__.py", line 910, in Load
return self.LoadFromFile(model_file)
File "env\lib\site-packages\sentencepiece\__init__.py", line 311, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: a\sentencepiece\sentencepiece\src\sentencepiece_processor.cc(1102) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
### Expected behavior
No errors
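A frequent cause of this `ParseFromArray` failure (as the last comment in the thread notes) is that the `.spm`/`.bin` file was cloned without `git-lfs` installed, so what sits on disk is a tiny Git LFS pointer stub rather than a real sentencepiece model. A minimal, hedged sketch for detecting that before handing the path to `T5Tokenizer` — the function name and usage are illustrative, not part of the `transformers` API:

```python
from pathlib import Path

def is_lfs_pointer(path: str) -> bool:
    """Heuristic check: a Git LFS pointer stub is a small text file that
    begins with the LFS spec header instead of binary model data."""
    head = Path(path).read_bytes()[:64]
    return head.startswith(b"version https://git-lfs.github.com/spec/")

# Hypothetical usage before constructing the tokenizer:
# if is_lfs_pointer("vocab_ruturk.spm"):
#     raise RuntimeError("vocab file is an LFS stub -- run `git lfs pull` or re-download it")
```

If the check fires, re-downloading the file directly or running `git lfs pull` inside the cloned repository should restore the real model bytes.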
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20011/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20010
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20010/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20010/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20010/events
|
https://github.com/huggingface/transformers/pull/20010
| 1,432,158,701
|
PR_kwDOCUB6oc5CAWsw
| 20,010
|
Reorganize glossary
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
MEMBER
| null |
This PR reorganizes the glossary to be alphabetical, and words under the General Terms can be linked to.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20010/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20010",
"html_url": "https://github.com/huggingface/transformers/pull/20010",
"diff_url": "https://github.com/huggingface/transformers/pull/20010.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20010.patch",
"merged_at": 1667433498000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20009
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20009/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20009/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20009/events
|
https://github.com/huggingface/transformers/pull/20009
| 1,431,992,936
|
PR_kwDOCUB6oc5B_yuh
| 20,009
|
Make convert_to_onnx runable as script again
|
{
"login": "mcernusca",
"id": 31384,
"node_id": "MDQ6VXNlcjMxMzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/31384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcernusca",
"html_url": "https://github.com/mcernusca",
"followers_url": "https://api.github.com/users/mcernusca/followers",
"following_url": "https://api.github.com/users/mcernusca/following{/other_user}",
"gists_url": "https://api.github.com/users/mcernusca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcernusca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcernusca/subscriptions",
"organizations_url": "https://api.github.com/users/mcernusca/orgs",
"repos_url": "https://api.github.com/users/mcernusca/repos",
"events_url": "https://api.github.com/users/mcernusca/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcernusca/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"It seems there is an issue with your CircleCI permissions, the tests won't run.\r\nCould you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20009). All of your documentation changes will be reflected on that endpoint.",
"> It seems there is an issue with your CircleCI permissions, the tests won't run. Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?\r\n\r\nThanks, CircleCI supposedly has access to all my repositories now but I'm not sure how to re-trigger the tests. Sorry if I'm missing something obvious.",
"You can try an empty commit (`git commit -m \"Trigger CI\" --allow-empty`)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Sorry it fell under our radar. Don't hesitate to ping me next time it happens!"
] | 1,667
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
Fix `convert_graph_to_onnx.py` script crash by replacing relative import.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes `python transformers/convert_graph_to_onnx.py` crash with error `Error while converting the model: attempted relative import with no known parent package` . I found this was previously fixed in #10857 and regressed.
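The "attempted relative import with no known parent package" error arises because `__package__` is empty when a module is run directly as a script, so `from . import x` has nothing to resolve against. A minimal sketch of the dual-use pattern, assuming illustrative import targets (the actual imports in `convert_graph_to_onnx.py` may differ):

```python
def running_as_script(package) -> bool:
    """True when a module is executed directly (`python file.py`).
    In that case __package__ is None or "", and relative imports such as
    `from . import x` raise:
        "attempted relative import with no known parent package"."""
    return not package

# Hypothetical pattern at the top of a dual-use module:
# if running_as_script(__package__):
#     from transformers.pipelines import Pipeline   # absolute import for script use
# else:
#     from .pipelines import Pipeline               # relative import for package use
```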
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20009/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20009",
"html_url": "https://github.com/huggingface/transformers/pull/20009",
"diff_url": "https://github.com/huggingface/transformers/pull/20009.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20009.patch",
"merged_at": 1670256519000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20008
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20008/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20008/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20008/events
|
https://github.com/huggingface/transformers/issues/20008
| 1,431,991,901
|
I_kwDOCUB6oc5VWnZd
| 20,008
|
How can I access prompt scores/logprobs?
|
{
"login": "hxiaoyang",
"id": 98200137,
"node_id": "U_kgDOBdpqSQ",
"avatar_url": "https://avatars.githubusercontent.com/u/98200137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hxiaoyang",
"html_url": "https://github.com/hxiaoyang",
"followers_url": "https://api.github.com/users/hxiaoyang/followers",
"following_url": "https://api.github.com/users/hxiaoyang/following{/other_user}",
"gists_url": "https://api.github.com/users/hxiaoyang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hxiaoyang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hxiaoyang/subscriptions",
"organizations_url": "https://api.github.com/users/hxiaoyang/orgs",
"repos_url": "https://api.github.com/users/hxiaoyang/repos",
"events_url": "https://api.github.com/users/hxiaoyang/events{/privacy}",
"received_events_url": "https://api.github.com/users/hxiaoyang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @xiaoyangnickhu 👋 We do have a tool for that, but which is not yet documented: \r\n👉 [compute_transition_beam_scores](https://github.com/huggingface/transformers/blob/e0b825a8d03f50ed9dbf9fbbbb3b4fcf0b4e4b22/src/transformers/generation_utils.py#L876)\r\n\r\nTheir arguments should be self explanatory, but let me know if you'd like further guidance :)",
"Thanks for responding! Some followup questions:\r\n1. Since this function only has access to `scores` for generated tokens, I am having a hard time understanding why the returned `transition_scores` might contain scores for prompt/input tokens. Could you please maybe clarify where this function deals with prompt tokens?\r\n2. Do I need to switch to beam search to use this function? (I have been using greedy decoding.)\r\n3. Also, can I compute prompt token logprobs by applying `log_softmax` to `model(input_ids).logits`?\r\n\r\nThanks!",
"@xiaoyangnickhu \r\n1. It does not include the score for the input prompt. The concept of \"score\" is derived from the assumption in autoregressive language generation where the probability distribution of a word sequence can be decomposed into the product of conditional next-word distributions. The probability of the input tokens is `1` for the input, so we don't include it there.\r\n2. For greedy decoding you'll have to do it manually for now, i.e., gather from the `scores` output the selected token scores (see docs [here](https://huggingface.co/docs/transformers/internal/generation_utils#transformers.generation_utils.GreedySearchDecoderOnlyOutput)). We may include an output for this in the future :)\r\n3. EDIT: Yes you can. ",
"Thanks! Appreciate the details. Some followups regarding 3:\r\n\r\nI see that for each input token `xi`, you feed the sequence `[x1,x2,...,xi]` to `model` to obtain the logits. What is the reasoning behind this? We have been doing `model([x1,x2,...,xn]).logits` (i.e., give `model` the full sequence and apply `log_softmax` to each token); is this the wrong approach? (My goal here is to obtain the vector `[None, logprob(x2|x1), logprob(x3|x1,x2),...,logprob(xn|x1,...,x_{n-1})]`)",
"Closed the issue by mistake. Sorry...",
"@gante Would you be able to take a look at my questions above? Thanks!",
"@xiaoyangnickhu oops, you are absolutely right, you can obtain the conditional logits that way! (I've been so used to work on generate, one token at a time, that forgot that the `.logits` output holds the desired output for all steps).\r\n\r\nI've edited my answer above in case someone stumbles across this thread :)",
"Thanks!!",
"@gante @hxiaoyang hello, I am using bloom with the API. I need these scores/logprob for input similar to what we can get in OpenAI. Is there a way?",
"Hey @goelnaman -- by API, what do you mean exactly? I don't think most APIs support it, but I'd be able to tag the right team member :)",
"Thanks @gante I have tried ... InferenceApi() and requests.request() but didn't see logprobs of input in any of these.\r\n\r\nIn OpenAI API, one can get this information by using echo=True, logprobs=... for example.",
"Hey @goelnaman -- I can confirm that it does not return the scores (API docs [here](https://huggingface.co/docs/api-inference/detailed_parameters#text-generation-task))\r\n\r\nThe [`text-generation-inference`](https://github.com/huggingface/text-generation-inference) solution also doesn't support it. The only way to get it at the moment is:\r\n1. With local python code, as discussed in this issue\r\n2. With the [Inference Endpoints](https://huggingface.co/inference-endpoints), where you can configure any API.\r\n\r\nSadly, I do not have examples for 2. (it's on my todo list :) )"
] | 1,667
| 1,679
| 1,667
|
NONE
| null |
I have tried `model.generate(**inputs, return_dict_in_generate=True, output_scores=True)` but it only gives the scores for generated tokens. For my application, it would be convenient if there’s a similar parameter to `echo` in the OpenAI GPT-3 API that lets us access prompt scores/logprobs, but I have yet to find it. Any help is appreciated!
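The resolution reached later in this thread — apply `log_softmax` to `model(input_ids).logits` and gather the score of each next token — can be sketched framework-free. This is a toy illustration with plain Python lists standing in for the logits tensor; with `transformers`, `logits` would be `model(input_ids).logits` for one sequence (one row per position):

```python
import math

def log_softmax(row):
    # Numerically stable log-softmax via the log-sum-exp trick.
    m = max(row)
    lse = m + math.log(sum(math.exp(x - m) for x in row))
    return [x - lse for x in row]

def prompt_token_logprobs(logits, token_ids):
    """logits[i] scores the token that FOLLOWS token_ids[:i+1], so the
    logprob of token_ids[i+1] is log_softmax(logits[i])[token_ids[i+1]].
    The first token has no conditional score, hence the leading None."""
    out = [None]
    for i in range(len(token_ids) - 1):
        out.append(log_softmax(logits[i])[token_ids[i + 1]])
    return out
```

This yields the vector `[None, logprob(x2|x1), ..., logprob(xn|x1..x_{n-1})]` described in the discussion.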
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20008/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20008/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20007
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20007/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20007/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20007/events
|
https://github.com/huggingface/transformers/issues/20007
| 1,431,978,022
|
I_kwDOCUB6oc5VWkAm
| 20,007
|
RuntimeError: Failed to import transformers.models.flaubert.modeling_flaubert because of the following error (look up to see its traceback): module 'signal' has no attribute 'SIGKILL'
|
{
"login": "BenoitDalFerro",
"id": 69694610,
"node_id": "MDQ6VXNlcjY5Njk0NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/69694610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenoitDalFerro",
"html_url": "https://github.com/BenoitDalFerro",
"followers_url": "https://api.github.com/users/BenoitDalFerro/followers",
"following_url": "https://api.github.com/users/BenoitDalFerro/following{/other_user}",
"gists_url": "https://api.github.com/users/BenoitDalFerro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenoitDalFerro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenoitDalFerro/subscriptions",
"organizations_url": "https://api.github.com/users/BenoitDalFerro/orgs",
"repos_url": "https://api.github.com/users/BenoitDalFerro/repos",
"events_url": "https://api.github.com/users/BenoitDalFerro/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenoitDalFerro/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This is a PyTorch issue affecting Windows which went cf. https://github.com/pytorch/pytorch/issues/85427",
"actually reopening as bug on Windows PyTorch side makes Transformers crash",
"I'm not sure what you intend us to do to fix this, since it comes from PyTorch?",
"for pip would propose to add to requirement.txt torch<=1.12.1 ? and for conda feedstocks' environment.yaml pytorch<=1.12.1\r\npoint is Transformers do really crash on Windows with PyTorch=1.13.0",
"PyTorch is already pinned in the setup.",
"indeed, not yet visible downstream (pip, conda) as of currentbut quite right https://github.com/huggingface/transformers/blame/main/setup.py#L166\r\nhttps://github.com/huggingface/transformers/pull/19989\r\nclosing"
] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.24.0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.9.13
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger, @D3xter1922
@malfet (we now come across each other daily, the world is so small...)
Opening here to document for the wider crowd: this is actually a PyTorch issue affecting Windows, tracked upstream at https://github.com/pytorch/pytorch/issues/85427
cf. [Removed XLMModel inheritance from FlaubertModel(torch+tf)](https://github.com/huggingface/transformers/commit/ed858f535474d822615f846917254d586d2a5a31)
cf in [particular blame lines 26-45](https://github.com/huggingface/transformers/blame/main/src/transformers/models/flaubert/modeling_flaubert.py)
error [caused by `from ...modeling_utils import PreTrainedModel, SequenceSummary, SQuADHead` line 37]
(https://github.com/huggingface/transformers/blob/main/src/transformers/models/flaubert/modeling_flaubert.py#L37)
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Opening here to document for the wider crowd: this is a PyTorch issue affecting Windows, tracked upstream at https://github.com/pytorch/pytorch/issues/85427
cf. the complete stack trace below, for reference only, before closing
this goes down to the accelerate package
`from transformers import (FlaubertWithLMHeadModel)`
```
NOTE: Redirects are currently not supported in Windows or MacOs.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
File ~\miniconda3\envs\MyEnv\lib\site-packages\transformers\utils\import_utils.py:1076, in _LazyModule._get_module(self, module_name)
1075 try:
-> 1076 return importlib.import_module("." + module_name, self.__name__)
1077 except Exception as e:
File ~\miniconda3\envs\MyEnv\lib\importlib\__init__.py:127, in import_module(name, package)
    126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1030, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1007, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:986, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:680, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:850, in exec_module(self, module)
File <frozen importlib._bootstrap>:228, in _call_with_frames_removed(f, *args, **kwds)
File ~\miniconda3\envs\MyEnv\lib\site-packages\transformers\models\flaubert\modeling_flaubert.py:37, in <module>
29 from ...modeling_outputs import (
30 BaseModelOutput,
31 MaskedLMOutput,
(...)
35 TokenClassifierOutput,
36 )
---> 37 from ...modeling_utils import PreTrainedModel, SequenceSummary, SQuADHead
38 from ...pytorch_utils import apply_chunking_to_forward, find_pruneable_heads_and_indices, prune_linear_layer
File ~\miniconda3\envs\MyEnv\lib\site-packages\transformers\modeling_utils.py:78, in <module>
77 if is_accelerate_available():
---> 78 from accelerate import __version__ as accelerate_version
79 from accelerate import dispatch_model, infer_auto_device_map, init_empty_weights
File ~\miniconda3\envs\MyEnv\lib\site-packages\accelerate\__init__.py:7, in <module>
5 __version__ = "0.13.2"
----> 7 from .accelerator import Accelerator
8 from .big_modeling import cpu_offload, disk_offload, dispatch_model, init_empty_weights, load_checkpoint_and_dispatch
File ~\miniconda3\envs\MyEnv\lib\site-packages\accelerate\accelerator.py:27, in <module>
25 import torch
---> 27 from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state
28 from .data_loader import prepare_data_loader
File ~\miniconda3\envs\MyEnv\lib\site-packages\accelerate\checkpointing.py:24, in <module>
22 from torch.cuda.amp import GradScaler
---> 24 from .utils import (
25 MODEL_NAME,
26 OPTIMIZER_NAME,
27 RNG_STATE_NAME,
28 SCALER_NAME,
29 SCHEDULER_NAME,
30 get_pretty_name,
31 is_tpu_available,
32 save,
33 )
36 if is_tpu_available(check_device=False):
File ~\miniconda3\envs\MyEnv\lib\site-packages\accelerate\utils\__init__.py:96, in <module>
87 from .deepspeed import (
88 DeepSpeedEngineWrapper,
89 DeepSpeedOptimizerWrapper,
(...)
93 HfDeepSpeedConfig,
94 )
---> 96 from .launch import PrepareForLaunch, _filter_args, get_launch_prefix
97 from .memory import find_executable_batch_size
File ~\miniconda3\envs\MyEnv\lib\site-packages\accelerate\utils\launch.py:25, in <module>
24 if is_torch_version(">=", "1.9.0"):
---> 25 import torch.distributed.run as distrib_run
28 def get_launch_prefix():
File ~\miniconda3\envs\MyEnv\lib\site-packages\torch\distributed\run.py:386, in <module>
385 from torch.distributed.elastic.utils.logging import get_logger
--> 386 from torch.distributed.launcher.api import LaunchConfig, elastic_launch
389 log = get_logger()
File ~\miniconda3\envs\MyEnv\lib\site-packages\torch\distributed\launcher\__init__.py:10, in <module>
1 #!/usr/bin/env/python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
(...)
6 # This source code is licensed under the BSD-style license found in the
7 # LICENSE file in the root directory of this source tree.
---> 10 from torch.distributed.launcher.api import ( # noqa: F401
11 LaunchConfig,
12 elastic_launch,
13 launch_agent,
14 )
File ~\miniconda3\envs\MyEnv\lib\site-packages\torch\distributed\launcher\api.py:15, in <module>
14 from torch.distributed.elastic import events, metrics
---> 15 from torch.distributed.elastic.agent.server.api import WorkerSpec
16 from torch.distributed.elastic.agent.server.local_elastic_agent import LocalElasticAgent
File ~\miniconda3\envs\MyEnv\lib\site-packages\torch\distributed\elastic\agent\server\__init__.py:40, in <module>
31 from .api import ( # noqa: F401
32 ElasticAgent,
33 RunResult,
(...)
38 WorkerState,
39 )
---> 40 from .local_elastic_agent import TORCHELASTIC_ENABLE_FILE_TIMER, TORCHELASTIC_TIMER_FILE
File ~\miniconda3\envs\MyEnv\lib\site-packages\torch\distributed\elastic\agent\server\local_elastic_agent.py:19, in <module>
17 from typing import Any, Dict, Optional, Tuple
---> 19 import torch.distributed.elastic.timer as timer
20 from torch.distributed.elastic import events
File ~\miniconda3\envs\MyEnv\lib\site-packages\torch\distributed\elastic\timer\__init__.py:44, in <module>
43 from .local_timer import LocalTimerClient, LocalTimerServer # noqa: F401
---> 44 from .file_based_local_timer import FileTimerClient, FileTimerServer, FileTimerRequest
File ~\miniconda3\envs\MyEnv\lib\site-packages\torch\distributed\elastic\timer\file_based_local_timer.py:63, in <module>
52 return json.dumps(
53 {
54 "version": self.version,
(...)
59 },
60 )
---> 63 class FileTimerClient(TimerClient):
64 """
65 Client side of ``FileTimerServer``. This client is meant to be used
66 on the same host that the ``FileTimerServer`` is running on and uses
(...)
79 negative or zero signal will not kill the process.
80 """
File ~\miniconda3\envs\MyEnv\lib\site-packages\torch\distributed\elastic\timer\file_based_local_timer.py:81, in FileTimerClient()
64 """
65 Client side of ``FileTimerServer``. This client is meant to be used
66 on the same host that the ``FileTimerServer`` is running on and uses
(...)
79 negative or zero signal will not kill the process.
80 """
---> 81 def __init__(self, file_path: str, signal=signal.SIGKILL) -> None:
82 super().__init__()
AttributeError: module 'signal' has no attribute 'SIGKILL'
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
Input In [3], in <cell line: 1>()
----> 1 from transformers import (PretrainedConfig, FlaubertConfig, AutoTokenizer, FlaubertTokenizer, FlaubertWithLMHeadModel, TrainingArguments, DataCollatorForLanguageModeling) #pipeline
2 from datasets import (load_dataset, load_from_disk, concatenate_datasets, ClassLabel)
3 import pytorch_lightning as pl
File <frozen importlib._bootstrap>:1055, in _handle_fromlist(module, fromlist, import_, recursive)
File ~\miniconda3\envs\MyEnv\lib\site-packages\transformers\utils\import_utils.py:1067, in _LazyModule.__getattr__(self, name)
1065 elif name in self._class_to_module.keys():
1066 module = self._get_module(self._class_to_module[name])
-> 1067 value = getattr(module, name)
1068 else:
1069 raise AttributeError(f"module {self.__name__} has no attribute {name}")
File ~\miniconda3\envs\MyEnv\lib\site-packages\transformers\utils\import_utils.py:1066, in _LazyModule.__getattr__(self, name)
1064 value = self._get_module(name)
1065 elif name in self._class_to_module.keys():
-> 1066 module = self._get_module(self._class_to_module[name])
1067 value = getattr(module, name)
1068 else:
File ~\miniconda3\envs\MyEnv\lib\site-packages\transformers\utils\import_utils.py:1078, in _LazyModule._get_module(self, module_name)
1076 return importlib.import_module("." + module_name, self.__name__)
1077 except Exception as e:
-> 1078 raise RuntimeError(
1079 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1080 f" traceback):\n{e}"
1081 ) from e
RuntimeError: Failed to import transformers.models.flaubert.modeling_flaubert because of the following error (look up to see its traceback):
module 'signal' has no attribute 'SIGKILL'
```
### Expected behavior
flawless import as usual
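The crash happens because `signal.SIGKILL` is evaluated as a default argument at class-definition (import) time, and that attribute does not exist on Windows. A hedged sketch of the usual portability fix — the class below is an illustrative stand-in for PyTorch's `FileTimerClient`, not its real implementation:

```python
import signal

# signal.SIGKILL is POSIX-only; fall back to SIGTERM, which Python
# defines on every supported platform, so the module stays importable
# on Windows.
DEFAULT_KILL_SIGNAL = getattr(signal, "SIGKILL", signal.SIGTERM)

class FileTimerClientSketch:
    """Illustrative stand-in for the failing class: resolving the signal
    through getattr() avoids referencing a missing attribute at import time."""
    def __init__(self, file_path: str, sig=DEFAULT_KILL_SIGNAL) -> None:
        self.file_path = file_path
        self.signal = sig
```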
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20007/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20006
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20006/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20006/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20006/events
|
https://github.com/huggingface/transformers/pull/20006
| 1,431,839,789
|
PR_kwDOCUB6oc5B_QzG
| 20,006
|
Fix typo in quicktour
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
MEMBER
| null |
Fixes typo in dataset name for the quicktour
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20006/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20006",
"html_url": "https://github.com/huggingface/transformers/pull/20006",
"diff_url": "https://github.com/huggingface/transformers/pull/20006.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20006.patch",
"merged_at": 1667327436000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20005
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20005/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20005/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20005/events
|
https://github.com/huggingface/transformers/pull/20005
| 1,431,767,981
|
PR_kwDOCUB6oc5B_BQs
| 20,005
|
Fix dataset in quicktour
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
MEMBER
| null |
This PR adds a dataset in the `Trainer` section of the Quicktour so users can successfully run the code samples (from forum feedback [here](https://discuss.huggingface.co/t/trainer-a-pytorch-optimized-training-loop-example-code/25163)).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20005/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20005",
"html_url": "https://github.com/huggingface/transformers/pull/20005",
"diff_url": "https://github.com/huggingface/transformers/pull/20005.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20005.patch",
"merged_at": 1667324240000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20004
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20004/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20004/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20004/events
|
https://github.com/huggingface/transformers/pull/20004
| 1,431,674,281
|
PR_kwDOCUB6oc5B-s_1
| 20,004
|
Update object detection pipeline to use post_process_object_detection methods
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
Updates the `ObjectDetectionPipeline` to use the `XXXFeatureExtractor.post_process_object_detection` methods instead of the deprecated `XXXFeatureExtractor.post_process` methods.
Postprocessing methods have been updated recently with this [PR](https://github.com/huggingface/transformers/pull/19709).
Partially fixes the hardcoded threshold issue with the inference widgets; fully fixing it requires adding a threshold button to the widgets.
Fixes # 414
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20004/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20004",
"html_url": "https://github.com/huggingface/transformers/pull/20004",
"diff_url": "https://github.com/huggingface/transformers/pull/20004.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20004.patch",
"merged_at": 1667373996000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20003
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20003/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20003/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20003/events
|
https://github.com/huggingface/transformers/pull/20003
| 1,431,666,932
|
PR_kwDOCUB6oc5B-ra1
| 20,003
|
Add object detection + segmentation transforms
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20003). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20003). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20003). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20003). All of your documentation changes will be reflected on that endpoint."
] | 1,667
| 1,668
| 1,668
|
COLLABORATOR
| null |
# What does this PR do?
Adds logic for processing bounding boxes and some additional transforms (`rgb_to_id`, `id_to_rgb`) needed for DETR.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20003/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20003",
"html_url": "https://github.com/huggingface/transformers/pull/20003",
"diff_url": "https://github.com/huggingface/transformers/pull/20003.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20003.patch",
"merged_at": 1668516604000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20002
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20002/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20002/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20002/events
|
https://github.com/huggingface/transformers/pull/20002
| 1,431,544,906
|
PR_kwDOCUB6oc5B-Q-Y
| 20,002
|
Fix the test for corrupted checkpoints in from_pretrained
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
COLLABORATOR
| null |
# What does this PR do?
As pointed out in #19974, there is a bug in `from_pretrained` when the model with head contains the same key as the base model, the checkpoint is then detected as corrupted. This PR fixes it and introduces a test to make sure there is no regression.
Fixes #19974
cc @NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20002/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20002",
"html_url": "https://github.com/huggingface/transformers/pull/20002",
"diff_url": "https://github.com/huggingface/transformers/pull/20002.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20002.patch",
"merged_at": 1667397217000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20001
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20001/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20001/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20001/events
|
https://github.com/huggingface/transformers/pull/20001
| 1,431,372,689
|
PR_kwDOCUB6oc5B9r0X
| 20,001
|
typo
|
{
"login": "WrRan",
"id": 7569098,
"node_id": "MDQ6VXNlcjc1NjkwOTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7569098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WrRan",
"html_url": "https://github.com/WrRan",
"followers_url": "https://api.github.com/users/WrRan/followers",
"following_url": "https://api.github.com/users/WrRan/following{/other_user}",
"gists_url": "https://api.github.com/users/WrRan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WrRan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WrRan/subscriptions",
"organizations_url": "https://api.github.com/users/WrRan/orgs",
"repos_url": "https://api.github.com/users/WrRan/repos",
"events_url": "https://api.github.com/users/WrRan/events{/privacy}",
"received_events_url": "https://api.github.com/users/WrRan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20001/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20001",
"html_url": "https://github.com/huggingface/transformers/pull/20001",
"diff_url": "https://github.com/huggingface/transformers/pull/20001.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20001.patch",
"merged_at": 1667307894000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20000
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20000/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20000/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20000/events
|
https://github.com/huggingface/transformers/pull/20000
| 1,431,336,536
|
PR_kwDOCUB6oc5B9kJ_
| 20,000
|
Add ESMFold code sample
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"FYI, would be great to add ESM to the doc tests, to make sure this is tested.\r\n"
] | 1,667
| 1,667
| 1,667
|
MEMBER
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20000/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20000",
"html_url": "https://github.com/huggingface/transformers/pull/20000",
"diff_url": "https://github.com/huggingface/transformers/pull/20000.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20000.patch",
"merged_at": 1667308873000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19999
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19999/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19999/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19999/events
|
https://github.com/huggingface/transformers/issues/19999
| 1,431,278,433
|
I_kwDOCUB6oc5VT5Nh
| 19,999
|
Some weights of BertForPreTraining were not initialized from the model checkpoint
|
{
"login": "dsaban",
"id": 98221318,
"node_id": "U_kgDOBdq9Bg",
"avatar_url": "https://avatars.githubusercontent.com/u/98221318?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dsaban",
"html_url": "https://github.com/dsaban",
"followers_url": "https://api.github.com/users/dsaban/followers",
"following_url": "https://api.github.com/users/dsaban/following{/other_user}",
"gists_url": "https://api.github.com/users/dsaban/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dsaban/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dsaban/subscriptions",
"organizations_url": "https://api.github.com/users/dsaban/orgs",
"repos_url": "https://api.github.com/users/dsaban/repos",
"events_url": "https://api.github.com/users/dsaban/events{/privacy}",
"received_events_url": "https://api.github.com/users/dsaban/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) to ask questions or fill the template to report a bug. Getting a warning because the checkpoint you selected does not contain all weights for your architecture is not, by itself, a bug.",
"Yes, `bert-base-uncased` only includes the weights and bias of the language modeling head, but not the next sentence prediction task, which is what the warning is telling you. In other words, it corresponds to the `BertForMaskedLM` model.\r\n\r\nTherefore closing this issue, feel free to reopen."
] | 1,667
| 1,667
| 1,667
|
NONE
| null |
### System Info
Some weights of BertForPreTraining were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['cls.predictions.decoder.bias']
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
run script.
### Expected behavior
load BertForPreTraining
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19999/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19998
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19998/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19998/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19998/events
|
https://github.com/huggingface/transformers/issues/19998
| 1,431,263,410
|
I_kwDOCUB6oc5VT1iy
| 19,998
|
Include better versions of a model when they are available in the model doc pages
|
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
] |
[
"I am very wary about this. If you start telling on the page of a model that have been developed by Org 1 that there is a better model developed by Org 2, they will get mad and we will then have unnecessary conflicts to handle.\r\n\r\nFor the same reason we stay away from benchmarks between frameworks/hardware, I would stay away from this.",
"Point noted. \r\n\r\nBut what if the same org comes up with a better version? But I understand this creates a weird distinction which is not desirable. \r\n\r\nLeaving it open for today in case anyone has any inputs. ",
"Yeah it was just a suggestion, I wouldn't have opened an issue for this actually. \r\n\r\nI agree that this could become very opinionated (it's very subjective which model is better). We could just do it for papers that come from the same team (Swin => Swinv2), to promote upcoming work. But for models that originate from different teams, this might be hard. That's where \"evaluate on the hub\" will come into play, where people can see which models perform best on a given task. \r\n\r\n",
"Closing it for now. Should there be a need, it can easily be reopened. ",
">We could just do it for papers that come from the same team (Swin => Swinv2), to promote upcoming work. \r\n\r\nSure, why not :) for those scenarios I think it's ok to open small PRs updating the corresponding docstrings"
] | 1,667
| 1,667
| 1,667
|
MEMBER
| null |
@NielsRogge provided a great suggestion.
We could add a banner on top of Swin's docs (and other models where this is applicable), to indicate we have a better model now, SwinV2. The same could be done for DETR, as Conditional DETR and Deformable DETR greatly improve the convergence and AP metrics.
We can start with the following:
- [ ] Swin
- [ ] DETR
And then expand to the other models. Or else, we can start with five such models (feel free to suggest more). When reviewing model PRs to `transformers`, we would just need to be mindful of this so that we can suggest it to contributors accordingly.
Cc: @osanseviero @nateraw
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19998/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19997
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19997/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19997/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19997/events
|
https://github.com/huggingface/transformers/pull/19997
| 1,431,255,536
|
PR_kwDOCUB6oc5B9S0G
| 19,997
|
Added mask_time_prob and mask_time_length arguments to wav2vec2 pretraining script and readme
|
{
"login": "mpierrau",
"id": 56202367,
"node_id": "MDQ6VXNlcjU2MjAyMzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/56202367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mpierrau",
"html_url": "https://github.com/mpierrau",
"followers_url": "https://api.github.com/users/mpierrau/followers",
"following_url": "https://api.github.com/users/mpierrau/following{/other_user}",
"gists_url": "https://api.github.com/users/mpierrau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mpierrau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mpierrau/subscriptions",
"organizations_url": "https://api.github.com/users/mpierrau/orgs",
"repos_url": "https://api.github.com/users/mpierrau/repos",
"events_url": "https://api.github.com/users/mpierrau/events{/privacy}",
"received_events_url": "https://api.github.com/users/mpierrau/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19997). All of your documentation changes will be reflected on that endpoint.",
"cc @sanchit-gandhi ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey! I'm not sure what the next step is here, like I said, this is my first PR :) Some checks failed, not sure why. All tests passed before I committed @sanchit-gandhi's suggestions, but it seems they are mainly failing due to timeouts?",
"You will need to rebase your PR on main for the tests to pass, as your branch does not have the fixes for the last release of TensorFlow.",
"Hey @mpierrau! Exciting to see that you've picked-up this PR again! Let me know if you need any final help - we're close to merging now!\r\n\r\nAs Sylvain has mentioned, you'll need to rebase onto main to fix the failing tests (see 5. in this guide: https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request, just remember to force push as detailed 😉)",
"Hey, sorry, I somehow managed to miss the force push flag anyways... I hope it still works?",
"Hey @mpierrau! Unfortunately the commit history gets messed up after a rebase + non-force push. Not to worry though! Let's open a new PR with your changes in favour of this one. You can create a new branch and copy over the relevant file (`run_pretraining_...`):\r\n```\r\ngit checkout -b new-branch-mask-time-prob\r\ngit restore --source adding-mask_time_prob-args-to-wav2vec2-pretraining-script -- /path/to/relevant/file\r\n```\r\nYou can then commit, rebase, and force push to origin to open a new PR with just the required changes.",
"Closing in favour of https://github.com/huggingface/transformers/pull/20985."
] | 1,667
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR was requested by @patrickvonplaten following my question and the ensuing discussion in the Discord ask-for-help channel under the title [Wav2vec2 - why is mask_time_prob=0.05?](https://discord.com/channels/879548962464493619/1035113782223056896)
This PR adds the arguments `mask_time_prob` and `mask_time_length` to the `examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py` script, along with a corresponding usage example in the `README.md`.
`mask_time_prob` is a variable that describes two things, depending on context:
1) the percentage of the encoded feature vectors to be masked during the contrastive learning task in pre-training
2) the masking probability used to imitate [SpecAugment](https://arxiv.org/abs/1904.08779) during fine-tuning
In this script, we are considering it in the context of 1).
`mask_time_length` describes the length (in number of frames) of each applied mask. It is added for completeness.
# Background
In the original [wav2vec 2.0 article](https://arxiv.org/abs/2006.11477), the variable `mask_time_prob` is set to `0.65`, which (due to overlap) results in an effective masking of approximately 49% of the feature vectors during pretraining. `mask_time_length` corresponds to the _M_ variable in the article and is set to 10 there.
However, when considering the [config file of wav2vec2-base](https://huggingface.co/patrickvonplaten/wav2vec2-base/blob/main/config.json), one finds that `mask_time_prob=0.05`. This is because this model is usually used for finetuning, and not for (continued) pretraining, and for finetuning `0.05` is a better hyperparameter value (see Appendix B of [wav2vec 2.0 article](https://arxiv.org/abs/2006.11477)). This is a bit confusing.
By considering the [config file](https://huggingface.co/patrickvonplaten/wav2vec2-base-v2/blob/main/config.json) of the `wav2vec2-base-v2` model, which was used in Patrick's experiments (see the [speech-pretraining readme](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-pretraining)), one finds that `mask_time_prob=0.65` was indeed used for pretraining.
The values `0.65` and `10` are set as default values for the `DataCollatorForWav2Vec2Pretraining` class defined in the script (as users may extract this class from the script on its own). No defaults are given in the argparser, however: the argument values are also specified in the [wav2vec2-base](https://huggingface.co/patrickvonplaten/wav2vec2-base/blob/main/config.json) and [wav2vec2-base-v2](https://huggingface.co/patrickvonplaten/wav2vec2-base-v2/blob/main/config.json) model configs, and if the argparser set defaults, the model config values would never be applied, which may be desired. Hence, the parser arguments are only relevant when explicitly specified when executing the script.
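As a rough sketch of that precedence (the `resolve` helper below is hypothetical and only illustrates the intended behaviour, it is not part of the script): with no argparse defaults, the model config value is used unless the flag is passed explicitly.

```python
import argparse

# Hypothetical sketch of the argument wiring: no argparse defaults, so the
# model config value wins unless the flag is given on the command line.
parser = argparse.ArgumentParser()
parser.add_argument("--mask_time_prob", type=float, default=None,
                    help="Fraction of encoded feature vectors to mask (pretraining uses 0.65).")
parser.add_argument("--mask_time_length", type=int, default=None,
                    help="Length (in frames) of each span mask (the paper uses M=10).")

def resolve(cli_value, config_value):
    # an explicitly passed CLI flag overrides the model config value
    return cli_value if cli_value is not None else config_value

args = parser.parse_args(["--mask_time_prob", "0.65"])
mask_time_prob = resolve(args.mask_time_prob, 0.05)    # explicit flag wins
mask_time_length = resolve(args.mask_time_length, 10)  # falls back to the config
```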
**I believe this PR may also lift a bigger question, which is if `mask_time_prob` should be split into two different variables to avoid confusion in the future.**
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Participants in discussion on Discord: @osanseviero @patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19997/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19997/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19997",
"html_url": "https://github.com/huggingface/transformers/pull/19997",
"diff_url": "https://github.com/huggingface/transformers/pull/19997.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19997.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19996
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19996/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19996/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19996/events
|
https://github.com/huggingface/transformers/pull/19996
| 1,431,063,358
|
PR_kwDOCUB6oc5B8puT
| 19,996
|
Update image_classification.mdx to link to the correct task page
|
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
MEMBER
| null |
Small fix.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19996/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19996",
"html_url": "https://github.com/huggingface/transformers/pull/19996",
"diff_url": "https://github.com/huggingface/transformers/pull/19996.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19996.patch",
"merged_at": 1667303682000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19995
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19995/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19995/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19995/events
|
https://github.com/huggingface/transformers/pull/19995
| 1,430,744,740
|
PR_kwDOCUB6oc5B7mjX
| 19,995
|
[Doctest] Add configuration_deberta_v2.py
|
{
"login": "Saad135",
"id": 22683922,
"node_id": "MDQ6VXNlcjIyNjgzOTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/22683922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saad135",
"html_url": "https://github.com/Saad135",
"followers_url": "https://api.github.com/users/Saad135/followers",
"following_url": "https://api.github.com/users/Saad135/following{/other_user}",
"gists_url": "https://api.github.com/users/Saad135/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saad135/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saad135/subscriptions",
"organizations_url": "https://api.github.com/users/Saad135/orgs",
"repos_url": "https://api.github.com/users/Saad135/repos",
"events_url": "https://api.github.com/users/Saad135/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saad135/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds configuration_deberta_v2.py to utils/documentation_tests.txt
Based on https://github.com/huggingface/transformers/issues/19487
@ydshieh can you please have a look? thanks :D
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19995/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19995",
"html_url": "https://github.com/huggingface/transformers/pull/19995",
"diff_url": "https://github.com/huggingface/transformers/pull/19995.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19995.patch",
"merged_at": 1667402531000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19994
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19994/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19994/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19994/events
|
https://github.com/huggingface/transformers/pull/19994
| 1,430,572,575
|
PR_kwDOCUB6oc5B7CCL
| 19,994
|
Unpin PyTorch to test if doc builds
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The tests fail from the beginning with `Fatal Python error: Segmentation fault`, and no failed tests being collected/reported.\r\n**We still need to pin torch.** \r\n\r\nDoc build jobs pass now after the docker image being re-built last night (my best guess for the reason)\r\n"
] | 1,667
| 1,667
| 1,667
|
COLLABORATOR
| null |
# What does this PR do?
The doc building seg-faults since we pinned PyTorch. Using this PR to experiment.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19994/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19994",
"html_url": "https://github.com/huggingface/transformers/pull/19994",
"diff_url": "https://github.com/huggingface/transformers/pull/19994.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19994.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19993
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19993/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19993/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19993/events
|
https://github.com/huggingface/transformers/pull/19993
| 1,430,545,671
|
PR_kwDOCUB6oc5B68JE
| 19,993
|
Update glossary
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"I cannot review this PR as it is. You have changed the text for the existing entries and removed the links to the YouTube videos and since you did it with the reorganization, the diff makes it impossible to properly comment.\r\nYou have also removed existing entries like autoencoder model or autoregressive model.\r\n\r\nPlease focus on one thing at a time per PR (I have already told you this multiple times). For instance here, focus first on the reorg **with no other changes in existing content** and then in a second PR we can discuss text changes.",
"Sorry, I'll try to slow down and just focus on one thing at a time!\r\n\r\nI'll close this PR and open two separate ones to address the reorganization and new terms."
] | 1,667
| 1,667
| 1,667
|
MEMBER
| null |
This PR adds some more computer vision and speech terms (feel free to suggest more!) and reorganizes it alphabetically (and each term can be linked to) so it's more like an actual glossary. I also edited some of the terms, like `attention mask` for length, since a glossary typically just provides a brief definition. If we want to keep the explanations, maybe I can link to the course instead.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19993/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19993",
"html_url": "https://github.com/huggingface/transformers/pull/19993",
"diff_url": "https://github.com/huggingface/transformers/pull/19993.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19993.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19992
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19992/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19992/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19992/events
|
https://github.com/huggingface/transformers/issues/19992
| 1,430,362,196
|
I_kwDOCUB6oc5VQZhU
| 19,992
|
Add in-layer TF Tokenizer to BPE tokenizers
|
{
"login": "piEsposito",
"id": 47679710,
"node_id": "MDQ6VXNlcjQ3Njc5NzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/47679710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/piEsposito",
"html_url": "https://github.com/piEsposito",
"followers_url": "https://api.github.com/users/piEsposito/followers",
"following_url": "https://api.github.com/users/piEsposito/following{/other_user}",
"gists_url": "https://api.github.com/users/piEsposito/gists{/gist_id}",
"starred_url": "https://api.github.com/users/piEsposito/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piEsposito/subscriptions",
"organizations_url": "https://api.github.com/users/piEsposito/orgs",
"repos_url": "https://api.github.com/users/piEsposito/repos",
"events_url": "https://api.github.com/users/piEsposito/events{/privacy}",
"received_events_url": "https://api.github.com/users/piEsposito/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
| null |
[] |
[
"It seems like I have to tag @n1t0, @LysandreJik because this is about the tokenizers. ",
"cc @Rocketknight1 ",
"An alternative would be adding an `as_keras_layer` (or something) method to `PreTrainedTokenizer` and creating the TF BPE Tokenizer from the tokenizer's vocab and merges. What do you think?",
"This is great! We'd been blocked by the lack of a BPE tokenizer in TF-text or Keras-NLP, as it's an extremely common tokenizer class for us. We're definitely going to explore this as soon as we can find some time.",
"@Rocketknight1 I can contribute and submit a PR; I would just like some guidance from you on where to implement such a Tokenizer (and whether we should import it from Keras-NLP or copy-paste + adapt it). \r\n\r\nAs they haven't released a version of the package in a while, maybe we could create a base BPE Tokenizer in a `tf_tokenization_utils` file and then have the specifics of `prepare_for_tokenization` be implemented for each model. \r\n\r\nWhat do you think? I would just like to know if you folks are OK with some copy-pasted code from `keras-nlp`.\r\n",
"Hi @piEsposito - good questions all round. Right now we only have one TF tokenizer in the library, the BERT tokenizer in `tf_tokenization_bert.py`. I think a good plan would be to add a single BPE tokenizer for a couple of popular models that use BPE (e.g. RoBERTa, GPT or DeBERTa). After that, we should be able to see how much code is shared and how much is model-specific and then refactor out a shared method for future tokenizers. WDYT?\r\n\r\nAlso, code copy-pasted from `keras-nlp` is fine as long as the licence allows it, but we also don't mind having `keras-nlp` as a dependency, since in-graph tokenizers will already have `tensorflow-text` as a dependency anyway.",
"@Rocketknight1 I will try doing it for T5 or GPT.\r\n\r\nAbout having `keras-nlp` as a dependency: I've opened an issue there https://github.com/keras-team/keras-nlp/issues/442 asking for them to release to Pypi a version with BPE Tokenizer, and in the meantime will try to implement it in a way that works with T5, then copy paste only if this is needed.\r\n\r\nHow does that sound?\r\n\r\nI should have something in that sense on the next week if you approve the idea.",
"Sounds perfect to me!",
"Hi! This is Chen from KerasNLP, we can release a branch specially for the BPE if you need it soon, but before that there is a concern I want to raise:\r\n\r\nI made the TF BPE with many regex hacks because tf_text uses Google re2, which does not fully match the python re. Although I tested on multiple datasets (multiple languages as well) and it worked well, I am still not 100% confident it provides the exactly same result as openAI BPE. So please make sure you have a good testing coverage before using it in production, thanks!",
"Thanks @chenmoneygithub! We have quite a lot of models with BPE tokenizers that we could probably test against.",
"Awesome! We will go ahead and make a release for BPE tokenizer then. Will update this thread when that is finished.",
"We have made a release containing the BPE tokenizer: https://pypi.org/project/keras-nlp/\r\n\r\nPlease let us know if you find any issues, thank you!",
"cc @piEsposito to the above comment! ^",
"@Rocketknight1 thank you let me get started!",
"Yeah I'm exploring it and guess what it is not as easy as I thought haha. ",
"@piEsposito That was my experience too - are you having trouble even getting the results to match for a single model?",
"@Rocketknight1 I'm having it too, which is kinda fun, because the tokens are total mismatches, but when I decode them back they are still the same as the input. I think we will have to go deep on the internals to check for the differences. ",
"Ugh, of course - there are multiple valid tokenizations for the same string. I'm not enough of a tokenizers expert to know the exact algorithms used and if they differ between the many BPE models we have.",
"@Rocketknight1 I could make them match for GPT2, should open a PR this week. Sorry for the delay, this thing was an order of magnitude harder to do than I was estimating.",
"Don't apologize at all - this is something we were struggling with too!",
"Thanks for understanding, I'll try to get at least a draft today. ",
"@Rocketknight1 after some delay I could figure out a way to make it work, even with generation. I've requested your review and, as we agreed, kept the implementation minimal for us to get a sense of the effort needed to create the in-layer TF Tokenizer for the models that use BPE.",
"@Rocketknight1 all right, that was fun. Let's do it for CLIP now and figure out how we put the `<w/>` logic inside the keras-nlp bpe tokenizer.",
"Awesome, good luck!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey don't stale us buddy. We're just putting out some fires and enjoying the holidays. ",
"What is the current status of this? I saw the GPT2 Tokeniser was added - any progress on any others? I am happy to contribute, though will need help wth some of the details.",
"Hi @calvingiles, we're not working on others internally but we'd be happy for you to try adding one! Basically, if you want to attempt this, I'd suggest picking a model, then seeing if you can replicate its tokenizer with a Tensorflow Text tokenizer class like `BytePairTokenizer`. If you can, then it's quite easy - take a look at the `tokenization_gpt2_tf.py` file in `transformers`. Most of that can be copied for a new model you want to add. There are probably several models that use standard BytePair tokenizers where that file could be copied with barely any changes to enable TF tokenization for them - we're happy to support you if you want to try looking around and making PRs for them!"
] | 1,667
| 1,703
| null |
CONTRIBUTOR
| null |
### Feature request
As what we have with `TFBertTokenizer`, but with models that use Byte Pair Encoding (e.g. `TFT5Tokenizer`, `TFClipTokenizer`) etc...
They were implemented in `keras-nlp` (https://github.com/keras-team/keras-nlp/pull/389) and we can now bring them here.
### Motivation
With that feature we will be able to serve almost every model with TF Serving, which will make it much easier to serve models, as we won't have to write handlers and custom servers.
Having TF BPE Tokenizers is (I think) the last barrier to make `transformers` fully TF Serving-compliant.
### Your contribution
I can submit a PR, but there are a huge lot of models for which we would need to do that, so I expect a large number of subtasks if you decide to go for it.
Also, as `keras-nlp` implemented it (https://github.com/keras-team/keras-nlp/pull/389), should we copy-paste the code for each tokenizer or import from `keras-nlp`, while keeping the reference to their repo?
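For reference, the core merge loop that such an in-graph tokenizer has to reproduce can be sketched in plain Python (an illustrative toy with a made-up merge table, not the `keras-nlp` implementation):

```python
def bpe_encode(word, merges):
    """Greedily apply a ranked BPE merge table to a sequence of characters."""
    ranks = {pair: i for i, pair in enumerate(merges)}
    symbols = list(word)
    while len(symbols) > 1:
        # find the adjacent pair with the best (lowest) merge rank
        best_rank, best_i = min(
            (ranks.get((a, b), float("inf")), i)
            for i, (a, b) in enumerate(zip(symbols, symbols[1:]))
        )
        if best_rank == float("inf"):
            break  # no applicable merges left
        symbols[best_i:best_i + 2] = [symbols[best_i] + symbols[best_i + 1]]
    return symbols

merges = [("l", "o"), ("lo", "w"), ("e", "r")]
tokens = bpe_encode("lower", merges)  # ["low", "er"]
```

The in-graph version has to express this loop with TF ops and RE2-compatible regexes, which is why exact parity with the Python tokenizers needs careful testing.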
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19992/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/19991
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19991/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19991/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19991/events
|
https://github.com/huggingface/transformers/pull/19991
| 1,430,191,851
|
PR_kwDOCUB6oc5B5vd6
| 19,991
|
Cached sin and cos matrices for rotary at GPT-J model initialization for faster generation
|
{
"login": "kurumuz",
"id": 20085594,
"node_id": "MDQ6VXNlcjIwMDg1NTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/20085594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kurumuz",
"html_url": "https://github.com/kurumuz",
"followers_url": "https://api.github.com/users/kurumuz/followers",
"following_url": "https://api.github.com/users/kurumuz/following{/other_user}",
"gists_url": "https://api.github.com/users/kurumuz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kurumuz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kurumuz/subscriptions",
"organizations_url": "https://api.github.com/users/kurumuz/orgs",
"repos_url": "https://api.github.com/users/kurumuz/repos",
"events_url": "https://api.github.com/users/kurumuz/events{/privacy}",
"received_events_url": "https://api.github.com/users/kurumuz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19991). All of your documentation changes will be reflected on that endpoint.",
"Mmm, now it looks like the tests are not running for some reason. Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)? And/or push an empty commit?",
"Hi @kurumuz Thank you for the PR. I left a few comments, especially for the shape and the case where `self.rotary_dim = None`.\r\nIt is not very clear to me why the (changed) shapes of `sin` and `cos` doesn't cause any issue.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,667
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
Makes generation faster by caching the sin/cos matrices for the rotary embeddings, for the maximum sequence length, at model initialization, so they are not recomputed on every forward pass.
Around 15% speedup overall.
Tested on A100-SXM4-80GB
Benchmarks:
1 token forward average runtime out of 100 iterations with cached sin cos:
0.023421470640283642s
1 token forward average runtime out of 100 iterations without cached sin cos:
0.02738137301433869s
10 generations with 1 token context and 40 tokens generated without sincos caching:
1.410405054409057s
10 generations with 1 token context and 40 tokens generated with sincos caching:
1.2199230638332665s
Test script:
```python
import time

import torch

# assumes `model` is a GPT-J model already loaded on the GPU
torch.manual_seed(123)
input_ids = torch.randint(0, 100, (1, 1)).cuda().long()
iterations = 11
with torch.no_grad():
    for i in range(iterations):
        if i == 1:
            # start timing after iteration 0, since PyTorch is warming up
            t = time.perf_counter()
        outputs = model.generate(input_ids, max_length=input_ids.shape[-1] + 50, min_length=input_ids.shape[-1] + 50, do_sample=True, use_cache=True)
        # outputs = model.forward(input_ids)
print((time.perf_counter() - t) / (iterations - 1))
```
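The caching idea itself can be sketched in plain Python (illustrative only; `RotaryCache` is a hypothetical name, and the real change works on torch tensors inside the attention module):

```python
import math

class RotaryCache:
    """Precompute rotary sin/cos tables once, instead of on every forward pass."""

    def __init__(self, rotary_dim, max_positions, base=10000.0):
        # one inverse frequency per pair of rotary dimensions
        inv_freq = [1.0 / (base ** (i / rotary_dim)) for i in range(0, rotary_dim, 2)]
        # built once at model initialization: one row per position
        self.sin = [[math.sin(pos * f) for f in inv_freq] for pos in range(max_positions)]
        self.cos = [[math.cos(pos * f) for f in inv_freq] for pos in range(max_positions)]

    def get(self, seq_len):
        # each forward pass only slices the cached tables
        return self.sin[:seq_len], self.cos[:seq_len]
```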
Models:
- GPT-J
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19991/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19991",
"html_url": "https://github.com/huggingface/transformers/pull/19991",
"diff_url": "https://github.com/huggingface/transformers/pull/19991.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19991.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19990
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19990/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19990/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19990/events
|
https://github.com/huggingface/transformers/pull/19990
| 1,430,178,352
|
PR_kwDOCUB6oc5B5skO
| 19,990
|
[EncoderDecoderModel] Add support for gradient checkpointing
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds gradient checkpointing support for EncoderDecoderModel in PyTorch.
As requested on the forum: https://discuss.huggingface.co/t/feature-request-gradient-checkpointing-for-encoderdecodermodel/25278
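Gradient checkpointing trades compute for memory by recomputing activations during the backward pass instead of storing them. The general mechanism (independent of this PR's actual diff; the `Block`/`Net` classes here are illustrative assumptions) can be sketched with `torch.utils.checkpoint`:

```python
import torch
from torch.utils.checkpoint import checkpoint

class Block(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = torch.nn.Linear(16, 16)

    def forward(self, x):
        return torch.relu(self.lin(x))

class Net(torch.nn.Module):
    """Toy stack of blocks; checkpointed blocks drop their activations
    in forward and recompute them during backward."""

    def __init__(self, use_checkpointing=False):
        super().__init__()
        self.blocks = torch.nn.ModuleList(Block() for _ in range(4))
        self.use_checkpointing = use_checkpointing

    def forward(self, x):
        for blk in self.blocks:
            if self.use_checkpointing and self.training:
                # do not store intermediate activations; recompute in backward
                x = checkpoint(blk, x, use_reentrant=False)
            else:
                x = blk(x)
        return x
```

For an `EncoderDecoderModel`, supporting this presumably means forwarding the checkpointing flag to both the encoder and the decoder so their layers all use this mechanism.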
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19990/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19990/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19990",
"html_url": "https://github.com/huggingface/transformers/pull/19990",
"diff_url": "https://github.com/huggingface/transformers/pull/19990.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19990.patch",
"merged_at": 1667237838000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19989
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19989/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19989/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19989/events
|
https://github.com/huggingface/transformers/pull/19989
| 1,430,151,240
|
PR_kwDOCUB6oc5B5mv9
| 19,989
|
Pin torch to < 1.13 temporarily
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Failed test is irrelevant. Merge now.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19989). All of your documentation changes will be reflected on that endpoint."
] | 1,667
| 1,667
| 1,667
|
COLLABORATOR
| null |
# What does this PR do?
Pin torch to < 1.13 temporarily, as torch 1.13 causes strange failures (segmentation faults) on CircleCI.
Evidence can be found below:
with torch 1.12.1
https://app.circleci.com/pipelines/github/huggingface/transformers/50619/workflows/5b665ba2-5f45-4b61-9e08-de6c8a2349cd
with torch 1.13.0
https://app.circleci.com/pipelines/github/huggingface/transformers/50621/workflows/8a137f60-2e66-48fd-aeb6-1a8d49369d4c
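In `setup.py`/requirements terms, a temporary pin like this is just an upper bound on the version range; the lower bound shown here is illustrative, not taken from the actual diff:

```
torch>=1.7,<1.13
```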
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19989/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19989",
"html_url": "https://github.com/huggingface/transformers/pull/19989",
"diff_url": "https://github.com/huggingface/transformers/pull/19989.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19989.patch",
"merged_at": 1667236972000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19988
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19988/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19988/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19988/events
|
https://github.com/huggingface/transformers/pull/19988
| 1,430,133,106
|
PR_kwDOCUB6oc5B5i3c
| 19,988
|
Tranformers documentation translation to Italian #17459
|
{
"login": "draperkm",
"id": 80494835,
"node_id": "MDQ6VXNlcjgwNDk0ODM1",
"avatar_url": "https://avatars.githubusercontent.com/u/80494835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/draperkm",
"html_url": "https://github.com/draperkm",
"followers_url": "https://api.github.com/users/draperkm/followers",
"following_url": "https://api.github.com/users/draperkm/following{/other_user}",
"gists_url": "https://api.github.com/users/draperkm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/draperkm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/draperkm/subscriptions",
"organizations_url": "https://api.github.com/users/draperkm/orgs",
"repos_url": "https://api.github.com/users/draperkm/repos",
"events_url": "https://api.github.com/users/draperkm/events{/privacy}",
"received_events_url": "https://api.github.com/users/draperkm/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # 17459
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19988/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19988/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19988",
"html_url": "https://github.com/huggingface/transformers/pull/19988",
"diff_url": "https://github.com/huggingface/transformers/pull/19988.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19988.patch",
"merged_at": 1667236755000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19987
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19987/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19987/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19987/events
|
https://github.com/huggingface/transformers/pull/19987
| 1,430,116,616
|
PR_kwDOCUB6oc5B5fcQ
| 19,987
|
[Don't merge] Debug CircleCI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
COLLABORATOR
| null |
# What does this PR do?
debug circleci
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19987/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19987",
"html_url": "https://github.com/huggingface/transformers/pull/19987",
"diff_url": "https://github.com/huggingface/transformers/pull/19987.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19987.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19986
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19986/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19986/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19986/events
|
https://github.com/huggingface/transformers/pull/19986
| 1,430,024,347
|
PR_kwDOCUB6oc5B5L-N
| 19,986
|
[ASR Examples] Update 'tasks' for model card
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,687
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
The task 'automatic-speech-recognition' was added to the model card creator in #19985. This PR updates all the ASR example scripts accordingly.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19986/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19986",
"html_url": "https://github.com/huggingface/transformers/pull/19986",
"diff_url": "https://github.com/huggingface/transformers/pull/19986.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19986.patch",
"merged_at": 1667235017000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19985
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19985/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19985/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19985/events
|
https://github.com/huggingface/transformers/pull/19985
| 1,430,013,859
|
PR_kwDOCUB6oc5B5Jsj
| 19,985
|
[modelcard] Update for ASR
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,687
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
Updates the modelcard to include ASR in the task mapping and task-tag-to-name mapping, and the WER in the metric tags.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19985/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19985",
"html_url": "https://github.com/huggingface/transformers/pull/19985",
"diff_url": "https://github.com/huggingface/transformers/pull/19985.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19985.patch",
"merged_at": 1667234998000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19984
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19984/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19984/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19984/events
|
https://github.com/huggingface/transformers/pull/19984
| 1,430,009,665
|
PR_kwDOCUB6oc5B5Ixs
| 19,984
|
Improve model tester
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger Just FYI\r\n\r\nI updated this PR for `CANINE` https://github.com/huggingface/transformers/pull/19984/commits/bc83147623cb6441fed78d5ca9c46b114791706b and `ESMFold` https://github.com/huggingface/transformers/pull/19984/commits/0f8e4061f884fee859e63bf4b5d4bd0b8365906a\r\n\r\nDon't really think you will reject these changes, but just in case!",
"Still LGTM!"
] | 1,667
| 1,667
| 1,667
|
COLLABORATOR
| null |
# What does this PR do?
Some model testers have `__init__` like
```python
def __init__(
    self,
    parent,
):
    ...
```
and others accept many more arguments to customize them.
- This PR makes them all accept arguments, so the testers share a uniform style.
- This is also necessary for `tiny model creation` to give **more correct** outputs (i.e. config/model/processor files), where `vocab_size` needs to be kept in sync between the tiny config (via the model testers) and the converted (smaller) tokenizers.
For the review, you can just look at the change in a single model test file :-)
#### TODO (in another PR 🙏 ): same change for some TF/Flax model testers
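The uniform style being targeted looks roughly like this — a hypothetical sketch, with argument names mirroring common tester fields rather than any specific model's tester:

```python
class DummyModelTester:
    """Hypothetical sketch: every customisation argument gets a default,
    so all model testers share one uniform __init__ style."""

    def __init__(self, parent, batch_size=13, seq_length=7, vocab_size=99):
        self.parent = parent
        self.batch_size = batch_size
        self.seq_length = seq_length
        self.vocab_size = vocab_size
```

With defaults everywhere, `DummyModelTester(self)` keeps working for existing tests, while tiny-model creation can override e.g. `vocab_size` to match a converted tokenizer.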
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19984/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19984",
"html_url": "https://github.com/huggingface/transformers/pull/19984",
"diff_url": "https://github.com/huggingface/transformers/pull/19984.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19984.patch",
"merged_at": 1667407124000
}
|